Finding Resolution for the Responsible Transparency of Economic Models in Health and Medicine.
Padula, William V; McQueen, Robert Brett; Pronovost, Peter J
2017-11-01
The Second Panel on Cost-Effectiveness in Health and Medicine recommendations for the conduct, methodological practices, and reporting of cost-effectiveness analyses leave a number of questions unanswered with respect to the implementation of a transparent, open source code interface for economic models. Making economic model source code openly available could be positive and progressive for the field; however, several unintended consequences of this system should be considered before its complete implementation. First, there is the concern regarding the intellectual property rights that modelers have to their analyses. Second, open source code could make analyses more accessible to inexperienced modelers, leading to inaccurate or misinterpreted results. We propose several resolutions to these concerns. The field should establish a licensing system for open source code such that the model originators maintain control of the code's use and grant permissions to other investigators who wish to use it. The field should also be more forthcoming towards the teaching of cost-effectiveness analysis in medical and health services education so that providers and other professionals are familiar with economic modeling and able to conduct analyses with open source code. These unintended consequences need to be fully considered before the field is prepared to move forward into an era of model transparency with open source code.
Performance and Architecture Lab Modeling Tool
DOE Office of Scientific and Technical Information (OSTI.GOV)
2014-06-19
Analytical application performance models are critical for diagnosing performance-limiting resources, optimizing systems, and designing machines. Creating models, however, is difficult. Furthermore, models are frequently expressed in forms that are hard to distribute and validate. The Performance and Architecture Lab Modeling tool, or Palm, is a modeling tool designed to make application modeling easier. Palm provides a source code modeling annotation language. Not only does the modeling language divide the modeling task into subproblems, it formally links an application's source code with its model. This link is important because a model's purpose is to capture application behavior. Furthermore, this link makes it possible to define rules for generating models according to source code organization. Palm generates hierarchical models according to well-defined rules. Given an application, a set of annotations, and a representative execution environment, Palm will generate the same model. A generated model is an executable program whose constituent parts directly correspond to the modeled application. Palm generates models by combining top-down (human-provided) semantic insight with bottom-up static and dynamic analysis. A model's hierarchy is defined by static and dynamic source code structure. Because Palm coordinates models and source code, Palm's models are 'first-class' and reproducible. Palm automates common modeling tasks. For instance, Palm incorporates measurements to focus attention, represent constant behavior, and validate models. Palm's workflow is as follows. The workflow's input is source code annotated with Palm modeling annotations. The most important annotation models an instance of a block of code. Given annotated source code, the Palm Compiler produces executables and the Palm Monitor collects a representative performance profile. The Palm Generator synthesizes a model based on the static and dynamic mapping of annotations to program behavior. The model -- an executable program -- is a hierarchical composition of annotation functions, synthesized functions, statistics for runtime values, and performance measurements.
NASA Technical Reports Server (NTRS)
Barry, Matthew R.; Osborne, Richard N.
2005-01-01
The RoseDoclet computer program extends the capability of Java doclet software to automatically synthesize Unified Modeling Language (UML) content from Java language source code. [Doclets are Java-language programs that use the doclet application programming interface (API) to specify the content and format of the output of Javadoc. Javadoc is a program, originally designed to generate API documentation from Java source code, now also useful as an extensible engine for processing Java source code.] RoseDoclet takes advantage of Javadoc comments and tags already in the source code to produce a UML model of that code. RoseDoclet applies the doclet API to create a doclet passed to Javadoc. The Javadoc engine applies the doclet to the source code, emitting the output format specified by the doclet. RoseDoclet emits a Rose model file and populates it with fully documented packages, classes, methods, variables, and class diagrams identified in the source code. The way in which UML models are generated can be controlled by use of new Javadoc comment tags that RoseDoclet provides. The advantage of using RoseDoclet is that Javadoc documentation becomes leveraged for two purposes: documenting the as-built API and keeping the design documentation up to date.
2009-09-01
nuclear industry for conducting performance assessment calculations. The analytical FORTRAN code for the DNAPL source function, REMChlor, was...project. The first was to apply existing deterministic codes, such as T2VOC and UTCHEM, to the DNAPL source zone to simulate the remediation processes...but describe the spatial variability of source zones unlike one-dimensional flow and transport codes that assume homogeneity. The Lagrangian models
The random energy model in a magnetic field and joint source channel coding
NASA Astrophysics Data System (ADS)
Merhav, Neri
2008-09-01
We demonstrate that there is an intimate relationship between the magnetic properties of Derrida’s random energy model (REM) of spin glasses and the problem of joint source-channel coding in Information Theory. In particular, typical patterns of erroneously decoded messages in the coding problem have “magnetization” properties that are analogous to those of the REM in certain phases, where the non-uniformity of the distribution of the source in the coding problem plays the role of an external magnetic field applied to the REM. We also relate the ensemble performance (random coding exponents) of joint source-channel codes to the free energy of the REM in its different phases.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Santos-Villalobos, Hector J; Gregor, Jens; Bingham, Philip R
2014-01-01
At present, neutron sources cannot be fabricated small and powerful enough to achieve high-resolution radiography while maintaining an adequate flux. One solution is to employ computational imaging techniques such as a Magnified Coded Source Imaging (CSI) system. A coded-mask is placed between the neutron source and the object. The system resolution is increased by reducing the size of the mask holes, and the flux is increased by increasing the size of the coded-mask and/or the number of holes. One limitation of such a system is that the resolution of current state-of-the-art scintillator-based detectors caps at around 50 µm. To overcome this challenge, the coded-mask and object are magnified by making the distance from the coded-mask to the object much smaller than the distance from the object to the detector. In previous work, we have shown via synthetic experiments that our least squares method outperforms other methods in image quality and reconstruction precision because of the modeling of the CSI system components. However, the validation experiments were limited to simplistic neutron sources. In this work, we aim to model the flux distribution of a real neutron source and incorporate such a model in our least squares computational system. We provide a full description of the methodology used to characterize the neutron source and validate the method with synthetic experiments.
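As a toy analogue of the least-squares reconstruction described above (not the authors' system model), the detector image below is treated as the convolution of the object with a random binary coded mask and the object is recovered by linear least squares; the sizes, mask pattern, and noise level are arbitrary assumptions.

    import numpy as np
    from scipy.signal import convolve2d

    rng = np.random.default_rng(0)
    N, M = 16, 5                                          # object and mask sizes (toy values)
    mask = (rng.random((M, M)) < 0.5).astype(float)       # random binary coded mask (assumption)

    def forward_matrix(N, mask):
        """Dense linear operator mapping an NxN object to its NxN coded detector image."""
        A = np.zeros((N * N, N * N))
        for i in range(N):
            for j in range(N):
                delta = np.zeros((N, N))
                delta[i, j] = 1.0
                A[:, i * N + j] = convolve2d(delta, mask, mode="same").ravel()
        return A

    obj = np.zeros((N, N))
    obj[6:10, 6:10] = 1.0                                 # simple square "object"
    A = forward_matrix(N, mask)
    data = A @ obj.ravel() + rng.normal(0.0, 0.01, N * N) # noisy coded detector image
    recon, *_ = np.linalg.lstsq(A, data, rcond=None)      # least-squares reconstruction
    print("relative reconstruction error:",
          np.linalg.norm(recon - obj.ravel()) / np.linalg.norm(obj))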
2007-10-01
[Snippet from the report's list of figures: ...Architecture; Figure 2, Eclipse Java Model; Figure 3, Eclipse Java Model at the Source Code Level; Figure 9, Java Source Code.]
Luyckx, Kim; Luyten, Léon; Daelemans, Walter; Van den Bulcke, Tim
2016-01-01
Objective: Enormous amounts of healthcare data are becoming increasingly accessible through the large-scale adoption of electronic health records. In this work, structured and unstructured (textual) data are combined to assign clinical diagnostic and procedural codes (specifically ICD-9-CM) to patient stays. We investigate whether integrating these heterogeneous data types improves prediction strength compared to using the data types in isolation. Methods: Two separate data integration approaches were evaluated. Early data integration combines features of several sources within a single model, and late data integration learns a separate model per data source and combines these predictions with a meta-learner. This is evaluated on data sources and clinical codes from a broad set of medical specialties. Results: When compared with the best individual prediction source, late data integration leads to improvements in predictive power (eg, overall F-measure increased from 30.6% to 38.3% for International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) diagnostic codes), while early data integration is less consistent. The predictive strength strongly differs between medical specialties, both for ICD-9-CM diagnostic and procedural codes. Discussion: Structured data provides complementary information to unstructured data (and vice versa) for predicting ICD-9-CM codes. This can be captured most effectively by the proposed late data integration approach. Conclusions: We demonstrated that models using multiple electronic health record data sources systematically outperform models using data sources in isolation in the task of predicting ICD-9-CM codes over a broad range of medical specialties. PMID:26316458
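As a hedged illustration of the late-integration idea described above (not the paper's actual pipeline), the following Python sketch trains one classifier per data source and combines their predicted probabilities with a logistic-regression meta-learner; the toy features, note texts, and labels are placeholders.

    import numpy as np
    import pandas as pd
    from sklearn.compose import ColumnTransformer
    from sklearn.ensemble import StackingClassifier
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Toy stand-ins for two heterogeneous sources: structured values and free text.
    rng = np.random.default_rng(0)
    X = pd.DataFrame(rng.random((200, 5)), columns=[f"lab{i}" for i in range(5)])
    X["note"] = ["chest pain and dyspnea"] * 100 + ["fracture of the femur"] * 100
    y = np.array([1] * 100 + [0] * 100)          # toy "ICD code assigned" label

    # One base learner per data source ("late" integration).
    structured_model = make_pipeline(
        ColumnTransformer([("labs", "passthrough", [f"lab{i}" for i in range(5)])]),
        LogisticRegression(max_iter=1000),
    )
    text_model = make_pipeline(
        ColumnTransformer([("tfidf", TfidfVectorizer(), "note")]),
        LogisticRegression(max_iter=1000),
    )

    # The meta-learner combines the per-source predicted probabilities.
    late_integration = StackingClassifier(
        estimators=[("structured", structured_model), ("text", text_model)],
        final_estimator=LogisticRegression(),
        stack_method="predict_proba",
    )
    late_integration.fit(X, y)
    print(late_integration.predict(X.head()))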
A new software for deformation source optimization, the Bayesian Earthquake Analysis Tool (BEAT)
NASA Astrophysics Data System (ADS)
Vasyura-Bathke, H.; Dutta, R.; Jonsson, S.; Mai, P. M.
2017-12-01
Modern studies of crustal deformation and the related source estimation, including magmatic and tectonic sources, increasingly use non-linear optimization strategies to estimate geometric and/or kinematic source parameters and often jointly consider both geodetic and seismic data. Bayesian inference is increasingly being used for estimating posterior distributions of deformation source model parameters, given measured/estimated/assumed data and model uncertainties. For instance, some studies consider uncertainties of a layered medium and propagate these into source parameter uncertainties, while others use informative priors to reduce the model parameter space. In addition, innovative sampling algorithms have been developed to efficiently explore the high-dimensional parameter spaces. Compared to earlier studies, these improvements have resulted in overall more robust source model parameter estimates that include uncertainties. However, the computational burden of these methods is high and estimation codes are rarely made available along with the published results. Even if the codes are accessible, it is usually challenging to assemble them into a single optimization framework as they are typically coded in different programming languages. Therefore, further progress and future applications of these methods/codes are hampered, while reproducibility and validation of results have become essentially impossible. In the spirit of providing open-access and modular codes to facilitate progress and reproducible research in deformation source estimations, we undertook the effort of developing BEAT, a python package that comprises all the above-mentioned features in one single programming environment. The package builds on the pyrocko seismological toolbox (www.pyrocko.org), and uses the pymc3 module for Bayesian statistical model fitting. BEAT is an open-source package (https://github.com/hvasbath/beat), and we encourage and solicit contributions to the project. Here, we present our strategy for developing BEAT and show application examples, in particular the effect of including the model prediction uncertainty of the velocity model in subsequent source optimizations: full moment tensor, Mogi source, moderate strike-slip earthquake.
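As a hedged illustration of this Bayesian workflow (not BEAT's actual API), the following pymc3 sketch fits a Mogi point source to synthetic vertical displacements; the station geometry, priors, and noise level are assumptions made for the example.

    import numpy as np
    import pymc3 as pm

    nu = 0.25                                              # Poisson's ratio (assumed)
    x_sta = np.array([-8000., -3000., 0., 4000., 9000.])   # station easting (m), hypothetical
    y_sta = np.array([2000., -5000., 6000., -1000., 3000.])

    def mogi_uz(x0, y0, depth, dV):
        """Vertical surface displacement of a Mogi point source in an elastic half-space."""
        R3 = ((x0 - x_sta) ** 2 + (y0 - y_sta) ** 2 + depth ** 2) ** 1.5
        return (1.0 - nu) / np.pi * dV * depth / R3

    # Synthetic "observed" uplift from a known source, plus 2 mm Gaussian noise.
    uz_obs = mogi_uz(1000., -500., 4000., 2.0e6)
    uz_obs = uz_obs + np.random.default_rng(0).normal(0.0, 0.002, uz_obs.shape)

    with pm.Model():
        x0 = pm.Uniform("x0", -10e3, 10e3)
        y0 = pm.Uniform("y0", -10e3, 10e3)
        depth = pm.Uniform("depth", 500., 10e3)
        dV = pm.Uniform("dV", 1e5, 1e7)
        pm.Normal("obs", mu=mogi_uz(x0, y0, depth, dV), sigma=0.002, observed=uz_obs)
        trace = pm.sample(2000, tune=2000, cores=2)        # NUTS sampling of the posterior

    print(pm.summary(trace, var_names=["x0", "y0", "depth", "dV"]))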
NASA Astrophysics Data System (ADS)
Maeda, Takuto; Takemura, Shunsuke; Furumura, Takashi
2017-07-01
We have developed an open-source software package, Open-source Seismic Wave Propagation Code (OpenSWPC), for parallel numerical simulations of seismic wave propagation in 3D and 2D (P-SV and SH) viscoelastic media based on the finite difference method at local-to-regional scales. This code is equipped with a frequency-independent attenuation model based on the generalized Zener body and an efficient perfectly matched layer absorbing boundary condition. A hybrid-style programming using OpenMP and the Message Passing Interface (MPI) is adopted for efficient parallel computation. OpenSWPC has wide applicability for seismological studies and great portability, allowing excellent performance on platforms ranging from PC clusters to supercomputers. Without modifying the code, users can conduct seismic wave propagation simulations using their own velocity structure models and the necessary source representations by specifying them in an input parameter file. The code has various modes for different types of velocity structure model input and different source representations such as single force, moment tensor and plane-wave incidence, which can easily be selected via the input parameters. Widely used binary data formats, the Network Common Data Form (NetCDF) and the Seismic Analysis Code (SAC), are adopted for the input of the heterogeneous structure model and the outputs of the simulation results, so users can easily handle the input/output datasets. All codes are written in Fortran 2003 and are available with detailed documents in a public repository.
Process Model Improvement for Source Code Plagiarism Detection in Student Programming Assignments
ERIC Educational Resources Information Center
Kermek, Dragutin; Novak, Matija
2016-01-01
In programming courses there are various ways in which students attempt to cheat. The most commonly used method is copying source code from other students and making minimal changes in it, like renaming variable names. Several tools like Sherlock, JPlag and Moss have been devised to detect source code plagiarism. However, for larger student…
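As an aside on why identifier renaming alone rarely defeats such detectors, here is a minimal Python sketch of token-level n-gram comparison; it is a generic illustration, not the algorithm used by Sherlock, JPlag, or MOSS, and the tokenizer is an assumption.

    import re

    def ngrams(code, n=4):
        """Tokenize source code and replace every identifier with a placeholder."""
        tokens = re.findall(r"[A-Za-z_]\w*|\S", code)
        tokens = ["ID" if re.fullmatch(r"[A-Za-z_]\w*", t) else t for t in tokens]
        return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

    def similarity(a, b, n=4):
        """Jaccard overlap of normalized token n-grams of two code fragments."""
        A, B = ngrams(a, n), ngrams(b, n)
        return len(A & B) / max(1, len(A | B))

    original = "int total = a + b; return total;"
    renamed = "int sum = x + y; return sum;"
    print(similarity(original, renamed))   # close to 1.0 despite the renamed variables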
ERIC Educational Resources Information Center
Hickok, Gregory
2012-01-01
Speech recognition is an active process that involves some form of predictive coding. This statement is relatively uncontroversial. What is less clear is the source of the prediction. The dual-stream model of speech processing suggests that there are two possible sources of predictive coding in speech perception: the motor speech system and the…
Modeling the Volcanic Source at Long Valley, CA, Using a Genetic Algorithm Technique
NASA Technical Reports Server (NTRS)
Tiampo, Kristy F.
1999-01-01
In this project, we attempted to model the deformation pattern due to the magmatic source at Long Valley caldera using a real-value coded genetic algorithm (GA) inversion similar to that found in Michalewicz, 1992. The project has been both successful and rewarding. The genetic algorithm, coded in the C programming language, performs stable inversions over repeated trials, with varying initial and boundary conditions. The original model used a GA in which the geophysical information was coded into the fitness function through the computation of surface displacements for a Mogi point source in an elastic half-space. The program was designed to invert for a spherical magmatic source - its depth, horizontal location and volume - using the known surface deformations. It also included the capability of inverting for multiple sources.
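A compact Python sketch in the same spirit, assuming a hypothetical station layout and synthetic data (this is not the project's C implementation): a real-valued GA searches for the Mogi source's horizontal location, depth, and volume change that best reproduce the observed uplift.

    import numpy as np

    rng = np.random.default_rng(0)
    nu = 0.25
    xs = rng.uniform(-10e3, 10e3, 30)        # station coordinates (m), hypothetical
    ys = rng.uniform(-10e3, 10e3, 30)

    def mogi_uz(p):                          # p = [x0, y0, depth, dV]
        R3 = ((xs - p[0]) ** 2 + (ys - p[1]) ** 2 + p[2] ** 2) ** 1.5
        return (1 - nu) / np.pi * p[3] * p[2] / R3

    truth = np.array([1500., -2000., 5000., 3e6])
    obs = mogi_uz(truth) + rng.normal(0, 1e-3, xs.size)    # synthetic "known" deformation

    lo = np.array([-10e3, -10e3, 1e3, 1e5])                # parameter bounds (assumed)
    hi = np.array([ 10e3,  10e3, 10e3, 1e7])

    def fitness(p):                          # negative RMS misfit (higher is better)
        return -np.sqrt(np.mean((mogi_uz(p) - obs) ** 2))

    pop = rng.uniform(lo, hi, (100, 4))
    for gen in range(200):
        f = np.array([fitness(p) for p in pop])
        parents = pop[np.argsort(f)[::-1][:50]]            # truncation selection
        kids = []
        for _ in range(50):
            a, b = parents[rng.integers(0, 50, 2)]
            w = rng.random(4)
            child = w * a + (1 - w) * b                    # arithmetic (real-valued) crossover
            child += rng.normal(0, 0.02, 4) * (hi - lo)    # Gaussian mutation
            kids.append(np.clip(child, lo, hi))
        pop = np.vstack([parents, kids])

    best = pop[np.argmax([fitness(p) for p in pop])]
    print("estimated [x0, y0, depth, dV]:", best)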
The HYPE Open Source Community
NASA Astrophysics Data System (ADS)
Strömbäck, L.; Pers, C.; Isberg, K.; Nyström, K.; Arheimer, B.
2013-12-01
The Hydrological Predictions for the Environment (HYPE) model is a dynamic, semi-distributed, process-based, integrated catchment model. It uses well-known hydrological and nutrient transport concepts and can be applied for both small and large scale assessments of water resources and status. In the model, the landscape is divided into classes according to soil type, vegetation and altitude. The soil representation is stratified and can be divided in up to three layers. Water and substances are routed through the same flow paths and storages (snow, soil, groundwater, streams, rivers, lakes) considering turn-over and transformation on the way towards the sea. HYPE has been successfully used in many hydrological applications at SMHI. For Europe, we currently have three different models: the S-HYPE model for Sweden; the BALT-HYPE model for the Baltic Sea; and the E-HYPE model for the whole of Europe. These models simulate hydrological conditions and nutrients for their respective areas and are used for characterization, forecasts, and scenario analyses. Model data can be downloaded from hypeweb.smhi.se. In addition, we provide models for the Arctic region, the Arab (Middle East and Northern Africa) region, India, the Niger River basin, and the La Plata Basin. This demonstrates the applicability of the HYPE model for large scale modeling in different regions of the world. An important goal with our work is to make our data and tools available as open data and services. For this aim we created the HYPE Open Source Community (OSC) that makes the source code of HYPE available for anyone interested in further development of HYPE. The HYPE OSC (hype.sourceforge.net) is an open source initiative under the Lesser GNU Public License taken by SMHI to strengthen international collaboration in hydrological modeling and hydrological data production. The hypothesis is that more brains and more testing will result in better models and better code. The code is transparent and can be changed and learnt from. New versions of the main code are delivered frequently. HYPE OSC is open to everyone interested in hydrology, hydrological modeling and code development - e.g. scientists, authorities, and consultancies. By joining the HYPE OSC you get access to a state-of-the-art operational hydrological model. The HYPE source code is designed to efficiently handle large scale modeling for forecast, hindcast and climate applications. The code is under constant development to improve the hydrological processes, efficiency and readability. In the beginning of 2013 we released a version with new and better modularization based on hydrological processes. This will make the code easier to understand and further develop for a new user. An important challenge in this process is to produce code that is easy for anyone to understand and work with, but still maintain the properties that make the code efficient enough for large scale applications. Input from the HYPE Open Source Community is an important source for future improvements of the HYPE model. Therefore, by joining the community you become an active part of the development, get access to the latest features and can influence future versions of the model.
NASA Astrophysics Data System (ADS)
Larmat, C. S.; Delorey, A.; Rougier, E.; Knight, E. E.; Steedman, D. W.; Bradley, C. R.
2017-12-01
This presentation reports numerical modeling efforts to improve knowledge of the processes that affect seismic wave generation and propagation from underground explosions, with a focus on Rg waves. The numerical model is based on the coupling of hydrodynamic simulation codes (Abaqus, CASH and HOSS) with a 3D full waveform propagation code, SPECFEM3D. Validation datasets are provided by the Source Physics Experiment (SPE), which is a series of highly instrumented chemical explosions at the Nevada National Security Site with yields from 100 kg to 5000 kg. A first series of explosions in a granite emplacement has just been completed and a second series in an alluvium emplacement is planned for 2018. The long-term goal of this research is to review and improve existing seismic source models (e.g. Mueller & Murphy, 1971; Denny & Johnson, 1991) using first-principles calculations provided by the coupled-codes capability. The hydrodynamic codes, Abaqus, CASH and HOSS, model the shocked, hydrodynamic region via equations of state for the explosive, borehole stemming and jointed/weathered granite. A new material model for unconsolidated alluvium materials has been developed and validated with past nuclear explosions, including the 10 kt 1965 Merlin event (Perret, 1971; Perret and Bass, 1975). We use the efficient Spectral Element Method code, SPECFEM3D (e.g. Komatitsch, 1998; 2002), and Geologic Framework Models to model the evolution of the wavefield as it propagates across 3D complex structures. The coupling interface is a series of grid points of the SEM mesh situated at the edge of the hydrodynamic code domain. We will present validation tests and waveforms modeled for several SPE tests, which provide evidence that the damage processes happening in the vicinity of the explosions create secondary seismic sources. These sources interfere with the original explosion moment and reduce the apparent seismic moment at the origin of Rg waves by up to 20%.
Fast in-memory elastic full-waveform inversion using consumer-grade GPUs
NASA Astrophysics Data System (ADS)
Sivertsen Bergslid, Tore; Birger Raknes, Espen; Arntsen, Børge
2017-04-01
Full-waveform inversion (FWI) is a technique to estimate subsurface properties by using the recorded waveform produced by a seismic source and applying inverse theory. This is done through an iterative optimization procedure, where each iteration requires solving the wave equation many times, then trying to minimize the difference between the modeled and the measured seismic data. Having to model many of these seismic sources per iteration means that this is a highly computationally demanding procedure, which usually involves writing a lot of data to disk. We have written code that does forward modeling and inversion entirely in memory. A typical HPC cluster has many more CPUs than GPUs. Since FWI involves modeling many seismic sources per iteration, the obvious approach is to parallelize the code on a source-by-source basis, where each core of the CPU performs one modeling, and do all modelings simultaneously. With this approach, the GPU is already at a major disadvantage in pure numbers. Fortunately, GPUs can more than make up for this hardware disadvantage by performing each modeling much faster than a CPU. Another benefit of parallelizing each individual modeling is that it lets each modeling use a lot more RAM. If one node has 128 GB of RAM and 20 CPU cores, each modeling can use only 6.4 GB RAM if one is running the node at full capacity with source-by-source parallelization on the CPU. A parallelized per-source code using GPUs can use 64 GB RAM per modeling. Whenever a modeling uses more RAM than is available and has to start using regular disk space the runtime increases dramatically, due to slow file I/O. The extremely high computational speed of the GPUs combined with the large amount of RAM available for each modeling lets us do high frequency FWI for fairly large models very quickly. For a single modeling, our GPU code outperforms the single-threaded CPU-code by a factor of about 75. Successful inversions have been run on data with frequencies up to 40 Hz for a model of 2001 by 600 grid points with 5 m grid spacing and 5000 time steps, in less than 2.5 minutes per source. In practice, using 15 nodes (30 GPUs) to model 101 sources, each iteration took approximately 9 minutes. For reference, the same inversion run with our CPU code uses two hours per iteration. This was done using only a very simple wavefield interpolation technique, saving every second timestep. Using a more sophisticated checkpointing or wavefield reconstruction method would allow us to increase this model size significantly. Our results show that ordinary gaming GPUs are a viable alternative to the expensive professional GPUs often used today, when performing large scale modeling and inversion in geophysics.
NASA Technical Reports Server (NTRS)
Gliebe, P; Mani, R.; Shin, H.; Mitchell, B.; Ashford, G.; Salamah, S.; Connell, S.; Huff, Dennis (Technical Monitor)
2000-01-01
This report describes work performed on Contract NAS3-27720 AoI 13 as part of the NASA Advanced Subsonic Transport (AST) Noise Reduction Technology effort. Computer codes were developed to provide quantitative prediction, design, and analysis capability for several aircraft engine noise sources. The objective was to provide improved, physics-based tools for exploration of noise-reduction concepts and understanding of experimental results. Methods and codes focused on fan broadband and 'buzz saw' noise and on low-emissions combustor noise, and complement work done by other contractors under the NASA AST program to develop methods and codes for fan harmonic tone noise and jet noise. The methods and codes developed and reported herein employ a wide range of approaches, from the strictly empirical to the completely computational, with some being semi-empirical, analytical, and/or analytical/computational. Emphasis was on capturing the essential physics while still considering method or code utility as a practical design and analysis tool for everyday engineering use. Codes and prediction models were developed for: (1) an improved empirical correlation model for fan rotor exit flow mean and turbulence properties, for use in predicting broadband noise generated by rotor exit flow turbulence interaction with downstream stator vanes; (2) fan broadband noise models for rotor and stator/turbulence interaction sources including 3D effects, noncompact-source effects, directivity modeling, and extensions to the rotor supersonic tip-speed regime; (3) fan multiple-pure-tone in-duct sound pressure prediction methodology based on computational fluid dynamics (CFD) analysis; and (4) low-emissions combustor prediction methodology and computer code based on CFD and actuator disk theory. In addition, the relative importance of dipole and quadrupole source mechanisms was studied using direct CFD source computation for a simple cascade/gust interaction problem, and an empirical combustor-noise correlation model was developed from engine acoustic test results. This work provided several insights on potential approaches to reducing aircraft engine noise. Code development is described in this report, and those insights are discussed.
Open-Source Development of the Petascale Reactive Flow and Transport Code PFLOTRAN
NASA Astrophysics Data System (ADS)
Hammond, G. E.; Andre, B.; Bisht, G.; Johnson, T.; Karra, S.; Lichtner, P. C.; Mills, R. T.
2013-12-01
Open-source software development has become increasingly popular in recent years. Open-source encourages collaborative and transparent software development and promotes unlimited free redistribution of source code to the public. Open-source development is good for science as it reveals implementation details that are critical to scientific reproducibility, but generally excluded from journal publications. In addition, research funds that would have been spent on licensing fees can be redirected to code development that benefits more scientists. In 2006, the developers of PFLOTRAN open-sourced their code under the U.S. Department of Energy SciDAC-II program. Since that time, the code has gained popularity among code developers and users from around the world seeking to employ PFLOTRAN to simulate thermal, hydraulic, mechanical and biogeochemical processes in the Earth's surface/subsurface environment. PFLOTRAN is a massively-parallel subsurface reactive multiphase flow and transport simulator designed from the ground up to run efficiently on computing platforms ranging from the laptop to leadership-class supercomputers, all from a single code base. The code employs domain decomposition for parallelism and is founded upon the well-established and open-source parallel PETSc and HDF5 frameworks. PFLOTRAN leverages modern Fortran (i.e. Fortran 2003-2008) in its extensible object-oriented design. The use of this progressive, yet domain-friendly programming language has greatly facilitated collaboration in the code's software development. Over the past year, PFLOTRAN's top-level data structures were refactored as Fortran classes (i.e. extendible derived types) to improve the flexibility of the code, ease the addition of new process models, and enable coupling to external simulators. For instance, PFLOTRAN has been coupled to the parallel electrical resistivity tomography code E4D to enable hydrogeophysical inversion while the same code base can be used as a third-party library to provide hydrologic flow, energy transport, and biogeochemical capability to the community land model, CLM, part of the open-source community earth system model (CESM) for climate. In this presentation, the advantages and disadvantages of open source software development in support of geoscience research at government laboratories, universities, and the private sector are discussed. Since the code is open-source (i.e. it's transparent and readily available to competitors), the PFLOTRAN team's development strategy within a competitive research environment is presented. Finally, the developers discuss their approach to object-oriented programming and the leveraging of modern Fortran in support of collaborative geoscience research as the Fortran standard evolves among compiler vendors.
Conversion of HSPF Legacy Model to a Platform-Independent, Open-Source Language
NASA Astrophysics Data System (ADS)
Heaphy, R. T.; Burke, M. P.; Love, J. T.
2015-12-01
Since its initial development over 30 years ago, the Hydrologic Simulation Program - FORTRAN (HSPF) model has been used worldwide to support water quality planning and management. In the United States, HSPF receives widespread endorsement as a regulatory tool at all levels of government and is a core component of the EPA's Better Assessment Science Integrating Point and Nonpoint Sources (BASINS) system, which was developed to support nationwide Total Maximum Daily Load (TMDL) analysis. However, the model's legacy code and data management systems have limitations in their ability to integrate with modern software and hardware and to leverage parallel computing, which have left voids in optimization, pre-, and post-processing tools. Advances in technology and in our scientific understanding of environmental processes over the last 30 years mandate that upgrades be made to HSPF to allow it to evolve and continue to be a premier tool for water resource planners. This work aims to mitigate the challenges currently facing HSPF through two primary tasks: (1) convert the code to a modern, widely accepted, open-source, high-performance computing (HPC) language; and (2) convert the model input and output files to a modern, widely accepted, open-source data model, library, and binary file format. Python was chosen as the new language for the code conversion. It is an interpreted, object-oriented language with dynamic semantics that has become one of the most popular open-source languages. While Python code execution can be slow compared to compiled, statically typed programming languages, such as C and FORTRAN, the integration of Numba (a just-in-time specializing compiler) has allowed this challenge to be overcome. For the legacy model data management conversion, HDF5 was chosen to store the model input and output. The code conversion for HSPF's hydrologic and hydraulic modules has been completed. The converted code has been tested against HSPF's suite of "test" runs and has shown good agreement and similar execution times while using the Numba compiler. Continued verification of the accuracy of the converted code against more complex legacy applications, and improvement of execution times by incorporating an intelligent network change detection tool, is currently underway, and preliminary results will be presented.
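To illustrate the conversion approach (not actual HSPF code), a minimal sketch of a Numba-compiled time-stepping loop: the @njit decorator compiles a simple linear-reservoir routing routine so the Python loop runs at near-compiled speed; the forcing series and rate constant are arbitrary.

    import numpy as np
    from numba import njit

    @njit
    def linear_reservoir(precip, k, dt):
        """Route a precipitation series through one linear reservoir: dS/dt = P - k*S."""
        storage = 0.0
        outflow = np.empty(precip.size)
        for t in range(precip.size):
            storage += (precip[t] - k * storage) * dt
            outflow[t] = k * storage
        return outflow

    precip = np.random.default_rng(1).gamma(0.5, 2.0, 500_000)   # synthetic forcing series
    runoff = linear_reservoir(precip, 0.05, 1.0)                 # first call compiles; later calls are fast
    print(runoff[:5])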
Welter, David E.; Doherty, John E.; Hunt, Randall J.; Muffels, Christopher T.; Tonkin, Matthew J.; Schreuder, Willem A.
2012-01-01
An object-oriented parameter estimation code was developed to incorporate benefits of object-oriented programming techniques for solving large parameter estimation modeling problems. The code is written in C++ and is a formulation and expansion of the algorithms included in PEST, a widely used parameter estimation code written in Fortran. The new code is called PEST++ and is designed to lower the barriers of entry for users and developers while providing efficient algorithms that can accommodate large, highly parameterized problems. This effort has focused on (1) implementing the most popular features of PEST in a fashion that is easy for novice or experienced modelers to use and (2) creating a software design that is easy to extend; that is, this effort provides a documented object-oriented framework designed from the ground up to be modular and extensible. In addition, all PEST++ source code and its associated libraries, as well as the general run manager source code, have been integrated in the Microsoft Visual Studio® 2010 integrated development environment. The PEST++ code is designed to provide a foundation for an open-source development environment capable of producing robust and efficient parameter estimation tools for the environmental modeling community into the future.
Methodology of decreasing software complexity using ontology
NASA Astrophysics Data System (ADS)
Dąbrowska-Kubik, Katarzyna
2015-09-01
In this paper a model of a web application's source code, based on the OSD ontology (Ontology for Software Development), is proposed. This model is applied to the implementation and maintenance phases of the software development process through the DevOntoCreator tool [5]. The aim of this solution is to decrease the software complexity of the source code by using various maintenance techniques, such as creation of documentation and elimination of dead code, cloned code, or previously known bugs [1][2]. With this approach, savings on the software maintenance costs of web applications will be possible.
Test Generator for MATLAB Simulations
NASA Technical Reports Server (NTRS)
Henry, Joel
2011-01-01
MATLAB Automated Test Tool, version 3.0 (MATT 3.0) is a software package that provides automated tools that reduce the time needed for extensive testing of simulation models that have been constructed in the MATLAB programming language by use of the Simulink and Real-Time Workshop programs. MATT 3.0 runs on top of the MATLAB engine application-program interface to communicate with the Simulink engine. MATT 3.0 automatically generates source code from the models, generates custom input data for testing both the models and the source code, and generates graphs and other presentations that facilitate comparison of the outputs of the models and the source code for the same input data. Context-sensitive and fully searchable help is provided in HyperText Markup Language (HTML) format.
Jardine, Bartholomew; Raymond, Gary M; Bassingthwaighte, James B
2015-01-01
The Modular Program Constructor (MPC) is an open-source Java based modeling utility, built upon JSim's Mathematical Modeling Language (MML) ( http://www.physiome.org/jsim/) that uses directives embedded in model code to construct larger, more complicated models quickly and with less error than manually combining models. A major obstacle in writing complex models for physiological processes is the large amount of time it takes to model the myriad processes taking place simultaneously in cells, tissues, and organs. MPC replaces this task with code-generating algorithms that take model code from several different existing models and produce model code for a new JSim model. This is particularly useful during multi-scale model development where many variants are to be configured and tested against data. MPC encodes and preserves information about how a model is built from its simpler model modules, allowing the researcher to quickly substitute or update modules for hypothesis testing. MPC is implemented in Java and requires JSim to use its output. MPC source code and documentation are available at http://www.physiome.org/software/MPC/.
Enhancements to the MCNP6 background source
McMath, Garrett E.; McKinney, Gregg W.
2015-10-19
The particle transport code MCNP has been used to produce a background radiation data file on a worldwide grid that can easily be sampled as a source in the code. Location-dependent cosmic showers were modeled by Monte Carlo methods to produce the resulting neutron and photon background flux at 2054 locations around Earth. An improved galactic-cosmic-ray feature was used to model the source term as well as data from multiple sources to model the transport environment through atmosphere, soil, and seawater. A new elevation scaling feature was also added to the code to increase the accuracy of the cosmic neutron background for user locations with off-grid elevations. Furthermore, benchmarking has shown the neutron integral flux values to be within experimental error.
Coding conventions and principles for a National Land-Change Modeling Framework
Donato, David I.
2017-07-14
This report establishes specific rules for writing computer source code for use with the National Land-Change Modeling Framework (NLCMF). These specific rules consist of conventions and principles for writing code primarily in the C and C++ programming languages. Collectively, these coding conventions and coding principles create an NLCMF programming style. In addition to detailed naming conventions, this report provides general coding conventions and principles intended to facilitate the development of high-performance software implemented with code that is extensible, flexible, and interoperable. Conventions for developing modular code are explained in general terms and also enabled and demonstrated through the appended templates for C++ base source-code and header files. The NLCMF limited-extern approach to module structure, code inclusion, and cross-module access to data is both explained in the text and then illustrated through the module templates. Advice on the use of global variables is provided.
Adaptive distributed source coding.
Varodayan, David; Lin, Yao-Chung; Girod, Bernd
2012-05-01
We consider distributed source coding in the presence of hidden variables that parameterize the statistical dependence among sources. We derive the Slepian-Wolf bound and devise coding algorithms for a block-candidate model of this problem. The encoder sends, in addition to syndrome bits, a portion of the source to the decoder uncoded as doping bits. The decoder uses the sum-product algorithm to simultaneously recover the source symbols and the hidden statistical dependence variables. We also develop novel techniques based on density evolution (DE) to analyze the coding algorithms. We experimentally confirm that our DE analysis closely approximates practical performance. This result allows us to efficiently optimize parameters of the algorithms. In particular, we show that the system performs close to the Slepian-Wolf bound when an appropriate doping rate is selected. We then apply our coding and analysis techniques to a reduced-reference video quality monitoring system and show a bit rate saving of about 75% compared with fixed-length coding.
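As a didactic illustration of syndrome-based distributed source coding (not the paper's LDPC/sum-product scheme with doping bits), the following toy uses a (7,4) Hamming parity-check matrix: the encoder sends only 3 syndrome bits per 7 source bits, and a decoder whose side information differs in at most one bit recovers the source exactly.

    import numpy as np

    H = np.array([[1, 0, 1, 0, 1, 0, 1],
                  [0, 1, 1, 0, 0, 1, 1],
                  [0, 0, 0, 1, 1, 1, 1]])      # column i is the binary representation of i+1

    def syndrome(v):
        return H @ v % 2

    def encode(x):                             # encoder sends 3 syndrome bits per 7 source bits
        return syndrome(x)

    def decode(s_x, y):                        # decoder combines the syndrome with side information y
        e_syn = (s_x + syndrome(y)) % 2        # = H (x XOR y), the syndrome of the "error" pattern
        pos = int(e_syn[0] + 2 * e_syn[1] + 4 * e_syn[2])   # nonzero syndrome -> index of flipped bit + 1
        x_hat = y.copy()
        if pos:
            x_hat[pos - 1] ^= 1
        return x_hat

    rng = np.random.default_rng(0)
    x = rng.integers(0, 2, 7)
    y = x.copy(); y[rng.integers(0, 7)] ^= 1   # side information: one bit flipped
    assert np.array_equal(decode(encode(x), y), x)
    print("recovered x from 3 syndrome bits plus side information")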
Power optimization of wireless media systems with space-time block codes.
Yousefi'zadeh, Homayoun; Jafarkhani, Hamid; Moshfeghi, Mehran
2004-07-01
We present analytical and numerical solutions to the problem of power control in wireless media systems with multiple antennas. We formulate a set of optimization problems aimed at minimizing total power consumption of wireless media systems subject to a given level of QoS and an available bit rate. Our formulation takes into consideration the power consumption related to source coding, channel coding, and transmission of multiple-transmit antennas. In our study, we consider Gauss-Markov and video source models, Rayleigh fading channels along with the Bernoulli/Gilbert-Elliott loss models, and space-time block codes.
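For background on the space-time block coding assumed in this formulation, a minimal numpy sketch of the Alamouti two-transmit-antenna scheme over a flat Rayleigh fading channel follows; the constellation, noise level, and trial count are arbitrary choices, and this is not the authors' optimization code.

    import numpy as np

    rng = np.random.default_rng(0)
    qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)

    def alamouti_block(s1, s2, h1, h2, noise_std):
        # Two time slots, two transmit antennas, one receive antenna:
        #   slot 1: antenna 1 sends s1, antenna 2 sends s2
        #   slot 2: antenna 1 sends -conj(s2), antenna 2 sends conj(s1)
        n = noise_std * (rng.standard_normal(2) + 1j * rng.standard_normal(2)) / np.sqrt(2)
        r1 = h1 * s1 + h2 * s2 + n[0]
        r2 = -h1 * np.conj(s2) + h2 * np.conj(s1) + n[1]
        # Linear combining yields decoupled, diversity-combined estimates of s1 and s2.
        s1_hat = np.conj(h1) * r1 + h2 * np.conj(r2)
        s2_hat = np.conj(h2) * r1 - h1 * np.conj(r2)
        return s1_hat, s2_hat

    errors, trials = 0, 2000
    for _ in range(trials):
        s1, s2 = rng.choice(qpsk, 2)
        h1, h2 = (rng.standard_normal(2) + 1j * rng.standard_normal(2)) / np.sqrt(2)  # Rayleigh taps
        s1_hat, s2_hat = alamouti_block(s1, s2, h1, h2, noise_std=0.3)
        gain = abs(h1) ** 2 + abs(h2) ** 2
        errors += np.argmin(np.abs(qpsk - s1_hat / gain)) != np.argmin(np.abs(qpsk - s1))
        errors += np.argmin(np.abs(qpsk - s2_hat / gain)) != np.argmin(np.abs(qpsk - s2))
    print("symbol error rate:", errors / (2 * trials))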
Accurate Modeling of Ionospheric Electromagnetic Fields Generated by a Low-Altitude VLF Transmitter
2007-08-31
[Snippet from the report's list of figures: fields at different latitudes for 3 different grid spacings; low- and high-altitude fields produced by 10-kHz and 20-kHz sources computed using the FD and TD codes, with excellent agreement validating the new FD code.]
NASA Astrophysics Data System (ADS)
Huba, J. D.; Joyce, G.
2001-05-01
In the past decade, the Open Source Model for software development has gained popularity and has had numerous major achievements: emacs, Linux, the Gimp, and Python, to name a few. The basic idea is to provide the source code of the model or application, a tutorial on its use, and a feedback mechanism with the community so that the model can be tested, improved, and archived. Given the success of the Open Source Model, we believe it may prove valuable in the development of scientific research codes. With this in mind, we are `Open Sourcing' the low to mid-latitude ionospheric model that has recently been developed at the Naval Research Laboratory: SAMI2 (Sami2 is Another Model of the Ionosphere). The model is comprehensive and uses modern numerical techniques. The structure and design of SAMI2 make it relatively easy to understand and modify: the numerical algorithms are simple and direct, and the code is reasonably well-written. Furthermore, SAMI2 is designed to run on personal computers; prohibitive computational resources are not necessary, thereby making the model accessible and usable by virtually all researchers. For these reasons, SAMI2 is an excellent candidate to explore and test the open source modeling paradigm in space physics research. We will discuss various topics associated with this project. Research supported by the Office of Naval Research.
Common radiation analysis model for 75,000 pound thrust NERVA engine (1137400E)
NASA Technical Reports Server (NTRS)
Warman, E. A.; Lindsey, B. A.
1972-01-01
The mathematical model and sources of radiation used for the radiation analysis and shielding activities in support of the design of the 1137400E version of the 75,000 lb thrust NERVA engine are presented. The nuclear subsystem (NSS) and non-nuclear components are discussed. The geometrical model for the NSS is two dimensional as required for the DOT discrete ordinates computer code or for an azimuthally symmetrical three dimensional Point Kernel or Monte Carlo code. The geometrical model for the non-nuclear components is three dimensional in the FASTER geometry format. This geometry routine is inherent in the ANSC versions of the QAD and GGG Point Kernel programs and the COHORT Monte Carlo program. Data are included pertaining to a pressure vessel surface radiation source data tape which has been used as the basis for starting ANSC analyses with the DASH code to bridge into the COHORT Monte Carlo code using the WANL-supplied DOT angular flux leakage data. In addition to the model descriptions and sources of radiation, the methods of analyses are briefly described.
Particle model of a cylindrical inductively coupled ion source
NASA Astrophysics Data System (ADS)
Ippolito, N. D.; Taccogna, F.; Minelli, P.; Cavenago, M.; Veltri, P.
2017-08-01
In spite of the wide use of RF sources, a complete understanding of the mechanisms regulating the RF-coupling of the plasma is still lacking so self-consistent simulations of the involved physics are highly desirable. For this reason we are developing a 2.5D fully kinetic Particle-In-Cell Monte-Carlo-Collision (PIC-MCC) model of a cylindrical ICP-RF source, keeping the time step of the simulation small enough to resolve the plasma frequency scale. The grid cell dimension is now about seven times larger than the average Debye length, because of the large computational demand of the code. It will be scaled down in the next phase of the development of the code. The filling gas is Xenon, in order to minimize the time lost by the MCC collision module in the first stage of development of the code. The results presented here are preliminary, with the code already showing a good robustness. The final goal will be the modeling of the NIO1 (Negative Ion Optimization phase 1) source, operating in Padua at Consorzio RFX.
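To make the resolution constraints above concrete, a small script evaluates the standard Debye length and electron plasma frequency formulas; the density and electron temperature are illustrative values, not those of the NIO1 source.

    import numpy as np
    from scipy.constants import elementary_charge as e, electron_mass as m_e, epsilon_0

    n_e = 1e17                 # electron density (m^-3), assumed
    T_e_eV = 5.0               # electron temperature (eV), assumed

    debye = np.sqrt(epsilon_0 * (T_e_eV * e) / (n_e * e ** 2))     # Debye length (m)
    omega_pe = np.sqrt(n_e * e ** 2 / (epsilon_0 * m_e))           # plasma frequency (rad/s)

    print(f"Debye length            : {debye * 1e6:.1f} um")
    print(f"plasma frequency        : {omega_pe / (2 * np.pi) / 1e9:.2f} GHz")
    print(f"grid cell at 7 lambda_D : {7 * debye * 1e6:.1f} um")
    print(f"time step << 1/omega_pe : {1 / omega_pe:.2e} s")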
Self-consistent modeling of electron cyclotron resonance ion sources
NASA Astrophysics Data System (ADS)
Girard, A.; Hitz, D.; Melin, G.; Serebrennikov, K.; Lécot, C.
2004-05-01
In order to predict the performance of electron cyclotron resonance ion sources (ECRIS), it is necessary to perfectly model the different parts of these sources: (i) magnetic configuration; (ii) plasma characteristics; (iii) extraction system. The magnetic configuration is easily calculated via commercial codes; different codes also simulate the ion extraction, either in two dimensions or even in three dimensions (to take into account the shape of the plasma at the extraction influenced by the hexapole). However, the characteristics of the plasma are not always mastered. This article describes the self-consistent modeling of ECRIS: we have developed a code which takes into account the most important construction parameters: the size of the plasma (length, diameter), the mirror ratio and axial magnetic profile, and whether a biased probe is installed or not. These input parameters are used to feed a self-consistent code, which calculates the characteristics of the plasma: electron density and energy, charge state distribution, plasma potential. The code is briefly described, and some of its most interesting results are presented. Comparisons are made between the calculations and the results obtained experimentally.
A MATLAB based 3D modeling and inversion code for MT data
NASA Astrophysics Data System (ADS)
Singh, Arun; Dehiya, Rahul; Gupta, Pravin K.; Israil, M.
2017-07-01
The development of a MATLAB based computer code, AP3DMT, for modeling and inversion of 3D Magnetotelluric (MT) data is presented. The code comprises two independent components: grid generator code and modeling/inversion code. The grid generator code performs model discretization and acts as an interface by generating various I/O files. The inversion code performs core computations in modular form - forward modeling, data functionals, sensitivity computations and regularization. These modules can be readily extended to other similar inverse problems like Controlled-Source EM (CSEM). The modular structure of the code provides a framework useful for implementation of new applications and inversion algorithms. The use of MATLAB and its libraries makes it more compact and user friendly. The code has been validated on several published models. To demonstrate its versatility and capabilities the results of inversion for two complex models are presented.
NASA Astrophysics Data System (ADS)
Smith, J. A.; Peter, D. B.; Tromp, J.; Komatitsch, D.; Lefebvre, M. P.
2015-12-01
We present both SPECFEM3D_Cartesian and SPECFEM3D_GLOBE open-source codes, representing high-performance numerical wave solvers simulating seismic wave propagation for local-, regional-, and global-scale application. These codes are suitable for both forward propagation in complex media and tomographic imaging. Both solvers compute highly accurate seismic wave fields using the continuous Galerkin spectral-element method on unstructured meshes. Lateral variations in compressional- and shear-wave speeds, density, as well as 3D attenuation Q models, topography and fluid-solid coupling are all readily included in both codes. For global simulations, effects due to rotation, ellipticity, the oceans, 3D crustal models, and self-gravitation are additionally included. Both packages provide forward and adjoint functionality suitable for adjoint tomography on high-performance computing architectures. We highlight the most recent release of the global version which includes improved performance, simultaneous MPI runs, OpenCL and CUDA support via an automatic source-to-source transformation library (BOAST), parallel I/O readers and writers for databases using ADIOS and seismograms using the recently developed Adaptable Seismic Data Format (ASDF) with built-in provenance. This makes our spectral-element solvers current state-of-the-art, open-source community codes for high-performance seismic wave propagation on arbitrarily complex 3D models. Together with these solvers, we provide full-waveform inversion tools to image the Earth's interior at unprecedented resolution.
1983-09-01
GENERAL ELECTROMAGNETIC MODEL FOR THE ANALYSIS OF COMPLEX SYSTEMS (GEMACS) Computer Code Documentation (Version 3). The BDM Corporation. Final Technical Report, February 1981 - July 1983. [Snippet from the report body: ...the t1 and t2 directions on the source patch. METHOD: The electric field at a segment observation point due to the source patch j is given by ...]
egs_brachy: a versatile and fast Monte Carlo code for brachytherapy
NASA Astrophysics Data System (ADS)
Chamberland, Marc J. P.; Taylor, Randle E. P.; Rogers, D. W. O.; Thomson, Rowan M.
2016-12-01
egs_brachy is a versatile and fast Monte Carlo (MC) code for brachytherapy applications. It is based on the EGSnrc code system, enabling simulation of photons and electrons. Complex geometries are modelled using the EGSnrc C++ class library and egs_brachy includes a library of geometry models for many brachytherapy sources, in addition to eye plaques and applicators. Several simulation efficiency enhancing features are implemented in the code. egs_brachy is benchmarked by comparing TG-43 source parameters of three source models to previously published values. 3D dose distributions calculated with egs_brachy are also compared to ones obtained with the BrachyDose code. Well-defined simulations are used to characterize the effectiveness of many efficiency improving techniques, both as an indication of the usefulness of each technique and to find optimal strategies. Efficiencies and calculation times are characterized through single source simulations and simulations of idealized and typical treatments using various efficiency improving techniques. In general, egs_brachy shows agreement within uncertainties with previously published TG-43 source parameter values. 3D dose distributions from egs_brachy and BrachyDose agree at the sub-percent level. Efficiencies vary with radionuclide and source type, number of sources, phantom media, and voxel size. The combined effects of efficiency-improving techniques in egs_brachy lead to short calculation times: simulations approximating prostate and breast permanent implant (both with (2 mm)³ voxels) and eye plaque (with (1 mm)³ voxels) treatments take between 13 and 39 s, on a single 2.5 GHz Intel Xeon E5-2680 v3 processor core, to achieve 2% average statistical uncertainty on doses within the PTV. egs_brachy will be released as free and open source software to the research community.
Phase 1 Validation Testing and Simulation for the WEC-Sim Open Source Code
NASA Astrophysics Data System (ADS)
Ruehl, K.; Michelen, C.; Gunawan, B.; Bosma, B.; Simmons, A.; Lomonaco, P.
2015-12-01
WEC-Sim is an open source code to model the performance of wave energy converters (WECs) in operational waves, developed by Sandia and NREL and funded by the US DOE. The code is a time-domain modeling tool developed in MATLAB/SIMULINK using the multibody dynamics solver SimMechanics, and solves the WEC's governing equations of motion using the Cummins time-domain impulse response formulation in 6 degrees of freedom. The WEC-Sim code has undergone verification through code-to-code comparisons; however validation of the code has been limited to publicly available experimental data sets. While these data sets provide preliminary code validation, the experimental tests were not explicitly designed for code validation, and as a result are limited in their ability to validate the full functionality of the WEC-Sim code. Therefore, dedicated physical model tests for WEC-Sim validation have been performed. This presentation provides an overview of the WEC-Sim validation experimental wave tank tests performed at the Oregon State University's Directional Wave Basin at Hinsdale Wave Research Laboratory. Phase 1 of experimental testing was focused on device characterization and completed in Fall 2015. Phase 2 is focused on WEC performance and scheduled for Winter 2015/2016. These experimental tests were designed explicitly to validate the performance of the WEC-Sim code and its new feature additions. Upon completion, the WEC-Sim validation data set will be made publicly available to the wave energy community. For the physical model test, a controllable model of a floating wave energy converter has been designed and constructed. The instrumentation includes state-of-the-art devices to measure pressure fields, motions in 6 DOF, multi-axial load cells, torque transducers, position transducers, and encoders. The model also incorporates a fully programmable Power-Take-Off system which can be used to generate or absorb wave energy. Numerical simulations of the experiments using WEC-Sim will be presented. These simulations highlight the code features included in the latest release of WEC-Sim (v1.2), including: wave directionality, nonlinear hydrostatics and hydrodynamics, user-defined wave elevation time-series, state space radiation, and WEC-Sim compatibility with BEMIO (open source AQWA/WAMIT/NEMOH coefficient parser).
Benchmarking Defmod, an open source FEM code for modeling episodic fault rupture
NASA Astrophysics Data System (ADS)
Meng, Chunfang
2017-03-01
We present Defmod, an open source (linear) finite element code that enables us to efficiently model the crustal deformation due to (quasi-)static and dynamic loadings, poroelastic flow, viscoelastic flow and frictional fault slip. Ali (2015) provides the original code, introducing an implicit solver for the (quasi-)static problem and an explicit solver for the dynamic problem. The fault constraint is implemented via Lagrange multipliers. Meng (2015) combines these two solvers into a hybrid solver that uses failure criteria and friction laws to adaptively switch between the (quasi-)static state and the dynamic state. The code is capable of modeling episodic fault rupture driven by quasi-static loadings, e.g. due to reservoir fluid withdrawal or injection. Here, we focus on benchmarking the Defmod results against some established results.
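A toy sketch of the adaptive switching logic described above, under assumed stress values and friction parameters (not Defmod's implementation): a Coulomb failure check hands control from the implicit quasi-static solver to the explicit dynamic solver, and a slip-rate threshold hands it back.

    def coulomb_fails(tau, sigma_n, cohesion=1.0e6, mu=0.6):
        """True if the shear stress exceeds the Coulomb strength (compression positive)."""
        return abs(tau) > cohesion + mu * sigma_n

    # Hypothetical per-step fault state: (shear stress Pa, effective normal stress Pa, slip rate m/s)
    stress_history = [
        (0.5e6, 2.0e6, 0.0),    # loading, stays quasi-static
        (2.4e6, 2.0e6, 0.0),    # exceeds strength -> rupture
        (1.0e6, 2.0e6, 1e-2),   # dynamic slip ongoing
        (0.8e6, 2.0e6, 1e-8),   # slip arrested -> back to quasi-static
    ]

    state = "quasi-static"
    for step, (tau, sigma_n, slip_rate) in enumerate(stress_history):
        if state == "quasi-static" and coulomb_fails(tau, sigma_n):
            state = "dynamic"              # hand off to the explicit solver
        elif state == "dynamic" and slip_rate < 1e-6:
            state = "quasi-static"         # rupture arrested: resume implicit stepping
        print(step, state)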
Code TESLA for Modeling and Design of High-Power High-Efficiency Klystrons
2011-03-01
Code TESLA for modeling and design of high-power, high-efficiency klystrons (I.A. Chernyavskiy, SAIC, McLean, VA 22102, U.S.A.; S.J. Cooke; ...). ... and multiple-beam klystrons as high-power RF sources. These sources are widely used or proposed to be used in accelerators in the future. Comparisons of TESLA modelling results with experimental data for a few multiple-beam klystrons are shown.
Computational techniques in gamma-ray skyshine analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
George, D.L.
1988-12-01
Two computer codes were developed to analyze gamma-ray skyshine, the scattering of gamma photons by air molecules. A review of previous gamma-ray skyshine studies discusses several Monte Carlo codes, programs using a single-scatter model, and the MicroSkyshine program for microcomputers. A benchmark gamma-ray skyshine experiment performed at Kansas State University is also described. A single-scatter numerical model was presented which traces photons from the source to their first scatter, then applies a buildup factor along a direct path from the scattering point to a detector. The FORTRAN code SKY, developed with this model before the present study, was modified tomore » use Gauss quadrature, recent photon attenuation data and a more accurate buildup approximation. The resulting code, SILOGP, computes response from a point photon source on the axis of a silo, with and without concrete shielding over the opening. Another program, WALLGP, was developed using the same model to compute response from a point gamma source behind a perfectly absorbing wall, with and without shielding overhead. 29 refs., 48 figs., 13 tabs.« less
Kinetic modeling of x-ray laser-driven solid Al plasmas via particle-in-cell simulation
NASA Astrophysics Data System (ADS)
Royle, R.; Sentoku, Y.; Mancini, R. C.; Paraschiv, I.; Johzaki, T.
2017-06-01
Solid-density plasmas driven by intense x-ray free-electron laser (XFEL) radiation are seeded by sources of nonthermal photoelectrons and Auger electrons that ionize and heat the target via collisions. Simulation codes that are commonly used to model such plasmas, such as collisional-radiative (CR) codes, typically assume a Maxwellian distribution and thus instantaneous thermalization of the source electrons. In this study, we present a detailed description and initial applications of a collisional particle-in-cell code, picls, that has been extended with a self-consistent radiation transport model and Monte Carlo models for photoionization and KLL Auger ionization, enabling the fully kinetic simulation of XFEL-driven plasmas. The code is used to simulate two experiments previously performed at the Linac Coherent Light Source investigating XFEL-driven solid-density Al plasmas. It is shown that picls-simulated pulse transmissions using the Ecker-Kröll continuum-lowering model agree much better with measurements than do simulations using the Stewart-Pyatt model. Good quantitative agreement is also found between the time-dependent picls results and those of analogous simulations by the CR code scfly, which was used in the analysis of the experiments to accurately reproduce the observed Kα emissions and pulse transmissions. Finally, it is shown that the effects of the nonthermal electrons are negligible for the conditions of the particular experiments under investigation.
Constructing graph models for software system development and analysis
NASA Astrophysics Data System (ADS)
Pogrebnoy, Andrey V.
2017-01-01
We propose a concept for creating instrumentation that supports the rationale for functional and structural decisions during software system (SS) development. We propose to develop the SS simultaneously on two models - a functional model (FM) and a structural model (SM). The FM is the source code of the SS. An adequate representation of the FM in the form of a graph model (GM) is generated automatically and is called the SM. The problem of creating and visualizing the GM is considered from the point of view of applying it as a uniform platform for adequately representing the SS source code. We propose three levels of GM detail: GM1 - for visual analysis of the source code and for SS version control, GM2 - for resource optimization and analysis of connections between SS components, GM3 - for analysis of the run-time (dynamic) behaviour of the SS. The paper includes examples of constructing all levels of the GM.
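For readers unfamiliar with how a functional model (source code) can be mapped automatically to a graph model, the sketch below builds a crude function-level call graph from Python source using the standard `ast` module. It is only an illustration of the general idea of deriving a GM from an FM; it is not the tool described in the paper, which targets different levels of detail and visualization.

```python
import ast

def call_graph(source: str) -> dict:
    """Return {function_name: set_of_called_names} for functions defined in `source`."""
    tree = ast.parse(source)
    graph = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            calls = set()
            for sub in ast.walk(node):
                # Only record simple `name(...)` calls; attribute calls are skipped for brevity.
                if isinstance(sub, ast.Call) and isinstance(sub.func, ast.Name):
                    calls.add(sub.func.id)
            graph[node.name] = calls
    return graph

# Hypothetical two-function module used purely as illustration.
example = """
def load(path):
    return open(path).read()

def main():
    data = load("in.txt")
    print(data)
"""
print(call_graph(example))   # e.g. {'load': {'open'}, 'main': {'load', 'print'}}
```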
Parser for Sabin-to-Mahoney Transition Model of Quasispecies Replication
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ecale Zhou, Carol
2016-01-03
This code is a data parser for preparing output from the Qspp agent-based stochastic simulation model for plotting in Excel. This code is specific to a set of simulations that were run for the purpose of preparing data for a publication. It is necessary to make this code open-source in order to publish the model code (Qspp), which has already been released. There is a necessity of assuring that results from using Qspp for a publication
Time Resolved Phonon Spectroscopy, Version 1.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goett, Johnny; Zhu, Brian
TRPS code was developed for the project "Time Resolved Phonon Spectroscopy". Routines contained in this piece of software were specially created to model phonon generation and tracking within materials that interact with ionizing radiation, particularly applicable to the modeling of cryogenic radiation detectors for dark matter and neutrino research. These routines were created to link seamlessly with the open source Geant4 framework for the modeling of radiation transport in matter, with the explicit intent of open sourcing them for eventual integration into that code base.
A new Bayesian Earthquake Analysis Tool (BEAT)
NASA Astrophysics Data System (ADS)
Vasyura-Bathke, Hannes; Dutta, Rishabh; Jónsson, Sigurjón; Mai, Martin
2017-04-01
Modern earthquake source estimation studies increasingly use non-linear optimization strategies to estimate kinematic rupture parameters, often considering geodetic and seismic data jointly. However, the optimization process is complex and consists of several steps that need to be followed in the earthquake parameter estimation procedure. These include pre-describing or modeling the fault geometry, calculating the Green's Functions (often assuming a layered elastic half-space), and estimating the distributed final slip and possibly other kinematic source parameters. Recently, Bayesian inference has become popular for estimating posterior distributions of earthquake source model parameters given measured/estimated/assumed data and model uncertainties. For instance, some research groups consider uncertainties of the layered medium and propagate these to the source parameter uncertainties. Other groups make use of informative priors to reduce the model parameter space. In addition, innovative sampling algorithms have been developed that efficiently explore the often high-dimensional parameter spaces. Compared to earlier studies, these improvements have resulted in overall more robust source model parameter estimates that include uncertainties. However, the computational demands of these methods are high and estimation codes are rarely distributed along with the published results. Even if codes are made available, it is often difficult to assemble them into a single optimization framework as they are typically coded in different programming languages. Therefore, further progress and future applications of these methods/codes are hampered, while reproducibility and validation of results have become essentially impossible. In the spirit of providing open-access and modular codes to facilitate progress and reproducible research in earthquake source estimations, we undertook the effort of producing BEAT, a Python package that comprises all the above-mentioned features in one single programming environment. The package is built on top of the pyrocko seismological toolbox (www.pyrocko.org) and makes use of the pymc3 module for Bayesian statistical model fitting. BEAT is an open-source package (https://github.com/hvasbath/beat) and we encourage and solicit contributions to the project. In this contribution, we present our strategy for developing BEAT, show application examples, and discuss future developments.
Predicting Attack-Prone Components with Source Code Static Analyzers
2009-05-01
... models to determine if additional metrics are required to increase the accuracy of the model: non-security SCSA warnings, code churn and size, the count of faults found manually during development, and the measure of coupling between components. The dependent variable is the count of vulnerabilities reported by testing and those found in the field. We evaluated our model on three commercial telecommunications ...
Joint Source-Channel Decoding of Variable-Length Codes with Soft Information: A Survey
NASA Astrophysics Data System (ADS)
Guillemot, Christine; Siohan, Pierre
2005-12-01
Multimedia transmission over time-varying wireless channels presents a number of challenges beyond existing capabilities conceived so far for third-generation networks. Efficient quality-of-service (QoS) provisioning for multimedia on these channels may in particular require a loosening and a rethinking of the layer separation principle. In that context, joint source-channel decoding (JSCD) strategies have gained attention as viable alternatives to separate decoding of source and channel codes. A statistical framework based on hidden Markov models (HMM) capturing dependencies between the source and channel coding components sets the foundation for optimal design of techniques of joint decoding of source and channel codes. The problem has been largely addressed in the research community, by considering both fixed-length codes (FLC) and variable-length source codes (VLC) widely used in compression standards. Joint source-channel decoding of VLC raises specific difficulties due to the fact that the segmentation of the received bitstream into source symbols is random. This paper makes a survey of recent theoretical and practical advances in the area of JSCD with soft information of VLC-encoded sources. It first describes the main paths followed for designing efficient estimators for VLC-encoded sources, the key component of the JSCD iterative structure. It then presents the main issues involved in the application of the turbo principle to JSCD of VLC-encoded sources as well as the main approaches to source-controlled channel decoding. This survey terminates by performance illustrations with real image and video decoding systems.
NASA Technical Reports Server (NTRS)
Clark, Kenneth; Watney, Garth; Murray, Alexander; Benowitz, Edward
2007-01-01
A computer program translates Unified Modeling Language (UML) representations of state charts into source code in the C, C++, and Python computing languages. (State charts signify graphical descriptions of states and state transitions of a spacecraft or other complex system.) The UML representations constituting the input to this program are generated by using a UML-compliant graphical design program to draw the state charts. The generated source code is consistent with the "quantum programming" approach, which is so named because it involves discrete states and state transitions that have features in common with states and state transitions in quantum mechanics. Quantum programming enables efficient implementation of state charts, suitable for real-time embedded flight software. In addition to source code, the autocoder program generates a graphical-user-interface (GUI) program that, in turn, generates a display of state transitions in response to events triggered by the user. The GUI program is wrapped around, and can be used to exercise the state-chart behavior of, the generated source code. Once the expected state-chart behavior is confirmed, the generated source code can be augmented with a software interface to the rest of the software with which the source code is required to interact.
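To make the idea of state-chart code generation concrete, the sketch below shows the kind of table-driven, event-dispatching state machine that such an autocoder typically emits, written here by hand in Python rather than generated, and not the actual quantum-programming C/C++ output of the NASA tool; the states and events are invented for illustration.

```python
class StateMachine:
    """Minimal event-driven state machine of the kind a state-chart autocoder emits."""

    def __init__(self, initial, transitions):
        self.state = initial
        # transitions maps (current_state, event) -> (next_state, action)
        self.transitions = transitions

    def dispatch(self, event):
        key = (self.state, event)
        if key not in self.transitions:
            return False                       # event ignored in this state
        next_state, action = self.transitions[key]
        if action:
            action()                           # entry/transition action
        self.state = next_state
        return True

# Hypothetical spacecraft-mode chart, purely illustrative.
transitions = {
    ("SAFE",    "cmd_activate"): ("CRUISE",  lambda: print("entering CRUISE")),
    ("CRUISE",  "cmd_science"):  ("SCIENCE", lambda: print("entering SCIENCE")),
    ("CRUISE",  "fault"):        ("SAFE",    lambda: print("fault -> SAFE")),
    ("SCIENCE", "fault"):        ("SAFE",    lambda: print("fault -> SAFE")),
}

sm = StateMachine("SAFE", transitions)
for event in ("cmd_activate", "cmd_science", "fault"):
    sm.dispatch(event)
print("final state:", sm.state)   # SAFE
```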
Insights Gained from Forensic Analysis with MELCOR of the Fukushima-Daiichi Accidents.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andrews, Nathan C.; Gauntt, Randall O.
Since the accidents at Fukushima-Daiichi, Sandia National Laboratories has been modeling these accident scenarios using the severe accident analysis code, MELCOR. MELCOR is a widely used computer code developed at Sandia National Laboratories since ~1982 for the U.S. Nuclear Regulatory Commission. Insights from the modeling of these accidents are being used to better inform future code development and potentially improved accident management. To date, our necessity to better capture in-vessel thermal-hydraulic and ex-vessel melt coolability and concrete interactions has led to the implementation of new models. The most recent analyses, presented in this paper, have been in support of the Organization for Economic Cooperation and Development Nuclear Energy Agency's (OECD/NEA) Benchmark Study of the Accident at the Fukushima Daiichi Nuclear Power Station (BSAF) Project. The goal of this project is to accurately capture the source term from all three releases and then model the atmospheric dispersion. In order to do this, a forensic approach is being used in which available plant data and release timings are being used to inform the modeled MELCOR accident scenario. For example, containment failures, core slumping events and lower head failure timings are all enforced parameters in these analyses. This approach is fundamentally different from a blind code assessment analysis often used in standard problem exercises. The timings of these events are informed by representative spikes or decreases in plant data. The combination of improvements to the MELCOR source code resulting from previous accident analyses and this forensic approach has allowed Sandia to generate representative and plausible source terms for all three accidents at Fukushima Daiichi out to three weeks after the accident to capture both early and late releases. In particular, using the source terms developed by MELCOR as input to the MACCS software code, which models atmospheric dispersion and deposition, we are able to reasonably capture the deposition of radionuclides to the northwest of the reactor site.
NASA Astrophysics Data System (ADS)
Larmat, C. S.; Rougier, E.; Delorey, A.; Steedman, D. W.; Bradley, C. R.
2016-12-01
The goal of the Source Physics Experiment (SPE) is to bring empirical and theoretical advances to the problem of detection and identification of underground nuclear explosions. For this, the SPE program includes a strong modeling effort based on first principles calculations with the challenge to capture both the source and near-source processes and those taking place later in time as seismic waves propagate within complex 3D geologic environments. In this paper, we report on results of modeling that uses hydrodynamic simulation codes (Abaqus and CASH) coupled with a 3D full waveform propagation code, SPECFEM3D. For modeling the near source region, we employ a fully-coupled Euler-Lagrange (CEL) modeling capability with a new continuum-based visco-plastic fracture model for simulation of damage processes, called AZ_Frac. These capabilities produce high-fidelity models of various factors believed to be key in the generation of seismic waves: the explosion dynamics, a weak grout-filled borehole, the surrounding jointed rock, and damage creation and deformations happening around the source and the free surface. SPECFEM3D, based on the Spectral Element Method (SEM), is a direct numerical method for full wave modeling with mathematical accuracy. The coupling interface consists of a series of grid points of the SEM mesh situated inside of the hydrodynamic code's domain. Displacement time series at these points are computed using output data from CASH or Abaqus (by interpolation if needed) and fed into the time marching scheme of SPECFEM3D. We will present validation tests with Sharpe's model and comparisons of waveforms modeled with Rg waves (2-8 Hz) that were recorded up to 2 km for SPE. We especially show effects of the local topography, velocity structure and spallation. Our models predict smaller amplitudes of Rg waves for the first five SPE shots compared to pure elastic models such as Denny & Johnson (1991).
MCNP capabilities for nuclear well logging calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Forster, R.A.; Little, R.C.; Briesmeister, J.F.
The Los Alamos Radiation Transport Code System (LARTCS) consists of state-of-the-art Monte Carlo and discrete ordinates transport codes and data libraries. This paper discusses how the general-purpose continuous-energy Monte Carlo code MCNP (Monte Carlo Neutron Photon), part of the LARTCS, provides a computational predictive capability for many applications of interest to the nuclear well logging community. The generalized three-dimensional geometry of MCNP is well suited for borehole-tool models. SABRINA, another component of the LARTCS, is a graphics code that can be used to interactively create a complex MCNP geometry. Users can define many source and tally characteristics with standard MCNP features. The time-dependent capability of the code is essential when modeling pulsed sources. Problems with neutrons, photons, and electrons as either single particle or coupled particles can be calculated with MCNP. The physics of neutron and photon transport and interactions is modeled in detail using the latest available cross-section data.
Electron-beam-ion-source (EBIS) modeling progress at FAR-TECH, Inc
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, J. S., E-mail: kim@far-tech.com; Zhao, L.; Spencer, J. A.
FAR-TECH, Inc. has been developing a numerical modeling tool for Electron-Beam-Ion-Sources (EBISs). The tool consists of two codes. One is the Particle-Beam-Gun-Simulation (PBGUNS) code to simulate a steady state electron beam and the other is the EBIS-Particle-In-Cell (EBIS-PIC) code to simulate ion charge breeding with the electron beam. PBGUNS, a 2D (r,z) electron gun and ion source simulation code, has been extended for efficient modeling of EBISs and the work was presented previously. EBIS-PIC is a space charge self-consistent PIC code and is written to simulate charge breeding in an axisymmetric 2D (r,z) device allowing for full three-dimensional ion dynamics. This 2D code has been successfully benchmarked with Test-EBIS measurements at Brookhaven National Laboratory. For long timescale (< tens of ms) ion charge breeding, the 2D EBIS-PIC simulations take a long computational time making the simulation less practical. Most of the EBIS charge breeding, however, may be modeled in 1D (r) as the axial dependence of the ion dynamics may be ignored in the trap. Where 1D approximations are valid, simulations of charge breeding in an EBIS over long time scales become possible, using EBIS-PIC together with PBGUNS. Initial 1D results are presented. The significance of the magnetic field to ion dynamics, ion cooling effects due to collisions with neutral gas, and the role of Coulomb collisions are presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yidong Xia; Mitch Plummer; Robert Podgorney
2016-02-01
Performance of the heat production process over a 30-year period is assessed in a conceptual EGS model with a geothermal gradient of 65 K per km depth in the reservoir. Water is circulated through a pair of parallel wells connected by a set of single large wing fractures. The results indicate that the desirable output electric power rate and lifespan could be obtained under suitable material properties and system parameters. A sensitivity analysis on some design constraints and operation parameters indicates that 1) the fracture horizontal spacing has a profound effect on the long-term performance of heat production, 2) the downward deviation angle for the parallel doublet wells may help overcome the difficulty of vertical drilling to reach a favorable production temperature, and 3) the thermal energy production rate and lifespan depend closely on the water mass flow rate. The results also indicate that the heat production can be improved when the horizontal fracture spacing, well deviation angle, and production flow rate are under reasonable conditions. To conduct the reservoir modeling and simulations, an open-source, finite element based, fully implicit, fully coupled hydrothermal code, namely FALCON, has been developed and used in this work. Compared with most other existing codes that are either closed-source or commercially available in this area, this new open-source code has demonstrated a code development strategy that aims to provide unparalleled ease of user customization and multi-physics coupling. Test results have shown that the FALCON code is able to complete the long-term tests efficiently and accurately, thanks to the state-of-the-art nonlinear and linear solver algorithms implemented in the code.
A Measurement and Simulation Based Methodology for Cache Performance Modeling and Tuning
NASA Technical Reports Server (NTRS)
Waheed, Abdul; Yan, Jerry; Saini, Subhash (Technical Monitor)
1998-01-01
We present a cache performance modeling methodology that facilitates the tuning of uniprocessor cache performance for applications executing on shared memory multiprocessors by accurately predicting the effects of source code level modifications. Measurements on a single processor are initially used for identifying parts of code where cache utilization improvements may significantly impact the overall performance. Cache simulation based on trace-driven techniques can be carried out without gathering detailed address traces. Minimal runtime information for modeling cache performance of a selected code block includes: base virtual addresses of arrays, virtual addresses of variables, and loop bounds for that code block. The rest of the information is obtained from the source code. We show that the cache performance predictions are as reliable as those obtained through trace-driven simulations. This technique is particularly helpful to the exploration of various "what-if" scenarios regarding the cache performance impact for alternative code structures. We explain and validate this methodology using a simple matrix-matrix multiplication program. We then apply this methodology to predict and tune the cache performance of two realistic scientific applications taken from the Computational Fluid Dynamics (CFD) domain.
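As a toy version of the kind of information the methodology needs (array base addresses, loop bounds, cache geometry), the sketch below counts hits and misses for a direct-mapped cache while walking a 2-D array in row-major and column-major order. It is only a minimal address-trace model under assumed cache parameters, not the paper's measurement-plus-simulation framework.

```python
def simulate(accesses, n_lines=256, line_bytes=64):
    """Direct-mapped cache: return (hits, misses) for a sequence of byte addresses."""
    tags = [None] * n_lines
    hits = misses = 0
    for addr in accesses:
        block = addr // line_bytes          # cache block (line) number
        index = block % n_lines             # direct-mapped set index
        if tags[index] == block:
            hits += 1
        else:
            misses += 1
            tags[index] = block
    return hits, misses

# Hypothetical double-precision array A[512][512] at an assumed base address.
N, elem, base = 512, 8, 0x10000000
row_major = (base + (i * N + j) * elem for i in range(N) for j in range(N))
col_major = (base + (i * N + j) * elem for j in range(N) for i in range(N))
print("row-major    (hits, misses):", simulate(row_major))
print("column-major (hits, misses):", simulate(col_major))
```

The contrast between the two traversal orders is exactly the kind of source-level "what-if" the methodology is meant to predict without a full address trace.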
Alternative modeling methods for plasma-based Rf ion sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Veitzer, Seth A., E-mail: veitzer@txcorp.com; Kundrapu, Madhusudhan, E-mail: madhusnk@txcorp.com; Stoltz, Peter H., E-mail: phstoltz@txcorp.com
Rf-driven ion sources for accelerators and many industrial applications benefit from detailed numerical modeling and simulation of plasma characteristics. For instance, modeling of the Spallation Neutron Source (SNS) internal antenna H⁻ source has indicated that a large plasma velocity is induced near bends in the antenna where structural failures are often observed. This could lead to improved designs and ion source performance based on simulation and modeling. However, there are significant separations of time and spatial scales inherent to Rf-driven plasma ion sources, which makes it difficult to model ion sources with explicit, kinetic Particle-In-Cell (PIC) simulation codes. In particular, if both electron and ion motions are to be explicitly modeled, then the simulation time step must be very small, and total simulation times must be large enough to capture the evolution of the plasma ions, as well as extending over many Rf periods. Additional physics processes such as plasma chemistry and surface effects such as secondary electron emission increase the computational requirements in such a way that even fully parallel explicit PIC models cannot be used. One alternative method is to develop fluid-based codes coupled with electromagnetics in order to model ion sources. Time-domain fluid models can simulate plasma evolution, plasma chemistry, and surface physics models with reasonable computational resources by not explicitly resolving electron motions, which thereby leads to an increase in the time step. This is achieved by solving fluid motions coupled with electromagnetics using reduced-physics models, such as single-temperature magnetohydrodynamics (MHD), extended, gas dynamic, and Hall MHD, and two-fluid MHD models. We show recent results on modeling the internal antenna H⁻ ion source for the SNS at Oak Ridge National Laboratory using the fluid plasma modeling code USim. We demonstrate plasma temperature equilibration in two-temperature MHD models for the SNS source and present simulation results demonstrating plasma evolution over many Rf periods for different plasma temperatures. We perform the calculations in parallel, on unstructured meshes, using finite-volume solvers in order to obtain results in reasonable time.
Alternative modeling methods for plasma-based Rf ion sources.
Veitzer, Seth A; Kundrapu, Madhusudhan; Stoltz, Peter H; Beckwith, Kristian R C
2016-02-01
Rf-driven ion sources for accelerators and many industrial applications benefit from detailed numerical modeling and simulation of plasma characteristics. For instance, modeling of the Spallation Neutron Source (SNS) internal antenna H(-) source has indicated that a large plasma velocity is induced near bends in the antenna where structural failures are often observed. This could lead to improved designs and ion source performance based on simulation and modeling. However, there are significant separations of time and spatial scales inherent to Rf-driven plasma ion sources, which makes it difficult to model ion sources with explicit, kinetic Particle-In-Cell (PIC) simulation codes. In particular, if both electron and ion motions are to be explicitly modeled, then the simulation time step must be very small, and total simulation times must be large enough to capture the evolution of the plasma ions, as well as extending over many Rf periods. Additional physics processes such as plasma chemistry and surface effects such as secondary electron emission increase the computational requirements in such a way that even fully parallel explicit PIC models cannot be used. One alternative method is to develop fluid-based codes coupled with electromagnetics in order to model ion sources. Time-domain fluid models can simulate plasma evolution, plasma chemistry, and surface physics models with reasonable computational resources by not explicitly resolving electron motions, which thereby leads to an increase in the time step. This is achieved by solving fluid motions coupled with electromagnetics using reduced-physics models, such as single-temperature magnetohydrodynamics (MHD), extended, gas dynamic, and Hall MHD, and two-fluid MHD models. We show recent results on modeling the internal antenna H(-) ion source for the SNS at Oak Ridge National Laboratory using the fluid plasma modeling code USim. We demonstrate plasma temperature equilibration in two-temperature MHD models for the SNS source and present simulation results demonstrating plasma evolution over many Rf periods for different plasma temperatures. We perform the calculations in parallel, on unstructured meshes, using finite-volume solvers in order to obtain results in reasonable time.
Matsumoto, Masaki; Yamanaka, Tsuneyasu; Hayakawa, Nobuhiro; Iwai, Satoshi; Sugiura, Nobuyuki
2015-03-01
This paper describes the Basic Radionuclide vAlue for Internal Dosimetry (BRAID) code, which was developed to calculate the time-dependent activity distribution in each organ and tissue characterised by the biokinetic compartmental models provided by the International Commission on Radiological Protection (ICRP). Translocation from one compartment to the next is taken to be governed by first-order kinetics, which is formulated as a system of first-order differential equations. In the source program of this code, the conservation equations are solved for the mass balance that describes the transfer of a radionuclide between compartments. This code is applicable to the evaluation of the radioactivity of nuclides in an organ or tissue without modification of the source program. It is also possible to easily handle revisions of the biokinetic model or the application of a user-defined model, because this code is designed so that all information on the biokinetic model structure is imported from an input file. The sample calculations are performed with the ICRP model, and the results are compared with the analytic solutions using simple models. It is suggested that this code provides sufficient results for dose estimation and interpretation of monitoring data. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
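A minimal illustration of the first-order compartmental kinetics described above is given below: activities are propagated with dA/dt = M·A, where M holds the transfer rates between compartments and the physical decay constant on its diagonal. The two-compartment structure and all rate values are hypothetical placeholders, not an ICRP biokinetic model or the BRAID input format.

```python
import numpy as np

# Hypothetical 2-compartment model: blood -> organ -> excretion, plus radioactive decay.
lam = np.log(2) / 8.02          # physical decay constant (1/d), e.g. an ~8-day half-life
k_blood_to_organ = 0.5          # first-order transfer rates (1/d), placeholder values
k_organ_out = 0.05

# dA/dt = M A, with transfers off-diagonal and losses (transfer out + decay) on the diagonal.
M = np.array([[-(k_blood_to_organ + lam), 0.0],
              [k_blood_to_organ,          -(k_organ_out + lam)]])

A = np.array([1.0, 0.0])        # initial activity: everything in blood (arbitrary units)
dt, days = 0.01, 30.0
for _ in range(int(days / dt)): # simple forward-Euler integration of the balance equations
    A = A + dt * (M @ A)

print(f"after {days:.0f} d: blood = {A[0]:.4f}, organ = {A[1]:.4f}")
```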
QUANTUM ESPRESSO: a modular and open-source software project for quantum simulations of materials.
Giannozzi, Paolo; Baroni, Stefano; Bonini, Nicola; Calandra, Matteo; Car, Roberto; Cavazzoni, Carlo; Ceresoli, Davide; Chiarotti, Guido L; Cococcioni, Matteo; Dabo, Ismaila; Dal Corso, Andrea; de Gironcoli, Stefano; Fabris, Stefano; Fratesi, Guido; Gebauer, Ralph; Gerstmann, Uwe; Gougoussis, Christos; Kokalj, Anton; Lazzeri, Michele; Martin-Samos, Layla; Marzari, Nicola; Mauri, Francesco; Mazzarello, Riccardo; Paolini, Stefano; Pasquarello, Alfredo; Paulatto, Lorenzo; Sbraccia, Carlo; Scandolo, Sandro; Sclauzero, Gabriele; Seitsonen, Ari P; Smogunov, Alexander; Umari, Paolo; Wentzcovitch, Renata M
2009-09-30
QUANTUM ESPRESSO is an integrated suite of computer codes for electronic-structure calculations and materials modeling, based on density-functional theory, plane waves, and pseudopotentials (norm-conserving, ultrasoft, and projector-augmented wave). The acronym ESPRESSO stands for opEn Source Package for Research in Electronic Structure, Simulation, and Optimization. It is freely available to researchers around the world under the terms of the GNU General Public License. QUANTUM ESPRESSO builds upon newly-restructured electronic-structure codes that have been developed and tested by some of the original authors of novel electronic-structure algorithms and applied in the last twenty years by some of the leading materials modeling groups worldwide. Innovation and efficiency are still its main focus, with special attention paid to massively parallel architectures, and a great effort being devoted to user friendliness. QUANTUM ESPRESSO is evolving towards a distribution of independent and interoperable codes in the spirit of an open-source project, where researchers active in the field of electronic-structure calculations are encouraged to participate in the project by contributing their own codes or by implementing their own ideas into existing codes.
Kim, Sangroh; Yoshizumi, Terry T; Yin, Fang-Fang; Chetty, Indrin J
2013-04-21
Currently, the BEAMnrc/EGSnrc Monte Carlo (MC) system does not provide a spiral CT source model for the simulation of spiral CT scanning. We developed and validated a spiral CT phase-space source model in the BEAMnrc/EGSnrc system. The spiral phase-space source model was implemented in the DOSXYZnrc user code of the BEAMnrc/EGSnrc system by analyzing the geometry of spiral CT scan-scan range, initial angle, rotational direction, pitch, slice thickness, etc. Table movement was simulated by changing the coordinates of the isocenter as a function of beam angles. Some parameters such as pitch, slice thickness and translation per rotation were also incorporated into the model to make the new phase-space source model, designed specifically for spiral CT scan simulations. The source model was hard-coded by modifying the 'ISource = 8: Phase-Space Source Incident from Multiple Directions' in the srcxyznrc.mortran and dosxyznrc.mortran files in the DOSXYZnrc user code. In order to verify the implementation, spiral CT scans were simulated in a CT dose index phantom using the validated x-ray tube model of a commercial CT simulator for both the original multi-direction source (ISOURCE = 8) and the new phase-space source model in the DOSXYZnrc system. Then the acquired 2D and 3D dose distributions were analyzed with respect to the input parameters for various pitch values. In addition, surface-dose profiles were also measured for a patient CT scan protocol using radiochromic film and were compared with the MC simulations. The new phase-space source model was found to simulate the spiral CT scanning in a single simulation run accurately. It also produced the equivalent dose distribution of the ISOURCE = 8 model for the same CT scan parameters. The MC-simulated surface profiles were well matched to the film measurement overall within 10%. The new spiral CT phase-space source model was implemented in the BEAMnrc/EGSnrc system. This work will be beneficial in estimating the spiral CT scan dose in the BEAMnrc/EGSnrc system.
NASA Astrophysics Data System (ADS)
Kim, Sangroh; Yoshizumi, Terry T.; Yin, Fang-Fang; Chetty, Indrin J.
2013-04-01
Currently, the BEAMnrc/EGSnrc Monte Carlo (MC) system does not provide a spiral CT source model for the simulation of spiral CT scanning. We developed and validated a spiral CT phase-space source model in the BEAMnrc/EGSnrc system. The spiral phase-space source model was implemented in the DOSXYZnrc user code of the BEAMnrc/EGSnrc system by analyzing the geometry of spiral CT scan—scan range, initial angle, rotational direction, pitch, slice thickness, etc. Table movement was simulated by changing the coordinates of the isocenter as a function of beam angles. Some parameters such as pitch, slice thickness and translation per rotation were also incorporated into the model to make the new phase-space source model, designed specifically for spiral CT scan simulations. The source model was hard-coded by modifying the ‘ISource = 8: Phase-Space Source Incident from Multiple Directions’ in the srcxyznrc.mortran and dosxyznrc.mortran files in the DOSXYZnrc user code. In order to verify the implementation, spiral CT scans were simulated in a CT dose index phantom using the validated x-ray tube model of a commercial CT simulator for both the original multi-direction source (ISOURCE = 8) and the new phase-space source model in the DOSXYZnrc system. Then the acquired 2D and 3D dose distributions were analyzed with respect to the input parameters for various pitch values. In addition, surface-dose profiles were also measured for a patient CT scan protocol using radiochromic film and were compared with the MC simulations. The new phase-space source model was found to simulate the spiral CT scanning in a single simulation run accurately. It also produced the equivalent dose distribution of the ISOURCE = 8 model for the same CT scan parameters. The MC-simulated surface profiles were well matched to the film measurement overall within 10%. The new spiral CT phase-space source model was implemented in the BEAMnrc/EGSnrc system. This work will be beneficial in estimating the spiral CT scan dose in the BEAMnrc/EGSnrc system.
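The coupling between gantry angle and table (isocenter) position that the spiral phase-space source model encodes can be sketched as below. This is only the simple geometry, table travel per rotation = pitch × collimation, evaluated for hypothetical scan parameters; it is not the actual mortran modification of the DOSXYZnrc source routines described above.

```python
import numpy as np

def spiral_isocenter_z(start_angle_deg, total_rotations, pitch, collimation_mm,
                       z_start_mm=0.0, samples_per_rotation=360, direction=+1):
    """Isocenter z-coordinate as a function of beam angle for a spiral scan.
    Simplified geometry: table travel per rotation = pitch * collimation."""
    n = total_rotations * samples_per_rotation
    angles = start_angle_deg + direction * np.linspace(0.0, 360.0 * total_rotations, n)
    travel_per_deg = pitch * collimation_mm / 360.0
    z = z_start_mm + np.abs(angles - start_angle_deg) * travel_per_deg
    return angles, z

# Hypothetical scan: pitch 1.375, 10 mm collimation, 4 rotations.
angles, z = spiral_isocenter_z(start_angle_deg=0.0, total_rotations=4,
                               pitch=1.375, collimation_mm=10.0)
print(f"table travel over {abs(angles[-1]):.0f} deg of rotation: {z[-1]:.1f} mm")
```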
NASA Astrophysics Data System (ADS)
Han, B. X.; Welton, R. F.; Stockli, M. P.; Luciano, N. P.; Carmichael, J. R.
2008-02-01
Beam simulation codes PBGUNS, SIMION, and LORENTZ-3D were evaluated by modeling the well-diagnosed SNS base line ion source and low energy beam transport (LEBT) system. Then, an investigation was conducted using these codes to assist our ion source and LEBT development effort which is directed at meeting the SNS operational and also the power-upgrade project goals. A high-efficiency H- extraction system as well as magnetic and electrostatic LEBT configurations capable of transporting up to 100mA is studied using these simulation tools.
FEMFLOW3D; a finite-element program for the simulation of three-dimensional aquifers; version 1.0
Durbin, Timothy J.; Bond, Linda D.
1998-01-01
This document also includes model validation, source code, and example input and output files. Model validation was performed using four test problems. For each test problem, the results of a model simulation with FEMFLOW3D were compared with either an analytic solution or the results of an independent numerical approach. The source code, written in the ANSI x3.9-1978 FORTRAN standard, and the complete input and output of an example problem are listed in the appendixes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, Charley; Kamboj, Sunita; Wang, Cheng
2015-09-01
This handbook is an update of the 1993 version of the Data Collection Handbook and the Radionuclide Transfer Factors Report to support modeling the impact of radioactive material in soil. Many new parameters have been added to the RESRAD Family of Codes, and new measurement methodologies are available. A detailed review of available parameter databases was conducted in preparation of this new handbook. This handbook is a companion document to the user manuals when using the RESRAD (onsite) and RESRAD-OFFSITE code. It can also be used for RESRAD-BUILD code because some of the building-related parameters are included in this handbook. The RESRAD (onsite) has been developed for implementing U.S. Department of Energy Residual Radioactive Material Guidelines. Hydrogeological, meteorological, geochemical, geometrical (size, area, depth), crops and livestock, human intake, source characteristic, and building characteristic parameters are used in the RESRAD (onsite) code. The RESRAD-OFFSITE code is an extension of the RESRAD (onsite) code and can also model the transport of radionuclides to locations outside the footprint of the primary contamination. This handbook discusses parameter definitions, typical ranges, variations, and measurement methodologies. It also provides references for sources of additional information. Although this handbook was developed primarily to support the application of RESRAD Family of Codes, the discussions and values are valid for use of other pathway analysis models and codes.
MELCOR/CONTAIN LMR Implementation Report-Progress FY15
DOE Office of Scientific and Technical Information (OSTI.GOV)
Humphries, Larry L.; Louie, David L.Y.
2016-01-01
This report describes the progress of the CONTAIN-LMR sodium physics and chemistry models to be implemented into MELCOR 2.1. It also describes the progress to implement these models into CONTAIN 2 as well. In the past two years, the implementation included the addition of sodium equations of state and sodium properties from two different sources. The first source is based on the previous work done by Idaho National Laboratory by modifying MELCOR to include a liquid lithium equation of state as a working fluid to model nuclear fusion safety research. The second source uses properties generated for the SIMMER code. Testing and results from this implementation of sodium properties are given. In addition, the CONTAIN-LMR code was derived from an early version of the CONTAIN code. Many physical models that were developed since this early version of CONTAIN are not captured by this early code version. Therefore, CONTAIN 2 is being updated with the sodium models in CONTAIN-LMR in order to facilitate verification of these models with the MELCOR code. Although CONTAIN 2, which represents the latest development of CONTAIN, now contains many of the sodium specific models, this work is not complete due to challenges from the lower cell architecture in CONTAIN 2, which is different from CONTAIN-LMR. This implementation should be completed in the coming year, while sodium models from CONTAIN-LMR are being integrated into MELCOR. For testing, CONTAIN decks have been developed for verification and validation use. In terms of implementing the sodium models into MELCOR, a separate sodium model branch was created for this document. Because of massive development in the mainstream MELCOR 2.1 code and the requirement to merge the latest code version into this branch, the integration of the sodium models was re-directed to implement the sodium chemistry models first. This change led to delays of the actual implementation. To aid in the future implementation of sodium models, a new sodium chemistry package was created. Thus, reporting for the implementation of the sodium chemistry is discussed in this report.
Sun, Jun; Duan, Yizhou; Li, Jiangtao; Liu, Jiaying; Guo, Zongming
2013-01-01
In the first part of this paper, we derive a source model describing the relationship between the rate, distortion, and quantization steps of the dead-zone plus uniform threshold scalar quantizers with nearly uniform reconstruction quantizers for generalized Gaussian distribution. This source model consists of rate-quantization, distortion-quantization (D-Q), and distortion-rate (D-R) models. In this part, we first rigorously confirm the accuracy of the proposed source model by comparing the calculated results with the coding data of JM 16.0. Efficient parameter estimation strategies are then developed to better employ this source model in our two-pass rate control method for H.264 variable bit rate coding. Based on our D-Q and D-R models, the proposed method is of high stability, low complexity and is easy to implement. Extensive experiments demonstrate that the proposed method achieves: 1) average peak signal-to-noise ratio variance of only 0.0658 dB, compared to 1.8758 dB of JM 16.0's method, with an average rate control error of 1.95% and 2) significant improvement in smoothing the video quality compared with the latest two-pass rate control method.
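To illustrate how an R-Q source model is used inside two-pass rate control (using a simpler model than the dead-zone/uniform-threshold GGD model derived in the paper), the sketch below fits the classical quadratic model R(Q) = a/Q + b/Q² to first-pass measurements and then inverts it to pick a quantization step for a target bit budget. The first-pass rate points are invented.

```python
import numpy as np

# First pass: hypothetical (quantization step, bits) measurements for one frame.
Q_obs = np.array([10.0, 16.0, 24.0, 32.0])
R_obs = np.array([52000.0, 30000.0, 19000.0, 14000.0])

# Fit the classical quadratic rate model R(Q) = a/Q + b/Q^2 by least squares.
X = np.column_stack([1.0 / Q_obs, 1.0 / Q_obs**2])
a, b = np.linalg.lstsq(X, R_obs, rcond=None)[0]

def pick_q(target_bits):
    """Second pass: solve b/Q^2 + a/Q - R_target = 0 for the positive root Q."""
    disc = a * a + 4.0 * b * target_bits
    return 2.0 * b / (-a + np.sqrt(disc))

for target in (20000.0, 40000.0):
    print(f"target {target:.0f} bits -> Q ~ {pick_q(target):.1f}")
```

The actual method in the paper replaces this generic quadratic model with its GGD-derived R-Q, D-Q and D-R models, but the second-pass inversion step plays the same role.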
SiC JFET Transistor Circuit Model for Extreme Temperature Range
NASA Technical Reports Server (NTRS)
Neudeck, Philip G.
2008-01-01
A technique for simulating extreme-temperature operation of integrated circuits that incorporate silicon carbide (SiC) junction field-effect transistors (JFETs) has been developed. The technique involves modification of NGSPICE, which is an open-source version of the popular Simulation Program with Integrated Circuit Emphasis (SPICE) general-purpose analog-integrated-circuit-simulating software. NGSPICE in its unmodified form is used for simulating and designing circuits made from silicon-based transistors that operate at or near room temperature. Two rapid modifications of NGSPICE source code enable SiC JFETs to be simulated to 500 C using the well-known Level 1 model for silicon metal oxide semiconductor field-effect transistors (MOSFETs). First, the default value of the MOSFET surface potential must be changed. In the unmodified source code, this parameter has a value of 0.6, which corresponds to slightly more than half the bandgap of silicon. In NGSPICE modified to simulate SiC JFETs, this parameter is changed to a value of 1.6, corresponding to slightly more than half the bandgap of SiC. The second modification consists of changing the temperature dependence of MOSFET transconductance and saturation parameters. The unmodified NGSPICE source code implements a T(sup -1.5) temperature dependence for these parameters. In order to mimic the temperature behavior of experimental SiC JFETs, a T(sup -1.3) temperature dependence must be implemented in the NGSPICE source code. Following these two simple modifications, the Level 1 MOSFET model of the NGSPICE circuit simulation program reasonably approximates the measured high-temperature behavior of experimental SiC JFETs properly operated with zero or reverse bias applied to the gate terminal. Modification of additional silicon parameters in the NGSPICE source code was not necessary to model experimental SiC JFET current-voltage performance across the entire temperature range from 25 to 500 C.
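A small numerical sketch of the two modifications described above (surface potential raised from 0.6 to 1.6, and the transconductance/saturation temperature scaling changed from T^-1.5 to T^-1.3) is given below, using the textbook Level 1 square-law drain-current expression. It only mimics the parameter changes reported in the abstract; the device parameters (KP, W/L, V_T) are invented, and this is not the NGSPICE source code.

```python
def id_level1(vgs, vds, t_kelvin, kp_nom=2.0e-4, w_over_l=10.0, vt=-4.0,
              t_nom=300.0, temp_exp=-1.3):
    """Square-law (Level 1) drain current with a T**temp_exp transconductance scaling.
    temp_exp=-1.3 mimics the SiC JFET fit; the unmodified silicon default is -1.5.
    The surface potential (0.6 V -> 1.6 V in the modified model card) does not enter
    this simplified zero-bulk-bias expression.  vt=-4.0 V is a hypothetical
    depletion-mode threshold so the device conducts at zero gate bias."""
    kp = kp_nom * (t_kelvin / t_nom) ** temp_exp
    vov = vgs - vt
    if vov <= 0.0:
        return 0.0                                   # cutoff
    if vds < vov:                                    # triode region
        return kp * w_over_l * (vov * vds - 0.5 * vds**2)
    return 0.5 * kp * w_over_l * vov**2              # saturation

for t_c in (25.0, 250.0, 500.0):
    i_13 = id_level1(0.0, 10.0, t_c + 273.15, temp_exp=-1.3)   # zero gate bias
    i_15 = id_level1(0.0, 10.0, t_c + 273.15, temp_exp=-1.5)
    print(f"{t_c:5.0f} C: Id(T^-1.3) = {i_13*1e3:.3f} mA, Id(T^-1.5) = {i_15*1e3:.3f} mA")
```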
LENSED: a code for the forward reconstruction of lenses and sources from strong lensing observations
NASA Astrophysics Data System (ADS)
Tessore, Nicolas; Bellagamba, Fabio; Metcalf, R. Benton
2016-12-01
Robust modelling of strong lensing systems is fundamental to exploit the information they contain about the distribution of matter in galaxies and clusters. In this work, we present LENSED, a new code which performs forward parametric modelling of strong lenses. LENSED takes advantage of a massively parallel ray-tracing kernel to perform the necessary calculations on a modern graphics processing unit (GPU). This makes the precise rendering of the background lensed sources much faster, and allows the simultaneous optimization of tens of parameters for the selected model. With a single run, the code is able to obtain the full posterior probability distribution for the lens light, the mass distribution and the background source at the same time. LENSED is first tested on mock images which reproduce realistic space-based observations of lensing systems. In this way, we show that it is able to recover unbiased estimates of the lens parameters, even when the sources do not follow exactly the assumed model. Then, we apply it to a subsample of the Sloan Lens ACS Survey lenses, in order to demonstrate its use on real data. The results generally agree with the literature, and highlight the flexibility and robustness of the algorithm.
Exploration of Uncertainty in Glacier Modelling
NASA Technical Reports Server (NTRS)
Thompson, David E.
1999-01-01
There are procedures and methods for verification of coding algebra and for validations of models and calculations that are in use in the aerospace computational fluid dynamics (CFD) community. These methods would be efficacious if used by the glacier dynamics modelling community. This paper is a presentation of some of those methods, and how they might be applied to uncertainty management supporting code verification and model validation for glacier dynamics. The similarities and differences between their use in CFD analysis and the proposed application of these methods to glacier modelling are discussed. After establishing sources of uncertainty and methods for code verification, the paper looks at a representative sampling of verification and validation efforts that are underway in the glacier modelling community, and establishes a context for these within overall solution quality assessment. Finally, an information architecture and interactive interface is introduced and advocated. This Integrated Cryospheric Exploration (ICE) Environment is proposed for exploring and managing sources of uncertainty in glacier modelling codes and methods, and for supporting scientific numerical exploration and verification. The details and functionality of this Environment are described based on modifications of a system already developed for CFD modelling and analysis.
Wang, R; Li, X A
2001-02-01
The dose parameters for the beta-particle emitting 90Sr/90Y source for intravascular brachytherapy (IVBT) have been calculated by different investigators. At larger distances from the source, noticeable differences are seen in these parameters calculated using different Monte Carlo codes. The purpose of this work is to quantify as well as to understand these differences. We have compared a series of calculations using the EGS4, EGSnrc, and MCNP Monte Carlo codes. Data calculated and compared include the depth dose curve for a broad parallel beam of electrons, and radial dose distributions for point electron sources (monoenergetic or polyenergetic) and for a real 90Sr/90Y source. For the 90Sr/90Y source, the doses at the reference position (2 mm radial distance) calculated by the three codes agree within 2%. However, the differences between the doses calculated by the three codes can be over 20% in the radial distance range of interest in IVBT. The difference increases with radial distance from the source, and reaches 30% at the tail of the dose curve. These differences may be partially attributed to the different multiple scattering theories and Monte Carlo models for electron transport adopted in these three codes. Doses calculated by the EGSnrc code are more accurate than those by the EGS4. The two calculations agree within 5% for radial distance <6 mm.
Domestic Ice Breaking Simulation Model User Guide
2012-04-01
[Residue of a table of ice-data notes from the user guide: the "Temperatures" sub-module draws on selected historical D9 ice data (SIGRID-coded) for NBL waterways, a SIGRID-code conversion to feet of ice thickness for the main model waterways, and selected years and types of ice and weather data used by the DOMICE simulation.]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Avramova, Maria N.; Salko, Robert K.
Coolant-Boiling in Rod Arrays - Two Fluids (COBRA-TF) is a thermal/hydraulic (T/H) simulation code designed for light water reactor (LWR) vessel analysis. It uses a two-fluid, three-field (i.e. fluid film, fluid drops, and vapor) modeling approach. Both sub-channel and 3D Cartesian forms of 9 conservation equations are available for LWR modeling. The code was originally developed by Pacific Northwest Laboratory in 1980 and had been used and modified by several institutions over the last few decades. COBRA-TF also found use at the Pennsylvania State University (PSU) by the Reactor Dynamics and Fuel Management Group (RDFMG) and has been improved, updated, and subsequently re-branded as CTF. As part of the improvement process, it was necessary to generate sufficient documentation for the open-source code which had lacked such material upon being adopted by RDFMG. This document serves mainly as a theory manual for CTF, detailing the many two-phase heat transfer, drag, and important accident scenario models contained in the code as well as the numerical solution process utilized. Coding of the models is also discussed, all with consideration for updates that have been made when transitioning from COBRA-TF to CTF. Further documentation outside of this manual is also available at RDFMG which focuses on code input deck generation and source code global variable and module listings.
Binary encoding of multiplexed images in mixed noise.
Lalush, David S
2008-09-01
Binary coding of multiplexed signals and images has been studied in the context of spectroscopy with models of either purely constant or purely proportional noise, and has been shown to result in improved noise performance under certain conditions. We consider the case of mixed noise in an imaging system consisting of multiple individually-controllable sources (X-ray or near-infrared, for example) shining on a single detector. We develop a mathematical model for the noise in such a system and show that the noise is dependent on the properties of the binary coding matrix and on the average number of sources used for each code. Each binary matrix has a characteristic linear relationship between the ratio of proportional-to-constant noise and the noise level in the decoded image. We introduce a criterion for noise level, which is minimized via a genetic algorithm search. The search procedure results in the discovery of matrices that outperform the Hadamard S-matrices at certain levels of mixed noise. Simulation of a seven-source radiography system demonstrates that the noise model predicts trends and rank order of performance in regions of nonuniform images and in a simple tomosynthesis reconstruction. We conclude that the model developed provides a simple framework for analysis, discovery, and optimization of binary coding patterns used in multiplexed imaging systems.
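The dependence of decoded-image noise on the binary coding matrix can be checked with a few lines of linear algebra: for measurements y = Wx + n with constant detector noise of variance σ², the decoded estimate W⁻¹y has mean per-element variance σ²·trace((WᵀW)⁻¹)/N. The sketch below compares one-source-at-a-time measurement (identity matrix) against a 7×7 cyclic S-matrix built from a length-7 m-sequence. It illustrates only the constant-noise limit, not the paper's mixed-noise model or its genetic-algorithm search.

```python
import numpy as np

def noise_gain(W):
    """Mean decoded variance (relative to the measurement variance sigma^2)
    for constant, uncorrelated measurement noise and the decoder x_hat = W^-1 y."""
    n = W.shape[0]
    WtW_inv = np.linalg.inv(W.T @ W)
    return np.trace(WtW_inv) / n

n = 7
identity = np.eye(n)                                       # one source on per measurement
row = np.array([1, 1, 1, 0, 1, 0, 0])                      # length-7 m-sequence
s_matrix = np.array([np.roll(row, -k) for k in range(n)])  # cyclic S-matrix: 4 sources on per shot

print("identity noise gain:", noise_gain(identity))                  # 1.0
print("S-matrix noise gain:", round(noise_gain(s_matrix), 4))        # < 1 in the constant-noise limit
```

With purely proportional (photon-limited) noise the advantage shrinks or reverses, which is why the mixed-noise criterion and matrix search described in the abstract are needed.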
Software Model Checking Without Source Code
NASA Technical Reports Server (NTRS)
Chaki, Sagar; Ivers, James
2009-01-01
We present a framework, called AIR, for verifying safety properties of assembly language programs via software model checking. AIR extends the applicability of predicate abstraction and counterexample guided abstraction refinement to the automated verification of low-level software. By working at the assembly level, AIR allows verification of programs for which source code is unavailable, such as legacy and COTS software, and programs that use features, such as pointers, structures, and object-orientation, that are problematic for source-level software verification tools. In addition, AIR makes no assumptions about the underlying compiler technology. We have implemented a prototype of AIR and present encouraging results on several non-trivial examples.
Hypersonic simulations using open-source CFD and DSMC solvers
NASA Astrophysics Data System (ADS)
Casseau, V.; Scanlon, T. J.; John, B.; Emerson, D. R.; Brown, R. E.
2016-11-01
Hypersonic hybrid hydrodynamic-molecular gas flow solvers are required to satisfy the two essential requirements of any high-speed reacting code, these being physical accuracy and computational efficiency. The James Weir Fluids Laboratory at the University of Strathclyde is currently developing an open-source hybrid code which will eventually reconcile the direct simulation Monte-Carlo method, making use of the OpenFOAM application called dsmcFoam, and the newly coded open-source two-temperature computational fluid dynamics solver named hy2Foam. In conjunction with employing the CVDV chemistry-vibration model in hy2Foam, novel use is made of the QK rates in a CFD solver. In this paper, further testing is performed, in particular with the CFD solver, to ensure its efficacy before considering more advanced test cases. The hy2Foam and dsmcFoam codes have shown to compare reasonably well, thus providing a useful basis for other codes to compare against.
NASA Astrophysics Data System (ADS)
Cassan, Arnaud
2017-07-01
The exoplanet detection rate from gravitational microlensing has grown significantly in recent years thanks to a great enhancement of resources and improved observational strategy. Current observatories include ground-based wide-field and/or robotic world-wide networks of telescopes, as well as space-based observatories such as the Spitzer and Kepler/K2 satellites. This results in a large quantity of data to be processed and analysed, which is a challenge for modelling codes because of the complexity of the parameter space to be explored and the intensive computations required to evaluate the models. In this work, I present a method that allows the quadrupole and hexadecapole approximations of the finite-source magnification to be computed more efficiently than with previously available codes, with routines about six times and four times faster, respectively. The quadrupole takes just about twice the time of a point-source evaluation, which advocates for generalizing its use to large portions of the light curves. The corresponding routines are available as open-source Python codes.
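As background for why fast finite-source approximations matter, the sketch below evaluates the finite-source magnification of a uniform disc source by brute-force averaging of the point-source point-lens magnification A(u) = (u²+2)/(u√(u²+4)) over the disc, and compares it with the point-source value. This brute-force average is the quantity that quadrupole and hexadecapole expansions approximate cheaply; the routine here is not the paper's optimized code, and the example impact parameter and source radius are arbitrary.

```python
import numpy as np

def a_point(u):
    """Point-source, point-lens magnification."""
    return (u**2 + 2.0) / (u * np.sqrt(u**2 + 4.0))

def a_finite_uniform(u, rho, n_r=200, n_th=200):
    """Brute-force average of A over a uniform disc source of radius rho centred at u."""
    r = np.linspace(0.0, rho, n_r)
    th = np.linspace(0.0, 2.0 * np.pi, n_th, endpoint=False)
    R, TH = np.meshgrid(r, th, indexing="ij")
    U = np.sqrt(u**2 + R**2 + 2.0 * u * R * np.cos(TH))   # lens-to-element separation
    weights = R                                           # area element r dr dtheta
    return np.sum(a_point(np.maximum(U, 1e-9)) * weights) / np.sum(weights)

u, rho = 0.02, 0.01          # impact parameter and source radius in Einstein radii (example values)
print("point-source A :", round(a_point(u), 3))
print("finite-source A:", round(a_finite_uniform(u, rho), 3))
```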
Multispectral data compression through transform coding and block quantization
NASA Technical Reports Server (NTRS)
Ready, P. J.; Wintz, P. A.
1972-01-01
Transform coding and block quantization techniques are applied to multispectral aircraft scanner data, and digitized satellite imagery. The multispectral source is defined and an appropriate mathematical model proposed. The Karhunen-Loeve, Fourier, and Hadamard encoders are considered and are compared to the rate distortion function for the equivalent Gaussian source and to the performance of the single sample PCM encoder.
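The Karhunen-Loeve encoder mentioned above amounts to projecting each multispectral pixel vector onto the eigenvectors of the band-covariance matrix and keeping the leading components (the subsequent block quantization step is omitted here). The sketch below does exactly that on synthetic correlated band data; the band count, correlations, and number of retained components are arbitrary illustration values, not the study's scanner or satellite data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "multispectral" pixels: 4 correlated bands, 10,000 pixels.
n_pixels, n_bands = 10_000, 4
latent = rng.normal(size=(n_pixels, 2))                 # two underlying spectral sources
mixing = rng.normal(size=(2, n_bands))
X = latent @ mixing + 0.1 * rng.normal(size=(n_pixels, n_bands))

# Karhunen-Loeve (principal component) transform of the band vectors.
mean = X.mean(axis=0)
cov = np.cov(X - mean, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)                  # eigenvalues in ascending order
order = np.argsort(eigvals)[::-1]
basis = eigvecs[:, order[:2]]                           # keep the 2 leading components

coded = (X - mean) @ basis                              # transform-coding step
reconstructed = coded @ basis.T + mean
mse = np.mean((X - reconstructed) ** 2)
print(f"kept 2 of {n_bands} components, reconstruction MSE = {mse:.4f}")
```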
Study of information transfer optimization for communication satellites
NASA Technical Reports Server (NTRS)
Odenwalder, J. P.; Viterbi, A. J.; Jacobs, I. M.; Heller, J. A.
1973-01-01
The results are presented of a study of source coding, modulation/channel coding, and systems techniques for application to teleconferencing over high data rate digital communication satellite links. Simultaneous transmission of video, voice, data, and/or graphics is possible in various teleconferencing modes and one-way, two-way, and broadcast modes are considered. A satellite channel model including filters, limiter, a TWT, detectors, and an optimized equalizer is treated in detail. A complete analysis is presented for one set of system assumptions which exclude nonlinear gain and phase distortion in the TWT. Modulation, demodulation, and channel coding are considered, based on an additive white Gaussian noise channel model which is an idealization of an equalized channel. Source coding with emphasis on video data compression is reviewed, and the experimental facility utilized to test promising techniques is fully described.
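For the idealized additive white Gaussian noise channel referred to above, the sketch below simulates uncoded BPSK and compares the measured bit-error rate with the theoretical value Q(√(2·Eb/N0)). It is only the textbook AWGN baseline, not the study's full satellite channel with filters, limiter, TWT, and equalizer.

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(1)
n_bits = 500_000

for ebno_db in (2.0, 4.0, 6.0):
    ebno = 10.0 ** (ebno_db / 10.0)
    bits = rng.integers(0, 2, n_bits)
    symbols = 2.0 * bits - 1.0                        # BPSK mapping: 0 -> -1, 1 -> +1
    noise = rng.normal(scale=sqrt(1.0 / (2.0 * ebno)), size=n_bits)
    decided = (symbols + noise) > 0.0                 # hard-decision demodulation
    ber = np.mean(decided != bits)
    ber_theory = 0.5 * erfc(sqrt(ebno))               # Q(sqrt(2 Eb/N0))
    print(f"Eb/N0 = {ebno_db:.0f} dB: simulated BER = {ber:.5f}, theory = {ber_theory:.5f}")
```

Channel coding and the nonlinear channel elements studied in the report shift performance away from this baseline; the baseline is what makes those shifts measurable.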
CMCpy: Genetic Code-Message Coevolution Models in Python
Becich, Peter J.; Stark, Brian P.; Bhat, Harish S.; Ardell, David H.
2013-01-01
Code-message coevolution (CMC) models represent coevolution of a genetic code and a population of protein-coding genes (“messages”). Formally, CMC models are sets of quasispecies coupled together for fitness through a shared genetic code. Although CMC models display plausible explanations for the origin of multiple genetic code traits by natural selection, useful modern implementations of CMC models are not currently available. To meet this need we present CMCpy, an object-oriented Python API and command-line executable front-end that can reproduce all published results of CMC models. CMCpy implements multiple solvers for leading eigenpairs of quasispecies models. We also present novel analytical results that extend and generalize applications of perturbation theory to quasispecies models and pioneer the application of a homotopy method for quasispecies with non-unique maximally fit genotypes. Our results therefore facilitate the computational and analytical study of a variety of evolutionary systems. CMCpy is free open-source software available from http://pypi.python.org/pypi/CMCpy/. PMID:23532367
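CMCpy itself is the published implementation; the sketch below only illustrates the leading-eigenpair computation for a generic quasispecies model that such code-message models rest on. The per-site mutation rate mu and the fitness vector are hypothetical, and this is not CMCpy's API.

import numpy as np

def quasispecies_equilibrium(fitness, mu):
    """Leading eigenpair of a simple binary-sequence quasispecies model.

    W[i, j] = Q[i, j] * f[j], where Q[i, j] is the probability that genotype j
    mutates into genotype i (independent per-site errors with rate mu).
    The leading right eigenvector gives the mutation-selection equilibrium.
    """
    L = int(np.log2(len(fitness)))                  # genotypes are L-bit strings
    genotypes = np.arange(len(fitness))
    hamming = np.array([[bin(a ^ b).count("1") for b in genotypes] for a in genotypes])
    Q = (mu ** hamming) * ((1.0 - mu) ** (L - hamming))
    W = Q * fitness[np.newaxis, :]
    eigval, eigvec = np.linalg.eig(W)
    k = np.argmax(eigval.real)                      # Perron (leading) eigenvalue
    x = np.abs(eigvec[:, k].real)
    return eigval[k].real, x / x.sum()              # mean fitness, genotype frequencies

# Example: 3-bit genotypes with one fit "master" sequence
f = np.array([2.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0])
lam, p = quasispecies_equilibrium(f, mu=0.05)
print(lam, p)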
NASA Technical Reports Server (NTRS)
Cole, Gary L.; Richard, Jacques C.
1991-01-01
An approach to simulating the internal flows of supersonic propulsion systems is presented. The approach is based on a fairly simple modification of the Large Perturbation Inlet (LAPIN) computer code. LAPIN uses a quasi-one dimensional, inviscid, unsteady formulation of the continuity, momentum, and energy equations. The equations are solved using a shock capturing, finite difference algorithm. The original code, developed for simulating supersonic inlets, includes engineering models of unstart/restart, bleed, bypass, and variable duct geometry, by means of source terms in the equations. The source terms also provide a mechanism for incorporating, with the inlet, propulsion system components such as compressor stages, combustors, and turbine stages. This requires each component to be distributed axially over a number of grid points. Because of the distributed nature of such components, this representation should be more accurate than a lumped parameter model. Components can be modeled by performance map(s), which in turn are used to compute the source terms. The general approach is described. Then, simulation of a compressor/fan stage is discussed to show the approach in detail.
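A minimal sketch of the distributed-source-term idea, assuming hypothetical variable names rather than LAPIN's actual interface: a component's total mass, force, and heat contributions are spread over the axial grid points it occupies and added to the right-hand side of the quasi-one-dimensional conservation equations.

import numpy as np

def add_component_source_terms(rhs, x, x_start, x_end, mass_flux, force, heat):
    """Distribute a component's mass, momentum, and energy sources over the
    grid points it spans and add them to the right-hand side of the
    quasi-one-dimensional conservation equations.

    rhs has shape (3, nx): continuity, momentum, and energy residuals. The
    totals are spread uniformly over the component's axial extent, mimicking
    the representation of a compressor stage or combustor as distributed
    source terms rather than a lumped discontinuity.
    """
    mask = (x >= x_start) & (x <= x_end)
    length = x_end - x_start
    rhs[0, mask] += mass_flux / length          # mass addition per unit length
    rhs[1, mask] += force / length              # axial force per unit length
    rhs[2, mask] += heat / length               # heat/work addition per unit length
    return rhs

nx = 201
x = np.linspace(0.0, 10.0, nx)
rhs = np.zeros((3, nx))
# e.g., a combustor occupying 4 m < x < 6 m adding heat but no net mass or force
rhs = add_component_source_terms(rhs, x, 4.0, 6.0, mass_flux=0.0, force=0.0, heat=5.0e6)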
Simulation study on ion extraction from electron cyclotron resonance ion sources
NASA Astrophysics Data System (ADS)
Fu, S.; Kitagawa, A.; Yamada, S.
1994-04-01
In order to study the beam optics of the NIRS-ECR ion source used in the HIMAC project, the EGUN code has been modified to make it capable of modeling ion extraction from a plasma. Two versions of the modified code were developed using two different methods, based on 1D and 2D sheath theories, respectively. The convergence problem of the strongly nonlinear self-consistent equations is investigated. Simulations of the NIRS-ECR and HYPER-ECR ion sources are presented in this paper, exhibiting agreement with the experimental results.
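One-dimensional sheath treatments of this kind typically tie the extracted ion current to a space-charge-limited (Child-Langmuir) law. The sketch below evaluates that textbook expression only as an illustration of the kind of relation involved; it is not the modified EGUN implementation, and the example numbers are assumptions.

import numpy as np

EPS0 = 8.8541878128e-12        # vacuum permittivity, F/m
QE = 1.602176634e-19           # elementary charge, C
AMU = 1.66053906660e-27        # atomic mass unit, kg

def child_langmuir_current_density(voltage, gap, ion_mass_amu, charge_state=1):
    """Space-charge-limited ion current density (A/m^2) across a planar sheath
    of thickness `gap` (m) with extraction voltage `voltage` (V):
        J = (4/9) * eps0 * sqrt(2 q / M) * V**1.5 / d**2
    """
    q = charge_state * QE
    M = ion_mass_amu * AMU
    return (4.0 / 9.0) * EPS0 * np.sqrt(2.0 * q / M) * voltage**1.5 / gap**2

# e.g., singly charged carbon ions, 20 kV across a 5 mm sheath
print(child_langmuir_current_density(20e3, 5e-3, ion_mass_amu=12.0))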
MELCOR/CONTAIN LMR Implementation Report - FY16 Progress.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Louie, David; Humphries, Larry L.
2016-11-01
This report describes the progress of the CONTAIN-LMR sodium physics and chemistry models to be implemented in MELCOR 2.1. In the past three years, the implementation included the addition of sodium equations of state and sodium properties from two different sources. The first source is based on previous work done by Idaho National Laboratory, which modified MELCOR to include a liquid lithium equation of state as a working fluid to model nuclear fusion safety research. The second source uses properties generated for the SIMMER code. The implemented modeling has been tested and results are reported in this document. In addition, the CONTAIN-LMR code was derived from an early version of the CONTAIN code, and many physical models that were developed since this early version of CONTAIN are not available in that code version. Therefore, CONTAIN 2 has been updated with the sodium models in CONTAIN-LMR as CONTAIN2-LMR, which may be used to provide code-to-code comparison with CONTAIN-LMR and MELCOR when the sodium chemistry models from CONTAIN-LMR have been completed. Both the spray fire and pool fire chemistry routines from CONTAIN-LMR have been integrated into MELCOR 2.1, and debugging and testing are in progress. Because MELCOR only models the equation of state for the liquid and gas phases of the coolant, a modeling gap still exists when dealing with experiments or accident conditions in which the ambient temperature is below the freezing point of sodium. An alternative method is under investigation to overcome this gap. We are no longer working on a separate branch from the main branch of MELCOR 2.1, since the major modeling in MELCOR 2.1 has been completed. At the current stage, the newly implemented sodium chemistry models will be part of the main MELCOR release version (MELCOR 2.2). This report discusses the accomplishments and issues relating to the implementation, along with the planned completion of all remaining tasks in FY2017, including implementation of the atmospheric chemistry model and the sodium-concrete interaction model.
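The report does not list its property correlations, but the kind of sodium property function being implemented can be sketched with a commonly cited open-literature correlation (Fink and Leibowitz, ANL/RE-95/2) for liquid sodium density. The snippet below is illustrative only and is not the MELCOR or SIMMER property set.

import numpy as np

T_CRIT = 2503.7     # sodium critical temperature, K
RHO_CRIT = 219.0    # critical density, kg/m^3

def sodium_liquid_density(T):
    """Liquid sodium density (kg/m^3) from the Fink & Leibowitz (ANL/RE-95/2)
    correlation, valid from the melting point (371 K) up to the critical point."""
    theta = 1.0 - T / T_CRIT
    return RHO_CRIT + 275.32 * theta + 511.58 * np.sqrt(theta)

print(sodium_liquid_density(400.0))   # roughly 919 kg/m^3 just above the melting point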
HELIOS-R: An Ultrafast, Open-Source Retrieval Code For Exoplanetary Atmosphere Characterization
NASA Astrophysics Data System (ADS)
LAVIE, Baptiste
2015-12-01
Atmospheric retrieval is a growing, new approach in the theory of exoplanet atmosphere characterization. Unlike self-consistent modeling, it allows us to fully explore the parameter space, as well as the degeneracies between the parameters, using a Bayesian framework. We present HELIOS-R, a very fast retrieval code written in Python and optimized for GPU computation. Once it is ready, HELIOS-R will be the first open-source atmospheric retrieval code accessible to the exoplanet community. As the new generation of direct imaging instruments (SPHERE, GPI) has started to gather data, the first version of HELIOS-R focuses on emission spectra. We use a 1D two-stream forward model for computing fluxes and couple it to an analytical temperature-pressure profile that is constructed to be in radiative equilibrium. We use our ultra-fast opacity calculator HELIOS-K (also open-source) to compute the opacities of CO2, H2O, CO and CH4 from the HITEMP database. We test both opacity sampling (which is typically used by other workers) and the method of k-distributions. Using this setup, we compute a grid of synthetic spectra and temperature-pressure profiles, which is then explored using a nested sampling algorithm. By focusing on model selection (Occam's razor) through the explicit computation of the Bayesian evidence, nested sampling allows us to deal with current sparse data as well as upcoming high-resolution observations. Once the best model is selected, HELIOS-R provides posterior distributions of the parameters. As a test of our code, we studied the HR 8799 system and compared our results with the previous analysis of Lee, Heng & Irwin (2013), which used the proprietary NEMESIS retrieval code. HELIOS-R and HELIOS-K are part of the set of open-source community codes we have named the Exoclimes Simulation Platform (www.exoclime.org).
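The model-selection step described above hinges on the Bayesian evidence Z = integral of L(theta) pi(theta) d(theta). The toy Python sketch below computes it by brute force on a one-dimensional grid for a made-up forward model; a nested sampler obtains the same quantity efficiently in many dimensions, so everything here (model shape, parameter, noise level) is an assumption for illustration rather than the HELIOS-R interface.

import numpy as np

def log_likelihood(model_flux, data_flux, sigma):
    """Gaussian log-likelihood comparing a model emission spectrum to data."""
    return -0.5 * np.sum(((data_flux - model_flux) / sigma) ** 2
                         + np.log(2.0 * np.pi * sigma ** 2))

def log_evidence_grid(forward_model, data, sigma, prior_grid):
    """Brute-force Bayesian evidence over a 1-D parameter grid with a uniform
    prior; nested sampling computes the same integral far more efficiently."""
    theta = prior_grid
    logL = np.array([log_likelihood(forward_model(t), data, sigma) for t in theta])
    prior = np.full_like(theta, 1.0 / (theta[-1] - theta[0]))   # uniform prior density
    dtheta = np.gradient(theta)
    return np.log(np.sum(np.exp(logL) * prior * dtheta))

# Toy example: fit a single scaling parameter of a fixed spectral shape
wavelength = np.linspace(1.0, 5.0, 50)              # microns (illustrative)
shape = np.exp(-wavelength / 2.0)                   # stand-in for a forward model
truth, sigma = 3.0, 0.05
rng = np.random.default_rng(1)
data = truth * shape + rng.normal(0.0, sigma, shape.size)
model = lambda a: a * shape
print(log_evidence_grid(model, data, sigma, np.linspace(0.1, 10.0, 400)))

Comparing log-evidence values from two competing forward models implements the Occam's-razor selection the abstract refers to.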
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burns, T.D. Jr.
1996-05-01
The Monte Carlo Model System (MCMS) for the Washington State University (WSU) Radiation Center provides a means through which core criticality and power distributions can be calculated, as well as a method for the neutron and photon transport necessary for BNCT epithermal neutron beam design. The computational code used in this Model System is MCNP4A. The geometric capability of this Monte Carlo code allows the WSU system to be modeled very accurately. A working knowledge of the MCNP4A neutron transport code increases the flexibility of the Model System and is recommended; however, the eigenvalue/power density problems can be run with little direct knowledge of MCNP4A. Neutron and photon particle transport require more experience with the MCNP4A code. The Model System consists of two coupled subsystems: the Core Analysis and Source Plane Generator Model (CASP) and the BeamPort Shell Particle Transport Model (BSPT). The CASP Model incorporates the S(α,β) thermal treatment and is run as a criticality problem, yielding the system eigenvalue (k_eff), the core power distribution, and an implicit surface source for subsequent particle transport in the BSPT Model. The BSPT Model uses the source plane generated by a CASP run to transport particles through the thermal column beamport. The user can create filter arrangements in the beamport and then calculate characteristics necessary for assessing the BNCT potential of the given filter arrangement. Examples of the characteristics to be calculated are neutron fluxes, neutron currents, fast neutron KERMAs, and gamma KERMAs. The MCMS is a useful tool for the WSU system. Those unfamiliar with the MCNP4A code can use the MCMS transparently for core analysis, while more experienced users will find the particle transport capabilities very powerful for BNCT filter design.
Aagaard, Brad T.; Brocher, T.M.; Dolenc, D.; Dreger, D.; Graves, R.W.; Harmsen, S.; Hartzell, S.; Larsen, S.; Zoback, M.L.
2008-01-01
We compute ground motions for the Beroza (1991) and Wald et al. (1991) source models of the 1989 magnitude 6.9 Loma Prieta earthquake using four different wave-propagation codes and recently developed 3D geologic and seismic velocity models. In preparation for modeling the 1906 San Francisco earthquake, we use this well-recorded earthquake to characterize how well our ground-motion simulations reproduce the observed shaking intensities and the amplitudes and durations of recorded motions throughout the San Francisco Bay Area. All of the simulations generate ground motions consistent with the large-scale spatial variations in shaking associated with rupture directivity and the geologic structure. We attribute the small variations among the synthetics to the minimum shear-wave speed permitted in the simulations and how they accommodate topography. Our long-period simulations, on average, underpredict shaking intensities by about one-half modified Mercalli intensity (MMI) unit (25%-35% in peak velocity), while our broadband simulations, on average, underpredict the shaking intensities by one-fourth MMI unit (16% in peak velocity). Discrepancies with observations arise due to errors in the source models and geologic structure. The consistency in the synthetic waveforms across the wave-propagation codes for a given source model suggests that the uncertainty in the source parameters tends to exceed the uncertainty in the seismic velocity structure. In agreement with earlier studies, we find that a source model with slip more evenly distributed northwest and southeast of the hypocenter would be preferable to both the Beroza and Wald source models. Although the new 3D seismic velocity model improves upon previous velocity models, we identify two areas needing improvement. Nevertheless, we find that the seismic velocity model and the wave-propagation codes are suitable for modeling the 1906 earthquake and scenario events in the San Francisco Bay Area.
Neural Coding of Relational Invariance in Speech: Human Language Analogs to the Barn Owl.
ERIC Educational Resources Information Center
Sussman, Harvey M.
1989-01-01
The neuronal model shown to code sound-source azimuth in the barn owl by H. Wagner et al. in 1987 is used as the basis for a speculative brain-based human model, which can establish contrastive phonetic categories to solve the problem of perception "non-invariance." (SLD)
RADTRAD: A simplified model for RADionuclide Transport and Removal And Dose estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Humphreys, S.L.; Miller, L.A.; Monroe, D.K.
1998-04-01
This report documents the RADTRAD computer code developed for the U.S. Nuclear Regulatory Commission (NRC) Office of Nuclear Reactor Regulation (NRR) to estimate transport and removal of radionuclides and dose at selected receptors. The document includes a users' guide to the code, a description of the technical basis for the code, the quality assurance and code acceptance testing documentation, and a programmers' guide. The RADTRAD code can be used to estimate the containment release using either the NRC TID-14844 or NUREG-1465 source terms and assumptions, or a user-specified table. In addition, the code can account for a reduction in the quantity of radioactive material due to containment sprays, natural deposition, filters, and other natural and engineered safety features. The RADTRAD code uses a combination of tables and/or numerical models of source term reduction phenomena to determine the time-dependent dose at user-specified locations for a given accident scenario. The code system also provides the inventory, decay chain, and dose conversion factor tables needed for the dose calculation. The RADTRAD code can be used to assess occupational radiation exposures, typically in the control room; to estimate site boundary doses; and to estimate dose attenuation due to modification of a facility or accident sequence.
Multiple Detector Optimization for Hidden Radiation Source Detection
2015-03-26
important in achieving operationally useful methods for optimizing detector emplacement, the 2-D attenuation model approach promises to speed up the ... process of hidden source detection significantly. The model focused on detection of the full energy peak of a radiation source. Methods to optimize ... radioisotope identification is possible without using a computationally intensive stochastic model such as the Monte Carlo n-Particle (MCNP) code
NASA Astrophysics Data System (ADS)
Winfrey, A. Leigh
Electrothermal plasma sources have numerous applications, including hypervelocity launchers, fusion reactor pellet injection, and space propulsion systems. The time evolution of important plasma parameters at the source exit is important in determining the suitability of the source for different applications. In this study a capillary discharge code has been modified to incorporate non-ideal behavior by using an exact analytical model for the Coulomb logarithm in the plasma electrical conductivity formula. Actual discharge currents from electrothermal plasma experiments were used, and code results for both ideal and non-ideal plasma models were compared to experimental data, specifically the ablated mass from the capillary and the electrical conductivity as measured by the discharge current and the voltage. Electrothermal plasma sources operating in the ablation-controlled arc regime use discharge currents with pulse lengths between 100 μs and 1 ms. Faster, longer, or extended flat-top pulses can also be generated to satisfy various applications of ET sources. Extension of the peak current for up to an additional 1000 μs was tested. Calculations for the non-ideal and ideal plasma models show that extended flat-top pulses produce more ablated mass, which scales linearly with increased pulse length while other parameters remain almost constant. A new configuration of the PIPE source has been proposed in order to investigate the formation of plasmas from mixed materials. The electrothermal segmented plasma source can be used for studies related to surface coatings, surface modification, ion implantation, materials synthesis, and the physics of complex mixed plasmas. This source is a capillary discharge in which the ablation liner is made from segments of different materials instead of a single sleeve. This system should allow for the modeling and characterization of the growth plasma, as it provides all materials needed for fabrication through the same method. An ablation-free capillary discharge computer code has been developed to model plasma flow and the acceleration of pellets for fusion fueling in magnetic fusion reactors. Two case studies, with and without ablation and including different source configurations, have been studied here. Velocities necessary for fusion fueling have been achieved. New additions to the code model incorporate radial heat and energy transfer and move ETFLOW towards being a 2-D model of the plasma flow. This semi-2-D approach gives a view of the behavior of the plasma inside the capillary as it is affected by important physical parameters, such as radial thermal heat conduction, and their effect on wall ablation.
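For orientation, the ideal-plasma quantities that the study replaces with a non-ideal treatment can be sketched as follows: a textbook Coulomb logarithm ln(Debye length / 90-degree impact parameter) and the parallel Spitzer conductivity. At the dense, few-eV conditions typical of electrothermal capillaries the resulting Coulomb logarithm is of order unity, which is exactly why a non-ideal model is needed; the coefficients and conditions below are illustrative assumptions, not those of the modified capillary code.

import numpy as np

EPS0 = 8.8541878128e-12   # F/m
QE   = 1.602176634e-19    # C
EV   = QE                 # 1 eV in joules

def coulomb_log_ideal(n_e, T_e_eV, Z=1.0):
    """Textbook ideal-plasma Coulomb logarithm ln(Debye length / 90-degree
    impact parameter); non-ideal (strongly coupled) corrections reduce it."""
    T = T_e_eV * EV
    debye = np.sqrt(EPS0 * T / (n_e * QE**2))
    b90 = Z * QE**2 / (12.0 * np.pi * EPS0 * T)
    return np.log(debye / b90)

def spitzer_conductivity(n_e, T_e_eV, Z=1.0):
    """Parallel Spitzer electrical conductivity (S/m); resistivity taken as
    eta ~ 5.2e-5 * Z * lnLambda / T_e**1.5 ohm-m with T_e in eV."""
    lnL = coulomb_log_ideal(n_e, T_e_eV, Z)
    eta = 5.2e-5 * Z * lnL / T_e_eV**1.5
    return 1.0 / eta

# Typical electrothermal-plasma conditions: dense, a few eV
print(spitzer_conductivity(n_e=1e26, T_e_eV=2.0))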
Palm: Easing the Burden of Analytical Performance Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tallent, Nathan R.; Hoisie, Adolfy
2014-06-01
Analytical (predictive) application performance models are critical for diagnosing performance-limiting resources, optimizing systems, and designing machines. Creating models, however, is difficult because they must be both accurate and concise. To ease the burden of performance modeling, we developed Palm, a modeling tool that combines top-down (human-provided) semantic insight with bottom-up static and dynamic analysis. To express insight, Palm defines a source code modeling annotation language. By coordinating models and source code, Palm's models are 'first-class' and reproducible. Unlike prior work, Palm formally links models, functions, and measurements. As a result, Palm (a) uses functions to either abstract or express complexity; (b) generates hierarchical models (representing an application's static and dynamic structure); and (c) automatically incorporates measurements to focus attention, represent constant behavior, and validate models. We discuss generating models for three different applications.
NASA Technical Reports Server (NTRS)
Meyer, H. D.
1993-01-01
The Acoustic Radiation Code (ARC) is a finite element program used on the IBM mainframe to predict far-field acoustic radiation from a turbofan engine inlet. In this report, requirements for developers of internal aerodynamic codes regarding the use of their program output as input for the ARC are discussed. More specifically, the particular input needed from the Bolt, Beranek and Newman/Pratt and Whitney (turbofan source noise generation) Code (BBN/PWC) is described. In a separate analysis, a method of coupling the source and radiation models that recognizes waves crossing the interface in both directions has been derived. A preliminary version of the coupled code has been developed and used for initial evaluation of coupling issues. Results thus far have shown that reflection from the inlet is sufficient to indicate that full coupling of the source and radiation fields is needed for accurate noise predictions. Also, for this contract, the ARC has been modified for use on the Sun and Silicon Graphics Iris UNIX workstations. Changes and additions involved in this effort are described in an appendix.
The pros and cons of code validation
NASA Technical Reports Server (NTRS)
Bobbitt, Percy J.
1988-01-01
Computational and wind tunnel error sources are examined and quantified using specific calculations of experimental data, and a substantial comparison of theoretical and experimental results, or code validation, is discussed. Wind tunnel error sources considered include wall interference, sting effects, Reynolds number effects, flow quality and transition, and instrumentation such as strain gage balances, electronically scanned pressure systems, hot film gages, hot wire anemometers, and laser velocimeters. Computational error sources include the math model equation sets, the solution algorithm, artificial viscosity/dissipation, boundary conditions, the uniqueness of solutions, grid resolution, turbulence modeling, and Reynolds number effects. It is concluded that, although improvements in theory are being made more quickly than in experiments, wind tunnel research has the advantage of the more realistic transition process of a free-transition test over reliance on choosing the right turbulence model.
NASA Astrophysics Data System (ADS)
Andre, R.; Carlsson, J.; Gorelenkova, M.; Jardin, S.; Kaye, S.; Poli, F.; Yuan, X.
2016-10-01
TRANSP is an integrated interpretive and predictive transport analysis tool that incorporates state-of-the-art heating/current drive sources and transport models. The treatments and transport solvers are becoming increasingly sophisticated and comprehensive. For instance, the ISOLVER component provides a free-boundary equilibrium solution, while the PT-SOLVER transport solver is especially suited for stiff transport models such as TGLF. TRANSP incorporates high-fidelity heating and current drive source models, such as NUBEAM for neutral beam injection, the beam tracing code TORBEAM for EC, TORIC for ICRF, and the ray tracing codes TORAY and GENRAY for EC. The implementation of selected components makes efficient use of MPI to speed up code calculations. Recently the GENRAY-CQL3D solver for modeling of LH heating and current drive has been implemented and is currently being extended to multiple antennas, to allow modeling of EAST discharges. Also, GENRAY+CQL3D is being extended to the use of EC/EBW and of HHFW for NSTX-U. This poster will describe present uses of the code worldwide, as well as plans for upgrading the physics modules and code framework. Work supported by the US Department of Energy under DE-AC02-CH0911466.
Ray-tracing 3D dust radiative transfer with DART-Ray: code upgrade and public release
NASA Astrophysics Data System (ADS)
Natale, Giovanni; Popescu, Cristina C.; Tuffs, Richard J.; Clarke, Adam J.; Debattista, Victor P.; Fischera, Jörg; Pasetto, Stefano; Rushton, Mark; Thirlwall, Jordan J.
2017-11-01
We present an extensively updated version of the purely ray-tracing 3D dust radiation transfer code DART-Ray. The new version includes five major upgrades: 1) a series of optimizations for the ray-angular density and the scattered radiation source function; 2) the implementation of several data and task parallelizations using hybrid MPI+OpenMP schemes; 3) the inclusion of dust self-heating; 4) the ability to produce surface brightness maps for observers within the models in HEALPix format; 5) the possibility to set the expected numerical accuracy already at the start of the calculation. We tested the updated code with benchmark models where the dust self-heating is not negligible. Furthermore, we performed a study of the extent of the source influence volumes, using galaxy models, which are critical in determining the efficiency of the DART-Ray algorithm. The new code is publicly available, documented for both users and developers, and accompanied by several programmes to create input grids for different model geometries and to import the results of N-body and SPH simulations. These programmes can be easily adapted to different input geometries, and for different dust models or stellar emission libraries.
Design Considerations of a Virtual Laboratory for Advanced X-ray Sources
NASA Astrophysics Data System (ADS)
Luginsland, J. W.; Frese, M. H.; Frese, S. D.; Watrous, J. J.; Heileman, G. L.
2004-11-01
The field of scientific computation has greatly advanced in the last few years, resulting in the ability to perform complex computer simulations that can predict the performance of real-world experiments in a number of fields of study. Among the forces driving this new computational capability is the advent of parallel algorithms, allowing calculations in three-dimensional space with realistic time scales. Electromagnetic radiation sources driven by high-voltage, high-current electron beams offer an area in which to further push the state-of-the-art in high-fidelity, first-principles simulation tools. The physics of these x-ray sources combines kinetic plasma physics (electron beams) with dense fluid-like plasma physics (anode plasmas) and x-ray generation (bremsstrahlung). There are a number of mature techniques and software packages for dealing with the individual aspects of these sources, such as Particle-In-Cell (PIC), Magneto-Hydrodynamics (MHD), and radiation transport codes. The current effort is focused on developing an object-oriented software environment using the Rational Unified Process and the Unified Modeling Language (UML) to provide a framework where multiple 3D parallel physics packages, such as a PIC code (ICEPIC), an MHD code (MACH), and an x-ray transport code (ITS), can co-exist in a system-of-systems approach to modeling advanced x-ray sources. Initial software design and assessments of the fidelity of the various physics algorithms will be presented.
NASA Technical Reports Server (NTRS)
Ancheta, T. C., Jr.
1976-01-01
A method of using error-correcting codes to obtain data compression, called syndrome-source-coding, is described in which the source sequence is treated as an error pattern whose syndrome forms the compressed data. It is shown that syndrome-source-coding can achieve arbitrarily small distortion with the number of compressed digits per source digit arbitrarily close to the entropy of a binary memoryless source. A 'universal' generalization of syndrome-source-coding is formulated which provides robustly effective distortionless coding of source ensembles. Two examples are given, comparing the performance of noiseless universal syndrome-source-coding to (1) run-length coding and (2) Lynch-Davisson-Schalkwijk-Cover universal coding for an ensemble of binary memoryless sources.
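A minimal concrete instance of syndrome source coding, using the (7,4) Hamming parity-check matrix: a sparse 7-bit source block is compressed to its 3-bit syndrome and recovered exactly whenever it contains at most one '1'. This is a textbook illustration of the scheme described above, not the authors' universal construction.

import numpy as np

# Parity-check matrix of the (7,4) Hamming code; column j is the binary
# representation of j+1, so a single '1' at position j yields syndrome j+1.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

def compress(block):
    """Treat the 7-bit source block as an 'error pattern' and keep only its
    3-bit syndrome (the compressed data)."""
    return H @ block % 2

def decompress(syndrome):
    """Reconstruct the minimum-weight block consistent with the syndrome;
    exact whenever the source block contains at most one '1'."""
    position = int("".join(map(str, syndrome)), 2)          # 0 means the all-zero block
    block = np.zeros(7, dtype=int)
    if position > 0:
        block[position - 1] = 1
    return block

source_block = np.array([0, 0, 0, 0, 1, 0, 0])               # sparse binary source
s = compress(source_block)
print(s, decompress(s), np.array_equal(decompress(s), source_block))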
AN OPEN-SOURCE NEUTRINO RADIATION HYDRODYNAMICS CODE FOR CORE-COLLAPSE SUPERNOVAE
DOE Office of Scientific and Technical Information (OSTI.GOV)
O’Connor, Evan, E-mail: evanoconnor@ncsu.edu; CITA, Canadian Institute for Theoretical Astrophysics, Toronto, M5S 3H8
2015-08-15
We present an open-source update to the spherically symmetric, general-relativistic hydrodynamics, core-collapse supernova (CCSN) code GR1D. The source code is available at http://www.GR1Dcode.org. We extend its capabilities to include a general-relativistic treatment of neutrino transport based on the moment formalisms of Shibata et al. and Cardall et al. We pay special attention to implementing and testing numerical methods and approximations that lessen the computational demand of the transport scheme by removing the need to invert large matrices. This is especially important for the implementation and development of moment-like transport methods in two and three dimensions. A critical component of neutrino transport calculations is the neutrino–matter interaction coefficients that describe the production, absorption, scattering, and annihilation of neutrinos. In this article we also describe our open-source neutrino interaction library NuLib (available at http://www.nulib.org). We believe that an open-source approach to describing these interactions is one of the major steps needed to progress toward robust models of CCSNe and robust predictions of the neutrino signal. We show, via comparisons to full Boltzmann neutrino-transport simulations of CCSNe, that our neutrino transport code performs remarkably well. Furthermore, we show that the methods and approximations we employ to increase efficiency do not decrease the fidelity of our results. We also test the ability of our general-relativistic transport code to model failed CCSNe by evolving a 40-solar-mass progenitor to the onset of collapse to a black hole.
Engineering High Assurance Distributed Cyber Physical Systems
2015-01-15
decisions: number of interacting agents and co-dependent decisions made in real-time without causing interference. To engineer a high assurance DART ... environment specification, architecture definition, domain-specific languages, design patterns, code-generation, analysis, test-generation, and simulation ... include synchronization between the models and source code, debugging at the model level, expression of the design intent, and quality of service
A Counterexample Guided Abstraction Refinement Framework for Verifying Concurrent C Programs
2005-05-24
source code are routinely executed. The source code is written in languages ranging from C/C++/Java to ML/OCaml. These languages differ not only in ... from the difficulty of modeling computer programs—due to the complexity of programming languages as compared to hardware description languages—to ... intermediate specification language lying between high-level Statechart-like formalisms and transition systems. Actions are encoded as changes in
PFLOTRAN-RepoTREND Source Term Comparison Summary.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frederick, Jennifer M.
Code inter-comparison studies are useful exercises to verify and benchmark independently developed software to ensure proper function, especially when the software is used to model high-consequence systems which cannot be physically tested in a fully representative environment. This summary describes the results of the first portion of the code inter-comparison between PFLOTRAN and RepoTREND, which compares the radionuclide source term used in a typical performance assessment.
Scoping Calculations of Power Sources for Nuclear Electric Propulsion
NASA Technical Reports Server (NTRS)
Difilippo, F. C.
1994-01-01
This technical memorandum describes models and calculational procedures to fully characterize the nuclear island of power sources for nuclear electric propulsion. Two computer codes were written: one for the gas-cooled NERVA derivative reactor and the other for liquid metal-cooled fuel pin reactors. These codes are going to be interfaced by NASA with the balance of plant in order to make scoping calculations for mission analysis.
Initial Integration of Noise Prediction Tools for Acoustic Scattering Effects
NASA Technical Reports Server (NTRS)
Nark, Douglas M.; Burley, Casey L.; Tinetti, Ana; Rawls, John W.
2008-01-01
This effort provides an initial glimpse at NASA capabilities available for predicting the scattering of fan noise from a non-conventional aircraft configuration. The Aircraft NOise Prediction Program, the Fast Scattering Code, and the Rotorcraft Noise Model were coupled to provide increased-fidelity models of scattering effects on engine fan noise sources. The integration of these codes led to the identification of several key issues entailed in applying such multi-fidelity approaches. In particular, for prediction at noise certification points, the inclusion of distributed sources leads to complications with the source semi-sphere approach. Computational resource requirements limit the use of the higher-fidelity scattering code to predicting radiated sound pressure levels for full-scale configurations at relevant frequencies. And the ability to more accurately represent complex shielding surfaces in current lower-fidelity models is necessary for general application to scattering predictions. This initial step in determining the potential benefits/costs of these new methods over the existing capabilities illustrates a number of the issues that must be addressed in the development of next-generation aircraft system noise prediction tools.
OpenFOAM: Open source CFD in research and industry
NASA Astrophysics Data System (ADS)
Jasak, Hrvoje
2009-12-01
The current focus of development in industrial Computational Fluid Dynamics (CFD) is the integration of CFD into Computer-Aided product development, geometrical optimisation, robust design and similar. On the other hand, research in CFD aims to extend the boundaries of practical engineering use into "non-traditional" areas. Requirements of computational flexibility and code integration are contradictory: a change of coding paradigm, with object orientation, library components, and equation mimicking, is proposed as a way forward. This paper describes OpenFOAM, a C++ object-oriented library for Computational Continuum Mechanics (CCM) developed by the author. Efficient and flexible implementation of complex physical models is achieved by mimicking the form of partial differential equations in software, with code functionality provided in library form. The Open Source deployment and development model allows the user to achieve the desired versatility in physical modeling without sacrificing complex geometry support and execution efficiency.
Source Term Model for Steady Micro Jets in a Navier-Stokes Computer Code
NASA Technical Reports Server (NTRS)
Waithe, Kenrick A.
2005-01-01
A source term model for steady micro jets was implemented into a non-proprietary Navier-Stokes computer code, OVERFLOW. The source term models the mass flow and momentum created by a steady blowing micro jet. The model is obtained by adding the momentum and mass flow created by the jet to the Navier-Stokes equations. The model was tested by comparing with data from numerical simulations of a single, steady micro jet on a flat plate in two and three dimensions. The source term model predicted the velocity distribution well compared to the two-dimensional plate using a steady mass flow boundary condition, which was used to simulate a steady micro jet. The model was also compared to two three-dimensional flat plate cases using a steady mass flow boundary condition to simulate a steady micro jet. The three-dimensional comparison included a case with a grid generated to capture the circular shape of the jet and a case without a grid generated for the micro jet. The case without the jet grid mimics the application of the source term. The source term model compared well with both of the three-dimensional cases. Comparisons of velocity distribution were made before and after the jet and Mach and vorticity contours were examined. The source term model allows a researcher to quickly investigate different locations of individual or several steady micro jets. The researcher is able to conduct a preliminary investigation with minimal grid generation and computational time.
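A schematic sketch of the idea, under assumed variable names rather than OVERFLOW's actual data structures: the jet's mass flow and momentum are converted to volumetric sources for the cells tagged as containing the jet, which is what lets the model skip a fine grid around the orifice.

import numpy as np

def micro_jet_source_terms(cell_volume, mdot, jet_velocity):
    """Per-cell source terms for a steady blowing micro jet: a sketch of adding
    the jet's mass and momentum directly to the governing equations instead of
    resolving the jet orifice with a fine grid.

    Returns volumetric sources for the continuity, momentum, and energy
    equations for each cell tagged as containing part of the jet.
    """
    u, v, w = jet_velocity
    s_mass = mdot / cell_volume                              # kg/(m^3 s)
    s_mom = np.array([u, v, w]) * mdot / cell_volume         # N/m^3
    s_energy = 0.5 * (u*u + v*v + w*w) * mdot / cell_volume  # kinetic energy flux, W/m^3
    return s_mass, s_mom, s_energy

# e.g., 1e-5 kg/s injected through one 1 mm^3 cell, blowing mostly wall-normal
print(micro_jet_source_terms(1e-9, 1e-5, (5.0, 0.0, 150.0)))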
International Natural Gas Model 2011, Model Documentation Report
2013-01-01
This report documents the objectives, analytical approach and development of the International Natural Gas Model (INGM). It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.
A joint source-channel distortion model for JPEG compressed images.
Sabir, Muhammad F; Sheikh, Hamid Rahim; Heath, Robert W; Bovik, Alan C
2006-06-01
The need for efficient joint source-channel coding (JSCC) is growing as new multimedia services are introduced in commercial wireless communication systems. An important component of practical JSCC schemes is a distortion model that can predict the quality of compressed digital multimedia such as images and videos. The usual approach in the JSCC literature for quantifying the distortion due to quantization and channel errors is to estimate it for each image using the statistics of the image for a given signal-to-noise ratio (SNR). This is not an efficient approach in the design of real-time systems because of the computational complexity. A more useful and practical approach would be to design JSCC techniques that minimize average distortion for a large set of images based on some distortion model rather than carrying out per-image optimizations. However, models for estimating average distortion due to quantization and channel bit errors in a combined fashion for a large set of images are not available for practical image or video coding standards employing entropy coding and differential coding. This paper presents a statistical model for estimating the distortion introduced in progressive JPEG compressed images due to quantization and channel bit errors in a joint manner. Statistical modeling of important compression techniques such as Huffman coding, differential pulse-coding modulation, and run-length coding are included in the model. Examples show that the distortion in terms of peak signal-to-noise ratio (PSNR) can be predicted within a 2-dB maximum error over a variety of compression ratios and bit-error rates. To illustrate the utility of the proposed model, we present an unequal power allocation scheme as a simple application of our model. Results show that it gives a PSNR gain of around 6.5 dB at low SNRs, as compared to equal power allocation.
A universal Model-R Coupler to facilitate the use of R functions for model calibration and analysis
Wu, Yiping; Liu, Shuguang; Yan, Wende
2014-01-01
Mathematical models are useful in various fields of science and engineering. However, it is a challenge to make a model utilize the open and growing set of functions (e.g., model inversion) on the R platform because of the requirement of accessing and revising the model's source code. To overcome this barrier, we developed a universal tool that aims to convert a model developed in any computer language into an R function, using the template and instruction concept of the Parameter ESTimation program (PEST) and the operational structure of the R-Soil and Water Assessment Tool (R-SWAT). The developed tool (Model-R Coupler) is promising because users of any model can connect an external algorithm (written in R) with their model to implement various model behavior analyses (e.g., parameter optimization, sensitivity and uncertainty analysis, performance evaluation, and visualization) without accessing or modifying the model's source code.
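The coupler itself targets R, but the underlying template-and-instruction pattern it borrows from PEST can be sketched in a few lines of Python; the file names, placeholder convention, and executable below are hypothetical.

import subprocess
from pathlib import Path

def run_external_model(params, template="model.in.tpl", infile="model.in",
                       outfile="model.out", exe="./my_model"):
    """Wrap an arbitrary executable model as a plain function: write its input
    file from a template, run it, and parse the output. This is the same
    template-and-instruction idea used by PEST-style couplers, so calibration
    or sensitivity routines can call the model without touching its source code.
    """
    text = Path(template).read_text()
    for name, value in params.items():               # replace @name@ placeholders
        text = text.replace(f"@{name}@", repr(value))
    Path(infile).write_text(text)
    subprocess.run([exe], check=True)                 # execute the compiled model
    # Instruction step: here simply read one simulated value per output line.
    return [float(line) for line in Path(outfile).read_text().split()]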
Correlation estimation and performance optimization for distributed image compression
NASA Astrophysics Data System (ADS)
He, Zhihai; Cao, Lei; Cheng, Hui
2006-01-01
Correlation estimation plays a critical role in resource allocation and rate control for distributed data compression. A Wyner-Ziv encoder for distributed image compression is often considered as a lossy source encoder followed by a lossless Slepian-Wolf encoder. The source encoder consists of spatial transform, quantization, and bit plane extraction. In this work, we find that Gray code, which has been extensively used in digital modulation, is able to significantly improve the correlation between the source data and its side information. Theoretically, we analyze the behavior of Gray code within the context of distributed image compression. Using this theoretical model, we are able to efficiently allocate the bit budget and determine the code rate of the Slepian-Wolf encoder. Our experimental results demonstrate that the Gray code, coupled with accurate correlation estimation and rate control, significantly improves the picture quality, by up to 4 dB, over the existing methods for distributed image compression.
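The benefit of Gray mapping claimed above is easy to reproduce with a small numerical experiment: for side information that differs from the source by small amounts, Gray-coded bit planes disagree in fewer bits than natural-binary ones. The sketch below is a generic illustration, not the authors' correlation-estimation procedure.

import numpy as np

def binary_to_gray(n):
    """Standard reflected binary (Gray) code: adjacent integers differ in one bit."""
    return n ^ (n >> 1)

def bitplane_hamming(x, y, bits=8, gray=False):
    """Average per-sample Hamming distance between the bit planes of a source
    x and its side information y, with and without Gray mapping."""
    if gray:
        x, y = binary_to_gray(x), binary_to_gray(y)
    diff = x ^ y
    return np.mean([np.mean((diff >> b) & 1) for b in range(bits)])

rng = np.random.default_rng(0)
x = rng.integers(0, 256, size=100000)
y = np.clip(x + rng.integers(-3, 4, size=x.size), 0, 255)   # correlated side information
print("natural binary:", bitplane_hamming(x, y))
print("Gray code:     ", bitplane_hamming(x, y, gray=True))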
Overview of the ArbiTER edge plasma eigenvalue code
NASA Astrophysics Data System (ADS)
Baver, Derek; Myra, James; Umansky, Maxim
2011-10-01
The Arbitrary Topology Equation Reader, or ArbiTER, is a flexible eigenvalue solver that is currently under development for plasma physics applications. The ArbiTER code builds on the equation parser framework of the existing 2DX code, extending it to include a topology parser. This will give the code the capability to model problems with complicated geometries (such as multiple X-points and scrape-off layers) or model equations with arbitrary numbers of dimensions (e.g. for kinetic analysis). In the equation parser framework, model equations are not included in the program's source code. Instead, an input file contains instructions for building a matrix from profile functions and elementary differential operators. The program then executes these instructions in a sequential manner. These instructions may also be translated into analytic form, thus giving the code transparency as well as flexibility. We will present an overview of how the ArbiTER code is to work, as well as preliminary results from early versions of this code. Work supported by the U.S. DOE.
NASA Technical Reports Server (NTRS)
Eklund, Dean R.; Northam, G. B.; Mcdaniel, J. C.; Smith, Cliff
1992-01-01
A CFD (Computational Fluid Dynamics) competition was held at the Third Scramjet Combustor Modeling Workshop to assess the current state-of-the-art in CFD codes for the analysis of scramjet combustors. Solutions from six three-dimensional Navier-Stokes codes were compared for the case of staged injection of air behind a step into a Mach 2 flow. This case was investigated experimentally at the University of Virginia and extensive in-stream data was obtained. Code-to-code comparisons have been made with regard to both accuracy and efficiency. The turbulence models employed in the solutions are believed to be a major source of discrepancy between the six solutions.
Real time wind farm emulation using SimWindFarm toolbox
NASA Astrophysics Data System (ADS)
Topor, Marcel
2016-06-01
This paper presents a wind farm emulation solution using an open-source Matlab/Simulink toolbox and the National Instruments cRIO platform. This work is based on the Aeolus SimWindFarm (SWF) toolbox models developed at Aalborg University, Denmark. Using the Matlab Simulink models developed in SWF, the modeling code can be exported to a real-time model using the NI VeriStand model framework, and the resulting code is integrated as a hardware-in-the-loop control on the NI 9068 platform.
Industrial Demand Module - NEMS Documentation
2014-01-01
Documents the objectives, analytical approach, and development of the National Energy Modeling System (NEMS) Industrial Demand Module. The report catalogues and describes model assumptions, computational methodology, parameter estimation techniques, and model source code.
MELCOR/CONTAIN LMR Implementation Report. FY14 Progress
DOE Office of Scientific and Technical Information (OSTI.GOV)
Humphries, Larry L; Louie, David L.Y.
2014-10-01
This report describes the preliminary implementation of the sodium thermophysical properties and the design documentation for the sodium models of CONTAIN-LMR to be implemented into MELCOR 2.1. In the past year, the implementation included two separate sets of sodium properties from two different sources. The first source is based on previous work done by Idaho National Laboratory, which modified MELCOR to include a liquid lithium equation of state as a working fluid to model nuclear fusion safety research. To minimize the impact on MELCOR, the implementation of the fusion safety database (FSD) was done by utilizing detection of the data input file as a way of invoking the FSD. The FSD methodology has been adapted for the current work, but it may be subject to modification as the project continues. The second source uses properties generated for the SIMMER code. Preliminary testing and results from this implementation of sodium properties are given. This year, the design document for the CONTAIN-LMR sodium models, such as the two-condensable option, sodium spray fire, and sodium pool fire, is being developed. This design document is intended to serve as a guide for the MELCOR implementation. In addition, CONTAIN-LMR was based on an earlier version of the CONTAIN code, so many physical models that were developed since this early version of CONTAIN may not be captured by it. Although CONTAIN 2, which represents the latest development of CONTAIN, contains some sodium-specific models that are not complete, CONTAIN 2 with all sodium models implemented from CONTAIN-LMR should be used as a comparison code for MELCOR. This implementation should be completed early next year, while the sodium models from CONTAIN-LMR are being integrated into MELCOR. For testing, CONTAIN decks have been developed for verification and validation use.
Robustness of Feedback Systems with Several Modelling Errors
1990-06-01
... feedback systems with several sources of modelling uncertainty. We assume that each source of uncertainty is modelled as a stable unstructured
Transportation Sector Module - NEMS Documentation
2017-01-01
Documents the objectives, analytical approach and development of the National Energy Modeling System (NEMS) Transportation Model (TRAN). The report catalogues and describes the model assumptions, computational methodology, parameter estimation techniques, model source code, and forecast results generated by the model.
Utilization of recently developed codes for high power Brayton and Rankine cycle power systems
NASA Technical Reports Server (NTRS)
Doherty, Michael P.
1993-01-01
Two recently developed FORTRAN computer codes for high power Brayton and Rankine thermodynamic cycle analysis for space power applications are presented. The codes were written in support of an effort to develop a series of subsystem models for multimegawatt Nuclear Electric Propulsion, but their use is not limited just to nuclear heat sources or to electric propulsion. Code development background, a description of the codes, some sample input/output from one of the codes, and future plans/implications for the use of these codes by NASA's Lewis Research Center are provided.
Source characterization of underground explosions from hydrodynamic-to-elastic coupling simulations
NASA Astrophysics Data System (ADS)
Chiang, A.; Pitarka, A.; Ford, S. R.; Ezzedine, S. M.; Vorobiev, O.
2017-12-01
A major improvement in ground motion simulation capabilities for underground explosion monitoring during the first phase of the Source Physics Experiment (SPE) is the development of a wave propagation solver that can propagate explosion-generated non-linear near-field ground motions to the far field. The calculation is done using a hybrid modeling approach with a one-way hydrodynamic-to-elastic coupling in three dimensions, where near-field motions are computed using GEODYN-L, a Lagrangian hydrodynamics code, and then passed to WPP, an elastic finite-difference code for seismic waveform modeling. The advancement in ground motion simulation capabilities gives us the opportunity to assess moment tensor inversion of a realistic volumetric source with near-field effects in a controlled setting, where we can evaluate the recovered source properties as a function of modeling parameters (i.e., the velocity model), and can provide insights into previous source studies on SPE Phase I chemical shots and other historical nuclear explosions. For example, the moment tensor inversion of far-field SPE seismic data demonstrated that, while vertical motions are well modeled using existing velocity models, large misfits still persist in predicting tangential shear-wave motions from explosions. One possible explanation we can explore is errors and uncertainties in the underlying Earth model. Here we investigate the recovered moment tensor solution, particularly the non-volumetric component, by inverting far-field ground motions simulated from physics-based explosion source models in fractured material, where the physics-based source models are based on the modeling of SPE-4P, SPE-5 and SPE-6 near-field data. The hybrid modeling approach provides new prospects for modeling explosion sources and understanding the uncertainties associated with them.
Open-Source as a strategy for operational software - the case of Enki
NASA Astrophysics Data System (ADS)
Kolberg, Sjur; Bruland, Oddbjørn
2014-05-01
Since 2002, SINTEF Energy has been developing what is now known as the Enki modelling system. This development has been financed by Norway's largest hydropower producer, Statkraft, motivated by a desire for distributed hydrological models in operational use. As the owner of the source code, Statkraft has recently decided on Open Source as a strategy for further development, and for migration from an R&D context to operational use. A cooperation project is currently being carried out among SINTEF Energy, 7 large Norwegian hydropower producers including Statkraft, three universities and one software company. Of course, the most immediate task is that of software maturing. A more important challenge, however, is one of gaining experience within the operational hydropower industry. A transition from lumped to distributed models is likely to also require revision of measurement programs, calibration strategy, and the use of GIS and modern data sources like weather radar and satellite imagery. On the other hand, map-based visualisations enable a richer information exchange between hydrologic forecasters and power market traders. The operating context of a distributed hydrology model within hydropower planning is far from settled. Being both a modelling framework and a library of plugin routines to build models from, Enki supports the flexibility needed in this situation. Recent development has separated the core from the user interface, paving the way for a scripting API, cross-platform compilation, and front-end programs serving different degrees of flexibility, robustness and security. The open source strategy invites anyone to use Enki and to develop and contribute new modules. Once tested, the same modules are available for the operational versions of the program. A core challenge is to offer rigid testing procedures and mechanisms for rejecting routines in an operational setting, without limiting experimentation with new modules. The Open Source strategy also has implications for building and maintaining competence around the source code and the advanced hydrological and statistical routines in Enki. Originally developed by hydrologists, the Enki code is now approaching a state where maintenance requires a background in professional software development. Without the advantage of proprietary source code, both hydrologic improvements and software maintenance depend on donations or development support on a case-by-case basis, a situation well known within the open source community. It remains to be seen whether these mechanisms suffice to keep Enki at the maintenance level required by the hydropower sector. ENKI is available from www.opensource-enki.org.
Support Center for Regulatory Atmospheric Modeling (SCRAM)
This technical site provides access to air quality models (including computer code, input data, and model processors) and other mathematical simulation techniques used in assessing air emissions control strategies and source impacts.
Zhang, Yimei; Li, Shuai; Wang, Fei; Chen, Zhuang; Chen, Jie; Wang, Liqun
2018-09-01
Toxicity of heavy metals from industrialization poses critical concern, and analysis of the sources associated with potential human health risks is of unique significance. Assessing the human health risk of pollution sources (factored health risk) concurrently for the whole region and its sub-regions can provide more instructive information to protect specific potential victims. In this research, we establish a new expression model of human health risk based on quantitative analysis of source contributions at different spatial scales. The larger-scale grids and their spatial codes are used to initially identify the level of pollution risk, the type of pollution source, and the sensitive population at high risk. The smaller-scale grids and their spatial codes are used to identify the contribution of various pollution sources to each sub-region (larger grid) and to assess the health risks posed by each source for each sub-region. The results of the case study show that, for children (a sensitive population, with school and residential areas as their major regions of activity), the major pollution sources are the abandoned lead-acid battery plant (ALP), traffic emissions, and agricultural activity. The new models and results of this research present effective spatial information and a useful model for quantifying the hazards of source categories to human health at complex industrial systems in the future. Copyright © 2018 Elsevier Ltd. All rights reserved.
Simulation of partially coherent light propagation using parallel computing devices
NASA Astrophysics Data System (ADS)
Magalhães, Tiago C.; Rebordão, José M.
2017-08-01
Light acquires or loses coherence, and coherence is one of the few optical observables. Spectra can be derived from coherence functions, and understanding any interferometric experiment also relies upon coherence functions. Beyond the two limiting cases (full coherence or incoherence), the coherence of light is always partial and it changes with propagation. We have implemented a code to compute the propagation of partially coherent light from the source plane to the observation plane using parallel computing devices (PCDs). In this paper, we restrict the propagation to free space only. To this end, we used the Open Computing Language (OpenCL) and the open-source toolkit PyOpenCL, which gives access to OpenCL parallel computation through Python. To test our code, we chose two coherence source models: an incoherent source and a Gaussian Schell-model source. In the former case, we considered two different source shapes: circular and rectangular. The results were compared to the theoretical values. Our implemented code allows one to choose between the PyOpenCL implementation and a standard one, i.e., using the CPU only. To test the computation time for each implementation (PyOpenCL and standard), we used several computer systems with different CPUs and GPUs. We used powers of two for the dimensions of the cross-spectral density matrix (e.g., 32^4, 64^4), and a significant speed increase is observed in the PyOpenCL implementation when compared to the standard one. This can be an important tool for studying new source models.
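For reference, the Gaussian Schell-model source used as one of the two test cases can be written down directly as a cross-spectral density matrix. The short NumPy sketch below builds it on a 1-D grid (the paper's implementation is 2-D and OpenCL-accelerated, and the widths chosen here are arbitrary assumptions); propagation would then consist of integrating this matrix against free-space propagators for each pair of observation points.

import numpy as np

def gsm_cross_spectral_density(x, sigma_s, sigma_mu):
    """Cross-spectral density matrix W(x1, x2) of a 1-D Gaussian Schell-model
    source: Gaussian spectral density of width sigma_s and Gaussian degree of
    coherence of width sigma_mu."""
    x1, x2 = np.meshgrid(x, x, indexing="ij")
    spectral = np.exp(-(x1**2 + x2**2) / (4.0 * sigma_s**2))     # sqrt(S(x1) S(x2))
    coherence = np.exp(-(x1 - x2) ** 2 / (2.0 * sigma_mu**2))    # mu(x1 - x2)
    return spectral * coherence

x = np.linspace(-2e-3, 2e-3, 256)                  # 4 mm source plane, 256 samples
W = gsm_cross_spectral_density(x, sigma_s=0.5e-3, sigma_mu=0.1e-3)
print(W.shape, np.trace(W) * (x[1] - x[0]))        # total power (integral of S(x))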
Source Term Model for Vortex Generator Vanes in a Navier-Stokes Computer Code
NASA Technical Reports Server (NTRS)
Waithe, Kenrick A.
2004-01-01
A source term model for an array of vortex generators was implemented into a non-proprietary Navier-Stokes computer code, OVERFLOW. The source term models the side force created by a vortex generator vane. The model is obtained by introducing a side force to the momentum and energy equations that can adjust its strength automatically based on the local flow. The model was tested and calibrated by comparing data from numerical simulations and experiments of a single low profile vortex generator vane on a flat plate. In addition, the model was compared to experimental data of an S-duct with 22 co-rotating, low profile vortex generators. The source term model allowed a grid reduction of about seventy percent when compared with the numerical simulations performed on a fully gridded vortex generator on a flat plate without adversely affecting the development and capture of the vortex created. The source term model was able to predict the shape and size of the stream-wise vorticity and velocity contours very well when compared with both numerical simulations and experimental data. The peak vorticity and its location were also predicted very well when compared to numerical simulations and experimental data. The circulation predicted by the source term model matches the prediction of the numerical simulation. The source term model predicted the engine fan face distortion and total pressure recovery of the S-duct with 22 co-rotating vortex generators very well. The source term model allows a researcher to quickly investigate different locations of individual or a row of vortex generators. The researcher is able to conduct a preliminary investigation with minimal grid generation and computational time.
Syndrome source coding and its universal generalization
NASA Technical Reports Server (NTRS)
Ancheta, T. C., Jr.
1975-01-01
A method of using error-correcting codes to obtain data compression, called syndrome-source-coding, is described in which the source sequence is treated as an error pattern whose syndrome forms the compressed data. It is shown that syndrome-source-coding can achieve arbitrarily small distortion with the number of compressed digits per source digit arbitrarily close to the entropy of a binary memoryless source. A universal generalization of syndrome-source-coding is formulated which provides robustly-effective, distortionless, coding of source ensembles.
Scalable video transmission over Rayleigh fading channels using LDPC codes
NASA Astrophysics Data System (ADS)
Bansal, Manu; Kondi, Lisimachos P.
2005-03-01
In this paper, we investigate the important problem of efficiently utilizing the available resources for video transmission over wireless channels while maintaining good decoded video quality and resilience to channel impairments. Our system consists of a video codec based on the 3-D set partitioning in hierarchical trees (3-D SPIHT) algorithm and employs two different schemes using low-density parity check (LDPC) codes for channel error protection. The first method uses the serial concatenation of a constant-rate LDPC code and rate-compatible punctured convolutional (RCPC) codes. A cyclic redundancy check (CRC) is used to detect transmission errors. In the other scheme, we use a product code structure consisting of a constant-rate LDPC/CRC code across the rows of the 'blocks' of source data and an erasure-correcting systematic Reed-Solomon (RS) code as the column code. In both schemes introduced here, we use fixed-length source packets protected with unequal forward error correction coding, ensuring a strictly decreasing protection across the bitstream. A Rayleigh flat-fading channel with additive white Gaussian noise (AWGN) is modeled for the transmission. A rate-distortion optimization algorithm is developed and carried out for the selection of source coding and channel coding rates using Lagrangian optimization. The experimental results demonstrate the effectiveness of this system under different wireless channel conditions, and both of the proposed methods (LDPC+RCPC/CRC and RS+LDPC/CRC) outperform more conventional schemes such as those employing RCPC/CRC.
Synthetic neutron camera and spectrometer in JET based on AFSI-ASCOT simulations
NASA Astrophysics Data System (ADS)
Sirén, P.; Varje, J.; Weisen, H.; Koskela, T.; contributors, JET
2017-09-01
The ASCOT Fusion Source Integrator (AFSI) has been used to calculate neutron production rates and spectra corresponding to the JET 19-channel neutron camera (KN3) and the time-of-flight spectrometer (TOFOR) as ideal diagnostics, without detector-related effects. AFSI calculates fusion product distributions in 4D, based on Monte Carlo integration from arbitrary reactant distribution functions. The distribution functions were calculated by the ASCOT Monte Carlo particle-orbit-following code for thermal, NBI and ICRH particle reactions. Fusion cross-sections were defined based on the Bosch-Hale model, and both DD and DT reactions have been included. Neutrons generated by AFSI-ASCOT simulations have already been applied as a neutron source for the Serpent neutron transport code in ITER studies. Additionally, AFSI has been selected as the main fusion-product generator in the complete analysis chain ASCOT - AFSI - SERPENT (neutron and gamma transport Monte Carlo code) - APROS (system and power plant modelling code), which encompasses the plasma as an energy source, heat deposition in plant structures, as well as cooling and balance-of-plant in DEMO applications and other reactor-relevant analyses. This conference paper presents the first results and validation of the AFSI DD fusion model for different auxiliary heating scenarios (NBI, ICRH) with very different fast-particle distribution functions. Both calculated quantities (production rates and spectra) have been compared with experimental data from KN3 and synthetic spectrometer data from the ControlRoom code. No unexplained differences have been observed. In future work, AFSI will be extended to synthetic gamma diagnostics, and it will be used as part of the neutron transport calculation chain to model real diagnostics instead of ideal synthetic ones for quantitative benchmarking.
A comparison of skyshine computational methods.
Hertel, Nolan E; Sweezy, Jeremy E; Shultis, J Kenneth; Warkentin, J Karl; Rose, Zachary J
2005-01-01
A variety of methods employing radiation transport and point-kernel codes have been used to model two skyshine problems. The first problem is a 1 MeV point source of photons on the surface of the earth inside a 2 m tall and 1 m radius silo having black walls. The skyshine radiation downfield from the point source was estimated with and without a 30-cm-thick concrete lid on the silo. The second benchmark problem is to estimate the skyshine radiation downfield from 12 cylindrical canisters emplaced in a low-level radioactive waste trench. The canisters are filled with ion-exchange resin with a representative radionuclide loading, largely ⁶⁰Co, ¹³⁴Cs and ¹³⁷Cs. The solution methods include use of the MCNP code to solve the problem by directly employing variance reduction techniques, the single-scatter point kernel code GGG-GP, the QADMOD-GP point kernel code, the COHORT Monte Carlo code, the NAC International version of the SKYSHINE-III code, the KSU hybrid method and the associated KSU skyshine codes.
NASA Astrophysics Data System (ADS)
Hecht-Nielsen, Robert
1997-04-01
A new universal one-chart smooth manifold model for vector information sources is introduced. Natural coordinates (a particular type of chart) for such data manifolds are then defined. Uniformly quantized natural coordinates form an optimal vector quantization code for a general vector source. Replicator neural networks (a specialized type of multilayer perceptron with three hidden layers) are then introduced. As properly configured examples of replicator networks approach minimum mean squared error (e.g., via training and architecture adjustment using randomly chosen vectors from the source), these networks automatically develop a mapping which, in the limit, produces natural coordinates for arbitrary source vectors. The new concept of removable noise (a noise model applicable to a wide variety of real-world noise processes) is then discussed. Replicator neural networks, when configured to approach minimum mean squared reconstruction error (e.g., via training and architecture adjustment on randomly chosen examples from a vector source, each with randomly chosen additive removable noise contamination), in the limit eliminate removable noise and produce natural coordinates for the data vector portions of the noise-corrupted source vectors. Considerations regarding the selection of the dimension of a data manifold source model and the training/configuration of replicator neural networks are discussed.
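As a minimal sketch of the replicator architecture described above (a multilayer perceptron with three hidden layers whose output reproduces its input), the forward pass below exposes the middle hidden layer as the candidate "natural coordinates". Layer widths, the activation choice, and the random data are illustrative assumptions; a real replicator network would be trained to minimize the reconstruction error.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(n_in, n_out):
    """Random weight matrix and zero bias for one dense layer."""
    return rng.normal(0.0, 1.0 / np.sqrt(n_in), (n_in, n_out)), np.zeros(n_out)

# Replicator network: input -> hidden -> bottleneck -> hidden -> output (same size
# as the input).  The bottleneck activations play the role of natural coordinates.
d, h, m = 8, 16, 2            # input dim, outer hidden width, manifold (bottleneck) dim
params = [layer(d, h), layer(h, m), layer(m, h), layer(h, d)]

def forward(x):
    a = x
    acts = []
    for i, (W, b) in enumerate(params):
        z = a @ W + b
        a = z if i == len(params) - 1 else np.tanh(z)   # linear output layer
        acts.append(a)
    return a, acts[1]          # reconstruction and bottleneck coordinates

x = rng.normal(size=(5, d))                 # vectors drawn from some source
x_hat, coords = forward(x)
mse = np.mean((x - x_hat) ** 2)             # training would drive this toward its minimum
```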
NASA Astrophysics Data System (ADS)
Wünderlich, D.; Mochalskyy, S.; Montellano, I. M.; Revel, A.
2018-05-01
Particle-in-cell (PIC) codes have been used since the early 1960s to calculate self-consistently the motion of charged particles in plasmas, taking into account external electric and magnetic fields as well as the fields created by the particles themselves. Because of the very small time steps (on the order of the inverse plasma frequency) and mesh sizes required, the computational cost can be very high, and it increases drastically with increasing plasma density and size of the calculation domain. Thus, usually small computational domains and/or reduced dimensionality are used. In recent years, the available central processing unit (CPU) power has increased strongly. Together with massive parallelization of the codes, it is now possible to describe in 3D the extraction of charged particles from a plasma, using calculation domains with an edge length of several centimeters that include one extraction aperture, the plasma in the direct vicinity of the aperture, and part of the extraction system. Large negative hydrogen or deuterium ion sources are essential parts of the neutral beam injection (NBI) system in future fusion devices like the international fusion experiment ITER and the demonstration reactor (DEMO). For ITER NBI, RF-driven sources with a source area of 0.9 × 1.9 m² and 1280 extraction apertures will be used. The extraction of negative ions is accompanied by the co-extraction of electrons, which are deflected onto an electron dump. Typically, the maximum extracted negative ion current is limited by the amount and the temporal instability of the co-extracted electrons, especially for operation in deuterium. Different PIC codes are available for the extraction region of large RF-driven negative ion sources for fusion. Additionally, some effort is ongoing in developing codes that describe in a simplified manner (coarser mesh or reduced dimensionality) the plasma of the whole ion source. The presentation first gives a brief overview of the current status of the ion source development for ITER NBI and of the PIC method. Different PIC codes for the extraction region are introduced, as well as their coupling to codes describing the whole source (PIC codes or fluid codes). Different physical and numerical aspects of applying PIC codes to negative hydrogen ion sources for fusion are presented and discussed, together with selected code results. The main focus of future calculations will be the meniscus formation and identifying measures for reducing the co-extracted electrons, in particular for deuterium operation. Recent results of the 3D PIC code ONIX (calculation domain: one extraction aperture and its vicinity) for the ITER prototype source (1/8 the size of the ITER NBI source) are presented.
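For orientation, the structure of the PIC cycle (deposit charge, solve the field, gather, push particles) can be shown in a heavily simplified 1D electrostatic form; this sketch uses normalized units and nearest-grid-point weighting and omits everything that makes ion-source codes such as ONIX demanding (3D geometry, magnetic fields, collisions, surface production, boundaries).

```python
import numpy as np

def pic_step(x, v, q_m, n_grid, dx, dt, L):
    """One leapfrog step of a 1D periodic electrostatic PIC cycle (illustrative)."""
    # 1) Charge deposition (nearest-grid-point weighting, normalized units).
    rho = np.zeros(n_grid)
    idx = np.round(x / dx).astype(int) % n_grid
    np.add.at(rho, idx, 1.0)
    rho = rho / dx
    rho -= rho.mean()                       # neutralizing background
    # 2) Field solve: Poisson equation in Fourier space, E = -dphi/dx.
    k = 2.0 * np.pi * np.fft.fftfreq(n_grid, d=dx)
    k[0] = 1.0                              # avoid division by zero; mode zeroed below
    E_k = -1j * np.fft.fft(rho) / k
    E_k[0] = 0.0
    E = np.real(np.fft.ifft(E_k))
    # 3) Gather the field at particle positions and push the particles.
    v = v + q_m * E[idx] * dt
    x = (x + v * dt) % L
    return x, v
```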
JSPAM: A restricted three-body code for simulating interacting galaxies
NASA Astrophysics Data System (ADS)
Wallin, J. F.; Holincheck, A. J.; Harvey, A.
2016-07-01
Restricted three-body codes have a proven ability to recreate much of the disturbed morphology of actual interacting galaxies. As more sophisticated n-body models were developed and computer speed increased, restricted three-body codes fell out of favor. However, their supporting role for performing wide searches of parameter space when fitting orbits to real systems demonstrates a continuing need for their use. Here we present the model and algorithm used in the JSPAM code. A precursor of this code was originally described in 1990, and was called SPAM. We have recently updated the software with an alternate potential and a treatment of dynamical friction to more closely mimic the results from n-body tree codes. The code is released publicly for use under the terms of the Academic Free License ("AFL") v. 3.0 and has been added to the Astrophysics Source Code Library.
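The restricted three-body idea behind SPAM/JSPAM can be sketched as massless test particles (the disc) responding to the softened point-mass potentials of the two galaxy centres, which themselves follow a two-body orbit. The softening length, units, and time stepping below are illustrative assumptions, and JSPAM's alternate potential and dynamical-friction treatment are not represented.

```python
import numpy as np

G = 1.0          # gravitational constant in model units (assumption)
EPS = 0.1        # Plummer softening length (assumption)

def accel(pos, centres, masses):
    """Acceleration of massless test particles from two softened point masses."""
    a = np.zeros_like(pos)
    for c, m in zip(centres, masses):
        d = c - pos
        r2 = np.sum(d * d, axis=1) + EPS ** 2
        a += G * m * d / r2[:, None] ** 1.5
    return a

def step(pos, vel, centres, masses, dt):
    """Kick-drift-kick (leapfrog) step for the disc particles.  The two centres
    would be advanced separately as a two-body problem, optionally with a
    dynamical-friction term as in JSPAM."""
    vel = vel + 0.5 * dt * accel(pos, centres, masses)
    pos = pos + dt * vel
    vel = vel + 0.5 * dt * accel(pos, centres, masses)
    return pos, vel
```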
Power Balance and Impurity Studies in TCS
NASA Astrophysics Data System (ADS)
Grossnickle, J. A.; Pietrzyk, Z. A.; Vlases, G. C.
2003-10-01
A "zero-dimension" power balance model was developed based on measurements of absorbed power, radiated power, absolute D_α, temperature, and density for the TCS device. Radiation was determined to be the dominant source of power loss for medium to high density plasmas. The total radiated power was strongly correlated with the Oxygen line radiation. This suggests Oxygen is the dominant radiating species, which was confirmed by doping studies. These also extrapolate to a Carbon content below 1.5%. Determining the source of the impurities is an important question that must be answered for the TCS upgrade. Preliminary indications are that the primary sources of Oxygen are the stainless steel end cones. A Ti gettering system is being installed to reduce this Oxygen source. A field line code has been developed for use in tracking where open field lines terminate on the walls. Output from this code is also used to generate grids for an impurity tracking code.
Plasma separation process. Betacell (BCELL) code, user's manual
NASA Astrophysics Data System (ADS)
Taherzadeh, M.
1987-11-01
The emergence of clearly defined applications for long-life, reliable power sources, both small and large, has given the design and production of betavoltaic systems new life. Moreover, because of the availability of the Plasma Separation Process (PSP) at TRW, it is now possible to separate the most desirable radioisotopes for betacell power generating devices. A computer code, named BCELL, has been developed to model the betavoltaic concept using available, up-to-date source/cell parameters. In this program, attempts have been made to determine the maximum efficiency of the betacell energy device, its degradation due to the emitting source radiation, and source/cell lifetime power reduction processes. Additionally, Schottky and PN-junction devices are compared for betacell battery design purposes. Computer code runs have been made to determine the JV distribution function and the upper limit of the betacell generated power for specified energy sources. A Ni beta-emitting radioisotope was used for the energy source, and certain semiconductors were used for the converter subsystem of the betacell system. Some results for a promethium source are also given here for comparison.
Yoriyaz, Hélio; Moralles, Maurício; Siqueira, Paulo de Tarso Dalledone; Guimarães, Carla da Costa; Cintra, Felipe Belonsi; dos Santos, Adimir
2009-11-01
Radiopharmaceutical applications in nuclear medicine require a detailed dosimetry estimate of the radiation energy delivered to the human tissues. Over the past years, several publications addressed the problem of internal dose estimation in volumes of several sizes considering photon and electron sources. Most of them used Monte Carlo radiation transport codes. Despite the widespread use of these codes due to the variety of resources and potentials they offer to carry out dose calculations, several aspects like physical models, cross sections, and numerical approximations used in the simulations still remain an object of study. An accurate dose estimate depends on the correct selection of a set of simulation options that should be carefully chosen. This article presents an analysis of several simulation options provided by two of the most used codes worldwide: MCNP and GEANT4. For this purpose, comparisons of absorbed fraction estimates obtained with different physical models, cross sections, and numerical approximations are presented for spheres of several sizes and composed of five different biological tissues. Considerable discrepancies have been found in some cases not only between the different codes but also between different cross sections and algorithms in the same code. Maximum differences found between the two codes are 5.0% and 10%, respectively, for photons and electrons. Even for problems as simple as spheres with uniform radiation sources, the set of parameters chosen by any Monte Carlo code significantly affects the final results of a simulation, demonstrating the importance of the correct choice of parameters in the simulation.
Metabolic Free Energy and Biological Codes: A 'Data Rate Theorem' Aging Model.
Wallace, Rodrick
2015-06-01
A famous argument by Maturana and Varela (Autopoiesis and cognition. Reidel, Dordrecht, 1980) holds that the living state is cognitive at every scale and level of organization. Since it is possible to associate many cognitive processes with 'dual' information sources, pathologies can sometimes be addressed using statistical models based on the Shannon Coding, the Shannon-McMillan Source Coding, the Rate Distortion, and the Data Rate Theorems, which impose necessary conditions on information transmission and system control. Deterministic-but-for-error biological codes do not directly invoke cognition, but may be essential subcomponents within larger cognitive processes. A formal argument, however, places such codes within a similar framework, with metabolic free energy serving as a 'control signal' stabilizing biochemical code-and-translator dynamics in the presence of noise. Demand beyond available energy supply triggers punctuated destabilization of the coding channel, affecting essential biological functions. Aging, normal or prematurely driven by psychosocial or environmental stressors, must interfere with the routine operation of such mechanisms, initiating the chronic diseases associated with senescence. Amyloid fibril formation, intrinsically disordered protein logic gates, and cell surface glycan/lectin 'kelp bed' logic gates are reviewed from this perspective. The results generalize beyond coding machineries having easily recognizable symmetry modes, and strip a layer of mathematical complication from the study of phase transitions in nonequilibrium biological systems.
Spectral characteristics of convolutionally coded digital signals
NASA Technical Reports Server (NTRS)
Divsalar, D.
1979-01-01
The power spectral density of the output symbol sequence of a convolutional encoder is computed for two different input symbol stream source models, namely, an NRZ signaling format and a first order Markov source. In the former, the two signaling states of the binary waveform are not necessarily assumed to occur with equal probability. The effects of alternate symbol inversion on this spectrum are also considered. The mathematical results are illustrated with many examples corresponding to optimal performance codes.
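The spectra discussed above can be cross-checked numerically (this is not the paper's analytical derivation): generate a first-order Markov binary source, encode it with an illustrative rate-1/2 convolutional encoder (generators 7 and 5 octal), map the symbols to an NRZ waveform, and estimate the power spectral density with an averaged periodogram. The transition probability and encoder choice are assumptions for illustration.

```python
import numpy as np

def markov_bits(n, p_stay=0.9, seed=1):
    """First-order Markov binary source: keeps its current value with prob. p_stay."""
    rng = np.random.default_rng(seed)
    bits = np.empty(n, dtype=int)
    bits[0] = rng.integers(2)
    flips = rng.random(n - 1) > p_stay
    for i in range(1, n):
        bits[i] = bits[i - 1] ^ int(flips[i - 1])
    return bits

def conv_encode(bits):
    """Rate-1/2 convolutional encoder, generators (7, 5) octal, constraint length 3."""
    s1 = s2 = 0
    out = []
    for b in bits:
        out.append(b ^ s1 ^ s2)   # generator 111
        out.append(b ^ s2)        # generator 101
        s1, s2 = b, s1
    return np.array(out)

def psd_estimate(symbols, nfft=256):
    """Averaged periodogram of the NRZ-mapped (+/-1) coded symbol stream."""
    x = 2.0 * symbols - 1.0
    segments = x[: (len(x) // nfft) * nfft].reshape(-1, nfft)
    return np.mean(np.abs(np.fft.rfft(segments, axis=1)) ** 2, axis=0) / nfft

spectrum = psd_estimate(conv_encode(markov_bits(100_000)))
```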
Airport-Noise Levels and Annoyance Model (ALAMO) system's reference manual
NASA Technical Reports Server (NTRS)
Deloach, R.; Donaldson, J. L.; Johnson, M. J.
1986-01-01
The Airport-Noise Levels and Annoyance Model (ALAMO) is described in terms of its constituent modules, the ALAMO procedure files necessary for system execution, and the source code documentation associated with code development at Langley Research Center. The modules constituting ALAMO are presented both in flow-graph form and through a description of the subroutines and functions that comprise them.
DOE Office of Scientific and Technical Information (OSTI.GOV)
2015-12-09
PV_LIB comprises a library of MATLAB code for modeling photovoltaic (PV) systems. Included are functions to compute solar position and to estimate irradiance in the PV system's plane of array, cell temperature, PV module electrical output, and conversion from DC to AC power. Also included are functions that aid in determining parameters for module performance models from module characterization testing. PV_LIB is open source code primarily intended for research and academic purposes. All algorithms are documented in openly available literature with the appropriate references included in comments within the code.
Parallelisation study of a three-dimensional environmental flow model
NASA Astrophysics Data System (ADS)
O'Donncha, Fearghal; Ragnoli, Emanuele; Suits, Frank
2014-03-01
There are many simulation codes in the geosciences that are serial and cannot take advantage of the parallel computational resources commonly available today. One model important for our work in coastal ocean current modelling is EFDC, a Fortran 77 code configured for optimal deployment on vector computers. In order to take advantage of our cache-based, blade computing system we restructured EFDC from serial to parallel, thereby allowing us to run existing models more quickly, and to simulate larger and more detailed models that were previously impractical. Since the source code for EFDC is extensive and involves detailed computation, it is important to do such a port in a manner that limits changes to the files, while achieving the desired speedup. We describe a parallelisation strategy involving surgical changes to the source files to minimise error-prone alteration of the underlying computations, while allowing load-balanced domain decomposition for efficient execution on a commodity cluster. The conjugate gradient solver posed particular challenges, because its implicit non-local communication hinders standard domain partitioning schemes; a number of techniques to address this in a feasible, computationally efficient manner are discussed. The parallel implementation demonstrates good scalability in combination with a novel domain partitioning scheme that specifically handles mixed water/land regions commonly found in coastal simulations. The approach presented here represents a practical methodology to rejuvenate legacy code on a commodity blade cluster with reasonable effort; our solution has direct application to other similar codes in the geosciences.
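The load-balancing idea for mixed water/land domains can be sketched as follows (a simplification, not the paper's actual decomposition or halo-exchange scheme): partition the grid columns into contiguous strips so that each rank receives roughly the same number of active (wet) cells rather than the same number of columns.

```python
import numpy as np

def balanced_strips(is_water, n_ranks):
    """Partition grid columns into n_ranks contiguous strips with roughly equal
    numbers of active (water) cells.

    is_water : 2-D boolean array (rows x columns), True for wet cells.
    Returns a list of (first_column, last_column_exclusive) tuples.
    """
    per_column = is_water.sum(axis=0)
    cumulative = np.cumsum(per_column)
    total = cumulative[-1]
    bounds, start = [], 0
    for r in range(1, n_ranks):
        target = total * r / n_ranks
        cut = int(np.searchsorted(cumulative, target)) + 1
        cut = max(cut, start + 1)            # keep every strip non-empty
        bounds.append((start, cut))
        start = cut
    bounds.append((start, is_water.shape[1]))
    return bounds
```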
Residential Demand Module - NEMS Documentation
2017-01-01
Model Documentation - Documents the objectives, analytical approach, and development of the National Energy Modeling System (NEMS) Residential Sector Demand Module. The report catalogues and describes the model assumptions, computational methodology, parameter estimation techniques, and FORTRAN source code.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao, Zhiming; Abdelaziz, Omar; Qu, Ming
This paper introduces a first-order physics-based model that accounts for the fundamental heat and mass transfer between a humid-air stream on the feed side and another flow stream on the permeate side. The model comprises a few optional submodels for membrane mass transport, and it adopts a segment-by-segment method for discretizing the heat and mass transfer governing equations for the flow streams on the feed and permeate sides. The model is able to simulate both dehumidifiers and energy recovery ventilators in parallel-flow, cross-flow, and counter-flow configurations. The predicted results compare reasonably well with the measurements. The open-source codes are written in C++. The model and open-source codes are expected to become a fundamental tool for the analysis of membrane-based dehumidification in the future.
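A minimal sketch of the segment-by-segment idea for a counter-flow exchanger follows; it transfers moisture across the membrane in proportion to the local humidity-ratio difference and iterates the two marching directions to a fixed point. The lumped transfer coefficient, the explicit update, and the omission of the energy balance are all simplifying assumptions for illustration, not the paper's formulation.

```python
def counterflow_moisture_transfer(w_feed_in, w_perm_in, m_feed, m_perm,
                                  ua_moisture, n_segments=20, n_sweeps=200):
    """Segment-by-segment moisture balance for a counter-flow membrane exchanger.

    w_*         : inlet humidity ratios (kg water / kg dry air)
    m_*         : dry-air mass flow rates (kg/s)
    ua_moisture : lumped moisture transfer coefficient times total area
                  (kg/s per unit humidity-ratio difference) -- assumed parameter
    """
    ua_seg = ua_moisture / n_segments             # fine segmentation assumed
    w_feed = [w_feed_in] * (n_segments + 1)       # node 0 = feed inlet
    w_perm = [w_perm_in] * (n_segments + 1)       # node n = permeate inlet (counter-flow)
    for _ in range(n_sweeps):
        # March the feed stream forward using the current permeate profile.
        for k in range(n_segments):
            flux = ua_seg * (w_feed[k] - w_perm[k])
            w_feed[k + 1] = w_feed[k] - flux / m_feed
        # March the permeate stream backward using the updated feed profile.
        for k in range(n_segments - 1, -1, -1):
            flux = ua_seg * (w_feed[k] - w_perm[k + 1])
            w_perm[k] = w_perm[k + 1] + flux / m_perm
    return w_feed[-1], w_perm[0]                  # outlet humidity ratios
```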
NASA Astrophysics Data System (ADS)
Bellerby, Tim
2014-05-01
The Model Integration System (MIST) is an open-source environmental modelling programming language that directly incorporates data parallelism. The language is designed to enable straightforward programming structures, such as nested loops and conditional statements, to be directly translated into sequences of whole-array (or, more generally, whole-data-structure) operations. MIST thus enables the programmer to use well-understood constructs, directly relating to the mathematical structure of the model, without having to explicitly vectorize code or worry about details of parallelization. A range of common modelling operations are supported by dedicated language structures operating on cell neighbourhoods rather than individual cells (e.g. the 3x3 local neighbourhood needed to implement an averaging image filter can be accessed from within a simple loop traversing all image pixels; a sketch of the equivalent whole-array operation is given below). This facility hides details of inter-process communication behind more mathematically relevant descriptions of model dynamics. The MIST automatic vectorization/parallelization process serves both to distribute work among available nodes and, separately, to control storage requirements for intermediate expressions, enabling operations on very large domains for which memory availability may be an issue. MIST is designed to facilitate efficient interpreter-based implementations. A prototype open source interpreter is available, coded in standard FORTRAN 95, with tools to rapidly integrate existing FORTRAN 77 or 95 code libraries. The language is formally specified and thus not limited to a FORTRAN implementation or to an interpreter-based approach. A MIST-to-FORTRAN compiler is under development and volunteers are sought to create an ANSI-C implementation. Parallel processing is currently implemented using OpenMP; however, the parallelization code is fully modularised and could be replaced with implementations using other libraries. GPU implementation is potentially possible.
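The 3x3 averaging filter mentioned above, expressed as whole-array operations in numpy, illustrates the kind of target a MIST neighbourhood loop would be translated into; MIST's own syntax is not reproduced here, and the edge handling is an arbitrary choice.

```python
import numpy as np

def neighbourhood_mean(img):
    """3x3 local average expressed as whole-array shifted additions
    (edges handled by replicating the boundary values)."""
    padded = np.pad(img, 1, mode='edge')
    acc = np.zeros(img.shape, dtype=float)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            acc += padded[1 + di:1 + di + img.shape[0],
                          1 + dj:1 + dj + img.shape[1]]
    return acc / 9.0
```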
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, C.W.; Sjoreen, A.L.; Begovich, C.L.
This code estimates concentrations in air and ground deposition rates for Atmospheric Nuclides Emitted from Multiple Operating Sources. ANEMOS is one component of an integrated Computerized Radiological Risk Investigation System (CRRIS) developed for the US Environmental Protection Agency (EPA) for use in performing radiological assessments and in developing radiation standards. The concentrations and deposition rates calculated by ANEMOS are used in subsequent portions of the CRRIS for estimating doses and risks to man. The calculations made in ANEMOS are based on the use of a straight-line Gaussian plume atmospheric dispersion model with both dry and wet deposition parameter options. The code will accommodate a ground-level or elevated point and area source or windblown source. Adjustments may be made during the calculations for surface roughness, building wake effects, terrain height, wind speed at the height of release, the variation in plume rise as a function of downwind distance, and the in-growth and decay of daughter products in the plume as it travels downwind. ANEMOS can also accommodate multiple particle sizes and clearance classes, and it may be used to calculate the dose from a finite plume of gamma-ray-emitting radionuclides passing overhead. The output of this code is presented for 16 sectors of a circular grid. ANEMOS can calculate both the sector-average concentrations and deposition rates at a given set of downwind distances in each sector and the average of these quantities over an area within each sector bounded by two successive downwind distances. ANEMOS is designed to be used primarily for continuous, long-term radionuclide releases. This report describes the models used in the code, their computer implementation, the uncertainty associated with their use, and the use of ANEMOS in conjunction with other codes in the CRRIS. A listing of the code is included in Appendix C.
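For reference, a minimal sketch of a straight-line Gaussian plume concentration and of a sector-averaged ground-level concentration is given below. The dispersion-coefficient power laws and coefficients are placeholders, not ANEMOS's actual parameterizations, and all deposition, plume-rise, and decay adjustments are omitted.

```python
import numpy as np

def gaussian_plume_conc(q, u, x, y, z, h_eff, a=0.08, b=0.06):
    """Ground-reflected Gaussian plume concentration at a receptor.

    q      : emission rate (e.g. Bq/s)
    u      : wind speed at release height (m/s)
    x,y,z  : downwind, crosswind and vertical receptor coordinates (m)
    h_eff  : effective release height (m)
    a, b   : placeholder coefficients for sigma_y = a*x, sigma_z = b*x
    """
    sigma_y, sigma_z = a * x, b * x
    lateral = np.exp(-0.5 * (y / sigma_y) ** 2)
    vertical = (np.exp(-0.5 * ((z - h_eff) / sigma_z) ** 2)
                + np.exp(-0.5 * ((z + h_eff) / sigma_z) ** 2))   # ground reflection
    return q / (2.0 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

def sector_average_conc(q, u, x, h_eff, sector_width=2.0 * np.pi / 16, b=0.06):
    """Sector-averaged ground-level concentration for one 22.5-degree sector:
    the crosswind-integrated plume spread uniformly over the sector arc."""
    sigma_z = b * x
    vertical = 2.0 * np.exp(-0.5 * (h_eff / sigma_z) ** 2)
    return q * vertical / (np.sqrt(2.0 * np.pi) * u * sigma_z * sector_width * x)
```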
NASA Technical Reports Server (NTRS)
Pratt, D. T.; Radhakrishnan, K.
1986-01-01
The design of a very fast, automatic black-box code for homogeneous, gas-phase chemical kinetics problems requires an understanding of the physical and numerical sources of computational inefficiency. Some major sources reviewed in this report are stiffness of the governing ordinary differential equations (ODE's) and its detection, choice of appropriate method (i.e., integration algorithm plus step-size control strategy), nonphysical initial conditions, and too frequent evaluation of thermochemical and kinetic properties. Specific techniques are recommended (and some advised against) for improving or overcoming the identified problem areas. It is argued that, because reactive species increase exponentially with time during induction, and all species exhibit asymptotic, exponential decay with time during equilibration, exponential-fitted integration algorithms are inherently more accurate for kinetics modeling than classical, polynomial-interpolant methods for the same computational work. But current codes using the exponential-fitted method lack the sophisticated stepsize-control logic of existing black-box ODE solver codes, such as EPISODE and LSODE. The ultimate chemical kinetics code does not exist yet, but the general characteristics of such a code are becoming apparent.
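A toy illustration of the exponential-fitted idea argued for above: for an equation locally written as dy/dt = -p*y + q with p and q frozen over the step, the update uses the exact exponential solution, which remains stable and accurate for stiff decay even with steps far larger than 1/p. The two-species "mechanism", rate constants, and step size below are illustrative, not from any real kinetics problem.

```python
import numpy as np

def exp_fitted_step(y, p, q, dt):
    """Exponential-fitted update for dy/dt = -p*y + q with p, q frozen over the step:
    the exact solution of the locally linearized equation."""
    return q / p + (y - q / p) * np.exp(-p * dt)

# Toy stiff system: species A decays quickly; B is produced from A and decays slowly.
kA, kB = 1.0e4, 1.0
y = np.array([1.0, 0.0])
dt = 1.0e-3                       # far larger than 1/kA; explicit Euler would blow up
for _ in range(1000):
    a, b = y
    a_new = exp_fitted_step(a, kA, 0.0, dt)       # dA/dt = -kA*A
    b_new = exp_fitted_step(b, kB, kA * a, dt)    # dB/dt = kA*A - kB*B (A frozen)
    y = np.array([a_new, b_new])
```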
Delayed photo-emission model for beam optics codes
Jensen, Kevin L.; Petillo, John J.; Panagos, Dimitrios N.; ...
2016-11-22
Future advanced light sources and x-ray Free Electron Lasers require fast response from the photocathode to enable short electron pulse durations as well as pulse shaping, and so the ability to model delays in emission is needed for beam optics codes. The development of a time-dependent emission model accounting for delayed photoemission due to transport and scattering is given, and its inclusion in the Particle-in-Cell code MICHELLE results in changes to the pulse shape that are described. Furthermore, the model is applied to pulse elongation of a bunch traversing an rf injector, and to the smoothing of laser jitter on a short pulse.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mellors, R J; Rodgers, A; Walter, W
2011-10-18
The Source Physics Experiment (SPE) is planning a 1000 kg (TNT equivalent) shot (SPE2) at the Nevada National Security Site (NNSS) in a granite borehole at a depth (canister centroid) of 45 meters. This shot follows an earlier shot of 100 kg in the same borehole at a depth of 60 m. Surrounding the shotpoint is an extensive array of seismic sensors arrayed in 5 radial lines extending out 2 km to the north and east and approximately 10-15 to the south and west. Prior to SPE1, simulations using a finite difference code and a 3D numerical model based on the geologic setting were conducted, which predicted higher amplitudes to the south and east in the alluvium of Yucca Flat along with significant energy on the transverse components caused by scattering within the 3D volume along with some contribution by topographic scattering. Observations from the SPE1 shot largely confirmed these predictions, although the ratio of transverse energy relative to the vertical and radial components was in general larger than predicted. A new set of simulations has been conducted for the upcoming SPE2 shot. These include improvements to the velocity model based on SPE1 observations as well as new capabilities added to the simulation code. The most significant is the addition of a new source model within the finite difference code, using the predicted ground velocities from a hydrodynamic code (GEODYN) as a driving condition on the boundaries of a cube embedded within WPP; this provides a more sophisticated source modeling capability linked directly to source site materials (e.g. granite) and the type and size of the source. Two sets of SPE2 simulations were conducted, one with a GEODYN source and 3D complex media (no topography, node spacing of 5 m) and one with a standard isotropic pre-defined time function (3D complex media with topography, node spacing of 5 m). Results were provided as time series at specific points corresponding to sensor locations for both translational (x,y,z) and rotational components. Estimates of spectral scaling for SPE2 are provided using a modified version of the Mueller-Murphy model. An estimate of expected aftershock probabilities was also provided, based on the methodology of Ford and Walter [2010].
Anthropomorphic Coding of Speech and Audio: A Model Inversion Approach
NASA Astrophysics Data System (ADS)
Feldbauer, Christian; Kubin, Gernot; Kleijn, W. Bastiaan
2005-12-01
Auditory modeling is a well-established methodology that provides insight into human perception and that facilitates the extraction of signal features that are most relevant to the listener. The aim of this paper is to provide a tutorial on perceptual speech and audio coding using an invertible auditory model. In this approach, the audio signal is converted into an auditory representation using an invertible auditory model. The auditory representation is quantized and coded. Upon decoding, it is then transformed back into the acoustic domain. This transformation converts a complex distortion criterion into a simple one, thus facilitating quantization with low complexity. We briefly review past work on auditory models and describe in more detail the components of our invertible model and its inversion procedure, that is, the method to reconstruct the signal from the output of the auditory model. We summarize attempts to use the auditory representation for low-bit-rate coding. Our approach also allows the exploitation of the inherent redundancy of the human auditory system for the purpose of multiple description (joint source-channel) coding.
WEC-SIM Phase 1 Validation Testing -- Numerical Modeling of Experiments: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruehl, Kelley; Michelen, Carlos; Bosma, Bret
2016-08-01
The Wave Energy Converter Simulator (WEC-Sim) is an open-source code jointly developed by Sandia National Laboratories and the National Renewable Energy Laboratory. It is used to model wave energy converters subjected to operational and extreme waves. In order for the WEC-Sim code to be beneficial to the wave energy community, code verification and physical model validation are necessary. This paper describes numerical modeling of the wave tank testing for the 1:33-scale experimental testing of the floating oscillating surge wave energy converter. The comparison between WEC-Sim and the Phase 1 experimental data set serves as code validation. This paper is a follow-up to the WEC-Sim paper on experimental testing, and describes the WEC-Sim numerical simulations for the floating oscillating surge wave energy converter.
Potential Job Creation in Rhode Island as a Result of Adopting New Residential Building Energy Codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scott, Michael J.; Niemeyer, Jackie M.
Are there advantages to states that adopt the most recent model building energy codes other than saving energy? For example, can the construction activity and energy savings associated with code-compliant housing units become significant sources of job creation for states if new building energy codes are adopted to cover residential construction? The U.S. Department of Energy (DOE) Building Energy Codes Program (BECP) asked Pacific Northwest National Laboratory (PNNL) to research and ascertain whether jobs would be created in individual states based on their adoption of model building energy codes. Each state in the country is dealing with high levels of unemployment, so job creation has become a top priority. Many programs have been created to combat unemployment with various degrees of failure and success. At the same time, many states still have not yet adopted the most current versions of the International Energy Conservation Code (IECC) model building energy code, when doing so could be a very effective tool in creating jobs to assist states in recovering from this economic downturn.
Potential Job Creation in Minnesota as a Result of Adopting New Residential Building Energy Codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scott, Michael J.; Niemeyer, Jackie M.
Are there advantages to states that adopt the most recent model building energy codes other than saving energy? For example, can the construction activity and energy savings associated with code-compliant housing units become significant sources of job creation for states if new building energy codes are adopted to cover residential construction? The U.S. Department of Energy (DOE) Building Energy Codes Program (BECP) asked Pacific Northwest National Laboratory (PNNL) to research and ascertain whether jobs would be created in individual states based on their adoption of model building energy codes. Each state in the country is dealing with high levels of unemployment, so job creation has become a top priority. Many programs have been created to combat unemployment with various degrees of failure and success. At the same time, many states still have not yet adopted the most current versions of the International Energy Conservation Code (IECC) model building energy code, when doing so could be a very effective tool in creating jobs to assist states in recovering from this economic downturn.
Potential Job Creation in Tennessee as a Result of Adopting New Residential Building Energy Codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scott, Michael J.; Niemeyer, Jackie M.
Are there advantages to states that adopt the most recent model building energy codes other than saving energy? For example, can the construction activity and energy savings associated with code-compliant housing units become significant sources of job creation for states if new building energy codes are adopted to cover residential construction? The U.S. Department of Energy (DOE) Building Energy Codes Program (BECP) asked Pacific Northwest National Laboratory (PNNL) to research and ascertain whether jobs would be created in individual states based on their adoption of model building energy codes. Each state in the country is dealing with high levels of unemployment, so job creation has become a top priority. Many programs have been created to combat unemployment with various degrees of failure and success. At the same time, many states still have not yet adopted the most current versions of the International Energy Conservation Code (IECC) model building energy code, when doing so could be a very effective tool in creating jobs to assist states in recovering from this economic downturn.
Potential Job Creation in Nevada as a Result of Adopting New Residential Building Energy Codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scott, Michael J.; Niemeyer, Jackie M.
Are there advantages to states that adopt the most recent model building energy codes other than saving energy? For example, can the construction activity and energy savings associated with code-compliant housing units become significant sources of job creation for states if new building energy codes are adopted to cover residential construction? The U.S. Department of Energy (DOE) Building Energy Codes Program (BECP) asked Pacific Northwest National Laboratory (PNNL) to research and ascertain whether jobs would be created in individual states based on their adoption of model building energy codes. Each state in the country is dealing with high levels of unemployment, so job creation has become a top priority. Many programs have been created to combat unemployment with various degrees of failure and success. At the same time, many states still have not yet adopted the most current versions of the International Energy Conservation Code (IECC) model building energy code, when doing so could be a very effective tool in creating jobs to assist states in recovering from this economic downturn.
Modeling Vortex Generators in the Wind-US Code
NASA Technical Reports Server (NTRS)
Dudek, Julianne C.
2010-01-01
A source term model which simulates the effects of vortex generators was implemented into the Wind-US Navier-Stokes code. The source term added to the Navier-Stokes equations simulates the lift force which would result from a vane-type vortex generator in the flowfield. The implementation is user-friendly, requiring the user to specify only three quantities for each desired vortex generator: the range of grid points over which the force is to be applied and the planform area and angle of incidence of the physical vane. The model behavior was evaluated for subsonic flow in a rectangular duct with a single vane vortex generator, supersonic flow in a rectangular duct with a counter-rotating vortex generator pair, and subsonic flow in an S-duct with 22 co-rotating vortex generators. The validation results indicate that the source term vortex generator model provides a useful tool for screening vortex generator configurations and gives comparable results to solutions computed using a gridded vane.
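As an illustration of the three per-vane inputs named above (grid-point range, planform area, incidence angle), the sketch below packages them in a simple container and distributes an estimated vane lift force evenly over the flagged cells. The data layout, the thin-airfoil lift slope, and the even distribution are assumptions for illustration, not Wind-US input syntax or its force model.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class VortexGeneratorSpec:
    """The three user-supplied quantities per vane (illustrative container only)."""
    ijk_range: tuple       # ((i1, i2), (j1, j2), (k1, k2)) grid points covered by the vane
    planform_area: float   # vane planform area, m^2
    incidence_deg: float   # vane angle of incidence relative to the local flow, degrees

def cell_lift_forces(spec, rho, speed, n_cells):
    """Distribute an estimated vane lift force evenly over the flagged cells,
    using a thin-airfoil lift slope of 2*pi per radian (assumption)."""
    alpha = np.radians(spec.incidence_deg)
    cl = 2.0 * np.pi * alpha
    lift = 0.5 * rho * speed ** 2 * spec.planform_area * cl
    return np.full(n_cells, lift / n_cells)
```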
NASA Technical Reports Server (NTRS)
Choo, Y. K.; Staiger, P. J.
1982-01-01
The code was designed to analyze performance at valves-wide-open design flow. The code can model conventional steam cycles as well as cycles that include such special features as process steam extraction and induction and feedwater heating by external heat sources. Convenience features and extensions to the special features were incorporated into the PRESTO code. The features are described, and detailed examples illustrating the use of both the original and the special features are given.
General Mission Analysis Tool (GMAT) Architectural Specification. Draft
NASA Technical Reports Server (NTRS)
Hughes, Steven P.; Conway, Darrel, J.
2007-01-01
Early in 2002, Goddard Space Flight Center (GSFC) began to identify requirements for the flight dynamics software needed to fly upcoming missions that use formations of spacecraft to collect data. These requirements ranged from low-level modeling features to large-scale interoperability requirements. In 2003 we began work on a system designed to meet these requirements; this system is GMAT. The General Mission Analysis Tool (GMAT) is a general-purpose flight dynamics modeling tool built on open source principles. The GMAT code is written in C++, and uses modern C++ constructs extensively. GMAT can be run either through a fully functional Graphical User Interface (GUI) or as a command-line program with minimal user feedback. The system is built and runs on Microsoft Windows, Linux, and Macintosh OS X platforms. The GMAT GUI is written using wxWidgets, a cross-platform library of components that streamlines the development and extension of the user interface. Flight dynamics modeling is performed in GMAT by building components that represent the players in the analysis problem that is being modeled. These components interact through the sequential execution of instructions, embodied in the GMAT Mission Sequence. A typical Mission Sequence will model the trajectories of a set of spacecraft evolving over time, calculating relevant parameters during this propagation, and maneuvering individual spacecraft to maintain a set of mission constraints as established by the mission analyst. All of the elements used in GMAT for mission analysis can be viewed in the GMAT GUI or through a custom scripting language. Analysis problems modeled in GMAT are saved as script files, and these files can be read into GMAT. When a script is read into the GMAT GUI, the corresponding user interface elements are constructed in the GMAT GUI. The GMAT system was developed from the ground up to run in a platform-agnostic environment. The source code compiles on numerous different platforms, and is regularly exercised running on Windows, Linux and Macintosh computers by the development and analysis teams working on the project. The system can be run using either a graphical user interface, written using the open source wxWidgets framework, or from a text console. The GMAT source code was written using open source tools. GSFC has released the code using the NASA open source license.
A new free and open source tool for space plasma modeling.
NASA Astrophysics Data System (ADS)
Honkonen, I. J.
2014-12-01
I will present a new distributed memory parallel, free and open source computational model for studying space plasma. The model is written in C++ with emphasis on good software development practices and code readability without sacrificing serial or parallel performance. As such the model could be especially useful for education, for learning both (magneto)hydrodynamics (MHD) and computational model development. By using latest features of the C++ standard (2011) it has been possible to develop a very modular program which improves not only the readability of code but also the testability of the model and decreases the effort required to make changes to various parts of the program. Major parts of the model, functionality not directly related to (M)HD, have been outsourced to other freely available libraries which has reduced the development time of the model significantly. I will present an overview of the code architecture as well as details of different parts of the model and will show examples of using the model including preparing input files and plotting results. A multitude of 1-, 2- and 3-dimensional test cases are included in the software distribution and the results of, for example, Kelvin-Helmholtz, bow shock, blast wave and reconnection tests, will be presented.
Implementation of a kappa-epsilon turbulence model to RPLUS3D code
NASA Technical Reports Server (NTRS)
Chitsomboon, Tawit
1992-01-01
The RPLUS3D code has been developed at the NASA Lewis Research Center to support the National Aerospace Plane (NASP) project. The code has the ability to solve three-dimensional flowfields with finite-rate combustion of hydrogen and air. The combustion process of the hydrogen-air system is simulated by an 18-reaction-path, 8-species chemical kinetic mechanism. The code uses a Lower-Upper (LU) decomposition numerical algorithm as its basis, making it a very efficient and robust code. Except for the Jacobian matrix for the implicit chemistry source terms, there is no inversion of a matrix even though a fully implicit numerical algorithm is used. A k-epsilon turbulence model has recently been incorporated into the code. Initial validations have been conducted for a flow over a flat plate. Results of the validation studies are shown. Some difficulties in implementing the k-epsilon equations in the code are also discussed.
Implementation of a kappa-epsilon turbulence model to RPLUS3D code
NASA Astrophysics Data System (ADS)
Chitsomboon, Tawit
1992-02-01
The RPLUS3D code has been developed at the NASA Lewis Research Center to support the National Aerospace Plane (NASP) project. The code has the ability to solve three-dimensional flowfields with finite-rate combustion of hydrogen and air. The combustion process of the hydrogen-air system is simulated by an 18-reaction-path, 8-species chemical kinetic mechanism. The code uses a Lower-Upper (LU) decomposition numerical algorithm as its basis, making it a very efficient and robust code. Except for the Jacobian matrix for the implicit chemistry source terms, there is no inversion of a matrix even though a fully implicit numerical algorithm is used. A k-epsilon turbulence model has recently been incorporated into the code. Initial validations have been conducted for a flow over a flat plate. Results of the validation studies are shown. Some difficulties in implementing the k-epsilon equations in the code are also discussed.
Fast Model Generalized Pseudopotential Theory Interatomic Potential Routine
DOE Office of Scientific and Technical Information (OSTI.GOV)
2015-03-18
MGPT is an unclassified source code for the fast evaluation and application of quantum-based MGPT interatomic potentials for metals. The present version of MGPT has been developed entirely at LLNL, but is specifically designed for implementation in the open-source molecular-dynamics code LAMMPS maintained by Sandia National Laboratories. Using MGPT in LAMMPS, with separate input potential data, one can perform large-scale atomistic simulations of the structural, thermodynamic, defect and mechanical properties of transition metals with quantum-mechanical realism.
World Energy Projection System Plus Model Documentation: Coal Module
2011-01-01
This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) Coal Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.
World Energy Projection System Plus Model Documentation: Transportation Module
2017-01-01
This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) International Transportation model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.
World Energy Projection System Plus Model Documentation: Residential Module
2016-01-01
This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) Residential Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.
World Energy Projection System Plus Model Documentation: Refinery Module
2016-01-01
This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) Refinery Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.
World Energy Projection System Plus Model Documentation: Main Module
2016-01-01
This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) Main Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.
World Energy Projection System Plus Model Documentation: Electricity Module
2017-01-01
This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) World Electricity Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.
Simulation of the hybrid and steady state advanced operating modes in ITER
NASA Astrophysics Data System (ADS)
Kessel, C. E.; Giruzzi, G.; Sips, A. C. C.; Budny, R. V.; Artaud, J. F.; Basiuk, V.; Imbeaux, F.; Joffrin, E.; Schneider, M.; Murakami, M.; Luce, T.; St. John, Holger; Oikawa, T.; Hayashi, N.; Takizuka, T.; Ozeki, T.; Na, Y.-S.; Park, J. M.; Garcia, J.; Tucillo, A. A.
2007-09-01
Integrated simulations are performed to establish a physics basis, in conjunction with present tokamak experiments, for the operating modes in the International Thermonuclear Experimental Reactor (ITER). Simulations of the hybrid mode are done using both fixed and free-boundary 1.5D transport evolution codes including CRONOS, ONETWO, TSC/TRANSP, TOPICS and ASTRA. The hybrid operating mode is simulated using the GLF23 and CDBM05 energy transport models. The injected powers are limited to the negative ion neutral beam, ion cyclotron and electron cyclotron heating systems. Several plasma parameters and source parameters are specified for the hybrid cases to provide a comparison of 1.5D core transport modelling assumptions, source physics modelling assumptions, as well as numerous peripheral physics modelling. Initial results indicate that very strict guidelines will need to be imposed on the application of GLF23, for example, to make useful comparisons. Some of the variations among the simulations are due to source models which vary widely among the codes used. In addition, there are a number of peripheral physics models that should be examined, some of which include fusion power production, bootstrap current, treatment of fast particles and treatment of impurities. The hybrid simulations project to fusion gains of 5.6-8.3, βN values of 2.1-2.6 and fusion powers ranging from 350 to 500 MW, under the assumptions outlined in section 3. Simulations of the steady state operating mode are done with the same 1.5D transport evolution codes cited above, except the ASTRA code. In these cases the energy transport model is more difficult to prescribe, so that energy confinement models will range from theory based to empirically based. The injected powers include the same sources as used for the hybrid with the possible addition of lower hybrid. The simulations of the steady state mode project to fusion gains of 3.5-7, βN values of 2.3-3.0 and fusion powers of 290 to 415 MW, under the assumptions described in section 4. These simulations will be presented and compared with particular focus on the resulting temperature profiles, source profiles and peripheral physics profiles. The steady state simulations are at an early stage and are focused on developing a range of safety factor profiles with 100% non-inductive current.
NASA Astrophysics Data System (ADS)
Alvanos, Michail; Christoudias, Theodoros
2017-10-01
This paper presents an application of GPU accelerators in Earth system modeling. We focus on atmospheric chemical kinetics, one of the most computationally intensive tasks in climate-chemistry model simulations. We developed a software package that automatically generates CUDA kernels to numerically integrate atmospheric chemical kinetics in the global climate model ECHAM/MESSy Atmospheric Chemistry (EMAC), used to study climate change and air quality scenarios. A source-to-source compiler outputs a CUDA-compatible kernel by parsing the FORTRAN code generated by the Kinetic PreProcessor (KPP) general analysis tool. All Rosenbrock methods that are available in the KPP numerical library are supported. Performance evaluation, using Fermi and Pascal CUDA-enabled GPU accelerators, shows achieved speed-ups of 4.5× and 20.4×, respectively, of the kernel execution time. A node-to-node real-world production performance comparison shows a 1.75× speed-up over the non-accelerated application using the KPP three-stage Rosenbrock solver. We provide a detailed description of the code optimizations used to improve the performance, including memory optimizations, control code simplification, and reduction of idle time. The accuracy and correctness of the accelerated implementation are evaluated by comparing to the CPU-only code of the application. The median relative difference is found to be less than 0.000000001% when comparing the output of the accelerated kernel with that of the CPU-only code. The approach followed, including the computational workload division, and the developed GPU solver code can potentially be used as the basis for hardware acceleration of numerous geoscientific models that rely on KPP for atmospheric chemical kinetics applications.
Towards Holography via Quantum Source-Channel Codes.
Pastawski, Fernando; Eisert, Jens; Wilming, Henrik
2017-07-14
While originally motivated by quantum computation, quantum error correction (QEC) is currently providing valuable insights into many-body quantum physics, such as topological phases of matter. Furthermore, mounting evidence originating from holography research (AdS/CFT) indicates that QEC should also be pertinent for conformal field theories. With this motivation in mind, we introduce quantum source-channel codes, which combine features of lossy compression and approximate quantum error correction, both of which are predicted in holography. Through a recent construction for approximate recovery maps, we derive guarantees on its erasure decoding performance from calculations of an entropic quantity called conditional mutual information. As an example, we consider Gibbs states of the transverse field Ising model at criticality and provide evidence that they exhibit nontrivial protection from local erasure. This gives rise to the first concrete interpretation of a bona fide conformal field theory as a quantum error correcting code. We argue that quantum source-channel codes are of independent interest beyond holography.
Towards Holography via Quantum Source-Channel Codes
NASA Astrophysics Data System (ADS)
Pastawski, Fernando; Eisert, Jens; Wilming, Henrik
2017-07-01
While originally motivated by quantum computation, quantum error correction (QEC) is currently providing valuable insights into many-body quantum physics, such as topological phases of matter. Furthermore, mounting evidence originating from holography research (AdS/CFT) indicates that QEC should also be pertinent for conformal field theories. With this motivation in mind, we introduce quantum source-channel codes, which combine features of lossy compression and approximate quantum error correction, both of which are predicted in holography. Through a recent construction for approximate recovery maps, we derive guarantees on its erasure decoding performance from calculations of an entropic quantity called conditional mutual information. As an example, we consider Gibbs states of the transverse field Ising model at criticality and provide evidence that they exhibit nontrivial protection from local erasure. This gives rise to the first concrete interpretation of a bona fide conformal field theory as a quantum error correcting code. We argue that quantum source-channel codes are of independent interest beyond holography.
DREAM-3D and the importance of model inputs and boundary conditions
NASA Astrophysics Data System (ADS)
Friedel, Reiner; Tu, Weichao; Cunningham, Gregory; Jorgensen, Anders; Chen, Yue
2015-04-01
Recent work on radiation belt 3D diffusion codes such as the Los Alamos "DREAM-3D" code has demonstrated the ability of such codes to reproduce the relativistic electron dynamics of realistic magnetospheric storm events, as long as sufficient "event-oriented" boundary conditions and code inputs such as wave powers, low-energy boundary conditions, background plasma densities, and the last closed drift shell (outer boundary) are available. In this talk we will argue that the main limiting factor in our modeling ability is no longer our inability to represent the key physical processes that govern the dynamics of the radiation belts (radial, pitch angle and energy diffusion) but rather our limitations in specifying accurate boundary conditions and code inputs. We use DREAM-3D runs to show the sensitivity of the modeled outcomes to these boundary conditions and inputs, and also discuss alternate "proxy" approaches to obtain the required inputs from other (ground-based) sources.
The HYPE Open Source Community
NASA Astrophysics Data System (ADS)
Strömbäck, Lena; Arheimer, Berit; Pers, Charlotta; Isberg, Kristina
2013-04-01
The Hydrological Predictions for the Environment (HYPE) model is a dynamic, semi-distributed, process-based, integrated catchment model (Lindström et al., 2010). It uses well-known hydrological and nutrient transport concepts and can be applied for both small- and large-scale assessments of water resources and status. In the model, the landscape is divided into classes according to soil type, vegetation and altitude. The soil representation is stratified and can be divided into up to three layers. Water and substances are routed through the same flow paths and storages (snow, soil, groundwater, streams, rivers, lakes) considering turn-over and transformation on the way towards the sea. In Sweden, the model is used by water authorities to fulfil the Water Framework Directive and the Marine Strategy Framework Directive. It is used for characterization, forecasts, and scenario analyses. Model data can be downloaded for free from three different HYPE applications: Europe (www.smhi.se/e-hype), Baltic Sea basin (www.smhi.se/balt-hype), and Sweden (vattenweb.smhi.se). The HYPE OSC (hype.sourceforge.net) is an open-source initiative by SMHI, under the GNU Lesser General Public License, to strengthen international collaboration in hydrological modelling and hydrological data production. The hypothesis is that more brains and more testing will result in better models and better code. The code is transparent and can be changed and learnt from. New versions of the main code will be delivered frequently. The main objective of the HYPE OSC is to provide public access to a state-of-the-art operational hydrological model and to encourage hydrologic expertise from different parts of the world to contribute to model improvement. HYPE OSC is open to everyone interested in hydrology, hydrological modelling and code development - e.g. scientists, authorities, and consultancies. The HYPE Open Source Community was initiated in November 2011 by a kick-off and workshop with 50 eager participants from twelve different countries. At the beginning of 2013 we will release a new version of the code featuring a new and better modularization corresponding to hydrological processes, which will make the code easier to understand and further develop. During 2013 we also plan a new workshop and HYPE course for everyone interested in the community. Lindström, G., Pers, C.P., Rosberg, R., Strömqvist, J., Arheimer, B. 2010. Development and test of the HYPE (Hydrological Predictions for the Environment) model - A water quality model for different spatial scales. Hydrology Research 41(3-4): 295-319.
TEA: A Code Calculating Thermochemical Equilibrium Abundances
NASA Astrophysics Data System (ADS)
Blecic, Jasmina; Harrington, Joseph; Bowman, M. Oliver
2016-07-01
We present an open-source Thermochemical Equilibrium Abundances (TEA) code that calculates the abundances of gaseous molecular species. The code is based on the methodology of White et al. and Eriksson. It applies Gibbs free-energy minimization using an iterative, Lagrangian optimization scheme. Given elemental abundances, TEA calculates molecular abundances for a particular temperature and pressure or a list of temperature-pressure pairs. We tested the code against the method of Burrows & Sharp, the free thermochemical equilibrium code Chemical Equilibrium with Applications (CEA), and the example given by Burrows & Sharp. Using their thermodynamic data, TEA reproduces their final abundances, but with higher precision. We also applied the TEA abundance calculations to models of several hot-Jupiter exoplanets, producing expected results. TEA is written in Python in a modular format. There is a start guide, a user manual, and a code document in addition to this theory paper. TEA is available under a reproducible-research, open-source license via https://github.com/dzesmin/TEA.
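As a rough illustration of the Gibbs free-energy minimization idea behind TEA (not TEA's actual Lagrangian iteration, thermodynamic data, or code), the sketch below minimizes the dimensionless Gibbs function of a toy H/O gas mixture subject to elemental-abundance constraints using SciPy; the chemical-potential values are placeholders.

```python
# Hypothetical sketch (not TEA's code): Gibbs free-energy minimization for a
# toy H/O system at fixed temperature and pressure, using SciPy's SLSQP solver
# instead of the Lagrangian iteration that TEA actually implements.
import numpy as np
from scipy.optimize import minimize

species = ["H2", "O2", "H2O"]
# Dimensionless standard-state chemical potentials mu_i/(R*T); placeholder values.
mu_RT = np.array([-20.0, -25.0, -60.0])
# Elemental composition matrix: rows = elements (H, O), columns = species.
A = np.array([[2, 0, 2],   # H atoms per molecule
              [0, 2, 1]])  # O atoms per molecule
b = np.array([2.0, 1.0])   # total elemental abundances (moles of H and O)
P = 1.0                    # pressure in bar (ideal-gas mixture assumed)

def gibbs(n):
    n = np.clip(n, 1e-12, None)           # keep logs finite
    ntot = n.sum()
    return np.sum(n * (mu_RT + np.log(n * P / ntot)))

cons = {"type": "eq", "fun": lambda n: A @ n - b}   # conserve each element
bounds = [(1e-12, None)] * len(species)
res = minimize(gibbs, x0=np.full(len(species), 0.5),
               bounds=bounds, constraints=cons, method="SLSQP")
for name, moles in zip(species, res.x):
    print(f"{name}: {moles:.4f} mol")
```

The constraint structure, minimizing G while conserving each element, is the same one TEA solves with the iterative scheme of White et al.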
NASA Technical Reports Server (NTRS)
Hall, Edward J.; Heidegger, Nathan J.; Delaney, Robert A.
1999-01-01
The overall objective of this study was to evaluate the effects of turbulence models on the wake-prediction capability of a 3-D numerical analysis. The current version of the computer code resulting from this study is referred to as ADPAC v7 (Advanced Ducted Propfan Analysis Codes - Version 7). This report is intended to serve as a computer program user's manual for the ADPAC code used and modified under Task 15 of NASA Contract NAS3-27394. The ADPAC program is based on a flexible multiple-block grid discretization scheme permitting coupled 2-D/3-D mesh-block solutions with application to a wide variety of geometries. Aerodynamic calculations are based on a four-stage Runge-Kutta time-marching finite-volume solution technique with added numerical dissipation. Steady flow predictions are accelerated by a multigrid procedure. Turbulence models now available in the ADPAC code are: a simple mixing-length model, the algebraic Baldwin-Lomax model with user-defined coefficients, the one-equation Spalart-Allmaras model, and a two-equation k-R model. The consolidated ADPAC code is capable of executing in either a serial or parallel computing mode from a single source code.
GeNN: a code generation framework for accelerated brain simulations
NASA Astrophysics Data System (ADS)
Yavuz, Esin; Turner, James; Nowotny, Thomas
2016-01-01
Large-scale numerical simulations of detailed brain circuit models are important for identifying hypotheses on brain functions and testing their consistency and plausibility. An ongoing challenge for simulating realistic models is, however, computational speed. In this paper, we present the GeNN (GPU-enhanced Neuronal Networks) framework, which aims to facilitate the use of graphics accelerators for computational models of large-scale neuronal networks to address this challenge. GeNN is an open source library that generates code to accelerate the execution of network simulations on NVIDIA GPUs through a flexible and extensible interface that does not require in-depth technical knowledge from the users. We present performance benchmarks showing that a 200-fold speedup compared to a single core of a CPU can be achieved for a network of one million conductance-based Hodgkin-Huxley neurons, but that for other models the speedup can differ. GeNN is available for Linux, Mac OS X, and Windows platforms. The source code, user manual, tutorials, Wiki, in-depth example projects, and all other related information can be found on the project website http://genn-team.github.io/genn/.
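To make the code-generation idea concrete, here is a hypothetical sketch (not GeNN's real API or templates) that renders a C-style per-neuron update function from a declarative model description; GeNN itself generates CUDA and C++ from similar user-supplied model snippets.

```python
# Hypothetical sketch of the code-generation idea (not GeNN's actual API):
# render a C-style per-neuron update function from a declarative model.
LIF = {
    "name": "LIF",
    "params": {"tau": 20.0, "v_rest": -65.0, "v_thresh": -50.0},
    "update": "v[i] += dt * ((v_rest - v[i]) / tau + i_syn[i]);",
    "spike": "v[i] > v_thresh",
    "reset": "v[i] = v_rest;",
}

def render_update(model):
    # Emit parameter constants followed by a loop over neurons.
    consts = "\n".join(f"    const float {k} = {v}f;"
                       for k, v in model["params"].items())
    return (
        f"void update_{model['name']}(float *v, const float *i_syn, int n, float dt) {{\n"
        f"{consts}\n"
        f"    for (int i = 0; i < n; ++i) {{\n"
        f"        {model['update']}\n"
        f"        if ({model['spike']}) {{ {model['reset']} }}\n"
        f"    }}\n"
        f"}}\n"
    )

print(render_update(LIF))
```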
Software Model Checking of ARINC-653 Flight Code with MCP
NASA Technical Reports Server (NTRS)
Thompson, Sarah J.; Brat, Guillaume; Venet, Arnaud
2010-01-01
The ARINC-653 standard defines a common interface for Integrated Modular Avionics (IMA) code. In particular, ARINC-653 Part 1 specifies a process- and partition-management API that is analogous to POSIX threads, but with certain extensions and restrictions intended to support the implementation of high-reliability flight code. MCP is a software model checker, developed at NASA Ames, that provides capabilities for model checking C and C++ source code. In this paper, we present recent work aimed at implementing extensions to MCP that support ARINC-653, and we discuss the challenges and opportunities that consequently arise. Providing support for ARINC-653's time and space partitioning is nontrivial, though the API's strict interprocess communication policy offers implicit benefits for partial-order reduction.
Simulation of Jet Noise with OVERFLOW CFD Code and Kirchhoff Surface Integral
NASA Technical Reports Server (NTRS)
Kandula, M.; Caimi, R.; Voska, N. (Technical Monitor)
2002-01-01
An acoustic prediction capability for supersonic axisymmetric jets was developed on the basis of the OVERFLOW Navier-Stokes CFD (Computational Fluid Dynamics) code of NASA Langley Research Center. Reynolds-averaged turbulent stresses in the flow field are modeled with the aid of the Spalart-Allmaras one-equation turbulence model. Appropriate acoustic and outflow boundary conditions were implemented to compute time-dependent acoustic pressure in the nonlinear source field. Based on the specification of the acoustic pressure and its temporal and normal derivatives on the Kirchhoff surface, the near-field and far-field sound pressure levels are computed via the Kirchhoff surface integral, with the Kirchhoff surface chosen to enclose the nonlinear sound-source region described by the CFD code. The methods are validated by comparing the predicted sound pressure levels with available data for an axisymmetric turbulent supersonic (Mach 2) perfectly expanded jet.
The Future of ECHO: Evaluating Open Source Possibilities
NASA Astrophysics Data System (ADS)
Pilone, D.; Gilman, J.; Baynes, K.; Mitchell, A. E.
2012-12-01
NASA's Earth Observing System ClearingHOuse (ECHO) is a format-agnostic metadata repository supporting over 3000 collections and 100M science granules. ECHO exposes FTP and RESTful Data Ingest APIs in addition to both SOAP and RESTful search and order capabilities. Built on top of ECHO is a human-facing search and order web application named Reverb. ECHO processes hundreds of orders, tens of thousands of searches, and 1-2M ingest actions each week. As ECHO's holdings, metadata format support, and visibility have increased, the ECHO team has received requests from non-NASA entities for copies of ECHO that can be run locally against their data holdings. ESDIS and the ECHO team have begun investigating various deployment and open-sourcing models that can balance the real constraints faced by the ECHO project with the benefits of providing ECHO capabilities to a broader set of users and providers. This talk will discuss several release and open source models being investigated by the ECHO team, along with the impacts those models are expected to have on the project. We discuss: - Addressing complex deployment or setup issues for potential users - Models of vetting code contributions - Balancing external (public) user requests versus our primary partners - Preparing project code for public release, including navigating licensing issues related to leveraged libraries - Dealing with non-free project dependencies such as commercial databases - Dealing with sensitive aspects of project code such as database passwords, authentication approaches, security through obscurity, etc. - Ongoing support for the released code, including increased testing demands, bug fixes, security fixes, and new features.
Duct flow nonuniformities for Space Shuttle Main Engine (SSME)
NASA Technical Reports Server (NTRS)
1987-01-01
A three-duct Space Shuttle Main Engine (SSME) Hot Gas Manifold geometry code was developed. The methodology of the program is described, recommendations on its implementation are made, and an input guide, an input deck listing, and a source code listing are provided. The code listing is extensively commented to assist the user in following its development and logic. A working source deck will be provided. A thorough analysis was made of the proper boundary conditions and chemistry kinetics necessary for an accurate computational analysis of the flow environment in the SSME fuel-side preburner chamber during the initial startup transient. Pertinent results were presented to facilitate incorporation of these findings into an appropriate CFD code. The computation must be a turbulent computation, since turbulent mixing in the flow field will have a profound effect on the chemistry. Because of the additional equations demanded by the chemistry model, it is recommended that, for expediency, a simple algebraic mixing-length model be adopted. Performing this computation for all or selected time intervals of the startup will require substantial computer CPU time regardless of the specific CFD code selected.
Simonaitis, Linas; McDonald, Clement J
2009-10-01
The utility of National Drug Codes (NDCs) and drug knowledge bases (DKBs) in the organization of prescription records from multiple sources was studied. The master files of most pharmacy systems include NDCs and local codes to identify the products they dispense. We obtained a large sample of prescription records from seven different sources. These records carried a national product code or a local code that could be translated into a national product code via their formulary master. We obtained mapping tables from five DKBs and measured the degree to which the DKB mapping tables covered the national product codes carried in, or associated with, the sample of prescription records. Considering the total prescription volume, DKBs covered 93.0-99.8% of the product codes from three outpatient sources and 77.4-97.0% of the product codes from four inpatient sources. Among the inpatient sources, invented codes explained 36-94% of the noncoverage. Outpatient pharmacy sources rarely invented codes, which comprised only 0.11-0.21% of their total prescription volume, compared with inpatient pharmacy sources, for which invented codes comprised 1.7-7.4% of prescription volume. The distribution of prescribed products was highly skewed, with 1.4-4.4% of codes accounting for 50% of the message volume and 10.7-34.5% accounting for 90% of the message volume. DKBs cover the product codes used by outpatient sources sufficiently well to permit automatic mapping. Changes in policies and standards could increase coverage of product codes used by inpatient sources.
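The coverage measurement described here is essentially set membership weighted by prescription volume; the following sketch (with made-up codes, not the study's data) shows one way to compute volume-weighted coverage and the skew statistic.

```python
# Hypothetical sketch of the coverage measurement described above: given a set
# of national product codes from a drug knowledge base (DKB) mapping table and
# a list of prescription records, compute volume-weighted coverage and skew.
from collections import Counter

dkb_codes = {"00093-0058-01", "00071-0155-23", "00006-0749-31"}   # toy NDCs
prescriptions = ["00093-0058-01", "00093-0058-01", "LOCAL-XYZ",   # invented code
                 "00071-0155-23", "00006-0749-31", "00093-0058-01"]

volume = Counter(prescriptions)
covered = sum(n for code, n in volume.items() if code in dkb_codes)
print(f"volume-weighted coverage: {100 * covered / len(prescriptions):.1f}%")

# Skew of prescribing: how many distinct codes account for 50% of the volume.
running, k = 0, 0
for k, (_, n) in enumerate(volume.most_common(), start=1):
    running += n
    if running >= 0.5 * len(prescriptions):
        break
print(f"{k} of {len(volume)} codes cover 50% of prescription volume")
```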
Practices in Code Discoverability: Astrophysics Source Code Library
NASA Astrophysics Data System (ADS)
Allen, A.; Teuben, P.; Nemiroff, R. J.; Shamir, L.
2012-09-01
Here we describe the Astrophysics Source Code Library (ASCL), which takes an active approach to sharing astrophysics source code. ASCL's editor seeks out both new and old peer-reviewed papers that describe methods or experiments involving the development or use of source code, and adds entries for the found codes to the library. This approach ensures that source codes are added without requiring authors to actively submit them, resulting in a comprehensive listing that covers a significant number of the astrophysics source codes used in peer-reviewed studies. The ASCL now has over 340 codes and continues to grow; in 2011 it has added an average of 19 codes per month. An advisory committee has been established to provide input and guide the development and expansion of the new site, and a marketing plan has been developed and is being executed. All ASCL source codes have been used to generate results published in or submitted to a refereed journal and are freely available either via a download site or from an identified source. This paper provides the history and description of the ASCL, lists the requirements for including codes, examines the advantages of the ASCL, and outlines some of its future plans.
World Energy Projection System Plus Model Documentation: Greenhouse Gases Module
2011-01-01
This report documents the objectives, analytical approach, and development of the World Energy Projection System Plus (WEPS+) Greenhouse Gases Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.
World Energy Projection System Plus Model Documentation: Natural Gas Module
2011-01-01
This report documents the objectives, analytical approach, and development of the World Energy Projection System Plus (WEPS+) Natural Gas Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.
World Energy Projection System Plus Model Documentation: District Heat Module
2017-01-01
This report documents the objectives, analytical approach, and development of the World Energy Projection System Plus (WEPS+) District Heat Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.
World Energy Projection System Plus Model Documentation: Industrial Module
2016-01-01
This report documents the objectives, analytical approach, and development of the World Energy Projection System Plus (WEPS+) World Industrial Model (WIM). It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.
2012-01-01
our own work for this discussion. DoD Instruction 5000.61 defines model validation as "the process of determining the degree to which a model and its... determined that RMAT is highly concrete code, potentially leading to redundancies in the code itself and making RMAT more difficult to maintain... system conceptual models valid, and are the data used to support them adequate? (Chapters Two and Three) 2. Are the sources and methods for populating
NASA Astrophysics Data System (ADS)
Bauwe, Andreas; Eckhardt, Kai-Uwe; Lennartz, Bernd
2017-04-01
Eutrophication is still one of the main environmental problems in the Baltic Sea. Currently, agricultural diffuse sources constitute the major portion of phosphorus (P) fluxes to the Baltic Sea and have to be reduced to achieve the HELCOM targets and improve the ecological status. Eco-hydrological models are suitable tools to identify sources of nutrients and possible measures aiming at reducing nutrient loads into surface waters. In this study, the Soil and Water Assessment Tool (SWAT) was applied to the Warnow river basin (3300 km2), the second largest watershed in Germany discharging into the Baltic Sea. The Warnow river basin is located in northeastern Germany and characterized by lowlands with a high proportion of artificially drained areas. The aims of this study were (i) to estimate P loadings for individual flow fractions (point sources, surface runoff, tile flow, groundwater flow), spatially distributed at the sub-basin scale, and, since the official version of SWAT does not allow for the modeling of P in tile drains, (ii) to test two different approaches to simulating P in tile drains by changing the SWAT source code. The SWAT source code was modified so that (i) the soluble P concentration of the groundwater was transferred to the tile water and (ii) the soluble P in the soil was transferred to the tiles. The SWAT model was first calibrated (2002-2011) and validated (1992-2001) for stream flow at 7 headwater catchments at a daily time scale. Based on this, the stream flow at the outlet of the Warnow river basin was simulated. Performance statistics indicated at least satisfactory model results for each sub-basin. Breaking down the discharge into flow constituents shows that stream flow is mainly governed by groundwater and tile flow. Due to the topographic situation with gentle slopes, surface runoff played only a minor role. Results further indicate that the prediction of soluble P loads was improved by the modified SWAT versions. Major sources of P in rivers are groundwater and tile flow. P was also released by surface runoff during large storm events when sediment was eroded into the rivers. The contribution of point sources, in terms of waste water treatment plants, to the overall P loading was low. The modifications made in the SWAT source code should be considered a starting point for simulating P loads in artificially drained landscapes more precisely. Further testing and development of the code is required.
1989-06-01
Report documentation page fragment (DD Form 1473); the only recoverable content is the phrase "Generating Data for Mathematical Modeling of Real Vapor Phase Reaction Systems" in connection with the source of by-products formation.
Roshani, G H; Karami, A; Khazaei, A; Olfateh, A; Nazemi, E; Omidi, M
2018-05-17
The gamma-ray source plays a very important role in the precision of multi-phase flow metering. In this study, different combinations of gamma-ray sources ((133Ba-137Cs), (133Ba-60Co), (241Am-137Cs), (241Am-60Co), (133Ba-241Am), and (60Co-137Cs)) were investigated in order to optimize the three-phase flow meter. The three phases were water, oil, and gas, and the regime was considered annular. The required data were numerically generated using the MCNP-X Monte Carlo code. The present study is devoted to forecasting the volume fractions in the annular three-phase flow, based on a multi-energy metering system including various radiation sources and one NaI detector, using a hybrid model of an artificial neural network and the Jaya optimization algorithm. Since the sum of the volume fractions is constant, a constrained modeling problem exists, meaning that the hybrid model must forecast only two volume fractions. Six hybrid models, corresponding to the radiation-source combinations, are designed and employed to forecast the gas and water volume fractions. The hybrid models are then trained on the numerically obtained data. The results show that the best forecasts of the gas and water volume fractions are obtained for the system including (241Am-137Cs) as the radiation source.
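As a loose illustration of the constrained forecasting setup (the actual study trains its ANN with the Jaya algorithm on MCNP-X-generated detector responses), the sketch below trains a small scikit-learn network on synthetic data to predict the gas and water fractions and recovers oil from the closure constraint.

```python
# Illustrative sketch only, with synthetic data (not the study's MCNP-X data or
# Jaya-trained ANN): map multi-energy detector responses to gas and water volume
# fractions, then recover oil from the constraint that the fractions sum to one.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
fractions = rng.dirichlet(np.ones(3), size=500)      # synthetic (gas, water, oil)
# Fake "detector counts" for two photopeaks as smooth functions of the fractions.
X = np.column_stack([np.exp(-2.0 * fractions[:, 0] - 0.5 * fractions[:, 1]),
                     np.exp(-0.8 * fractions[:, 0] - 1.5 * fractions[:, 1])])
y = fractions[:, :2]                                  # predict gas and water only

model = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000, random_state=0)
model.fit(X[:400], y[:400])
pred = model.predict(X[400:])
oil = 1.0 - pred.sum(axis=1)                          # constraint closes the system
print("mean absolute error (gas, water):", np.abs(pred - y[400:]).mean(axis=0))
```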
NASA Astrophysics Data System (ADS)
Hickey, M. S.
2008-05-01
Controlled-source electromagnetic geophysical methods provide a noninvasive means of characterizing subsurface structure. In order to properly model the geologic subsurface with a controlled-source time-domain electromagnetic (TDEM) system in an extreme topographic environment, we must first see the effects of topography on the forward-model data. I run simulations using the Texas A&M University (TAMU) finite element (FEM) code in which I include true 3D topography. From these models we see the limits of how much topography we can include before our forward model can no longer give accurate output. The simulations are based on a model of a geologic half space with no cultural noise and focus on topography changes associated with impact crater sites, such as crater rims and central uplifts. Several topographical variations of the model are run, but the main constant is that there is only a small conductivity contrast, on the order of 10^-1 S/m, between the host medium and the geologic body within it. Asking the following questions will guide us through determining the limits of our code: What is the maximum topographic step we can have before we see fringe effects in our data? At what location relative to the body does the topography cause the most effect? Once we know the limits of the code, we can develop new methods to extend them, allowing us to better image the subsurface using TDEM in extreme topography.
Modeling Vortex Generators in a Navier-Stokes Code
NASA Technical Reports Server (NTRS)
Dudek, Julianne C.
2011-01-01
A source-term model that simulates the effects of vortex generators was implemented into the Wind-US Navier-Stokes code. The source term added to the Navier-Stokes equations simulates the lift force that would result from a vane-type vortex generator in the flowfield. The implementation is user-friendly, requiring the user to specify only three quantities for each desired vortex generator: the range of grid points over which the force is to be applied, and the planform area and angle of incidence of the physical vane. The model behavior was evaluated for subsonic flow in a rectangular duct with a single-vane vortex generator, subsonic flow in an S-duct with 22 corotating vortex generators, and supersonic flow in a rectangular duct with a counter-rotating vortex-generator pair. The model was also used to successfully simulate microramps in supersonic flow by treating each microramp as a pair of vanes with opposite angles of incidence. The validation results indicate that the source-term vortex-generator model provides a useful tool for screening vortex-generator configurations and gives results comparable to solutions computed using gridded vanes.
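A deliberately simplified sketch of the kind of bookkeeping such a source-term model implies (not the Wind-US implementation): estimate the vane's lift force from its planform area and incidence using a thin-airfoil lift-curve slope, then distribute it over the user-specified cells. All numbers are illustrative.

```python
# Simplified illustration only (not the Wind-US source-term model): size a vane's
# lift force from thin-airfoil theory and spread it over the flagged cells, which
# is the spirit of specifying a planform area and angle of incidence per vane.
import numpy as np

rho, U = 1.2, 80.0                     # local density [kg/m^3] and speed [m/s]
S, alpha = 4.0e-4, np.deg2rad(16.0)    # vane planform area [m^2], incidence [rad]
C_L = 2.0 * np.pi * alpha              # thin-airfoil lift-curve slope estimate
F_lift = 0.5 * rho * U**2 * S * C_L    # total lift force on the vane [N]

cell_volumes = np.array([1.0e-6, 2.0e-6, 1.5e-6])   # cells covered by the vane [m^3]
weights = cell_volumes / cell_volumes.sum()
per_cell_force = F_lift * weights      # body-force contribution assigned per cell [N]
print(F_lift, per_cell_force)
```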
Plasma Separation Process: Betacell (BCELL) code: User's manual. [Bipolar barrier junction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taherzadeh, M.
1987-11-13
The emergence of clearly defined applications for (small or large) amounts of long-life and reliable power sources has given the design and production of betavoltaic systems a new life. Moreover, because of the availability of the Plasma Separation Program (PSP) at TRW, it is now possible to separate the most desirable radioisotopes for betacell power-generating devices. A computer code, named BCELL, has been developed to model the betavoltaic concept by utilizing the available up-to-date source/cell parameters. In this program, attempts have been made to determine the betacell energy device maximum efficiency, degradation due to the emitting source radiation, and source/cell lifetime power reduction processes. Additionally, comparison is made between the Schottky and PN junction devices for betacell battery design purposes. Certain computer code runs have been made to determine the JV distribution function and the upper limit of the betacell generated power for specified energy sources. A Ni beta-emitting radioisotope was used for the energy source and certain semiconductors were used for the converter subsystem of the betacell system. Some results for a Promethium source are also given here for comparison. 16 refs.
Support of Multidimensional Parallelism in the OpenMP Programming Model
NASA Technical Reports Server (NTRS)
Jin, Hao-Qiang; Jost, Gabriele
2003-01-01
OpenMP is the current standard for shared-memory programming. While providing ease of parallel programming, the OpenMP programming model also has limitations which often affect the scalability of applications. Examples of these limitations are work distribution and point-to-point synchronization among threads. We propose extensions to the OpenMP programming model which allow the user to easily distribute the work in multiple dimensions and synchronize the workflow among the threads. The proposed extensions include four new constructs and the associated runtime library. They do not require changes to the source code and can be implemented based on the existing OpenMP standard. We illustrate the concept in a prototype translator and test it with benchmark codes and a cloud modeling code.
Electron transport model of dielectric charging
NASA Technical Reports Server (NTRS)
Beers, B. L.; Hwang, H. C.; Lin, D. L.; Pine, V. W.
1979-01-01
A computer code (SCCPOEM) was assembled to describe the charging of dielectrics due to irradiation by electrons. The primary purpose for developing the code was to make available a convenient tool for studying the internal fields and charge densities in electron-irradiated dielectrics. The code, which is based on the primary electron transport code POEM, is applicable to arbitrary dielectrics, source spectra, and current time histories. The code calculations are illustrated by a series of semianalytical solutions. Calculations to date suggest that the front face electric field is insufficient to cause breakdown, but that bulk breakdown fields can easily be exceeded.
NASA Astrophysics Data System (ADS)
Butov, R. A.; Drobyshevsky, N. I.; Moiseenko, E. V.; Tokarev, U. N.
2017-11-01
The verification of the FENIA finite element code on several problems and an example of its application are presented in the paper. The code is being developed for 3D modelling of thermal, mechanical and hydrodynamical (THM) problems related to the functioning of deep geological repositories. Verification of the code has been performed for two analytical problems: the first is a point heat source with exponentially decreasing heat output, the second a linear heat source with similar behavior. Analytical solutions have been obtained by the authors. These problems were chosen because they reflect the processes influencing the thermal state of a deep geological repository of radioactive waste. Verification was performed for several meshes with different resolutions, and good convergence of the numerical solutions to the analytical ones was achieved. The application of the FENIA code is illustrated by 3D modelling of the thermal state of a prototypic deep geological repository of radioactive waste. The repository is designed for disposal of radioactive waste in rock at a depth of several hundred meters with no intention of later retrieval. Vitrified radioactive waste is placed in containers, which are placed in vertical boreholes. The residual decay heat of the radioactive waste leads to heating of the containers, the engineered safety barriers, and the host rock. Maximum temperatures and the corresponding times of their establishment have been determined.
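The abstract does not give the analytical solutions, but a benchmark of this type can be built by superposing instantaneous point-source Green's functions of the heat equation. The sketch below assumes a point source with exponentially decaying power in an infinite homogeneous medium, with illustrative material properties; it is not the authors' exact solution.

```python
# Assumed form of an analytical benchmark (not the authors' solution): temperature
# rise from a point heat source with exponentially decaying power Q0*exp(-lam*t)
# in an infinite homogeneous medium, obtained by time-convolving the instantaneous
# point-source Green's function of the heat equation.
import numpy as np
from scipy.integrate import quad

k, rho, c = 2.5, 2600.0, 900.0      # illustrative rock conductivity, density, heat capacity
alpha = k / (rho * c)               # thermal diffusivity [m^2/s]
YEAR = 3.15e7                       # seconds per year (approximate)
Q0, lam = 1000.0, np.log(2) / (30 * YEAR)   # initial power [W], 30-year half-life decay

def temperature_rise(r, t):
    def integrand(tau):
        s = t - tau                 # elapsed time since heat released at tau
        return (Q0 * np.exp(-lam * tau)
                / (rho * c * (4 * np.pi * alpha * s) ** 1.5)
                * np.exp(-r**2 / (4 * alpha * s)))
    val, _ = quad(integrand, 0.0, t, limit=200)
    return val

print(temperature_rise(r=5.0, t=10 * YEAR))   # temperature rise 5 m away after ~10 years [K]
```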
DOE Office of Scientific and Technical Information (OSTI.GOV)
Charles A. Wemple; Joshua J. Cogliati
2005-04-01
A univel geometry, neutral particle Monte Carlo transport code, written entirely in the Java programming language, is under development for medical radiotherapy applications. The code uses ENDF-VI based continuous-energy cross section data in a flexible XML format. Full neutron-photon coupling, including detailed photon production and photonuclear reactions, is included. Charged particle equilibrium is assumed within the patient model so that detailed transport of electrons produced by photon interactions may be neglected. External beam and internal distributed source descriptions for mixed neutron-photon sources are allowed. Flux and dose tallies are performed on a univel basis. A four-tap, shift-register-sequence random number generator is used. Initial verification and validation testing of the basic neutron transport routines is underway. The searchlight problem was chosen as a suitable first application because of the simplicity of the physical model. Results show excellent agreement with analytic solutions. Computation times for similar numbers of histories are comparable to other neutron MC codes written in C and FORTRAN.
NASA Astrophysics Data System (ADS)
Nijssen, B.; Hamman, J.; Bohn, T. J.
2015-12-01
The Variable Infiltration Capacity (VIC) model is a macro-scale semi-distributed hydrologic model. VIC development began in the early 1990s and the model has been used extensively, applied from basin to global scales. VIC has been applied in many use cases, including the construction of hydrologic data sets, trend analysis, data evaluation and assimilation, forecasting, coupled climate modeling, and climate change impact analysis. Ongoing applications of the VIC model include the University of Washington's drought monitor and forecast systems and NASA's land data assimilation systems. The development of VIC version 5.0 focused on reconfiguring the legacy VIC source code to support a wider range of modern modeling applications. The VIC source code has been moved to a public GitHub repository to encourage participation by the model development community at large. The reconfiguration has separated the physical core of the model from the driver, which is responsible for memory allocation, pre- and post-processing, and I/O. VIC 5.0 includes four drivers that use the same physical model core: classic, image, CESM, and Python. The classic driver supports legacy VIC configurations and runs in the traditional time-before-space configuration. The image driver includes a space-before-time configuration, netCDF I/O, and uses MPI for parallel processing. This configuration facilitates the direct coupling of streamflow routing, reservoir, and irrigation processes within VIC. The image driver is the foundation of the CESM driver, which couples VIC to CESM's CPL7 and a prognostic atmosphere. Finally, we have added a Python driver that provides access to the functions and data types of VIC's physical core from a Python interface. This presentation demonstrates how reconfiguring legacy source code extends the life and applicability of a research model.
BOREAS RSS-4 1994 Jack Pine Leaf Biochemistry and Modeled Spectra in the SSA
NASA Technical Reports Server (NTRS)
Hall, Forrest G. (Editor); Nickeson, Jaime (Editor); Plummer, Stephen; Lucas, Neil; Dawson, Terry
2000-01-01
The BOREAS RSS-4 team focused its efforts on deriving estimates of LAI and leaf chlorophyll and nitrogen concentrations from remotely sensed data for input into the Forest BGC model. This data set contains measurements of jack pine (Pinus banksiana) needle biochemistry from the BOREAS SSA in July and August 1994. The data contain measurements of current and year-1 needle chlorophyll, nitrogen, lignin, cellulose, and water content for the OJP flux tower and nearby auxiliary sites. The data have been used to test a needle reflectance and transmittance model, LIBERTY (Dawson et al., in press). The source code for the model and modeled needle spectra for each of the sampled tower and auxiliary sites are provided as part of this data set. The LIBERTY model was developed and the predicted spectral data generated to parameterize a canopy reflectance model (North, 1996) for comparison with AVIRIS, POLDER, and PARABOLA data. The data and model source code are stored in ASCII files.
Facilitating Internet-Scale Code Retrieval
ERIC Educational Resources Information Center
Bajracharya, Sushil Krishna
2010-01-01
Internet-Scale code retrieval deals with the representation, storage, and access of relevant source code from a large amount of source code available on the Internet. Internet-Scale code retrieval systems support common emerging practices among software developers related to finding and reusing source code. In this dissertation we focus on some…
NASA Astrophysics Data System (ADS)
Pei, Yong; Modestino, James W.
2007-12-01
We describe a multilayered video transport scheme for wireless channels capable of adapting to channel conditions in order to maximize end-to-end quality of service (QoS). This scheme combines a scalable H.263+ video source coder with unequal error protection (UEP) across layers. The UEP is achieved by employing different channel codes together with a multiresolution modulation approach to transport the different priority layers. Adaptivity to channel conditions is provided through a joint source-channel coding (JSCC) approach which attempts to jointly optimize the source and channel coding rates together with the modulation parameters to obtain the maximum achievable end-to-end QoS for the prevailing channel conditions. In this work, we model the wireless links as slow-fading Rician channels whose conditions can be described in terms of the channel signal-to-noise ratio (SNR) and the ratio of specular-to-diffuse energy. The multiresolution modulation/coding scheme consists of binary rate-compatible punctured convolutional (RCPC) codes used together with nonuniform phase-shift keyed (PSK) signaling constellations. Results indicate that this adaptive JSCC scheme, employing scalable video encoding together with a multiresolution modulation/coding approach, leads to significant improvements in delivered video quality for specified channel conditions. In particular, the approach results in considerably improved graceful-degradation properties for decreasing channel SNR.
Global modeling of thermospheric airglow in the far ultraviolet
NASA Astrophysics Data System (ADS)
Solomon, Stanley C.
2017-07-01
The Global Airglow (GLOW) model has been updated and extended to calculate thermospheric emissions in the far ultraviolet, including sources from daytime photoelectron-driven processes, nighttime recombination radiation, and auroral excitation. It can be run using inputs from empirical models of the neutral atmosphere and ionosphere or from numerical general circulation models of the coupled ionosphere-thermosphere system. It uses a solar flux module, photoelectron generation routine, and the Nagy-Banks two-stream electron transport algorithm to simultaneously handle energetic electron distributions from photon and auroral electron sources. It contains an ion-neutral chemistry module that calculates excited and ionized species densities and the resulting airglow volume emission rates. This paper describes the inputs, algorithms, and code structure of the model and demonstrates example outputs for daytime and auroral cases. Simulations of far ultraviolet emissions by the atomic oxygen doublet at 135.6 nm and the molecular nitrogen Lyman-Birge-Hopfield bands, as viewed from geostationary orbit, are shown, and model calculations are compared to limb-scan observations by the Global Ultraviolet Imager on the TIMED satellite. The GLOW model code is provided to the community through an open-source academic research license.
NASA One-Dimensional Combustor Simulation--User Manual for S1D_ML
NASA Technical Reports Server (NTRS)
Stueber, Thomas J.; Paxson, Daniel E.
2014-01-01
The work presented in this paper is intended to promote research leading to a closed-loop control system that actively suppresses thermo-acoustic instabilities. To serve as a model for such a closed-loop control system, a one-dimensional combustor simulation composed using MATLAB software tools has been written. This MATLAB-based process is similar to a precursor one-dimensional combustor simulation that was formatted as FORTRAN 77 source code. The previous simulation process required modifying the FORTRAN 77 source code, compiling, and linking to create a new combustor simulation executable file. The MATLAB-based simulation does not require making changes to the source code, recompiling, or linking. Furthermore, the MATLAB-based simulation can be run from script files within the MATLAB environment or with a compiled copy of the executable file running in the Command Prompt window without requiring a licensed copy of MATLAB. This report presents a general simulation overview. Details regarding how to set up and initiate a simulation are also presented. Finally, the post-processing section describes the two types of files created while running the simulation and includes simulation results for a default simulation included with the source code.
NASA Astrophysics Data System (ADS)
Larmat, C. S.; Rougier, E.; Knight, E.; Yang, X.; Patton, H. J.
2013-12-01
A goal of the Source Physics Experiments (SPE) is to develop explosion source models that expand monitoring capabilities beyond empirical methods. The SPE project combines field experimentation with numerical modelling. The models take into account non-linear processes occurring from the first moment of the explosion as well as complex linear propagation effects on signals reaching far-field recording stations. The hydrodynamic code CASH is used for modelling the high-strain-rate, non-linear response occurring in the material near the source. Our development efforts focused on incorporating in-situ stress and fracture processes. CASH simulates the material response from the near-source, strong-shock zone out to the small-strain and ultimately the elastic regime, where a linear code can take over. We developed an interface with the Spectral Element Method code SPECFEM3D, which is an efficient parallel implementation of a high-order finite element method. SPECFEM3D allows accurate modelling of wave propagation to remote monitoring distances at low cost. We will present CASH-SPECFEM3D results for SPE1, which was a chemical detonation of about 85 kg of TNT at 55 m depth in a granitic geologic unit. Spallation was observed for SPE1. Keeping the yield fixed, we vary the depth of the source systematically and compute synthetic seismograms to distances where the P and Rg waves are separated, so that analysis can be performed without concern about interference effects due to overlapping energy. We study the time and frequency characteristics of the P and Rg waves and analyse them with regard to the impact of free-surface interactions and the rock damage resulting from those interactions. We also perform traditional CMT inversions as well as advanced CMT inversions, developed at LANL, that take the damage into account. This will allow us to assess the effect of spallation on CMT solutions as well as to validate our inversion procedure. Further work will aim to validate the developed models against the data recorded during the SPEs. This long-term goal requires taking into account the 3D structure and thus a comprehensive characterization of the site.
Reproducibility and Transparency in Ocean-Climate Modeling
NASA Astrophysics Data System (ADS)
Hannah, N.; Adcroft, A.; Hallberg, R.; Griffies, S. M.
2015-12-01
Reproducibility is a cornerstone of the scientific method. Within geophysical modeling and simulation, achieving reproducibility can be difficult, especially given the complexity of numerical codes, enormous and disparate data sets, and the variety of supercomputing technology. We have made progress on this problem in the context of a large project - the development of new ocean and sea ice models, MOM6 and SIS2. Here we present useful techniques and experience. We use version control not only for code but for the entire experiment working directory, including configuration (run-time parameters, component versions), input data, and checksums on experiment output. This allows us to document when the solutions to experiments change, whether due to code updates or changes in input data. To avoid distributing large input datasets, we provide the tools for generating these from the sources rather than providing raw input data. Bugs can be a source of non-determinism and hence irreproducibility, e.g. reading from or branching on uninitialized memory. To expose these we routinely run system tests using a memory debugger, multiple compilers, and different machines. Additional confidence in the code comes from specialised tests, for example automated dimensional analysis and domain transformations. This has entailed adopting a code style where we deliberately restrict what a compiler can do when re-arranging mathematical expressions. In the spirit of open science, all development is in the public domain. This leads to a positive feedback, where increased transparency and reproducibility make using the model easier for external collaborators, who in turn provide valuable contributions. To facilitate users installing and running the model, we provide (version controlled) digital notebooks that illustrate and record analysis of output. These have the dual role of providing a gross, platform-independent testing capability and a means to document model output and analysis.
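A minimal sketch of the "checksums on experiment output" practice (file names and directory layout here are hypothetical): record SHA-256 digests of output files so a rerun can be compared bit-for-bit against the recorded manifest.

```python
# Minimal sketch (hypothetical paths): build a checksum manifest of experiment
# output so future reruns can be verified bit-for-bit against the recorded state.
import hashlib
import json
import pathlib

def checksum_manifest(output_dir, pattern="*.nc"):
    # Map each matching output file name to its SHA-256 digest.
    manifest = {}
    for path in sorted(pathlib.Path(output_dir).glob(pattern)):
        manifest[path.name] = hashlib.sha256(path.read_bytes()).hexdigest()
    return manifest

manifest = checksum_manifest("experiment/output")
pathlib.Path("experiment/checksums.json").write_text(json.dumps(manifest, indent=2))
```

Committing the manifest alongside the configuration is what lets the project document exactly when, and why, experiment solutions change.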
SEQassembly: A Practical Tools Program for Coding Sequences Splicing
NASA Astrophysics Data System (ADS)
Lee, Hongbin; Yang, Hang; Fu, Lei; Qin, Long; Li, Huili; He, Feng; Wang, Bo; Wu, Xiaoming
A CDS (coding sequence) is the portion of an mRNA sequence that is composed of a number of exon sequence segments. The construction of CDS sequences is important for in-depth genetic analyses such as genotyping. A program in the MATLAB environment is presented, which can process batches of sample sequences into code segments under the guidance of reference exon models and splice the code segments from the same sample source into a CDS according to the exon order in a queue file. This program is useful in transcriptional polymorphism detection and gene function studies.
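SEQassembly itself is a MATLAB program; as a language-neutral illustration of the splicing step it describes, the sketch below concatenates a sample's exon segments into a CDS in the order given by a queue table, using made-up exon names and sequences.

```python
# Generic illustration of the splicing step (not SEQassembly's MATLAB code):
# concatenate one sample's exon segments into a CDS in queue-file order.
exon_order = ["exon1", "exon2", "exon3"]   # ordering taken from the queue file
segments = {                               # matched code segments for one sample
    "exon2": "ATGGTGCAC",
    "exon1": "ATGGCC",
    "exon3": "CTGACTCCT",
}

def splice_cds(order, segs):
    missing = [name for name in order if name not in segs]
    if missing:
        raise ValueError(f"missing exon segments: {missing}")
    return "".join(segs[name] for name in order)

print(splice_cds(exon_order, segments))
```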
Optimization of computations for adjoint field and Jacobian needed in 3D CSEM inversion
NASA Astrophysics Data System (ADS)
Dehiya, Rahul; Singh, Arun; Gupta, Pravin K.; Israil, M.
2017-01-01
We present the features and results of a newly developed code, based on the Gauss-Newton optimization technique, for solving the three-dimensional controlled-source electromagnetic inverse problem. In this code a special emphasis has been put on representing the operations by block matrices for the conjugate gradient iteration. We show how, in the computation of the Jacobian, the matrix formed by differentiation of the system matrix can be made independent of frequency to optimize the operations at the conjugate gradient step. Coarse-level parallel computing, using the OpenMP framework, is used primarily because of its simplicity of implementation and the accessibility of shared-memory multi-core machines to almost anyone. We demonstrate how the coarseness of the modeling grid in comparison to the source (computational receiver) spacing can be exploited for efficient computing, without compromising the quality of the inverted model, by reducing the number of adjoint calls. It is also demonstrated that the adjoint field can even be computed on a grid coarser than the modeling grid without affecting the inversion outcome. These observations were reconfirmed using an experiment design in which the deviation of the source from a straight tow line is considered. Finally, a real field data inversion experiment is presented to demonstrate the robustness of the code.
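For readers unfamiliar with the structure of such inversions, the following generic sketch (a toy 1-D curve-fitting problem, not the authors' 3D CSEM code) shows a Gauss-Newton iteration whose damped normal equations are solved with conjugate gradients, the step the block-matrix representation above is designed to accelerate.

```python
# Generic Gauss-Newton sketch on a toy problem (not the authors' code): iterate
#   (J^T J + lambda I) dm = J^T (d_obs - f(m))
# and solve each update with conjugate gradients via a matrix-free operator.
import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

def forward(m, x):                    # toy nonlinear forward model
    return m[0] * np.exp(-m[1] * x)

def jacobian(m, x):                   # analytic Jacobian of the toy model
    return np.column_stack([np.exp(-m[1] * x), -m[0] * x * np.exp(-m[1] * x)])

x = np.linspace(0.0, 5.0, 50)
noise = 0.01 * np.random.default_rng(1).standard_normal(x.size)
d_obs = forward(np.array([2.0, 0.7]), x) + noise

m, lam = np.array([1.0, 0.1]), 1e-3   # starting model and damping
for _ in range(10):
    r = d_obs - forward(m, x)
    J = jacobian(m, x)
    A = LinearOperator((2, 2), matvec=lambda v: J.T @ (J @ v) + lam * v)
    dm, _ = cg(A, J.T @ r)            # conjugate-gradient solve of normal equations
    m = m + dm
print(m)                              # should approach the true parameters (2.0, 0.7)
```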
Python-Assisted MODFLOW Application and Code Development
NASA Astrophysics Data System (ADS)
Langevin, C.
2013-12-01
The U.S. Geological Survey (USGS) has a long history of developing and maintaining free, open-source software for hydrological investigations. The MODFLOW program is one of the most popular hydrologic simulation programs released by the USGS, and it is considered to be the most widely used groundwater flow simulation code. MODFLOW was written using a modular design and a procedural FORTRAN style, which resulted in code that could be understood, modified, and enhanced by many hydrologists. The code is fast, and because it uses standard FORTRAN it can be run on most operating systems. Most MODFLOW users rely on proprietary graphical user interfaces for constructing models and viewing model results. Some recent efforts, however, have focused on construction of MODFLOW models using open-source Python scripts. Customizable Python packages, such as FloPy (https://code.google.com/p/flopy), can be used to generate input files, read simulation results, and visualize results in two and three dimensions. Automating this sequence of steps leads to models that can be reproduced directly from original data and rediscretized in space and time. Python is also being used in the development and testing of new MODFLOW functionality. New packages and numerical formulations can be quickly prototyped and tested first with Python programs before implementation in MODFLOW. This is made possible by the flexible object-oriented design capabilities available in Python, the ability to call FORTRAN code from Python, and the ease with which linear systems of equations can be solved using SciPy, for example. Once new features are added to MODFLOW, Python can then be used to automate comprehensive regression testing and ensure reliability and accuracy of new versions prior to release.
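A minimal sketch of the scripted workflow described above, using FloPy's documented MODFLOW-2005 interface; the class names and defaults should be checked against the installed FloPy version, and actually running the model requires an mf2005 executable on the path.

```python
# Sketch of a FloPy-style workflow (verify against the installed FloPy version):
# describe a tiny single-layer model in Python, then write the MODFLOW input files.
import flopy

mf = flopy.modflow.Modflow(modelname="demo", exe_name="mf2005")
dis = flopy.modflow.ModflowDis(mf, nlay=1, nrow=10, ncol=10,
                               delr=100.0, delc=100.0, top=10.0, botm=0.0)
bas = flopy.modflow.ModflowBas(mf, ibound=1, strt=10.0)   # active cells, starting heads
lpf = flopy.modflow.ModflowLpf(mf, hk=10.0)               # hydraulic conductivity
pcg = flopy.modflow.ModflowPcg(mf)                        # solver
oc = flopy.modflow.ModflowOc(mf)                          # output control

mf.write_input()        # generate MODFLOW input files from the Python description
# mf.run_model()        # uncomment to run; heads can then be read with flopy.utils.HeadFile
```

Because the whole model is regenerated from a script, it can be rebuilt directly from the original data and rediscretized in space and time, which is the reproducibility point the abstract makes.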
MODEST: A Tool for Geodesy and Astronomy
NASA Technical Reports Server (NTRS)
Sovers, Ojars J.; Jacobs, Christopher S.; Lanyi, Gabor E.
2004-01-01
Features of the JPL VLBI modeling and estimation software "MODEST" are reviewed. Its main advantages include thoroughly documented model physics, portability, and detailed error modeling. Two unique models are included: modeling of source structure and modeling of both spatial and temporal correlations in tropospheric delay noise. History of the code parallels the development of the astrometric and geodetic VLBI technique and the software retains many of the models implemented during its advancement. The code has been traceably maintained since the early 1980s, and will continue to be updated with recent IERS standards. Scripts are being developed to facilitate user-friendly data processing in the era of e-VLBI.
Joint source-channel coding for motion-compensated DCT-based SNR scalable video.
Kondi, Lisimachos P; Ishtiaq, Faisal; Katsaggelos, Aggelos K
2002-01-01
In this paper, we develop an approach toward joint source-channel coding for motion-compensated DCT-based scalable video coding and transmission. A framework for the optimal selection of the source and channel coding rates over all scalable layers is presented such that the overall distortion is minimized. The algorithm utilizes universal rate distortion characteristics which are obtained experimentally and show the sensitivity of the source encoder and decoder to channel errors. The proposed algorithm allocates the available bit rate between scalable layers and, within each layer, between source and channel coding. We present the results of this rate allocation algorithm for video transmission over a wireless channel using the H.263 Version 2 signal-to-noise ratio (SNR) scalable codec for source coding and rate-compatible punctured convolutional (RCPC) codes for channel coding. We discuss the performance of the algorithm with respect to the channel conditions, coding methodologies, layer rates, and number of layers.
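As a toy illustration of the allocation problem (the rate and distortion numbers below are invented, not from the paper), one can search over per-layer source rates and RCPC channel-code rates for the combination that maximizes expected distortion reduction within a total rate budget.

```python
# Toy illustration of joint source/channel rate allocation for two scalable
# layers (all numbers invented): choose a (source_kbps, channel_code_rate) pair
# per layer to maximize expected distortion reduction within a rate budget.
from itertools import product

options = [(32, 1/2), (32, 2/3), (64, 1/2), (64, 2/3)]  # (source kbps, code rate)
p_ok = {1/2: 0.99, 2/3: 0.90}     # assumed prob. of correct decoding per code rate
gain = {32: 10.0, 64: 16.0}       # assumed MSE reduction if the layer is decoded
budget = 200.0                    # total transmitted kbps available

best = None
for base, enh in product(options, repeat=2):
    rate = base[0] / base[1] + enh[0] / enh[1]          # transmitted kbps
    if rate > budget:
        continue
    # The enhancement layer is only useful if the base layer also decodes.
    expected_gain = p_ok[base[1]] * (gain[base[0]] + p_ok[enh[1]] * gain[enh[0]])
    if best is None or expected_gain > best[0]:
        best = (expected_gain, base, enh, rate)
print(best)
```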
AQUATOX Frequently Asked Questions
Capabilities, Installation, Source Code, Example Study Files, Biotic State Variables, Initial Conditions, Loadings, Volume, Sediments, Parameters, Libraries, Ecotoxicology, Waterbodies, Link to Watershed Models, Output, Metals, Troubleshooting
SIMULATION MODEL FOR WATERSHED MANAGEMENT PLANNING. VOLUME 2. MODEL USER MANUAL
This report provides a user manual for the hydrologic, nonpoint source pollution simulation of the generalized planning model for evaluating forest and farming management alternatives. The manual contains an explanation of application of specific code and indicates changes that s...
A power-efficient communication system between brain-implantable devices and external computers.
Yao, Ning; Lee, Heung-No; Chang, Cheng-Chun; Sclabassi, Robert J; Sun, Mingui
2007-01-01
In this paper, we propose a power-efficient communication system for linking a brain-implantable device to an external system. For battery-powered implantable devices, the processor and transmitter power should be reduced in order to both conserve battery power and reduce the health risks associated with transmission. To accomplish this, a joint source-channel coding/decoding system is devised. Low-density generator matrix (LDGM) codes are used in our system due to their low encoding complexity. The power cost for signal processing within the implantable device is greatly reduced by avoiding explicit source encoding: raw data, which is highly correlated, is transmitted. At the receiver, a Markov chain source correlation model is utilized to approximate and capture the correlation of the raw data. A turbo iterative receiver algorithm is designed which connects the Markov chain source model to the LDGM decoder in a turbo-iterative way. Simulation results show that the proposed system can save 1 to 2.5 dB of transmission power.
An efficient system for reliably transmitting image and video data over low bit rate noisy channels
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.; Huang, Y. F.; Stevenson, Robert L.
1994-01-01
This research project is intended to develop an efficient system for reliably transmitting image and video data over low bit rate noisy channels. The basic ideas behind the proposed approach are the following: employ statistical-based image modeling to facilitate pre- and post-processing and error detection, use spare redundancy that the source compression did not remove to add robustness, and implement coded modulation to improve bandwidth efficiency and noise rejection. Over the last six months, progress has been made on various aspects of the project. Through our studies of the integrated system, a list-based iterative Trellis decoder has been developed. The decoder accepts feedback from a post-processor which can detect channel errors in the reconstructed image. The error detection is based on the Huber Markov random field image model for the compressed image. The compression scheme used here is that of JPEG (Joint Photographic Experts Group). Experiments were performed and the results are quite encouraging. The principal ideas here are extendable to other compression techniques. In addition, research was also performed on unequal error protection channel coding, subband vector quantization as a means of source coding, and post processing for reducing coding artifacts. Our studies on unequal error protection (UEP) coding for image transmission focused on examining the properties of the UEP capabilities of convolutional codes. The investigation of subband vector quantization employed a wavelet transform with special emphasis on exploiting interband redundancy. The outcome of this investigation included the development of three algorithms for subband vector quantization. The reduction of transform coding artifacts was studied with the aid of a non-Gaussian Markov random field model. This results in improved image decompression. These studies are summarized and the technical papers included in the appendices.
NASA Technical Reports Server (NTRS)
Barth, Timothy J.
2014-01-01
Simulation codes often utilize finite-dimensional approximation, resulting in numerical error. Examples include numerical methods utilizing grids and finite-dimensional basis functions, and particle methods using a finite number of particles. These same simulation codes also often contain sources of uncertainty, for example, uncertain parameters and fields associated with the imposition of initial and boundary data, and uncertain physical model parameters such as chemical reaction rates, mixture model parameters, and material property parameters.
NASA Astrophysics Data System (ADS)
Vesselinov, V. V.; Harp, D.
2010-12-01
The process of decision making to protect groundwater resources requires a detailed estimation of the uncertainties in model predictions. Various uncertainties associated with modeling a natural system contribute to the uncertainties in the model predictions, such as: (1) measurement and computational errors; (2) uncertainties in the conceptual model and model-parameter estimates; and (3) simplifications in model setup and in the numerical representation of governing processes. Due to this combination of factors, the sources of predictive uncertainty are generally difficult to quantify individually. Decision support related to the optimal design of monitoring networks requires (1) detailed analyses of the existing uncertainties in model predictions of groundwater flow and contaminant transport, and (2) optimization of the proposed monitoring network locations in terms of their efficiency in detecting contaminants and providing early warning. We apply existing and newly proposed methods to quantify predictive uncertainties and to optimize well locations. An important aspect of the analysis is the application of a newly developed optimization technique based on coupling the Particle Swarm and Levenberg-Marquardt optimization methods, which proved to be robust and computationally efficient. These techniques and algorithms are bundled in a software package called MADS. MADS (Model Analyses for Decision Support) is an object-oriented code that is capable of performing various types of model analyses and supporting model-based decision making. The code can be executed under different computational modes, which include (1) sensitivity analyses (global and local), (2) Monte Carlo analysis, (3) model calibration, (4) parameter estimation, (5) uncertainty quantification, and (6) model selection. The code can be externally coupled with any existing model simulator through integrated modules that read/write input and output files using a set of template and instruction files (consistent with the PEST I/O protocol). MADS can also be internally coupled with a series of built-in analytical simulators. MADS provides functionality to work directly with existing control files developed for the code PEST (Doherty 2009). To perform the computational modes mentioned above, the code utilizes (1) advanced Latin hypercube sampling techniques (including Improved Distributed Sampling), (2) various gradient-based Levenberg-Marquardt optimization methods, (3) advanced global optimization methods (including Particle Swarm Optimization), and (4) a selection of alternative objective functions. The code has been successfully applied to perform various model analyses related to environmental management of real contamination sites. Examples include source identification problems, quantification of uncertainty, model calibration, and optimization of monitoring networks. The methodology and software codes are demonstrated using synthetic and real case studies where monitoring networks are optimized taking into account the uncertainty in model predictions of contaminant transport.
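As one concrete ingredient, a basic Latin hypercube sampler of the kind used for the Monte Carlo and sensitivity modes can be written in a few lines; this is a generic sketch, not MADS source code, and it omits the Improved Distributed Sampling refinement mentioned above.

```python
# Generic Latin hypercube sampler (not MADS code): one stratified sample per
# equal-probability bin for each parameter, with bins independently permuted
# across parameters, then scaled to the given bounds.
import numpy as np

def latin_hypercube(n_samples, bounds, rng=None):
    rng = np.random.default_rng(rng)
    n_params = len(bounds)
    # Stratified points in [0, 1): one per bin, jittered within the bin.
    u = (rng.random((n_samples, n_params)) + np.arange(n_samples)[:, None]) / n_samples
    for j in range(n_params):
        rng.shuffle(u[:, j])          # decorrelate the bin assignments across parameters
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    return lo + u * (hi - lo)

# Example: 10 samples of three hypothetical model parameters.
samples = latin_hypercube(10, bounds=[(0.0, 1.0), (1e-5, 1e-3), (10.0, 50.0)], rng=42)
print(samples)
```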
The Astrophysics Source Code Library: An Update
NASA Astrophysics Data System (ADS)
Allen, Alice; Nemiroff, R. J.; Shamir, L.; Teuben, P. J.
2012-01-01
The Astrophysics Source Code Library (ASCL), founded in 1999, takes an active approach to sharing astrophysical source code. ASCL's editor seeks out both new and old peer-reviewed papers that describe methods or experiments that involve the development or use of source code, and adds entries for the found codes to the library. This approach ensures that source codes are added without requiring authors to actively submit them, resulting in a comprehensive listing that covers a significant number of the astrophysics source codes used in peer-reviewed studies. The ASCL moved to a new location in 2010; it now holds over 300 codes and continues to grow. In 2011, the ASCL (http://asterisk.apod.com/viewforum.php?f=35) added an average of 19 new codes per month; we encourage scientists to submit their codes for inclusion. An advisory committee has been established to provide input and guide the development and expansion of its new site, and a marketing plan has been developed and is being executed. All ASCL source codes have been used to generate results published in or submitted to a refereed journal and are freely available either via a download site or from an identified source. This presentation covers the history of the ASCL, examines its current state and benefits, describes the means of and requirements for including codes, and outlines its future plans.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vesselinov, Velimir; O'Malley, Daniel; Lin, Youzuo
2016-07-01
Mads.jl (Model analysis and decision support in Julia) is a code that streamlines the process of using data and models for analysis and decision support. It is based on another open-source code developed at LANL and written in C/C++ (MADS; http://mads.lanl.gov; LA-CC-11-035). Mads.jl can work with external models of arbitrary complexity as well as built-in models of flow and transport in porous media. It enables a number of data- and model-based analyses including model calibration, sensitivity analysis, uncertainty quantification, and decision analysis. The code also can use a series of alternative adaptive computational techniques for Bayesian sampling, Monte Carlo, and Bayesian Information-Gap Decision Theory. The code is implemented in the Julia programming language, and has high-performance (parallel) and memory management capabilities. The code uses a series of third-party modules developed by others. The code development will also include contributions to the existing third-party modules written in Julia; these contributions will be important for the efficient implementation of the algorithms used by Mads.jl. The code also uses a series of LANL-developed modules written by Dan O'Malley; these modules will also be part of the Mads.jl release. Mads.jl will be released under the GPL V3 license. The code will be distributed as a Git repo at gitlab.com and github.com. The Mads.jl manual and documentation will be posted at madsjulia.lanl.gov.
NASA Astrophysics Data System (ADS)
Cotté, B.
2018-05-01
This study proposes to couple a source model based on Amiet's theory and a parabolic equation code in order to model wind turbine noise emission and propagation in an inhomogeneous atmosphere. Two broadband noise generation mechanisms are considered, namely trailing edge noise and turbulent inflow noise. The effects of wind shear and atmospheric turbulence are taken into account using the Monin-Obukhov similarity theory. The coupling approach, based on the backpropagation method to preserve the directivity of the aeroacoustic sources, is validated by comparison with an analytical solution for the propagation over a finite impedance ground in a homogeneous atmosphere. The influence of refraction effects is then analyzed for different directions of propagation. The spectrum modification related to the ground effect and the presence of a shadow zone for upwind receivers are emphasized. The validity of the point source approximation that is often used in wind turbine noise propagation models is finally assessed. This approximation exaggerates the interference dips in the spectra, and is not able to correctly predict the amplitude modulation.
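As a small illustration of how Monin-Obukhov similarity theory supplies the wind shear used in such propagation studies, the Python sketch below evaluates the standard surface-layer log-law with Businger-Dyer stability corrections. The numerical values (friction velocity, roughness length, Obukhov length, heights) are arbitrary examples, not parameters from this study.

    import math

    KAPPA = 0.4  # von Karman constant

    def psi_m(zeta):
        # Businger-Dyer stability correction for momentum; zeta = z / L with L the
        # Obukhov length (positive for stable, negative for unstable stratification).
        if zeta >= 0.0:                       # stable
            return -5.0 * zeta
        x = (1.0 - 16.0 * zeta) ** 0.25       # unstable
        return (2.0 * math.log((1.0 + x) / 2.0)
                + math.log((1.0 + x * x) / 2.0)
                - 2.0 * math.atan(x) + math.pi / 2.0)

    def wind_speed(z, u_star=0.4, z0=0.05, L=200.0):
        # Mean wind speed U(z) from the surface-layer similarity profile.
        return (u_star / KAPPA) * (math.log(z / z0) - psi_m(z / L) + psi_m(z0 / L))

    # Example: wind shear across heights spanning a rotor, stably stratified case.
    for z in (40.0, 80.0, 120.0):
        print(f"U({z:5.1f} m) = {wind_speed(z):4.2f} m/s")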
Status and Plans for the TRANSP Interpretive and Predictive Simulation Code
NASA Astrophysics Data System (ADS)
Kaye, Stanley; Andre, Robert; Marina, Gorelenkova; Yuan, Xingqui; Hawryluk, Richard; Jardin, Steven; Poli, Francesca
2015-11-01
TRANSP is an integrated interpretive and predictive transport analysis tool that incorporates state of the art heating/current drive sources and transport models. The treatments and transport solvers are becoming increasingly sophisticated and comprehensive. For instance, the ISOLVER component provides a free boundary equilibrium solution, while the PT_SOLVER transport solver is especially suited for stiff transport models such as TGLF. TRANSP also incorporates such source models as NUBEAM for neutral beam injection, GENRAY, TORAY, TORBEAM, TORIC and CQL3D for ICRH, LHCD, ECH and HHFW. The implementation of selected components makes efficient use of MPI for speed up of code calculations. TRANSP has a wide international user-base, and it is run on the FusionGrid to allow for timely support and quick turnaround by the PPPL Computational Plasma Physics Group. It is being used as a basis for both analysis and development of control algorithms and discharge operational scenarios, including simulation of ITER plasmas. This poster will describe present uses of the code worldwide, as well as plans for upgrading the physics modules and code framework. Progress on implementing TRANSP as a component in the ITER IMAS will also be described. This research was supported by the U.S. Department of Energy under contracts DE-AC02-09CH11466.
The APS SASE FEL : modeling and code comparison.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Biedron, S. G.
A self-amplified spontaneous emission (SASE) free-electron laser (FEL) is under construction at the Advanced Photon Source (APS). Five FEL simulation codes were used in the design phase: GENESIS, GINGER, MEDUSA, RON, and TDA3D. Initial comparisons between each of these independent formulations show good agreement for the parameters of the APS SASE FEL.
An Infrastructure for UML-Based Code Generation Tools
NASA Astrophysics Data System (ADS)
Wehrmeister, Marco A.; Freitas, Edison P.; Pereira, Carlos E.
The use of Model-Driven Engineering (MDE) techniques in the domain of distributed embedded real-time systems is gaining importance as a means to cope with the increasing design complexity of such systems. This paper discusses an infrastructure created to build GenERTiCA, a flexible tool that supports an MDE approach, which uses aspect-oriented concepts to handle non-functional requirements from the embedded and real-time systems domain. GenERTiCA generates source code from UML models, and also performs weaving of aspects, which have been specified within the UML model. Additionally, this paper discusses the Distributed Embedded Real-Time Compact Specification (DERCS), a PIM created to support UML-based code generation tools. Some heuristics to transform UML models into DERCS, which have been implemented in GenERTiCA, are also discussed.
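To make the model-to-code mapping idea concrete, here is a deliberately tiny, generic Python sketch of template-driven code generation from an in-memory model. It is not GenERTiCA or DERCS; the model dictionary, the template, and the generated Java-like class skeleton are invented purely for illustration.

    from string import Template

    # A toy platform-independent "model": one class with typed attributes.
    model = {
        "name": "SpeedSensor",
        "attributes": [("period_ms", "int"), ("last_value", "double")],
    }

    CLASS_TEMPLATE = Template(
        "public class $name {\n"
        "$fields\n"
        "    public void update() {\n"
        "        // an aspect-weaving step could inject timing checks here\n"
        "    }\n"
        "}\n"
    )

    def generate(model):
        # Map each model attribute to a field declaration, then fill the template.
        fields = "\n".join(
            f"    private {typ} {attr};" for attr, typ in model["attributes"]
        )
        return CLASS_TEMPLATE.substitute(name=model["name"], fields=fields)

    print(generate(model))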
Authorship Attribution of Source Code
ERIC Educational Resources Information Center
Tennyson, Matthew F.
2013-01-01
Authorship attribution of source code is the task of deciding who wrote a program, given its source code. Applications include software forensics, plagiarism detection, and determining software ownership. A number of methods for the authorship attribution of source code have been presented in the past. A review of those existing methods is…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, T; Lin, H; Xu, X
Purpose: (1) To perform phase space (PS) based source modeling for Tomotherapy and Varian TrueBeam 6 MV Linacs, (2) to examine the accuracy and performance of the ARCHER Monte Carlo code on a heterogeneous computing platform with Many Integrated Core coprocessors (MIC, aka Xeon Phi) and GPUs, and (3) to explore the software micro-optimization methods. Methods: The patient-specific source of Tomotherapy and Varian TrueBeam Linacs was modeled using the PS approach. For the helical Tomotherapy case, the PS data were calculated in our previous study (Su et al. 2014 41(7) Medical Physics). For the single-view Varian TrueBeam case, we analytically derived them from the raw patient-independent PS data in IAEA's database, partial geometry information of the jaw and MLC as well as the fluence map. The phantom was generated from DICOM images. The Monte Carlo simulation was performed by the ARCHER-MIC and GPU codes, which were benchmarked against a modified parallel DPM code. Software micro-optimization was systematically conducted, and was focused on SIMD vectorization of tight for-loops and data prefetch, with the ultimate goal of increasing 512-bit register utilization and reducing memory access latency. Results: Dose calculation was performed for two clinical cases, a Tomotherapy-based prostate cancer treatment and a TrueBeam-based left breast treatment. ARCHER was verified against the DPM code. The statistical uncertainty of the dose to the PTV was less than 1%. Using double precision, the total wall time of the multithreaded CPU code on a X5650 CPU was 339 seconds for the Tomotherapy case and 131 seconds for the TrueBeam, while on three 5110P MICs it was reduced to 79 and 59 seconds, respectively. The single-precision GPU code on a K40 GPU took 45 seconds for the Tomotherapy dose calculation. Conclusion: We have extended ARCHER, the MIC and GPU-based Monte Carlo dose engine, to Tomotherapy and TrueBeam dose calculations.
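The micro-optimizations described above (SIMD vectorization of tight loops, wider register utilization) have a loose analogue in array-level vectorization. The toy Python comparison below only illustrates the scalar-loop versus vectorized-kernel idea; the "energy deposition" formula and all constants are invented, and this is in no way a piece of the ARCHER dose engine.

    import math
    import time
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200_000
    energy = rng.uniform(0.1, 6.0, n)   # toy particle energies (MeV)
    path = rng.uniform(0.0, 2.0, n)     # toy path lengths (cm)
    mu = 0.03                           # toy attenuation-like constant (1/cm)

    def deposit_scalar():
        # Tight scalar loop: the shape a compiler would need to auto-vectorize.
        total = 0.0
        for i in range(n):
            total += energy[i] * (1.0 - math.exp(-mu * path[i]))
        return total

    def deposit_vectorized():
        # Array-level kernel: the whole loop body expressed as SIMD-friendly ops.
        return float(np.sum(energy * (1.0 - np.exp(-mu * path))))

    t0 = time.perf_counter(); s1 = deposit_scalar();  t1 = time.perf_counter()
    s2 = deposit_vectorized();                        t2 = time.perf_counter()
    print(f"scalar {t1 - t0:.3f} s, vectorized {t2 - t1:.4f} s, "
          f"results agree: {math.isclose(s1, s2, rel_tol=1e-9)}")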
Chibani, Omar; Li, X Allen
2002-05-01
Three Monte Carlo photon/electron transport codes (GEPTS, EGSnrc, and MCNP) are bench-marked against dose measurements in homogeneous (both low- and high-Z) media as well as at interfaces. A brief overview on physical models used by each code for photon and electron (positron) transport is given. Absolute calorimetric dose measurements for 0.5 and 1 MeV electron beams incident on homogeneous and multilayer media are compared with the predictions of the three codes. Comparison with dose measurements in two-layer media exposed to a 60Co gamma source is also performed. In addition, comparisons between the codes (including the EGS4 code) are done for (a) 0.05 to 10 MeV electron beams and positron point sources in lead, (b) high-energy photons (10 and 20 MeV) irradiating a multilayer phantom (water/steel/air), and (c) simulation of a 90Sr/90Y brachytherapy source. A good agreement is observed between the calorimetric electron dose measurements and predictions of GEPTS and EGSnrc in both homogeneous and multilayer media. MCNP outputs are found to be dependent on the energy-indexing method (Default/ITS style). This dependence is significant in homogeneous media as well as at interfaces. MCNP(ITS) fits more closely the experimental data than MCNP(DEF), except for the case of Be. At low energy (0.05 and 0.1 MeV), MCNP(ITS) dose distributions in lead show higher maximums in comparison with GEPTS and EGSnrc. EGS4 produces too penetrating electron-dose distributions in high-Z media, especially at low energy (<0.1 MeV). For positrons, differences between GEPTS and EGSnrc are observed in lead because GEPTS distinguishes positrons from electrons for both elastic multiple scattering and bremsstrahlung emission models. For the 60Co source, a quite good agreement between calculations and measurements is observed with regards to the experimental uncertainty. For the other cases (10 and 20 MeV photon sources and the 90Sr/90Y beta source), a good agreement is found between the three codes. In conclusion, differences between GEPTS and EGSnrc results are found to be very small for almost all media and energies studied. MCNP results depend significantly on the electron energy-indexing method.
NASA Astrophysics Data System (ADS)
Kurceren, Ragip; Modestino, James W.
1998-12-01
The use of forward error-control (FEC) coding, possibly in conjunction with ARQ techniques, has emerged as a promising approach for video transport over ATM networks for cell-loss recovery and/or bit error correction, such as might be required for wireless links. Although FEC provides cell-loss recovery capabilities it also introduces transmission overhead which can possibly cause additional cell losses. A methodology is described to maximize the number of video sources multiplexed at a given quality of service (QoS), measured in terms of decoded cell loss probability, using interlaced FEC codes. The transport channel is modelled as a block interference channel (BIC) and the multiplexer as single server, deterministic service, finite buffer supporting N users. Based upon an information-theoretic characterization of the BIC and large deviation bounds on the buffer overflow probability, the described methodology provides theoretically achievable upper limits on the number of sources multiplexed. Performance of specific coding techniques using interlaced nonbinary Reed-Solomon (RS) codes and binary rate-compatible punctured convolutional (RCPC) codes is illustrated.
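A back-of-the-envelope version of the decoded cell-loss calculation can be written directly: for an (n, k) Reed-Solomon code used for erasure recovery, a block is unrecoverable when more than n - k of its n cells are lost, and interleaving mainly serves to make cell losses within a block approximately independent. The Python sketch below evaluates that binomial expression for assumed, illustrative values of n, k and the raw cell-loss probability; it is not the analysis from the paper.

    from math import comb

    def decoded_cell_loss(n, k, p):
        # Approximate decoded cell-loss probability for an (n, k) erasure code,
        # assuming independent cell losses with probability p.  A block fails when
        # more than n - k cells are erased; the factor j / n is the fraction of
        # cells that remain lost in such a failed block.
        loss = 0.0
        for j in range(n - k + 1, n + 1):
            loss += comb(n, j) * p**j * (1.0 - p) ** (n - j) * (j / n)
        return loss

    # Illustrative numbers only: an RS(48, 44) code over several raw loss rates.
    for p in (1e-2, 1e-3, 1e-4):
        print(f"raw loss {p:.0e} -> decoded loss {decoded_cell_loss(48, 44, p):.2e}")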
NASA Astrophysics Data System (ADS)
Gel, Aytekin; Hu, Jonathan; Ould-Ahmed-Vall, ElMoustapha; Kalinkin, Alexander A.
2017-02-01
Legacy codes remain a crucial element of today's simulation-based engineering ecosystem due to the extensive validation process and investment in such software. The rapid evolution of high-performance computing architectures necessitates the modernization of these codes. One approach to modernization is a complete overhaul of the code. However, this could require extensive investments, such as rewriting in modern languages, new data constructs, etc., which will necessitate systematic verification and validation to re-establish the credibility of the computational models. The current study advocates a more incremental approach and is a culmination of several modernization efforts of the legacy code MFIX, which is an open-source computational fluid dynamics code that has evolved over several decades, is widely used in multiphase flows, and is still being developed by the National Energy Technology Laboratory. Two different modernization approaches, 'bottom-up' and 'top-down', are illustrated. Preliminary results show up to 8.5x improvement at the selected kernel level with the first approach and up to 50% improvement in total simulated time with the latter for the demonstration cases and target HPC systems employed.
Tests of Exoplanet Atmospheric Radiative Transfer Codes
NASA Astrophysics Data System (ADS)
Harrington, Joseph; Challener, Ryan; DeLarme, Emerson; Cubillos, Patricio; Blecic, Jasmina; Foster, Austin; Garland, Justin
2016-10-01
Atmospheric radiative transfer codes are used both to predict planetary spectra and in retrieval algorithms to interpret data. Observational plans, theoretical models, and scientific results thus depend on the correctness of these calculations. Yet, the calculations are complex and the codes implementing them are often written without modern software-verification techniques. In the process of writing our own code, we became aware of several others with artifacts of unknown origin and even outright errors in their spectra. We present a series of tests to verify atmospheric radiative-transfer codes. These include: simple, single-line line lists that, when combined with delta-function abundance profiles, should produce a broadened line that can be verified easily; isothermal atmospheres that should produce analytically-verifiable blackbody spectra at the input temperatures; and model atmospheres with a range of complexities that can be compared to the output of other codes. We apply the tests to our own code, Bayesian Atmospheric Radiative Transfer (BART) and to several other codes. The test suite is open-source software. We propose this test suite as a standard for verifying current and future radiative transfer codes, analogous to the Held-Suarez test for general circulation models. This work was supported by NASA Planetary Atmospheres grant NX12AI69G and NASA Astrophysics Data Analysis Program grant NNX13AF38G.
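One of the proposed tests, checking that an isothermal atmosphere returns the analytic blackbody spectrum, reduces to comparing a code's emergent intensity with the Planck function. The Python sketch below only computes that reference curve and a trivial relative-error check; the temperature, wavelength grid, and the faked "model" spectrum are arbitrary and stand in for output from whichever radiative-transfer code is under test.

    import numpy as np

    H = 6.62607015e-34   # Planck constant, J s
    C = 2.99792458e8     # speed of light, m/s
    KB = 1.380649e-23    # Boltzmann constant, J/K

    def planck_wavelength(wl, T):
        # Blackbody spectral radiance B_lambda(T) in W m^-2 sr^-1 m^-1.
        return (2.0 * H * C**2 / wl**5) / np.expm1(H * C / (wl * KB * T))

    wl = np.logspace(-6.3, -4.5, 200)     # roughly 0.5 to 30 micron
    T = 1200.0                            # isothermal test temperature (K)
    reference = planck_wavelength(wl, T)

    # 'model_spectrum' would come from the code under test; here it is faked with
    # the reference itself plus tiny noise, just to show the verification check.
    model_spectrum = reference * (1.0 + 1e-6 * np.random.default_rng(1).standard_normal(wl.size))
    rel_err = np.max(np.abs(model_spectrum - reference) / reference)
    print(f"max relative error vs. Planck function: {rel_err:.2e}")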
Commercial Demand Module - NEMS Documentation
2017-01-01
Documents the objectives, analytical approach and development of the National Energy Modeling System (NEMS) Commercial Sector Demand Module. The report catalogues and describes the model assumptions, computational methodology, parameter estimation techniques, model source code, and forecast results generated through the synthesis and scenario development based on these components.
NASA Astrophysics Data System (ADS)
Rielly, Matthew Robert
An existing numerical model (known as the Bergen code) is used to investigate finite amplitude ultrasound propagation through multiple layers of tissue-like media. This model uses a finite difference method to solve the nonlinear parabolic KZK wave equation. The code is modified to include an arbitrary frequency dependence of absorption and transmission effects for wave propagation across a plane interface at normal incidence. In addition the code is adapted to calculate the total intensity loss associated with the absorption of the fundamental and nonlinearly generated harmonics. Measurements are also taken of the axial nonlinear pressure field generated from a circular focused, 2.25 MHz source, through single and multiple layered tissue mimicking fluids, for source pressures in the range from 13 kPa to 310 kPa. Two tissue mimicking fluids are developed to provide acoustic properties similar to amniotic fluid and a typical soft tissue. The values of the nonlinearity parameter, sound velocity and frequency dependence of attenuation for both fluids are presented, and the measurement procedures employed to obtain these characteristics are described in detail. These acoustic parameters, together with the measured source conditions are used as input to the numerical model, allowing the experimental conditions to be simulated. Extensive comparisons are made between the model's predictions and the axial pressure field measurements. Results are presented in the frequency domain showing the fundamental and three subsequent harmonic amplitudes on axis, as a function of axial distance. These show that significant nonlinear distortion can occur through media with characteristics typical of tissue. Time domain waveform comparisons are also made. An excellent agreement is found between theory and experiment indicating that the model can be used to predict nonlinear ultrasound propagation through multiple layers of tissue-like media. The numerical code is also used to model the intensity loss through layered tissue mimics and results are presented illustrating the effects of altering the layered medium on the magnitude and spatial distribution of intensity loss.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Portmann, Greg; /LBL, Berkeley; Safranek, James
The LOCO algorithm has been used by many accelerators around the world. Although the uses for LOCO vary, the most common use has been to find calibration errors and correct the optics functions. The light source community in particular has made extensive use of the LOCO algorithms to tightly control the beta function and coupling. Maintaining high quality beam parameters requires constant attention, so a relatively large effort was put into software development for the LOCO application. The LOCO code was originally written in FORTRAN. This code worked fine but it was somewhat awkward to use. For instance, the FORTRAN code itself did not calculate the model response matrix. It required a separate modeling code such as MAD to calculate the model matrix, and then the data had to be loaded manually into the LOCO code. As the number of people interested in LOCO grew, it became necessary to make it easier to use. The decision to port LOCO to Matlab was relatively easy: it is best to use a matrix programming language with good graphics capability; Matlab was also being used for high level machine control; and the accelerator modeling code AT [5] was already developed for Matlab. Since LOCO requires collecting and processing a relatively large amount of data, it is very helpful to have the LOCO code compatible with the high level machine control [3]. A number of new features were added while porting the code from FORTRAN, and new methods continue to evolve [7][9]. Although Matlab LOCO was written with AT as the underlying tracking code, a mechanism to connect to other modeling codes has been provided.
Schroedinger’s code: Source code availability and transparency in astrophysics
NASA Astrophysics Data System (ADS)
Ryan, PW; Allen, Alice; Teuben, Peter
2018-01-01
Astronomers use software for their research, but how many of the codes they use are available as source code? We examined a sample of 166 papers from 2015 for clearly identified software use, then searched for source code for the software packages mentioned in these research papers. We categorized the software to indicate whether source code is available for download and whether there are restrictions to accessing it, and if source code was not available, whether some other form of the software, such as a binary, was. Over 40% of the source code for the software used in our sample was not available for download. As URLs have often been used as proxy citations for software, we also extracted URLs from one journal's 2015 research articles, removed those from certain long-term, reliable domains, and tested the remainder to determine what percentage of these URLs were still accessible in September and October, 2017.
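The link-rot part of this kind of study can be reproduced with a few lines of standard-library Python. The sketch below is a simplified illustration, not the authors' tooling: the URLs are placeholders, and a production check would add retries, redirect handling, and polite rate limiting.

    import urllib.request
    import urllib.error

    def is_accessible(url, timeout=10):
        # Return True if the URL answers with an HTTP status below 400.
        req = urllib.request.Request(url, method="HEAD",
                                     headers={"User-Agent": "link-check/0.1"})
        try:
            with urllib.request.urlopen(req, timeout=timeout) as resp:
                return resp.status < 400
        except (urllib.error.URLError, TimeoutError):
            return False

    urls = ["https://example.org/code", "https://example.org/missing"]  # placeholders
    ok = [u for u in urls if is_accessible(u)]
    print(f"{len(ok)}/{len(urls)} URLs still accessible")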
Matrix factorization-based data fusion for the prediction of lncRNA-disease associations.
Fu, Guangyuan; Wang, Jun; Domeniconi, Carlotta; Yu, Guoxian
2018-05-01
Long non-coding RNAs (lncRNAs) play crucial roles in complex disease diagnosis, prognosis, prevention and treatment, but only a small portion of lncRNA-disease associations have been experimentally verified. Various computational models have been proposed to identify lncRNA-disease associations by integrating heterogeneous data sources. However, existing models generally ignore the intrinsic structure of data sources or treat them as equally relevant, while they may not be. To accurately identify lncRNA-disease associations, we propose a Matrix Factorization based LncRNA-Disease Association prediction model (MFLDA in short). MFLDA decomposes data matrices of heterogeneous data sources into low-rank matrices via matrix tri-factorization to explore and exploit their intrinsic and shared structure. MFLDA can select and integrate the data sources by assigning different weights to them. An iterative solution is further introduced to simultaneously optimize the weights and low-rank matrices. Next, MFLDA uses the optimized low-rank matrices to reconstruct the lncRNA-disease association matrix and thus to identify potential associations. In 5-fold cross validation experiments to identify verified lncRNA-disease associations, MFLDA achieves an area under the receiver operating characteristic curve (AUC) of 0.7408, at least 3% higher than those given by state-of-the-art data fusion based computational models. An empirical study on identifying masked lncRNA-disease associations again shows that MFLDA can identify potential associations more accurately than competing models. A case study on identifying lncRNAs associated with breast, lung and stomach cancers shows that 38 out of 45 (84%) associations predicted by MFLDA are supported by recent biomedical literature, and further proves the capability of MFLDA in identifying novel lncRNA-disease associations. MFLDA is a general data fusion framework, and as such it can be adopted to predict associations between other biological entities. The source code for MFLDA is available at: http://mlda.swu.edu.cn/codes.php?name=MFLDA. Contact: gxyu@swu.edu.cn. Supplementary data are available at Bioinformatics online.
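The masked-association evaluation idea behind such studies can be sketched with a generic low-rank baseline. The Python snippet below is not MFLDA (it uses a plain truncated SVD rather than weighted matrix tri-factorization), and the toy association matrix, rank, and masking fraction are arbitrary; it only illustrates the hold-out-and-recover evaluation.

    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)

    # Toy lncRNA-disease association matrix (1 = known association).
    A = (rng.random((60, 40)) < 0.08).astype(float)

    # Mask a fifth of the known associations and try to recover them.
    known = np.argwhere(A == 1)
    held_out = known[rng.choice(len(known), size=len(known) // 5, replace=False)]
    A_train = A.copy()
    A_train[held_out[:, 0], held_out[:, 1]] = 0.0

    # Rank-r reconstruction as a simple stand-in for matrix (tri-)factorization.
    r = 5
    U, s, Vt = np.linalg.svd(A_train, full_matrices=False)
    scores = (U[:, :r] * s[:r]) @ Vt[:r, :]

    # Evaluate: held-out links versus an equal number of random non-links.
    pos = scores[held_out[:, 0], held_out[:, 1]]
    neg_idx = np.argwhere(A == 0)
    sel = neg_idx[rng.choice(len(neg_idx), size=len(pos), replace=False)]
    neg = scores[sel[:, 0], sel[:, 1]]
    y = np.concatenate([np.ones_like(pos), np.zeros_like(neg)])
    print("AUC of the toy low-rank baseline:", roc_auc_score(y, np.concatenate([pos, neg])))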
Moment Tensor Descriptions for Simulated Explosions of the Source Physics Experiment (SPE)
NASA Astrophysics Data System (ADS)
Yang, X.; Rougier, E.; Knight, E. E.; Patton, H. J.
2014-12-01
In this research we seek to understand damage mechanisms governing the behavior of geo-materials in the explosion source region, and the role they play in seismic-wave generation. Numerical modeling tools can be used to describe these mechanisms through the development and implementation of appropriate material models. Researchers at Los Alamos National Laboratory (LANL) have been working on a novel continuum-based-viscoplastic strain-rate-dependent fracture material model, AZ_Frac, in an effort to improve the description of these damage sources. AZ_Frac has the ability to describe continuum fracture processes, and at the same time, to handle pre-existing anisotropic material characteristics. The introduction of fractures within the material generates further anisotropic behavior that is also accounted for within the model. The material model has been calibrated to a granitic medium and has been applied in a number of modeling efforts under the SPE project. In our modeling, we use a 2D, axisymmetric layered earth model of the SPE site consisting of a weathered layer on top of a half-space. We couple the hydrodynamic simulation code with a seismic simulation code and propagate the signals to distances of up to 2 km. The signals are inverted for time-dependent moment tensors using a modified inversion scheme that accounts for multiple sources at different depths. The inversion scheme is evaluated for its resolving power to determine a centroid depth and a moment tensor description of the damage source. The capabilities of the inversion method to retrieve such information from waveforms recorded on three SPE tests conducted to date are also being assessed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vance, J.N.; Holderness, J.H.; James, D.W.
1992-12-01
Waste stream scaling factors based on sampling programs are vulnerable to one or more of the following factors: sample representativeness, analytic accuracy, and measurement sensitivity. As an alternative to sample analyses or as a verification of the sampling results, this project proposes the use of the RADSOURCE code, which accounts for the release of fuel-source radionuclides. Once the release rates of these nuclides from fuel are known, the code develops scaling factors for waste streams based on easily measured Cobalt-60 (Co-60) and Cesium-137 (Cs-137). The project team developed mathematical models to account for the appearance rate of 10CFR61 radionuclides in reactor coolant. They based these models on the chemistry and nuclear physics of the radionuclides involved. Next, they incorporated the models into a computer code that calculates plant waste stream scaling factors based on reactor coolant gamma-isotopic data. Finally, the team performed special sampling at 17 reactors to validate the models in the RADSOURCE code.
Multi-scale modeling of irradiation effects in spallation neutron source materials
NASA Astrophysics Data System (ADS)
Yoshiie, T.; Ito, T.; Iwase, H.; Kaneko, Y.; Kawai, M.; Kishida, I.; Kunieda, S.; Sato, K.; Shimakawa, S.; Shimizu, F.; Hashimoto, S.; Hashimoto, N.; Fukahori, T.; Watanabe, Y.; Xu, Q.; Ishino, S.
2011-07-01
Changes in mechanical property of Ni under irradiation by 3 GeV protons were estimated by multi-scale modeling. The code consisted of four parts. The first part was based on the Particle and Heavy-Ion Transport code System (PHITS) code for nuclear reactions, and modeled the interactions between high energy protons and nuclei in the target. The second part covered atomic collisions by particles without nuclear reactions. Because the energy of the particles was high, subcascade analysis was employed. The direct formation of clusters and the number of mobile defects were estimated using molecular dynamics (MD) and kinetic Monte-Carlo (kMC) methods in each subcascade. The third part considered damage structural evolutions estimated by reaction kinetic analysis. The fourth part involved the estimation of mechanical property change using three-dimensional discrete dislocation dynamics (DDD). Using the above four part code, stress-strain curves for high energy proton irradiated Ni were obtained.
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Malley, Daniel; Vesselinov, Velimir V.
MADSpython (Model analysis and decision support tools in Python) is a code in Python that streamlines the process of using data and models for analysis and decision support using the code MADS. MADS is an open-source code developed at LANL and written in C/C++ (MADS; http://mads.lanl.gov; LA-CC-11-035). MADS can work with external models of arbitrary complexity as well as built-in models of flow and transport in porous media. The Python scripts in MADSpython facilitate the generation of input and output files needed by MADS as well as by the external simulators, which include FEHM and PFLOTRAN. MADSpython enables a number of data- and model-based analyses including model calibration, sensitivity analysis, uncertainty quantification, and decision analysis. MADSpython will be released under the GPL V3 license. MADSpython will be distributed as a Git repo at gitlab.com and github.com. The MADSpython manual and documentation will be posted at http://madspy.lanl.gov.
Verification and Validation of the k-kL Turbulence Model in FUN3D and CFL3D Codes
NASA Technical Reports Server (NTRS)
Abdol-Hamid, Khaled S.; Carlson, Jan-Renee; Rumsey, Christopher L.
2015-01-01
The implementation of the k-kL turbulence model using multiple computational fluid dynamics (CFD) codes is reported herein. The k-kL model is a two-equation turbulence model based on Abdol-Hamid's closure and Menter's modification to Rotta's two-equation model. Rotta shows that a reliable transport equation can be formed from the turbulent length scale L, and the turbulent kinetic energy k. Rotta's equation is well suited for term-by-term modeling and displays useful features compared to other two-equation models. An important difference is that this formulation leads to the inclusion of higher-order velocity derivatives in the source terms of the scale equations. This can enhance the ability of the Reynolds-averaged Navier-Stokes (RANS) solvers to simulate unsteady flows. The present report documents the formulation of the model as implemented in the CFD codes Fun3D and CFL3D. Methodology, verification and validation examples are shown. Attached and separated flow cases are documented and compared with experimental data. The results show generally very good comparisons with canonical and experimental data, as well as matching results code-to-code. The results from this formulation are similar or better than results using the SST turbulence model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
James, Scott Carlton; Roberts, Jesse D.
2014-03-01
This document describes the marine hydrokinetic (MHK) input file and subroutines for the Sandia National Laboratories Environmental Fluid Dynamics Code (SNL-EFDC), which is a combined hydrodynamic, sediment transport, and water quality model based on the Environmental Fluid Dynamics Code (EFDC) developed by John Hamrick [1], formerly sponsored by the U.S. Environmental Protection Agency, and now maintained by Tetra Tech, Inc. SNL-EFDC has been previously enhanced with the incorporation of the SEDZLJ sediment dynamics model developed by Ziegler, Lick, and Jones [2-4]. SNL-EFDC has also been upgraded to more accurately simulate algae growth with specific application to optimizing biomass in an open-channel raceway for biofuels production [5]. A detailed description of the input file containing data describing the MHK device/array is provided, along with a description of the MHK FORTRAN routine. Both a theoretical description of the MHK dynamics as incorporated into SNL-EFDC and an explanation of the source code are provided. This user manual is meant to be used in conjunction with the original EFDC [6] and sediment dynamics SNL-EFDC manuals [7]. Through this document, the authors provide information for users who wish to model the effects of an MHK device (or array of devices) on a flow system with EFDC and who also seek a clear understanding of the source code, which is available from staff in the Water Power Technologies Department at Sandia National Laboratories, Albuquerque, New Mexico.
NASA Technical Reports Server (NTRS)
Hong, Jaesub; Allen, Branden; Grindlay, Jonathan; Barthelmy, Scott D.
2016-01-01
Wide-field (greater than or approximately equal to 100 degrees squared) hard X-ray coded-aperture telescopes with high angular resolution (greater than or approximately equal to 2 minutes) will enable a wide range of time domain astrophysics. For instance, transient sources such as gamma-ray bursts can be precisely localized without the assistance of secondary focusing X-ray telescopes to enable rapid followup studies. On the other hand, high angular resolution in coded-aperture imaging introduces a new challenge in handling the systematic uncertainty: the average photon count per pixel is often too small to establish a proper background pattern or model the systematic uncertainty in a timescale where the model remains invariant. We introduce two new techniques to improve detection sensitivity, which are designed for, but not limited to, a high-resolution coded-aperture system: a self-background modeling scheme which utilizes continuous scan or dithering operations, and a Poisson-statistics based probabilistic approach to evaluate the significance of source detection without subtraction in handling the background. We illustrate these new imaging analysis techniques in high resolution coded-aperture telescope using the data acquired by the wide-field hard X-ray telescope ProtoEXIST2 during a high-altitude balloon flight in fall 2012. We review the imaging sensitivity of ProtoEXIST2 during the flight, and demonstrate the performance of the new techniques using our balloon flight data in comparison with a simulated ideal Poisson background.
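The Poisson-probability approach to detection significance mentioned above can be sketched in a few lines of Python: given an expected background count b in a sky pixel or aperture, the chance of seeing at least the observed n counts by fluctuation alone is the Poisson survival function, which can then be converted to an equivalent Gaussian sigma. The numbers below are arbitrary, and the snippet is not the ProtoEXIST2 pipeline.

    from scipy.stats import norm, poisson

    def detection_significance(n_obs, b_expected):
        # P-value and equivalent Gaussian sigma for n_obs counts over background b,
        # evaluated without subtracting the background (counts stay Poissonian).
        p_value = poisson.sf(n_obs - 1, b_expected)   # P(X >= n_obs | background only)
        sigma = norm.isf(p_value)                     # one-sided Gaussian equivalent
        return p_value, sigma

    for n_obs, b in [(9, 3.2), (15, 3.2), (30, 3.2)]:
        p, s = detection_significance(n_obs, b)
        print(f"n={n_obs:3d}, b={b:.1f}: p={p:.2e}  ({s:.1f} sigma)")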
Kieffer, Michael J; Vukovic, Rose K
2012-01-01
Drawing on the cognitive and ecological domains within the componential model of reading, this longitudinal study explores heterogeneity in the sources of reading difficulties for language minority learners and native English speakers in urban schools. Students (N = 150) were followed from first through third grade and assessed annually on standardized English language and reading measures. Structural equation modeling was used to investigate the relative contributions of code-related and linguistic comprehension skills in first and second grade to third grade reading comprehension. Linguistic comprehension and the interaction between linguistic comprehension and code-related skills each explained substantial variation in reading comprehension. Among students with low reading comprehension, more than 80% demonstrated weaknesses in linguistic comprehension alone, whereas approximately 15% demonstrated weaknesses in both linguistic comprehension and code-related skills. Results were remarkably similar for the language minority learners and native English speakers, suggesting the importance of their shared socioeconomic backgrounds and schooling contexts.
Proceedings of the Numerical Modeling for Underground Nuclear Test Monitoring Symposium
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taylor, S.R.; Kamm, J.R.
1993-11-01
The purpose of the meeting was to discuss the state-of-the-art in numerical simulations of nuclear explosion phenomenology with applications to test ban monitoring. We focused on the uniqueness of model fits to data, the measurement and characterization of material response models, advanced modeling techniques, and applications of modeling to monitoring problems. The second goal of the symposium was to establish a dialogue between seismologists and explosion-source code calculators. The meeting was divided into five main sessions: explosion source phenomenology, material response modeling, numerical simulations, the seismic source, and phenomenology from near source to far field. We feel the symposium reached many of its goals. Individual papers submitted at the conference are indexed separately on the data base.
A Mercury Model of Atmospheric Transport
DOE Office of Scientific and Technical Information (OSTI.GOV)
Christensen, Alex B.; Chodash, Perry A.; Procassini, R. J.
Using the particle transport code Mercury, accurate models were built of the two sources used in Operation BREN, a series of radiation experiments performed by the United States during the 1960s. In the future, these models will be used to validate Mercury’s ability to simulate atmospheric transport.
NASA Astrophysics Data System (ADS)
Visini, F.; Meletti, C.; D'Amico, V.; Rovida, A.; Stucchi, M.
2014-12-01
The recent release of the probabilistic seismic hazard assessment (PSHA) model for Europe by the SHARE project (Giardini et al., 2013, www.share-eu.org) raises questions about the comparison between its results for Italy and the official Italian seismic hazard model (MPS04; Stucchi et al., 2011) adopted by the building code. The goal of such a comparison is to identify the main input elements that produce the differences between the two models. It is worthwhile to remark that each PSHA is realized with the data and knowledge available at the time of its release. Therefore, even if a new model provides estimates significantly different from the previous ones, that does not mean that the old models are wrong, but rather that the current knowledge has strongly changed and improved. Looking at the hazard maps with 10% probability of exceedance in 50 years (adopted as the standard input in the Italian building code), the SHARE model shows increased expected values with respect to the MPS04 model, up to 70% for PGA. However, looking in detail at all output parameters of both models, we observe a different behaviour for other spectral accelerations. In fact, for spectral periods greater than 0.3 s, the current reference PSHA for Italy proposes higher values than the SHARE model for many and large areas. This observation suggests that this behaviour may not be due to a different definition of seismic sources and relevant seismicity rates; it mainly seems to be the result of the adoption of recent ground-motion prediction equations (GMPEs) that estimate higher values for PGA and for accelerations with periods lower than 0.3 s, and lower values for higher periods, with respect to old GMPEs. Another important set of tests consisted of analysing separately the PSHA results obtained with the three source models adopted in SHARE (i.e., area sources, fault sources with background, and a refined smoothed seismicity model), whereas MPS04 only uses area sources. Results seem to confirm the strong impact of the new generation of GMPEs on the seismic hazard estimates. Giardini D. et al., 2013. Seismic Hazard Harmonization in Europe (SHARE): Online Data Resource, doi:10.12686/SED-00000001-SHARE. Stucchi M. et al., 2011. Seismic Hazard Assessment (2003-2009) for the Italian Building Code. Bull. Seismol. Soc. Am. 101, 1885-1911.
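To make the role of the GMPE in such hazard maps explicit, a deliberately simplified hazard integral for a single areal source can be written as lambda(PGA > a) = nu * integral of P(PGA > a | m) f(m) dm, with a truncated Gutenberg-Richter magnitude density and a lognormal GMPE. The Python sketch below evaluates this toy curve; every coefficient is invented for illustration and has nothing to do with the MPS04 or SHARE inputs.

    import numpy as np
    from scipy.stats import norm

    def gr_pdf(m, b=1.0, m_min=4.5, m_max=7.5):
        # Truncated Gutenberg-Richter magnitude density.
        beta = b * np.log(10.0)
        return beta * np.exp(-beta * (m - m_min)) / (1.0 - np.exp(-beta * (m_max - m_min)))

    def prob_exceed(a_g, m, sigma_ln=0.6):
        # Toy lognormal GMPE at a fixed site-source distance:
        # ln(PGA[g]) = -6.5 + 0.9*m (coefficients invented for illustration).
        mean_ln = -6.5 + 0.9 * m
        return norm.sf((np.log(a_g) - mean_ln) / sigma_ln)

    def hazard_rate(a_g, nu=0.05, m_grid=np.linspace(4.5, 7.5, 601)):
        # Annual rate of exceeding a_g for one source with activity rate nu (events/yr).
        dm = m_grid[1] - m_grid[0]
        return nu * np.sum(prob_exceed(a_g, m_grid) * gr_pdf(m_grid)) * dm

    for a in (0.05, 0.10, 0.20, 0.40):                 # PGA levels in g
        lam = hazard_rate(a)
        p50 = 1.0 - np.exp(-lam * 50.0)                # Poissonian probability in 50 years
        print(f"PGA > {a:.2f} g: {lam:.2e} /yr, P(exceedance in 50 yr) = {p50:.3f}")

Swapping in a GMPE with higher short-period medians raises the whole curve, which is the mechanism the comparison above points to.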
SOCR Analyses - an Instructional Java Web-based Statistical Analysis Toolkit.
Chu, Annie; Cui, Jenny; Dinov, Ivo D
2009-03-01
The Statistical Online Computational Resource (SOCR) designs web-based tools for educational use in a variety of undergraduate courses (Dinov 2006). Several studies have demonstrated that these resources significantly improve students' motivation and learning experiences (Dinov et al. 2008). SOCR Analyses is a new component that concentrates on data modeling and analysis using parametric and non-parametric techniques supported with graphical model diagnostics. Currently implemented analyses include commonly used models in undergraduate statistics courses like linear models (Simple Linear Regression, Multiple Linear Regression, One-Way and Two-Way ANOVA). In addition, we implemented tests for sample comparisons, such as t-test in the parametric category; and Wilcoxon rank sum test, Kruskal-Wallis test, Friedman's test, in the non-parametric category. SOCR Analyses also include several hypothesis test models, such as Contingency tables, Friedman's test and Fisher's exact test. The code itself is open source (http://socr.googlecode.com/), hoping to contribute to the efforts of the statistical computing community. The code includes functionality for each specific analysis model and it has general utilities that can be applied in various statistical computing tasks. For example, concrete methods with API (Application Programming Interface) have been implemented in statistical summary, least square solutions of general linear models, rank calculations, etc. HTML interfaces, tutorials, source code, activities, and data are freely available via the web (www.SOCR.ucla.edu). Code examples for developers and demos for educators are provided on the SOCR Wiki website. In this article, the pedagogical utilization of the SOCR Analyses is discussed, as well as the underlying design framework. As the SOCR project is on-going and more functions and tools are being added to it, these resources are constantly improved. The reader is strongly encouraged to check the SOCR site for most updated information and newly added models.
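For readers who want a script-level analogue of the analyses listed above, the Python snippet below runs parametric and non-parametric two- and multi-sample comparisons on simulated data with SciPy. It only illustrates the same families of tests; it is not SOCR code, and the simulated group means are arbitrary.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    group_a = rng.normal(loc=10.0, scale=2.0, size=30)
    group_b = rng.normal(loc=11.2, scale=2.0, size=30)

    t_stat, t_p = stats.ttest_ind(group_a, group_b)        # parametric two-sample test
    u_stat, u_p = stats.mannwhitneyu(group_a, group_b)     # non-parametric analogue

    # Three-group comparison: one-way ANOVA versus Kruskal-Wallis.
    group_c = rng.normal(loc=10.5, scale=2.0, size=30)
    f_stat, f_p = stats.f_oneway(group_a, group_b, group_c)
    h_stat, h_p = stats.kruskal(group_a, group_b, group_c)

    print(f"t-test p={t_p:.4f}, Mann-Whitney p={u_p:.4f}")
    print(f"ANOVA p={f_p:.4f}, Kruskal-Wallis p={h_p:.4f}")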
Scalable nanohelices for predictive studies and enhanced 3D visualization.
Meagher, Kwyn A; Doblack, Benjamin N; Ramirez, Mercedes; Davila, Lilian P
2014-11-12
Spring-like materials are ubiquitous in nature and of interest in nanotechnology for energy harvesting, hydrogen storage, and biological sensing applications. For predictive simulations, it has become increasingly important to be able to model the structure of nanohelices accurately. To study the effect of local structure on the properties of these complex geometries one must develop realistic models. To date, software packages are rather limited in creating atomistic helical models. This work focuses on producing atomistic models of silica glass (SiO₂) nanoribbons and nanosprings for molecular dynamics (MD) simulations. Using an MD model of "bulk" silica glass, two computational procedures to precisely create the shape of nanoribbons and nanosprings are presented. The first method employs the AWK programming language and open-source software to effectively carve various shapes of silica nanoribbons from the initial bulk model, using desired dimensions and parametric equations to define a helix. With this method, accurate atomistic silica nanoribbons can be generated for a range of pitch values and dimensions. The second method involves a more robust code which allows flexibility in modeling nanohelical structures. This approach utilizes a C++ code particularly written to implement pre-screening methods as well as the mathematical equations for a helix, resulting in greater precision and efficiency when creating nanospring models. Using these codes, well-defined and scalable nanoribbons and nanosprings suited for atomistic simulations can be effectively created. An added value in both open-source codes is that they can be adapted to reproduce different helical structures, independent of material. In addition, a MATLAB graphical user interface (GUI) is used to enhance learning through visualization and interaction for a general user with the atomistic helical structures. One application of these methods is the recent study of nanohelices via MD simulations for mechanical energy harvesting purposes.
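The "carving" step described above can be reduced to its geometric core: generate points on a helix from its parametric equations and keep only the atoms of a bulk model that lie within a chosen wire radius of that curve. The short Python sketch below does this for a random toy point cloud; the radii, pitch and tolerances are arbitrary, and it stands in for neither the AWK nor the C++ tool described in the work.

    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(3)
    bulk = rng.uniform(-30.0, 30.0, size=(20000, 3))   # toy "bulk glass" positions (angstroms)

    # Parametric helix: x = R cos(t), y = R sin(t), z = pitch * t / (2*pi).
    R, pitch, turns, wire_radius = 15.0, 20.0, 2.5, 3.0
    t = np.linspace(0.0, 2.0 * np.pi * turns, 4000)
    helix = np.column_stack([R * np.cos(t), R * np.sin(t), pitch * t / (2.0 * np.pi) - 25.0])

    # Keep only atoms within the wire radius of the helix centerline.
    distances, _ = cKDTree(helix).query(bulk)
    nanospring = bulk[distances < wire_radius]
    print(f"{len(nanospring)} of {len(bulk)} toy atoms form the carved nanospring")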
Duggan, Dennis M
2004-12-01
Improved cross-sections in a new version of the Monte-Carlo N-particle (MCNP) code may eliminate discrepancies between radial dose functions (as defined by American Association of Physicists in Medicine Task Group 43) derived from Monte-Carlo simulations of low-energy photon-emitting brachytherapy sources and those from measurements on the same sources with thermoluminescent dosimeters. This is demonstrated for two 125I brachytherapy seed models, the Implant Sciences Model ISC3500 (I-Plant) and the Amersham Health Model 6711, by simulating their radial dose functions with two versions of MCNP, 4c2 and 5.
Measuring Diagnoses: ICD Code Accuracy
O'Malley, Kimberly J; Cook, Karon F; Price, Matt D; Wildes, Kimberly Raiford; Hurdle, John F; Ashton, Carol M
2005-01-01
Objective To examine potential sources of errors at each step of the described inpatient International Classification of Diseases (ICD) coding process. Data Sources/Study Setting The use of disease codes from the ICD has expanded from classifying morbidity and mortality information for statistical purposes to diverse sets of applications in research, health care policy, and health care finance. By describing a brief history of ICD coding, detailing the process for assigning codes, identifying where errors can be introduced into the process, and reviewing methods for examining code accuracy, we help code users more systematically evaluate code accuracy for their particular applications. Study Design/Methods We summarize the inpatient ICD diagnostic coding process from patient admission to diagnostic code assignment. We examine potential sources of errors at each step and offer code users a tool for systematically evaluating code accuracy. Principle Findings Main error sources along the “patient trajectory” include amount and quality of information at admission, communication among patients and providers, the clinician's knowledge and experience with the illness, and the clinician's attention to detail. Main error sources along the “paper trail” include variance in the electronic and written records, coder training and experience, facility quality-control efforts, and unintentional and intentional coder errors, such as misspecification, unbundling, and upcoding. Conclusions By clearly specifying the code assignment process and heightening their awareness of potential error sources, code users can better evaluate the applicability and limitations of codes for their particular situations. ICD codes can then be used in the most appropriate ways. PMID:16178999
EUPDF: An Eulerian-Based Monte Carlo Probability Density Function (PDF) Solver. User's Manual
NASA Technical Reports Server (NTRS)
Raju, M. S.
1998-01-01
EUPDF is an Eulerian-based Monte Carlo PDF solver developed for application with sprays, combustion, parallel computing and unstructured grids. It is designed to be massively parallel and could easily be coupled with any existing gas-phase flow and spray solvers. The solver accommodates the use of an unstructured mesh with mixed elements of either triangular, quadrilateral, and/or tetrahedral type. The manual provides the user with the coding required to couple the PDF code to any given flow code and a basic understanding of the EUPDF code structure as well as the models involved in the PDF formulation. The source code of EUPDF will be available with the release of the National Combustion Code (NCC) as a complete package.
NASA Astrophysics Data System (ADS)
Sharma, Diksha; Badal, Andreu; Badano, Aldo
2012-04-01
The computational modeling of medical imaging systems often requires obtaining a large number of simulated images with low statistical uncertainty which translates into prohibitive computing times. We describe a novel hybrid approach for Monte Carlo simulations that maximizes utilization of CPUs and GPUs in modern workstations. We apply the method to the modeling of indirect x-ray detectors using a new and improved version of the code MANTIS, an open source software tool used for the Monte Carlo simulations of indirect x-ray imagers. We first describe a GPU implementation of the physics and geometry models in fastDETECT2 (the optical transport model) and a serial CPU version of the same code. We discuss its new features like on-the-fly column geometry and columnar crosstalk in relation to the MANTIS code, and point out areas where our model provides more flexibility for the modeling of realistic columnar structures in large area detectors. Second, we modify PENELOPE (the open source software package that handles the x-ray and electron transport in MANTIS) to allow direct output of location and energy deposited during x-ray and electron interactions occurring within the scintillator. This information is then handled by optical transport routines in fastDETECT2. A load balancer dynamically allocates optical transport showers to the GPU and CPU computing cores. Our hybridMANTIS approach achieves a significant speed-up factor of 627 when compared to MANTIS and of 35 when compared to the same code running only in a CPU instead of a GPU. Using hybridMANTIS, we successfully hide hours of optical transport time by running it in parallel with the x-ray and electron transport, thus shifting the computational bottleneck from optical to x-ray transport. The new code requires much less memory than MANTIS and, as a result, allows us to efficiently simulate large area detectors.
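The hybrid CPU+GPU scheduling idea, a load balancer handing batches of optical showers to whichever compute resource is free, can be mimicked with a plain work queue. The Python sketch below uses two threads with different per-batch "speeds" purely to illustrate dynamic allocation; it has nothing to do with the actual hybridMANTIS implementation, and the batch counts and sleep times are invented.

    import queue
    import threading
    import time

    work = queue.Queue()
    for batch_id in range(40):            # 40 batches of optical-transport "showers"
        work.put(batch_id)

    done = {"gpu": 0, "cpu": 0}

    def worker(name, seconds_per_batch):
        # Pull batches until the queue is empty; the faster worker grabs more work.
        while True:
            try:
                work.get_nowait()
            except queue.Empty:
                return
            time.sleep(seconds_per_batch)  # stand-in for the real transport kernel
            done[name] += 1
            work.task_done()

    threads = [threading.Thread(target=worker, args=("gpu", 0.01)),
               threading.Thread(target=worker, args=("cpu", 0.05))]
    for t in threads: t.start()
    for t in threads: t.join()
    print(f"dynamic split -> GPU: {done['gpu']} batches, CPU: {done['cpu']} batches")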
Using SPARK as a Solver for Modelica
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wetter, Michael; Wetter, Michael; Haves, Philip
Modelica is an object-oriented acausal modeling language that is well positioned to become a de-facto standard for expressing models of complex physical systems. To simulate a model expressed in Modelica, it needs to be translated into executable code. For generating run-time efficient code, such a translation needs to employ algebraic formula manipulations. As the SPARK solver has been shown to be competitive for generating such code but currently cannot be used with the Modelica language, we report in this paper how SPARK's symbolic and numerical algorithms can be implemented in OpenModelica, an open-source implementation of a Modelica modeling and simulation environment. We also report benchmark results that show that for our air flow network simulation benchmark, the SPARK solver is competitive with Dymola, which is believed to provide the best solver for Modelica.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dumitrescu, Eugene; Humble, Travis S.
The accurate and reliable characterization of quantum dynamical processes underlies efforts to validate quantum technologies, where discrimination between competing models of observed behaviors informs efforts to fabricate and operate qubit devices. We present a protocol for quantum channel discrimination that leverages advances in direct characterization of quantum dynamics (DCQD) codes. We demonstrate that DCQD codes enable selective process tomography to improve discrimination between entangling and correlated quantum dynamics. Numerical simulations show that selective process tomography requires only a few measurement configurations to achieve a low false alarm rate and that the DCQD encoding improves the resilience of the protocol to hidden sources of noise. Lastly, our results show that selective process tomography with DCQD codes is useful for efficiently distinguishing sources of correlated crosstalk from uncorrelated noise in current and future experimental platforms.
THE HYDROCARBON SPILL SCREENING MODEL (HSSM), VOLUME 2: THEORETICAL BACKGROUND AND SOURCE CODES
A screening model for subsurface release of a nonaqueous phase liquid which is less dense than water (LNAPL) is presented. The model conceptualizes the release as consisting of 1) vertical transport from near the surface to the capillary fringe, 2) radial spreading of an LNAPL l...
NASA Technical Reports Server (NTRS)
Thompson, David E.
2005-01-01
Procedures and methods for verification of coding algebra and for validations of models and calculations used in the aerospace computational fluid dynamics (CFD) community would be efficacious if used by the glacier dynamics modeling community. This paper presents some of those methods, and how they might be applied to uncertainty management supporting code verification and model validation for glacier dynamics. The similarities and differences between their use in CFD analysis and the proposed application of these methods to glacier modeling are discussed. After establishing sources of uncertainty and methods for code verification, the paper looks at a representative sampling of verification and validation efforts that are underway in the glacier modeling community, and establishes a context for these within an overall solution quality assessment. Finally, a vision of a new information architecture and interactive scientific interface is introduced and advocated.
Operational rate-distortion performance for joint source and channel coding of images.
Ruf, M J; Modestino, J W
1999-01-01
This paper describes a methodology for evaluating the operational rate-distortion behavior of combined source and channel coding schemes with particular application to images. In particular, we demonstrate use of the operational rate-distortion function to obtain the optimum tradeoff between source coding accuracy and channel error protection under the constraint of a fixed transmission bandwidth for the investigated transmission schemes. Furthermore, we develop information-theoretic bounds on performance for specific source and channel coding systems and demonstrate that our combined source-channel coding methodology applied to different schemes results in operational rate-distortion performance which closely approach these theoretical limits. We concentrate specifically on a wavelet-based subband source coding scheme and the use of binary rate-compatible punctured convolutional (RCPC) codes for transmission over the additive white Gaussian noise (AWGN) channel. Explicit results for real-world images demonstrate the efficacy of this approach.
Onai, M; Etoh, H; Aoki, Y; Shibata, T; Mattei, S; Fujita, S; Hatayama, A; Lettry, J
2016-02-01
Recently, a filament-driven multi-cusp negative ion source has been developed for proton cyclotrons in medical applications. In this study, numerical modeling of the filament arc-discharge source plasma has been performed, with kinetic modeling of electrons in the ion source plasma by the multi-cusp arc-discharge code and zero-dimensional rate equations for hydrogen molecules and negative ions. In this paper, the main focus is placed on the effects of the arc-discharge power on the electron energy distribution function and the resultant H(-) production. The modelling results reasonably explain the dependence of the H(-) extraction current on the arc-discharge power observed in the experiments.
Mars Global Reference Atmospheric Model 2010 Version: Users Guide
NASA Technical Reports Server (NTRS)
Justh, H. L.
2014-01-01
This Technical Memorandum (TM) presents the Mars Global Reference Atmospheric Model 2010 (Mars-GRAM 2010) and its new features. Mars-GRAM is an engineering-level atmospheric model widely used for diverse mission applications. Applications include systems design, performance analysis, and operations planning for aerobraking, entry, descent and landing, and aerocapture. Additionally, this TM includes instructions on obtaining the Mars-GRAM source code and data files as well as running Mars-GRAM. It also contains sample Mars-GRAM input and output files and an example of how to incorporate Mars-GRAM as an atmospheric subroutine in a trajectory code.
Continuation of research into language concepts for the mission support environment: Source code
NASA Technical Reports Server (NTRS)
Barton, Timothy J.; Ratner, Jeremiah M.
1991-01-01
Research into language concepts for the Mission Control Center is presented. Computer source code for the effort is presented. The file contains the routines which allow source code files to be created and compiled. The build process assumes that all elements and the COMP exist in the current directory. The build process places as much of the code generation as possible on the preprocessor. A summary is given of the source files as used and/or manipulated by the build routine.
A new 3D maser code applied to flaring events
NASA Astrophysics Data System (ADS)
Gray, M. D.; Mason, L.; Etoka, S.
2018-06-01
We set out the theory and discretization scheme for a new finite-element computer code, written specifically for the simulation of maser sources. The code was used to compute fractional inversions at each node of a 3D domain for a range of optical thicknesses. Saturation behaviour of the nodes with regard to location and optical depth was broadly as expected. We have demonstrated via formal solutions of the radiative transfer equation that the apparent size of the model maser cloud decreases as expected with optical depth as viewed by a distant observer. Simulations of rotation of the cloud allowed the construction of light curves for a number of observable quantities. Rotation of the model cloud may be a reasonable model for quasi-periodic variability, but cannot explain periodic flaring.
Step-off, vertical electromagnetic responses of a deep resistivity layer buried in marine sediments
NASA Astrophysics Data System (ADS)
Jang, Hangilro; Jang, Hannuree; Lee, Ki Ha; Kim, Hee Joon
2013-04-01
A frequency-domain, marine controlled-source electromagnetic (CSEM) method has been applied successfully in deep water areas for detecting hydrocarbon (HC) reservoirs. However, a typical technique with horizontal transmitters and receivers requires large source-receiver separations with respect to the target depth. A time-domain EM system with vertical transmitters and receivers can be an alternative because vertical electric fields are sensitive to deep resistive layers. In this paper, a time-domain modelling code, with multiple source and receiver dipoles that are finite in length, has been written to investigate transient EM problems. With the use of this code, we calculate step-off responses for one-dimensional HC reservoir models. Although the vertical electric field has a much smaller signal amplitude than the horizontal field, vertical currents resulting from a vertical transmitter are sensitive to resistive layers. The modelling shows a significant difference between step-off responses of HC- and water-filled reservoirs, and the contrast can be recognized at late times at relatively short offsets. A maximum contrast occurs at more than 4 s, being delayed with the depth of the HC layer.
Measuring diagnoses: ICD code accuracy.
O'Malley, Kimberly J; Cook, Karon F; Price, Matt D; Wildes, Kimberly Raiford; Hurdle, John F; Ashton, Carol M
2005-10-01
To examine potential sources of errors at each step of the described inpatient International Classification of Diseases (ICD) coding process. The use of disease codes from the ICD has expanded from classifying morbidity and mortality information for statistical purposes to diverse sets of applications in research, health care policy, and health care finance. By describing a brief history of ICD coding, detailing the process for assigning codes, identifying where errors can be introduced into the process, and reviewing methods for examining code accuracy, we help code users more systematically evaluate code accuracy for their particular applications. We summarize the inpatient ICD diagnostic coding process from patient admission to diagnostic code assignment. We examine potential sources of errors at each step and offer code users a tool for systematically evaluating code accuracy. Main error sources along the "patient trajectory" include amount and quality of information at admission, communication among patients and providers, the clinician's knowledge and experience with the illness, and the clinician's attention to detail. Main error sources along the "paper trail" include variance in the electronic and written records, coder training and experience, facility quality-control efforts, and unintentional and intentional coder errors, such as misspecification, unbundling, and upcoding. By clearly specifying the code assignment process and heightening their awareness of potential error sources, code users can better evaluate the applicability and limitations of codes for their particular situations. ICD codes can then be used in the most appropriate ways.
Malyarenko, Dariya; Fedorov, Andriy; Bell, Laura; Prah, Melissa; Hectors, Stefanie; Arlinghaus, Lori; Muzi, Mark; Solaiyappan, Meiyappan; Jacobs, Michael; Fung, Maggie; Shukla-Dave, Amita; McManus, Kevin; Boss, Michael; Taouli, Bachir; Yankeelov, Thomas E; Quarles, Christopher Chad; Schmainda, Kathleen; Chenevert, Thomas L; Newitt, David C
2018-01-01
This paper reports on results of a multisite collaborative project launched by the MRI subgroup of Quantitative Imaging Network to assess current capability and provide future guidelines for generating a standard parametric diffusion map Digital Imaging and Communication in Medicine (DICOM) in clinical trials that utilize quantitative diffusion-weighted imaging (DWI). Participating sites used a multivendor DWI DICOM dataset of a single phantom to generate parametric maps (PMs) of the apparent diffusion coefficient (ADC) based on two models. The results were evaluated for numerical consistency among models and true phantom ADC values, as well as for consistency of metadata with attributes required by the DICOM standards. This analysis identified missing metadata that would have described the sources of the numerical discrepancies detected among the ADC models. Instead of the DICOM PM object, all sites stored ADC maps as DICOM MR objects, generally lacking designated attributes and coded terms for quantitative DWI modeling. Source-image reference, model parameters, ADC units and scale, deemed important for numerical consistency, were either missing or stored using nonstandard conventions. Guided by the identified limitations, the DICOM PM standard has been amended to include coded terms for the relevant diffusion models. Open-source software has been developed to support conversion of site-specific formats into the standard representation.
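For reference, a minimal sketch (Python/NumPy) of the mono-exponential two-point ADC estimate that such parametric maps are typically based on; the array values and b-values are synthetic illustrations, not the study's phantom protocol:

```python
import numpy as np

# Two diffusion-weighted images of the same slice (synthetic data).
b0, b1 = 0.0, 800.0                      # b-values in s/mm^2 (illustrative)
s0 = np.full((4, 4), 1000.0)             # signal at b = 0
s1 = s0 * np.exp(-b1 * 1.1e-3)           # signal at b = 800 for ADC = 1.1e-3 mm^2/s

# Mono-exponential model: S(b) = S0 * exp(-b * ADC)  =>  ADC = ln(S0/Sb) / (b1 - b0)
adc_map = np.log(s0 / s1) / (b1 - b0)    # units: mm^2/s
print(adc_map[0, 0])                     # ~1.1e-3
```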
Status and future plans for open source QuickPIC
NASA Astrophysics Data System (ADS)
An, Weiming; Decyk, Viktor; Mori, Warren
2017-10-01
QuickPIC is a three-dimensional (3D) quasi-static particle-in-cell (PIC) code developed based on the UPIC framework. It can be used for efficiently modeling plasma based accelerator (PBA) problems. With the quasi-static approximation, QuickPIC can use different time scales for calculating the beam (or laser) evolution and the plasma response, and a 3D plasma wake field can be simulated using a two-dimensional (2D) PIC code where the time variable is ξ = ct - z and z is the beam propagation direction. QuickPIC can be a thousand times faster than a conventional PIC code when simulating a PBA. It uses an MPI/OpenMP hybrid parallel algorithm, which can be run on either a laptop or the largest supercomputer. The open source QuickPIC is an object-oriented program with high level classes written in Fortran 2003. It can be found at https://github.com/UCLA-Plasma-Simulation-Group/QuickPIC-OpenSource.git
Coupling hydrodynamic and wave propagation modeling for waveform modeling of SPE.
NASA Astrophysics Data System (ADS)
Larmat, C. S.; Steedman, D. W.; Rougier, E.; Delorey, A.; Bradley, C. R.
2015-12-01
The goal of the Source Physics Experiment (SPE) is to bring empirical and theoretical advances to the problem of detection and identification of underground nuclear explosions. This paper presents an effort to improve, through numerical modeling, knowledge of the processes that affect seismic wave propagation from the hydrodynamic/plastic source region to the elastic/anelastic far field. The challenge is to couple the prompt processes that take place in the near-source region to the ones taking place later in time due to wave propagation in complex 3D geologic environments. In this paper, we report on results of first-principles simulations coupling hydrodynamic simulation codes (Abaqus and CASH) with a 3D full waveform propagation code, SPECFEM3D. Abaqus and CASH model the shocked, hydrodynamic region via equations of state for the explosive, borehole stemming and jointed/weathered granite. LANL has recently been employing a Coupled Euler-Lagrange (CEL) modeling capability. This has allowed the testing of a new phenomenological model for modeling stored shear energy in jointed material. This unique modeling capability has enabled high-fidelity modeling of the explosive, the weak grout-filled borehole, as well as the surrounding jointed rock. SPECFEM3D is based on the Spectral Element Method, a direct numerical method for full waveform modeling with mathematical accuracy (e.g. Komatitsch, 1998, 2002), thanks to its use of the weak formulation of the wave equation and of high-order polynomial functions. The coupling interface is a series of grid points of the SEM mesh situated at the edge of the hydrodynamic code domain. Displacement time series at these points are computed from the output of CASH or Abaqus (by interpolation if needed) and fed into the time-marching scheme of SPECFEM3D. We will present validation tests and waveforms modeled for several SPE tests conducted so far, with a special focus on the effect of the local topography.
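As a schematic of the one-way coupling step described above, the sketch below (Python/NumPy, with made-up sampling rates and amplitudes) resamples a displacement time series produced by a hydrodynamic code onto the time-marching steps of a wave-propagation solver:

```python
import numpy as np

# Displacement history at one coupling grid point, as output by the
# hydrodynamic code on its own (coarser) time axis.  Values are synthetic.
t_hydro = np.linspace(0.0, 0.5, 251)             # s
u_hydro = 1e-3 * np.sin(40.0 * np.pi * t_hydro)  # m

# Time axis of the elastic solver's explicit time-marching scheme.
dt_sem = 5e-4
t_sem = np.arange(0.0, 0.5, dt_sem)

# Linear interpolation onto the solver's time steps; these values would be
# imposed at the boundary points of the SEM mesh at each step.
u_sem = np.interp(t_sem, t_hydro, u_hydro)
print(u_sem.shape)
```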
Chaste: An Open Source C++ Library for Computational Physiology and Biology
Mirams, Gary R.; Arthurs, Christopher J.; Bernabeu, Miguel O.; Bordas, Rafel; Cooper, Jonathan; Corrias, Alberto; Davit, Yohan; Dunn, Sara-Jane; Fletcher, Alexander G.; Harvey, Daniel G.; Marsh, Megan E.; Osborne, James M.; Pathmanathan, Pras; Pitt-Francis, Joe; Southern, James; Zemzemi, Nejib; Gavaghan, David J.
2013-01-01
Chaste — Cancer, Heart And Soft Tissue Environment — is an open source C++ library for the computational simulation of mathematical models developed for physiology and biology. Code development has been driven by two initial applications: cardiac electrophysiology and cancer development. A large number of cardiac electrophysiology studies have been enabled and performed, including high-performance computational investigations of defibrillation on realistic human cardiac geometries. New models for the initiation and growth of tumours have been developed. In particular, cell-based simulations have provided novel insight into the role of stem cells in the colorectal crypt. Chaste is constantly evolving and is now being applied to a far wider range of problems. The code provides modules for handling common scientific computing components, such as meshes and solvers for ordinary and partial differential equations (ODEs/PDEs). Re-use of these components avoids the need for researchers to ‘re-invent the wheel’ with each new project, accelerating the rate of progress in new applications. Chaste is developed using industrially-derived techniques, in particular test-driven development, to ensure code quality, re-use and reliability. In this article we provide examples that illustrate the types of problems Chaste can be used to solve, which can be run on a desktop computer. We highlight some scientific studies that have used or are using Chaste, and the insights they have provided. The source code, both for specific releases and the development version, is available to download under an open source Berkeley Software Distribution (BSD) licence at http://www.cs.ox.ac.uk/chaste, together with details of a mailing list and links to documentation and tutorials. PMID:23516352
Design pattern mining using distributed learning automata and DNA sequence alignment.
Esmaeilpour, Mansour; Naderifar, Vahideh; Shukur, Zarina
2014-01-01
Over the last decade, design patterns have been used extensively to provide reusable solutions to frequently encountered problems in software engineering and object-oriented programming. A design pattern is a repeatable software design solution that provides a template for solving various instances of a general problem. This paper describes a new method for pattern mining that isolates design patterns and the relationships between them, together with a related tool, DLA-DNA, covering all implemented patterns and all projects used for evaluation. Based on distributed learning automata (DLA) and deoxyribonucleic acid (DNA) sequence alignment, DLA-DNA achieves better precision and recall than the other evaluated tools. The proposed method mines structural design patterns in object-oriented source code and extracts the strong and weak relationships between them, enabling analysts and programmers to determine the dependency rate of each object, component, and other section of the code for parameter passing and modular programming. The proposed model detects design patterns, and the strengths of their relationships, better than the other available tools Pinot, PTIDEJ, and DPJF. The results demonstrate that whether the source code is built in a standard or non-standard way with respect to the design patterns, the proposed method performs close to DPJF and better than Pinot and PTIDEJ. The proposed model was tested on several source codes and compared with other related models and available tools; on average, its precision and recall are 20% and 9.6% higher than Pinot, 27% and 31% higher than PTIDEJ, and 3.3% and 2% higher than DPJF, respectively. The primary idea of the proposed method is organized in two steps: in the first step, elemental design patterns are identified; in the second step, these are composed to recognize actual design patterns.
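The sequence-alignment ingredient referred to above can be illustrated with a minimal global alignment (Needleman-Wunsch) scorer in Python; the symbol strings and scoring values are placeholders, not the encoding the authors use for design-pattern signatures:

```python
def nw_score(a, b, match=1, mismatch=-1, gap=-1):
    """Global alignment score between two symbol sequences (Needleman-Wunsch)."""
    rows, cols = len(a) + 1, len(b) + 1
    score = [[0] * cols for _ in range(rows)]
    for i in range(1, rows):
        score[i][0] = i * gap
    for j in range(1, cols):
        score[0][j] = j * gap
    for i in range(1, rows):
        for j in range(1, cols):
            diag = score[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            score[i][j] = max(diag, score[i-1][j] + gap, score[i][j-1] + gap)
    return score[-1][-1]

# Compare two (hypothetical) encoded pattern signatures.
print(nw_score("ACAD", "ACD"))   # higher scores indicate closer structure
```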
Zhao, Lei; Lim Choi Keung, Sarah N; Taweel, Adel; Tyler, Edward; Ogunsina, Ire; Rossiter, James; Delaney, Brendan C; Peterson, Kevin A; Hobbs, F D Richard; Arvanitis, Theodoros N
2012-01-01
Heterogeneous data models and coding schemes for electronic health records present challenges for automated search across distributed data sources. This paper describes a loosely coupled software framework based on the terminology controlled approach to enable the interoperation between the search interface and heterogeneous data sources. Software components interoperate via common terminology service and abstract criteria model so as to promote component reuse and incremental system evolution.
Gel, Aytekin; Hu, Jonathan; Ould-Ahmed-Vall, ElMoustapha; ...
2017-03-20
Legacy codes remain a crucial element of today's simulation-based engineering ecosystem due to the extensive validation process and investment in such software. The rapid evolution of high-performance computing architectures necessitates the modernization of these codes. One approach to modernization is a complete overhaul of the code. However, this could require extensive investments, such as rewriting in modern languages, new data constructs, etc., which will necessitate systematic verification and validation to re-establish the credibility of the computational models. The current study advocates a more incremental approach and is a culmination of several modernization efforts of the legacy code MFIX, which is an open-source computational fluid dynamics code that has evolved over several decades, is widely used for multiphase flows, and is still being developed by the National Energy Technology Laboratory. Two different modernization approaches, 'bottom-up' and 'top-down', are illustrated. Preliminary results show up to an 8.5x improvement at the selected kernel level with the first approach and up to a 50% improvement in total simulated time with the latter for the demonstration cases and target HPC systems employed.
Sustaining Open Source Communities through Hackathons - An Example from the ASPECT Community
NASA Astrophysics Data System (ADS)
Heister, T.; Hwang, L.; Bangerth, W.; Kellogg, L. H.
2016-12-01
The ecosystem surrounding a successful scientific open source software package combines both social and technical aspects. Much thought has been given to the technology side of writing sustainable software for large infrastructure projects and software libraries, but less about building the human capacity to perpetuate scientific software used in computational modeling. One effective format for building capacity is regular multi-day hackathons. Scientific hackathons bring together a group of science domain users and scientific software contributors to make progress on a specific software package. Innovation comes through the chance to work with established and new collaborations. Especially in the domain sciences with small communities, hackathons give geographically distributed scientists an opportunity to connect face-to-face. They foster lively discussions amongst scientists with different expertise, promote new collaborations, and increase transparency in both the technical and scientific aspects of code development. ASPECT is an open source, parallel, extensible finite element code for simulating thermal convection that began development in 2011 under the Computational Infrastructure for Geodynamics. ASPECT hackathons for the past 3 years have grown the number of authors to >50, training new code maintainers in the process. Hackathons begin with leaders establishing project-specific conventions for development, demonstrating the workflow for code contributions, and reviewing relevant technical skills. Each hackathon expands the developer community. Over 20 scientists add >6,000 lines of code during the >1 week event. Participants grow comfortable contributing to the repository and over half continue to contribute afterwards. A high return rate of participants ensures continuity and stability of the group as well as mentoring for novice members. We hope to build other software communities on this model, but anticipate each to bring their own unique challenges.
Multi-Source Cooperative Data Collection with a Mobile Sink for the Wireless Sensor Network.
Han, Changcai; Yang, Jinsheng
2017-10-30
The multi-source cooperation integrating distributed low-density parity-check codes is investigated to jointly collect data from multiple sensor nodes to the mobile sink in the wireless sensor network. One-round and two-round cooperative data collection schemes are proposed according to the moving trajectories of the sink node. Specifically, two sparse cooperation models are first formed based on the geographical locations of the sensor source nodes, the impairment of inter-node wireless channels, and the moving trajectories of the mobile sink. Then, distributed low-density parity-check codes are devised to match the directed graphs and cooperation matrices related to the cooperation models. In the proposed schemes, each source node has quite low complexity attributed to the sparse cooperation and the distributed processing. Simulation results reveal that the proposed cooperative data collection schemes achieve good bit error rate performance and that the two-round cooperation exhibits better performance than the one-round scheme. The performance can be further improved when more source nodes participate in the sparse cooperation. For the two-round data collection schemes, the performance is evaluated for wireless sensor networks with different moving trajectories and varying data sizes.
Terminal Area Simulation System User's Guide - Version 10.0
NASA Technical Reports Server (NTRS)
Switzer, George F.; Proctor, Fred H.
2014-01-01
The Terminal Area Simulation System (TASS) is a three-dimensional, time-dependent, large eddy simulation model that has been developed for studies of wake vortex and weather hazards to aviation, along with other atmospheric turbulence, and cloud-scale weather phenomenology. This document describes the source code for TASS version 10.0 and provides users with needed documentation to run the model. The source code is programmed in Fortran and is formulated to take advantage of vectorization and efficient multi-processor scaling for execution on massively parallel supercomputer clusters. The code contains different initialization modules allowing the study of aircraft wake vortex interaction with the atmosphere and ground, atmospheric turbulence, atmospheric boundary layers, precipitating convective clouds, hail storms, gust fronts, microburst windshear, supercell and mesoscale convective systems, tornadic storms, and ring vortices. The model is able to operate in either two or three dimensions with equations numerically formulated on a Cartesian grid. The primary output from TASS is time-dependent domain fields generated by the prognostic equations and diagnosed variables. This document will enable a user to understand the general logic of TASS, and will show how to configure and initialize the model domain. Also described are the formats of the input and output files, as well as the parameters that control the input and output.
MrLavaLoba: A new probabilistic model for the simulation of lava flows as a settling process
NASA Astrophysics Data System (ADS)
de'Michieli Vitturi, Mattia; Tarquini, Simone
2018-01-01
A new code to simulate lava flow spread, MrLavaLoba, is presented. In the code, erupted lava is itemized in parcels having an elliptical shape and prescribed volume. New parcels bud from existing ones according to a probabilistic law influenced by the local steepest slope direction and by tunable input settings. MrLavaLoba belongs among the probabilistic codes for the simulation of lava flows: it is not intended to mimic the actual process of flowing or to directly provide the time progression of the flow field, but rather to estimate the most probable inundated area and the final thickness of the lava deposit. The code's flexibility allows it to produce variable lava flow spread and emplacement according to different dynamics (e.g. pahoehoe or channelized-'a'ā). For a given scenario, it is shown that model outputs converge, in probabilistic terms, towards a single solution. The code is applied to real cases in Hawaii and Mt. Etna, and the obtained maps are shown. The model is written in Python and the source code is available at http://demichie.github.io/MrLavaLoba/.
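A toy sketch (Python/NumPy) of the budding rule described above: each new parcel is placed downslope of a randomly chosen existing parcel, with the step direction drawn around the local steepest-descent direction. The slope field, step length, and spread parameter are invented for illustration and are not MrLavaLoba's actual input settings:

```python
import numpy as np

rng = np.random.default_rng(0)

def steepest_descent_direction(x, y):
    # Toy topography z = 0.02*x + 0.01*y; a real model would query a DEM here.
    grad = np.array([0.02, 0.01])
    return -grad / np.linalg.norm(grad)

parcels = [np.array([0.0, 0.0])]      # start with one parcel at the vent
step, spread = 5.0, 0.5               # m, rad (illustrative tuning parameters)

for _ in range(500):
    parent = parcels[rng.integers(len(parcels))]
    d = steepest_descent_direction(*parent)
    angle = np.arctan2(d[1], d[0]) + rng.normal(0.0, spread)
    child = parent + step * np.array([np.cos(angle), np.sin(angle)])
    parcels.append(child)

footprint = np.array(parcels)
print(footprint.min(axis=0), footprint.max(axis=0))   # rough extent of the "flow"
```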
A source-channel coding approach to digital image protection and self-recovery.
Sarreshtedari, Saeed; Akhaee, Mohammad Ali
2015-07-01
Watermarking algorithms have been widely applied to the field of image forensics recently. One such forensic application is the protection of images against tampering. For this purpose, we need to design a watermarking algorithm fulfilling two purposes in case of image tampering: 1) detecting the tampered area of the received image and 2) recovering the lost information in the tampered zones. State-of-the-art techniques accomplish these tasks using watermarks consisting of check bits and reference bits. Check bits are used for tampering detection, whereas reference bits carry information about the whole image. The problem of recovering the lost reference bits still stands. This paper is aimed at showing that, with the tampering location known, image tampering can be modeled and dealt with as an erasure error. Therefore, an appropriate design of channel code can protect the reference bits against tampering. In the proposed method, the total watermark bit-budget is dedicated to three groups: 1) source encoder output bits; 2) channel code parity bits; and 3) check bits. In the watermark embedding phase, the original image is source coded and the output bit stream is protected using an appropriate channel encoder. For image recovery, erasure locations detected by the check bits help the channel erasure decoder to retrieve the original source-encoded image. Experimental results show that our proposed scheme significantly outperforms recent techniques in terms of image quality for both watermarked and recovered images. The watermarked image quality gain is achieved by spending less bit-budget on the watermark, while the image recovery quality is considerably improved as a consequence of the consistent performance of the designed source and channel codes.
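The key idea, that a known tampering location turns recovery into erasure decoding, can be shown with a deliberately simple single-erasure XOR-parity scheme (Python); a real system would use a stronger channel code, and the block sizes here are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)

# Reference bits split into k data blocks plus one XOR parity block
# (can recover exactly one erased block, analogous to RAID-style parity).
k, block_len = 4, 16
data = rng.integers(0, 2, size=(k, block_len), dtype=np.uint8)
parity = np.bitwise_xor.reduce(data, axis=0)
stored = np.vstack([data, parity])           # what the watermark would carry

# Tampering detected (by check bits) in block 2 -> treat it as an erasure.
erased = 2
received = stored.copy()
received[erased] = 0                          # content of the erased block is unknown

# Erasure decoding: XOR of all surviving blocks restores the missing one.
surviving = [i for i in range(k + 1) if i != erased]
recovered = np.bitwise_xor.reduce(received[surviving], axis=0)
assert np.array_equal(recovered, data[erased])
print("block recovered")
```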
Model documentation: Electricity Market Module, Electricity Fuel Dispatch Submodule
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
This report documents the objectives, analytical approach and development of the National Energy Modeling System Electricity Fuel Dispatch Submodule (EFD), a submodule of the Electricity Market Module (EMM). The report catalogues and describes the model assumptions, computational methodology, parameter estimation techniques, model source code, and forecast results generated through the synthesis and scenario development based on these components.
CDC Vital Signs: Preventing Norovirus Outbreaks
... source of norovirus outbreaks using genome sequencing and analysis. State and local governments can adopt and enforce all provisions of the FDA model Food Code to better safeguard food and investigate norovirus outbreaks ...
NASA Astrophysics Data System (ADS)
Allen, Alice; Teuben, Peter J.; Ryan, P. Wesley
2018-05-01
We examined software usage in a sample set of astrophysics research articles published in 2015 and searched for the source codes for the software mentioned in these research papers. We categorized the software to indicate whether the source code is available for download and whether there are restrictions to accessing it, and if the source code is not available, whether some other form of the software, such as a binary, is. We also extracted hyperlinks from one journal’s 2015 research articles, as links in articles can serve as an acknowledgment of software use and lead to the data used in the research, and tested them to determine which of these URLs are still accessible. For our sample of 715 software instances in the 166 articles we examined, we were able to categorize 418 records according to whether source code was available and found that 285 unique codes were used, 58% of which offered the source code for download. Of the 2558 hyperlinks extracted from 1669 research articles, at best, 90% of them were available over our testing period.
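A minimal sketch of the kind of link-checking pass described above (Python, using the third-party requests library; the URLs are placeholders, not the article sample):

```python
import requests

urls = [
    "https://ascl.net",
    "https://example.org/defunct-code-page",   # placeholder
]

for url in urls:
    try:
        # HEAD keeps the check lightweight; some servers require GET instead.
        resp = requests.head(url, allow_redirects=True, timeout=10)
        ok = resp.status_code < 400
    except requests.RequestException:
        ok = False
    print(f"{url}: {'accessible' if ok else 'not accessible'}")
```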
Automated Concurrent Blackboard System Generation in C++
NASA Technical Reports Server (NTRS)
Kaplan, J. A.; McManus, J. W.; Bynum, W. L.
1999-01-01
In his 1992 Ph.D. thesis, "Design and Analysis Techniques for Concurrent Blackboard Systems", John McManus defined several performance metrics for concurrent blackboard systems and developed a suite of tools for creating and analyzing such systems. These tools allow a user to analyze a concurrent blackboard system design and predict the performance of the system before any code is written. The design can be modified until simulated performance is satisfactory. Then, the code generator can be invoked to generate automatically all of the code required for the concurrent blackboard system except for the code implementing the functionality of each knowledge source. We have completed the port of the source code generator and a simulator for a concurrent blackboard system. The source code generator generates the necessary C++ source code to implement the concurrent blackboard system using Parallel Virtual Machine (PVM) running on a heterogeneous network of UNIX(trademark) workstations. The concurrent blackboard simulator uses the blackboard specification file to predict the performance of the concurrent blackboard design. The only part of the source code for the concurrent blackboard system that the user must supply is the code implementing the functionality of the knowledge sources.
User's manual for PRESTO: A computer code for the performance of regenerative steam turbine cycles
NASA Technical Reports Server (NTRS)
Fuller, L. C.; Stovall, T. K.
1979-01-01
Standard turbine cycles for baseload power plants and cycles with such additional features as process steam extraction and induction and feedwater heating by external heat sources may be modeled. Peaking and high back pressure cycles are also included. The code's methodology is to use the expansion line efficiencies, exhaust loss, leakages, mechanical losses, and generator losses to calculate the heat rate and generator output. A general description of the code is given as well as the instructions for input data preparation. Appended are two complete example cases.
Towards a Framework for Generating Tests to Satisfy Complex Code Coverage in Java Pathfinder
NASA Technical Reports Server (NTRS)
Staats, Matt
2009-01-01
We present work on a prototype tool based on the JavaPathfinder (JPF) model checker for automatically generating tests satisfying the MC/DC code coverage criterion. Using the Eclipse IDE, developers and testers can quickly instrument Java source code with JPF annotations covering all MC/DC coverage obligations, and JPF can then be used to automatically generate tests that satisfy these obligations. The prototype extension to JPF enables various tasks useful in automatic test generation to be performed, such as test suite reduction and execution of generated tests.
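To make the MC/DC obligations concrete, here is a small brute-force illustration (Python) that, for a Boolean decision over a few conditions, finds unique-cause independence pairs: pairs of test vectors that differ only in one condition and flip the decision outcome. It is a didactic sketch, not part of the JPF prototype:

```python
from itertools import product

def decision(a, b, c):
    return a and (b or c)          # example decision with three conditions

conds = ["a", "b", "c"]
vectors = list(product([False, True], repeat=len(conds)))

# For each condition, look for pairs of vectors differing only in that
# condition whose decision outcomes differ (unique-cause MC/DC).
for idx, name in enumerate(conds):
    pairs = []
    for v in vectors:
        w = list(v)
        w[idx] = not w[idx]
        w = tuple(w)
        if decision(*v) != decision(*w) and v < w:
            pairs.append((v, w))
    print(f"condition {name}: independence pairs {pairs}")
```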
Porting plasma physics simulation codes to modern computing architectures using the
NASA Astrophysics Data System (ADS)
Germaschewski, Kai; Abbott, Stephen
2015-11-01
Available computing power has continued to grow exponentially even after single-core performance saturated in the last decade. The increase has since been driven by more parallelism, both using more cores and having more parallelism in each core, e.g. in GPUs and Intel Xeon Phi. Adapting existing plasma physics codes is challenging, in particular as there is no single programming model that covers current and future architectures. We will introduce the open-source
Landlab: an Open-Source Python Library for Modeling Earth Surface Dynamics
NASA Astrophysics Data System (ADS)
Gasparini, N. M.; Adams, J. M.; Hobley, D. E. J.; Hutton, E.; Nudurupati, S. S.; Istanbulluoglu, E.; Tucker, G. E.
2016-12-01
Landlab is an open-source Python modeling library that enables users to easily build unique models to explore earth surface dynamics. The Landlab library provides a number of tools and functionalities that are common to many earth surface models, thus eliminating the need for a user to recode fundamental model elements each time she explores a new problem. For example, Landlab provides a gridding engine so that a user can build a uniform or nonuniform grid in one line of code. The library has tools for setting boundary conditions, adding data to a grid, and performing basic operations on the data, such as calculating gradients and curvature. The library also includes a number of process components, which are numerical implementations of physical processes. To create a model, a user creates a grid and couples together process components that act on grid variables. The current library has components for modeling a diverse range of processes, from overland flow generation to bedrock river incision, from soil wetting and drying to vegetation growth, succession and death. The code is freely available for download (https://github.com/landlab/landlab) or can be installed as a Python package. Landlab models can also be built and run on Hydroshare (www.hydroshare.org), an online collaborative environment for sharing hydrologic data, models, and code. Tutorials illustrating a wide range of Landlab capabilities such as building a grid, setting boundary conditions, reading in data, plotting, using components and building models are also available (https://github.com/landlab/tutorials). The code is also comprehensively documented both online and natively in Python. In this presentation, we illustrate the diverse capabilities of Landlab. We highlight existing functionality by illustrating outcomes from a range of models built with Landlab - including applications that explore landscape evolution and ecohydrology. Finally, we describe the range of resources available for new users.
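As a flavor of the gridding and field tools described above, the short sketch below builds a raster grid, attaches an elevation field, and computes gradients along links; it assumes the RasterModelGrid constructor and the add_zeros/calc_grad_at_link helpers behave as in recent Landlab releases:

```python
from landlab import RasterModelGrid

# 20 x 30 node raster grid with 10 m spacing (one line, as described above).
grid = RasterModelGrid((20, 30), xy_spacing=10.0)

# Attach an elevation field to the nodes and give it a gently tilted surface.
z = grid.add_zeros("topographic__elevation", at="node")
z += 0.01 * grid.x_of_node + 0.005 * grid.y_of_node

# Basic operation on grid data: gradients of elevation along the grid links.
grad = grid.calc_grad_at_link("topographic__elevation")
print(grad.size)
```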
Use of Annotations for Component and Framework Interoperability
NASA Astrophysics Data System (ADS)
David, O.; Lloyd, W.; Carlson, J.; Leavesley, G. H.; Geter, F.
2009-12-01
The popular programming languages Java and C# provide annotations, a form of meta-data construct. Software frameworks for web integration, web services, database access, and unit testing now take advantage of annotations to reduce the complexity of APIs and the quantity of integration code between the application and framework infrastructure. Adopting annotation features in frameworks has been observed to lead to cleaner and leaner application code. The USDA Object Modeling System (OMS) version 3.0 fully embraces the annotation approach and additionally defines a meta-data standard for components and models. In version 3.0 framework/model integration previously accomplished using API calls is now achieved using descriptive annotations. This enables the framework to provide additional functionality non-invasively such as implicit multithreading, and auto-documenting capabilities while achieving a significant reduction in the size of the model source code. Using a non-invasive methodology leads to models and modeling components with only minimal dependencies on the modeling framework. Since models and modeling components are not directly bound to framework by the use of specific APIs and/or data types they can more easily be reused both within the framework as well as outside of it. To study the effectiveness of an annotation based framework approach with other modeling frameworks, a framework-invasiveness study was conducted to evaluate the effects of framework design on model code quality. A monthly water balance model was implemented across several modeling frameworks and several software metrics were collected. The metrics selected were measures of non-invasive design methods for modeling frameworks from a software engineering perspective. It appears that the use of annotations positively impacts several software quality measures. In a next step, the PRMS model was implemented in OMS 3.0 and is currently being implemented for water supply forecasting in the western United States at the USDA NRCS National Water and Climate Center. PRMS is a component based modular precipitation-runoff model developed to evaluate the impacts of various combinations of precipitation, climate, and land use on streamflow and general basin hydrology. The new OMS 3.0 PRMS model source code is more concise and flexible as a result of using the new framework’s annotation based approach. The fully annotated components are now providing information directly for (i) model assembly and building, (ii) dataflow analysis for implicit multithreading, (iii) automated and comprehensive model documentation of component dependencies, physical data properties, (iv) automated model and component testing, and (v) automated audit-traceability to account for all model resources leading to a particular simulation result. Experience to date has demonstrated the multi-purpose value of using annotations. Annotations are also a feasible and practical method to enable interoperability among models and modeling frameworks. As a prototype example, model code annotations were used to generate binding and mediation code to allow the use of OMS 3.0 model components within the OpenMI context.
A CellML simulation compiler and code generator using ODE solving schemes
2012-01-01
Models written in description languages such as CellML are becoming a popular solution to the handling of complex cellular physiological models in biological function simulations. However, in order to fully simulate a model, boundary conditions and ordinary differential equation (ODE) solving schemes have to be combined with it. Though boundary conditions can be described in CellML, it is difficult to explicitly specify ODE solving schemes using existing tools. In this study, we define an ODE solving scheme description language based on XML and propose a code generation system for biological function simulations. In the proposed system, biological simulation programs using various ODE solving schemes can be easily generated. We designed a two-stage approach where the system generates the equation set associating the physiological model variable values at a certain time t with values at t + Δt in the first stage. The second stage generates the simulation code for the model. This approach enables the flexible construction of code generation modules that can support complex sets of formulas. We evaluate the relationship between models and their calculation accuracies by simulating complex biological models using various ODE solving schemes. Using the FHN model simulation, results showed good qualitative and quantitative correspondence with the theoretical predictions. Results for the Luo-Rudy 1991 model showed that only first order precision was achieved. In addition, running the generated code in parallel on a GPU made it possible to speed up the calculation time by a factor of 50. The CellML Compiler source code is available for download at http://sourceforge.net/projects/cellmlcompiler. PMID:23083065
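As an illustration of the kind of update code such a generator emits, below is a hand-written forward-Euler loop for the FitzHugh-Nagumo (FHN) model in Python; the parameter values are the usual textbook ones and the loop stands in for a generated simulation kernel, not the CellML Compiler's actual output:

```python
import numpy as np

# FitzHugh-Nagumo model:
#   dv/dt = v - v^3/3 - w + I
#   dw/dt = eps * (v + a - b*w)
a, b, eps, I = 0.7, 0.8, 0.08, 0.5
dt, n_steps = 0.01, 20000

v, w = -1.0, 1.0
trace = np.empty(n_steps)
for k in range(n_steps):
    dv = v - v**3 / 3.0 - w + I
    dw = eps * (v + a - b * w)
    v, w = v + dt * dv, w + dt * dw       # explicit (forward Euler) update
    trace[k] = v

print(trace[-5:])    # membrane-variable values near the end of the run
```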
You've Written a Cool Astronomy Code! Now What Do You Do with It?
NASA Astrophysics Data System (ADS)
Allen, Alice; Accomazzi, A.; Berriman, G. B.; DuPrie, K.; Hanisch, R. J.; Mink, J. D.; Nemiroff, R. J.; Shamir, L.; Shortridge, K.; Taylor, M. B.; Teuben, P. J.; Wallin, J. F.
2014-01-01
Now that you've written a useful astronomy code for your soon-to-be-published research, you have to figure out what you want to do with it. Our suggestion? Share it! This presentation highlights the means and benefits of sharing your code. Make your code citable -- submit it to the Astrophysics Source Code Library and have it indexed by ADS! The Astrophysics Source Code Library (ASCL) is a free online registry of source codes of interest to astronomers and astrophysicists. With over 700 codes, it is continuing its rapid growth, with an average of 17 new codes a month. The editors seek out codes for inclusion; indexing by ADS improves the discoverability of codes and provides a way to cite codes as separate entries, especially codes without papers that describe them.
SU-E-T-278: Realization of Dose Verification Tool for IMRT Plan Based On DPM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cai, Jinfeng; Cao, Ruifen; Dai, Yumei
Purpose: To build a Monte Carlo dose verification tool for IMRT plans by implementing an irradiation source model in the DPM code, and to extend the ability of DPM to calculate arbitrary incident angles and irregular, inhomogeneous fields. Methods: The virtual source and the energy spectrum unfolded from accelerator measurement data were combined with optimized intensity maps to calculate the dose distribution of the irradiated irregular, inhomogeneous field. The irradiation source model of the accelerator was replaced by a grid-based surface source. The contour and the intensity distribution of the surface source were optimized by the ARTS (Accurate/Advanced Radiotherapy System) optimization module based on the tumor configuration. The weight of each emitter was determined by the grid intensity, and its direction by the combination of the virtual source position and the emission position. The photon energy spectrum unfolded from the accelerator measurement data was adjusted by compensating for the contaminating electron source. For verification, measured data and a realistic clinical IMRT plan were compared with the DPM dose calculation. Results: The regular field was verified by comparison with the measured data; the differences were acceptable (<2% inside the field, 2–3 mm in the penumbra). The dose calculation of an irregular field by DPM simulation was also compared with that of FSPB (Finite Size Pencil Beam), and the passing rate of the gamma analysis was 95.1% for a peripheral lung cancer case. The regular field and the irregular rotational field were all within the permitted error range. The computing time for regular fields was less than 2 h, and the peripheral lung cancer test took 160 min. Through parallel processing, the adapted DPM could complete the calculation of an IMRT plan within half an hour. Conclusion: The adapted, parallelized DPM code with the irradiation source model is faster than classic Monte Carlo codes. Its computational accuracy and speed satisfy clinical requirements, and it is expected to serve as a Monte Carlo dose verification tool for IMRT plans. Strategic Priority Research Program of the China Academy of Science (XDA03040000); National Natural Science Foundation of China (81101132)
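A schematic of the surface-source construction described above (Python/NumPy): emitter positions are drawn in proportion to the optimized intensity map, weights follow the grid intensity, and directions point from the virtual source through each emission point. Grid size, source distance, and intensities are placeholders, not the clinical configuration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Optimized intensity map on a (ny, nx) grid at the surface-source plane (placeholder values).
intensity = rng.random((10, 10))
ny, nx = intensity.shape
cell = 0.5                                       # cm, grid spacing (illustrative)
virtual_source = np.array([0.0, 0.0, -100.0])    # cm, upstream virtual point source

# Sample emission cells proportionally to the grid intensity.
p = intensity.ravel() / intensity.sum()
cells = rng.choice(intensity.size, size=5, p=p)

for c in cells:
    iy, ix = divmod(c, nx)
    pos = np.array([(ix - nx / 2 + 0.5) * cell, (iy - ny / 2 + 0.5) * cell, 0.0])
    direction = pos - virtual_source
    direction /= np.linalg.norm(direction)       # unit vector through the emission point
    weight = intensity[iy, ix]                   # emitter weight from the grid intensity
    print(pos, direction, round(weight, 3))
```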
NASA Astrophysics Data System (ADS)
Gerardy, I.; Rodenas, J.; Van Dycke, M.; Gallardo, S.; Tondeur, F.
2008-02-01
Brachytherapy is a radiotherapy treatment where encapsulated radioactive sources are introduced within a patient. Depending on the technique used, such sources can produce high, medium or low local dose rates. The Monte Carlo method is a powerful tool to simulate sources and devices in order to help physicists in treatment planning. In multiple types of gynaecological cancer, intracavitary brachytherapy (HDR Ir-192 source) is used combined with other therapy treatment to give an additional local dose to the tumour. Different types of applicators are used in order to increase the dose imparted to the tumour and to limit the effect on healthy surrounding tissues. The aim of this work is to model both applicator and HDR source in order to evaluate the dose at a reference point as well as the effect of the materials constituting the applicators on the near-field dose. The MCNP5 code based on the Monte Carlo method has been used for the simulation. Dose calculations have been performed with the *F8 energy deposition tally, taking into account photons and electrons. Results from the simulation have been compared with experimental in-phantom dose measurements. Differences between calculations and measurements are lower than 5%. The importance of the source position has been highlighted.
iGen: An automated generator of simplified models with provable error bounds.
NASA Astrophysics Data System (ADS)
Tang, D.; Dobbie, S.
2009-04-01
Climate models employ various simplifying assumptions and parameterisations in order to increase execution speed. However, in order to draw conclusions about the Earth's climate from the results of a climate simulation it is necessary to have information about the error that these assumptions and parameterisations introduce. A novel computer program, called iGen, is being developed which automatically generates fast, simplified models by analysing the source code of a slower, high resolution model. The resulting simplified models have provable bounds on error compared to the high resolution model and execute at speeds that are typically orders of magnitude faster. iGen's input is a definition of the prognostic variables of the simplified model, a set of bounds on acceptable error and the source code of a model that captures the behaviour of interest. In the case of an atmospheric model, for example, this would be a global cloud resolving model with very high resolution. Although such a model would execute far too slowly to be used directly in a climate model, iGen never executes it. Instead, it converts the code of the resolving model into a mathematical expression which is then symbolically manipulated and approximated to form a simplified expression. This expression is then converted back into a computer program and output as a simplified model. iGen also derives and reports formal bounds on the error of the simplified model compared to the resolving model. These error bounds are always maintained below the user-specified acceptable error. Results will be presented illustrating the success of iGen's analysis of a number of example models. These extremely encouraging results have led to work that is currently underway to analyse a cloud resolving model and so produce an efficient parameterisation of moist convection with formally bounded error.
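The idea of emitting both a simplified expression and a formal error bound can be illustrated on a toy one-variable function with SymPy; this is not iGen's analysis, just a Taylor truncation whose Lagrange remainder is bounded on the interval of interest:

```python
import sympy as sp

x = sp.symbols('x')
f = sp.sin(x)

# "Simplified model": cubic Taylor polynomial of sin(x) about 0 (x - x**3/6).
approx = sp.series(f, x, 0, 4).removeO()

# The cubic equals the degree-4 Taylor polynomial of sin, so the Lagrange
# remainder on |x| <= r satisfies |error| <= max|f^(5)| * r**5 / 5! <= r**5 / 120.
r = sp.Rational(1, 2)
bound = r**5 / sp.factorial(5)

worst_observed = max(abs(float((f - approx).subs(x, v)))
                     for v in [i / 100.0 - 0.5 for i in range(101)])
print(approx, float(bound), worst_observed)   # observed error stays below the bound
```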
Discrimination of correlated and entangling quantum channels with selective process tomography
Dumitrescu, Eugene; Humble, Travis S.
2016-10-10
The accurate and reliable characterization of quantum dynamical processes underlies efforts to validate quantum technologies, where discrimination between competing models of observed behaviors informs efforts to fabricate and operate qubit devices. We present a protocol for quantum channel discrimination that leverages advances in direct characterization of quantum dynamics (DCQD) codes. We demonstrate that DCQD codes enable selective process tomography to improve discrimination between entangling and correlated quantum dynamics. Numerical simulations show selective process tomography requires only a few measurement configurations to achieve a low false alarm rate and that the DCQD encoding improves the resilience of the protocol to hidden sources of noise. Lastly, our results show that selective process tomography with DCQD codes is useful for efficiently distinguishing sources of correlated crosstalk from uncorrelated noise in current and future experimental platforms.
Russ, Daniel E.; Ho, Kwan-Yuet; Colt, Joanne S.; Armenti, Karla R.; Baris, Dalsu; Chow, Wong-Ho; Davis, Faith; Johnson, Alison; Purdue, Mark P.; Karagas, Margaret R.; Schwartz, Kendra; Schwenn, Molly; Silverman, Debra T.; Johnson, Calvin A.; Friesen, Melissa C.
2016-01-01
Background Mapping job titles to standardized occupation classification (SOC) codes is an important step in identifying occupational risk factors in epidemiologic studies. Because manual coding is time-consuming and has moderate reliability, we developed an algorithm called SOCcer (Standardized Occupation Coding for Computer-assisted Epidemiologic Research) to assign SOC-2010 codes based on free-text job description components. Methods Job title and task-based classifiers were developed by comparing job descriptions to multiple sources linking job and task descriptions to SOC codes. An industry-based classifier was developed based on the SOC prevalence within an industry. These classifiers were used in a logistic model trained using 14,983 jobs with expert-assigned SOC codes to obtain empirical weights for an algorithm that scored each SOC/job description. We assigned the highest scoring SOC code to each job. SOCcer was validated in two occupational data sources by comparing SOC codes obtained from SOCcer to expert assigned SOC codes and lead exposure estimates obtained by linking SOC codes to a job-exposure matrix. Results For 11,991 case-control study jobs, SOCcer-assigned codes agreed with 44.5% and 76.3% of manually assigned codes at the 6- and 2-digit level, respectively. Agreement increased with the score, providing a mechanism to identify assignments needing review. Good agreement was observed between lead estimates based on SOCcer and manual SOC assignments (kappa: 0.6–0.8). Poorer performance was observed for inspection job descriptions, which included abbreviations and worksite-specific terminology. Conclusions Although some manual coding will remain necessary, using SOCcer may improve the efficiency of incorporating occupation into large-scale epidemiologic studies. PMID:27102331
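A schematic of the scoring-and-assignment step (Python); the classifier scores, logistic weights, and SOC codes below are invented placeholders, not SOCcer's trained model:

```python
# Combine job-title, task, and industry classifier scores with logistic-style
# weights, score every candidate SOC code, and keep the top-scoring one.
import math

WEIGHTS = {"intercept": -2.0, "title": 3.0, "task": 1.5, "industry": 0.8}  # hypothetical

def soc_score(features):
    z = WEIGHTS["intercept"] + sum(WEIGHTS[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))            # logistic score in (0, 1)

# Candidate SOC codes with per-classifier similarity scores for one job description.
candidates = {
    "47-2111": {"title": 0.92, "task": 0.75, "industry": 0.60},   # example candidate
    "47-2031": {"title": 0.40, "task": 0.55, "industry": 0.60},
}

scored = {soc: soc_score(f) for soc, f in candidates.items()}
best_soc, best_score = max(scored.items(), key=lambda kv: kv[1])
needs_review = best_score < 0.5                  # low scores flagged for manual review
print(best_soc, round(best_score, 3), "review" if needs_review else "auto-assign")
```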
NASA Astrophysics Data System (ADS)
Lin, Hui; Liu, Tianyu; Su, Lin; Bednarz, Bryan; Caracappa, Peter; Xu, X. George
2017-09-01
Monte Carlo (MC) simulation is well recognized as the most accurate method for radiation dose calculations. For radiotherapy applications, accurate modelling of the source term, i.e. the clinical linear accelerator, is critical to the simulation. The purpose of this paper is to perform source modelling, examine the accuracy and performance of the models on Intel Many Integrated Core coprocessors (aka Xeon Phi) and Nvidia GPUs using ARCHER, and explore potential optimization methods. Phase-space-based source modelling has been implemented. Good agreement was found in a tomotherapy prostate patient case and a TrueBeam breast case. In terms of performance, the whole simulations for the prostate and breast plans took about 173 s and 73 s, respectively, at 1% statistical error.
ISS Destiny Laboratory Smoke Detection Model
NASA Technical Reports Server (NTRS)
Brooker, John E.; Urban, David L.; Ruff, Gary A.
2007-01-01
Smoke transport and detection were modeled numerically in the ISS Destiny module using the NIST, Fire Dynamics Simulator code. The airflows in Destiny were modeled using the existing flow conditions and the module geometry included obstructions that simulate the currently installed hardware on orbit. The smoke source was modeled as a 0.152 by 0.152 m region that emitted smoke particulate ranging from 1.46 to 8.47 mg/s. In the module domain, the smoke source was placed in the center of each Destiny rack location and the model was run to determine the time required for the two smoke detectors to alarm. Overall the detection times were dominated by the circumferential flow, the axial flow from the intermodule ventilation and the smoke source strength.
Coarse Grid Modeling of Turbine Film Cooling Flows Using Volumetric Source Terms
NASA Technical Reports Server (NTRS)
Heidmann, James D.; Hunter, Scott D.
2001-01-01
The recent trend in numerical modeling of turbine film cooling flows has been toward higher fidelity grids and more complex geometries. This trend has been enabled by the rapid increase in computing power available to researchers. However, the turbine design community requires fast turnaround time in its design computations, rendering these comprehensive simulations ineffective in the design cycle. The present study describes a methodology for implementing a volumetric source term distribution in a coarse grid calculation that can model the small-scale and three-dimensional effects present in turbine film cooling flows. This model could be implemented in turbine design codes or in multistage turbomachinery codes such as APNASA, where the computational grid size may be larger than the film hole size. Detailed computations of a single row of 35 deg round holes on a flat plate have been obtained for blowing ratios of 0.5, 0.8, and 1.0, and density ratios of 1.0 and 2.0 using a multiblock grid system to resolve the flows on both sides of the plate as well as inside the hole itself. These detailed flow fields were spatially averaged to generate a field of volumetric source terms for each conservative flow variable. Solutions were also obtained using three coarse grids having streamwise and spanwise grid spacings of 3d, 1d, and d/3. These coarse grid solutions used the integrated hole exit mass, momentum, energy, and turbulence quantities from the detailed solutions as volumetric source terms. It is shown that a uniform source term addition over a distance from the wall on the order of the hole diameter is able to predict adiabatic film effectiveness better than a near-wall source term model, while strictly enforcing correct values of integrated boundary layer quantities.
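A simplified sketch (Python/NumPy) of the spatial-averaging step: a fine-grid field near the hole exit is box-averaged onto coarse cells and then spread over a wall-normal distance of about one hole diameter as a volumetric source term. Grid sizes and the field itself are synthetic, not the computed flow fields of the study:

```python
import numpy as np

# Fine-grid mass-flux field near the film hole exit (synthetic data).
fine = np.zeros((120, 120))
fine[55:65, 55:65] = 1.0            # the "hole" region on the fine grid
dx_fine = 0.1                       # fine-grid spacing, in hole diameters

# Box-average onto a coarse grid with 3d x 3d cells (30 x 30 fine cells each).
factor = 30
coarse = fine.reshape(fine.shape[0] // factor, factor,
                      fine.shape[1] // factor, factor).mean(axis=(1, 3))

# Spread each coarse-cell value uniformly over one hole diameter off the wall,
# yielding a per-unit-volume source term for the coarse-grid solver.
wall_normal_extent = 1.0            # in hole diameters
volumetric_source = coarse / wall_normal_extent
print(coarse.shape, volumetric_source.max())
```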
Computational modeling in cognitive science: a manifesto for change.
Addyman, Caspar; French, Robert M
2012-07-01
Computational modeling has long been one of the traditional pillars of cognitive science. Unfortunately, the computer models of cognition being developed today have not kept up with the enormous changes that have taken place in computer technology and, especially, in human-computer interfaces. For all intents and purposes, modeling is still done today as it was 25, or even 35, years ago. Everyone still programs in his or her own favorite programming language, source code is rarely made available, accessibility of models to non-programming researchers is essentially non-existent, and even for other modelers, the profusion of source code in a multitude of programming languages, written without programming guidelines, makes it almost impossible to access, check, explore, re-use, or continue to develop. It is high time to change this situation, especially since the tools are now readily available to do so. We propose that the modeling community adopt three simple guidelines that would ensure that computational models would be accessible to the broad range of researchers in cognitive science. We further emphasize the pivotal role that journal editors must play in making computational models accessible to readers of their journals. Copyright © 2012 Cognitive Science Society, Inc.
Welter, David E.; White, Jeremy T.; Hunt, Randall J.; Doherty, John E.
2015-09-18
The PEST++ Version 3 software suite can be compiled for Microsoft Windows® and Linux® operating systems; the source code is available in a Microsoft Visual Studio® 2013 solution; Linux Makefiles are also provided. PEST++ Version 3 continues to build a foundation for an open-source framework capable of producing robust and efficient parameter estimation tools for large environmental models.
SENR /NRPy + : Numerical relativity in singular curvilinear coordinate systems
NASA Astrophysics Data System (ADS)
Ruchlin, Ian; Etienne, Zachariah B.; Baumgarte, Thomas W.
2018-03-01
We report on a new open-source, user-friendly numerical relativity code package called SENR /NRPy + . Our code extends previous implementations of the BSSN reference-metric formulation to a much broader class of curvilinear coordinate systems, making it ideally suited to modeling physical configurations with approximate or exact symmetries. In the context of modeling black hole dynamics, it is orders of magnitude more efficient than other widely used open-source numerical relativity codes. NRPy + provides a Python-based interface in which equations are written in natural tensorial form and output at arbitrary finite difference order as highly efficient C code, putting complex tensorial equations at the scientist's fingertips without the need for an expensive software license. SENR provides the algorithmic framework that combines the C codes generated by NRPy + into a functioning numerical relativity code. We validate against two other established, state-of-the-art codes, and achieve excellent agreement. For the first time—in the context of moving puncture black hole evolutions—we demonstrate nearly exponential convergence of constraint violation and gravitational waveform errors to zero as the order of spatial finite difference derivatives is increased, while fixing the numerical grids at moderate resolution in a singular coordinate system. Such behavior outside the horizons is remarkable, as numerical errors do not converge to zero near punctures, and all points along the polar axis are coordinate singularities. The formulation addresses such coordinate singularities via cell-centered grids and a simple change of basis that analytically regularizes tensor components with respect to the coordinates. Future plans include extending this formulation to allow dynamical coordinate grids and bispherical-like distribution of points to efficiently capture orbiting compact binary dynamics.
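In the same spirit as that workflow, writing a derivative symbolically and emitting C, a tiny SymPy sketch is shown below; it is not NRPy+ itself, just the standard fourth-order centered first-derivative stencil rendered with sympy.ccode:

```python
import sympy as sp

# Symbols standing for grid samples u[i-2..i+2] and the grid spacing dx.
um2, um1, u0, up1, up2, dx = sp.symbols('um2 um1 u0 up1 up2 dx')

# Fourth-order centered first derivative: coefficients (1, -8, 0, 8, -1)/(12*dx).
deriv = (um2 - 8*um1 + 8*up1 - up2) / (12*dx)

# Emit the expression as C code, the way a code generator would splice it
# into a compute kernel.
print(sp.ccode(deriv, assign_to=sp.Symbol('dudx')))
```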
Rosetta3: An Object-Oriented Software Suite for the Simulation and Design of Macromolecules
Leaver-Fay, Andrew; Tyka, Michael; Lewis, Steven M.; Lange, Oliver F.; Thompson, James; Jacak, Ron; Kaufman, Kristian; Renfrew, P. Douglas; Smith, Colin A.; Sheffler, Will; Davis, Ian W.; Cooper, Seth; Treuille, Adrien; Mandell, Daniel J.; Richter, Florian; Ban, Yih-En Andrew; Fleishman, Sarel J.; Corn, Jacob E.; Kim, David E.; Lyskov, Sergey; Berrondo, Monica; Mentzer, Stuart; Popović, Zoran; Havranek, James J.; Karanicolas, John; Das, Rhiju; Meiler, Jens; Kortemme, Tanja; Gray, Jeffrey J.; Kuhlman, Brian; Baker, David; Bradley, Philip
2013-01-01
We have recently completed a full re-architecturing of the Rosetta molecular modeling program, generalizing and expanding its existing functionality. The new architecture enables the rapid prototyping of novel protocols by providing easy to use interfaces to powerful tools for molecular modeling. The source code of this rearchitecturing has been released as Rosetta3 and is freely available for academic use. At the time of its release, it contained 470,000 lines of code. Counting currently unpublished protocols at the time of this writing, the source includes 1,285,000 lines. Its rapid growth is a testament to its ease of use. This document describes the requirements for our new architecture, justifies the design decisions, sketches out central classes, and highlights a few of the common tasks that the new software can perform. PMID:21187238
Modeling and simulation of RF photoinjectors for coherent light sources
NASA Astrophysics Data System (ADS)
Chen, Y.; Krasilnikov, M.; Stephan, F.; Gjonaj, E.; Weiland, T.; Dohlus, M.
2018-05-01
We propose a three-dimensional fully electromagnetic numerical approach for the simulation of RF photoinjectors for coherent light sources. The basic idea consists in incorporating a self-consistent photoemission model within a particle tracking code. The generation of electron beams in the injector is determined by the quantum efficiency (QE) of the cathode, the intensity profile of the driving laser as well as by the accelerating field and magnetic focusing conditions in the gun. The total charge emitted during an emission cycle can be limited by the space charge field at the cathode. Furthermore, the time and space dependent electromagnetic field at the cathode may induce a transient modulation of the QE due to surface barrier reduction of the emitting layer. In our modeling approach, all these effects are taken into account. The beam particles are generated dynamically according to the local QE of the cathode and the time dependent laser intensity profile. For the beam dynamics, a tracking code based on the Lienard-Wiechert retarded field formalism is employed. This code provides the single particle trajectories as well as the transient space charge field distribution at the cathode. As an application, the PITZ injector is considered. Extensive electron bunch emission simulations are carried out for different operation conditions of the injector, in the source limited as well as in the space charge limited emission regime. In both cases, fairly good agreement between measurements and simulations is obtained.
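A minimal sketch of the dynamic, QE-weighted macroparticle generation step described above, assuming Gaussian laser profiles and a toy, time-dependent quantum-efficiency map (all names and numbers are illustrative, not the authors' model):

```python
import numpy as np

rng = np.random.default_rng(0)

def emit_macroparticles(n_macro, sigma_t, sigma_r, qe_map, photon_bunch_charge):
    """Sample emission times/positions from Gaussian laser profiles and weight
    each macroparticle by the local (possibly time-dependent) quantum efficiency.
    photon_bunch_charge = N_photons * e, i.e. the charge if every photon emitted."""
    t = rng.normal(0.0, sigma_t, n_macro)          # emission times [s]
    x = rng.normal(0.0, sigma_r, n_macro)          # transverse positions [m]
    y = rng.normal(0.0, sigma_r, n_macro)
    q = photon_bunch_charge * qe_map(x, y, t) / n_macro   # macroparticle charges [C]
    return t, x, y, q

# toy QE map: uniform 5e-5 with a small field-induced transient near t = 0
qe_map = lambda x, y, t: 5.0e-5 * (1.0 + 0.1 * np.exp(-(t / 5.0e-12) ** 2))
t, x, y, q = emit_macroparticles(
    n_macro=10000, sigma_t=2.0e-12, sigma_r=0.3e-3,
    qe_map=qe_map, photon_bunch_charge=2.0e-5)     # ~1.2e14 photons (assumed)
print("emitted bunch charge [C]:", q.sum())        # QE-limited emission only
```

Space-charge limitation of the emitted charge would additionally require checking the surface field at every step, which this sketch omits.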
DOE Office of Scientific and Technical Information (OSTI.GOV)
Summers, R.M.; Cole, R.K. Jr.; Smith, R.C.
1995-03-01
MELCOR is a fully integrated, engineering-level computer code that models the progression of severe accidents in light water reactor nuclear power plants. MELCOR is being developed at Sandia National Laboratories for the U.S. Nuclear Regulatory Commission as a second-generation plant risk assessment tool and the successor to the Source Term Code Package. A broad spectrum of severe accident phenomena in both boiling and pressurized water reactors is treated in MELCOR in a unified framework. These include: thermal-hydraulic response in the reactor coolant system, reactor cavity, containment, and confinement buildings; core heatup, degradation, and relocation; core-concrete attack; hydrogen production, transport, and combustion; fission product release and transport; and the impact of engineered safety features on thermal-hydraulic and radionuclide behavior. Current uses of MELCOR include estimation of severe accident source terms and their sensitivities and uncertainties in a variety of applications. This publication of the MELCOR computer code manuals corresponds to MELCOR 1.8.3, released to users in August 1994. Volume 1 contains a primer that describes MELCOR's phenomenological scope, organization (by package), and documentation. The remainder of Volume 1 contains the MELCOR Users Guides, which provide the input instructions and guidelines for each package. Volume 2 contains the MELCOR Reference Manuals, which describe the phenomenological models that have been implemented in each package.
Code manual for CONTAIN 2.0: A computer code for nuclear reactor containment analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murata, K.K.; Williams, D.C.; Griffith, R.O.
1997-12-01
The CONTAIN 2.0 computer code is an integrated analysis tool used for predicting the physical conditions, chemical compositions, and distributions of radiological materials inside a containment building following the release of material from the primary system in a light-water reactor accident. It can also predict the source term to the environment. CONTAIN 2.0 is intended to replace the earlier CONTAIN 1.12, which was released in 1991. The purpose of this Code Manual is to provide full documentation of the features and models in CONTAIN 2.0. Besides complete descriptions of the models, this Code Manual provides a complete description of the input and output from the code. CONTAIN 2.0 is a highly flexible and modular code that can run problems that are either quite simple or highly complex. An important aspect of CONTAIN is that the interactions among thermal-hydraulic phenomena, aerosol behavior, and fission product behavior are taken into account. The code includes atmospheric models for steam/air thermodynamics, intercell flows, condensation/evaporation on structures and aerosols, aerosol behavior, and gas combustion. It also includes models for reactor cavity phenomena such as core-concrete interactions and coolant pool boiling. Heat conduction in structures, fission product decay and transport, radioactive decay heating, and the thermal-hydraulic and fission product decontamination effects of engineered safety features are also modeled. To the extent possible, the best available models for severe accident phenomena have been incorporated into CONTAIN, but it is intrinsic to the nature of accident analysis that significant uncertainty exists regarding numerous phenomena. In those cases, sensitivity studies can be performed with CONTAIN by means of user-specified input parameters. Thus, the code can be viewed as a tool designed to assist the knowledgeable reactor safety analyst in evaluating the consequences of specific modeling assumptions.
A comparison between EGS4 and MCNP computer modeling of an in vivo X-ray fluorescence system.
Al-Ghorabie, F H; Natto, S S; Al-Lyhiani, S H
2001-03-01
The Monte Carlo computer codes EGS4 and MCNP were used to develop a theoretical model of a 180 degrees geometry in vivo X-ray fluorescence system for the measurement of platinum concentration in head and neck tumors. The model included specification of the photon source, collimators, phantoms and detector. Theoretical results were compared and evaluated against X-ray fluorescence data obtained experimentally from an existing system developed by the Swansea In Vivo Analysis and Cancer Research Group. The EGS4 results agreed well with the MCNP results. However, agreement between the measured spectral shape obtained using the experimental X-ray fluorescence system and the simulated spectral shape obtained using the two Monte Carlo codes was relatively poor. The main reason for the disagreement lies in the basic assumptions that the two codes use in their calculations: both assume a "free" electron model for Compton interactions. This assumption underestimates the results and invalidates direct comparison between predicted and experimental spectra.
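To make the shared assumption concrete, the free-electron Compton kinematics and Klein-Nishina cross section that both codes (in essence) rely on can be written as a short sketch; the electron-binding and Doppler-broadening corrections whose neglect the authors point to are exactly what is missing from it:

```python
import numpy as np

R_E = 2.8179403262e-15   # classical electron radius [m]
MEC2 = 510.998950        # electron rest energy [keV]

def compton_free_electron(E_keV, theta):
    """Scattered photon energy and Klein-Nishina differential cross section
    for scattering off a free, stationary electron (the simplifying model
    shared by EGS4 and MCNP that the abstract identifies as a limitation)."""
    ratio = 1.0 / (1.0 + (E_keV / MEC2) * (1.0 - np.cos(theta)))   # E'/E
    E_out = ratio * E_keV
    dsigma = 0.5 * R_E**2 * ratio**2 * (ratio + 1.0 / ratio - np.sin(theta)**2)
    return E_out, dsigma   # [keV], [m^2/sr]

theta = np.linspace(0.0, np.pi, 181)
E_out, dsig = compton_free_electron(60.0, theta)   # illustrative 60 keV photon
print(E_out[90], dsig[90])                         # values at 90 degrees
```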
Exposure calculation code module for reactor core analysis: BURNER
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vondy, D.R.; Cunningham, G.W.
1979-02-01
The code module BURNER for nuclear reactor exposure calculations is presented. The computer requirements are shown, as are the reference data and interface data file requirements, and the programmed equations and procedure of calculation are described. The operating history of a reactor is followed over the period between solutions of the space, energy neutronics problem. The end-of-period nuclide concentrations are determined given the necessary information. A steady state, continuous fueling model is treated in addition to the usual fixed fuel model. The control options provide flexibility to select among an unusually wide variety of programmed procedures. The code also provides a user option to make a number of auxiliary calculations and print such information as the local gamma source, cumulative exposure, and a fine scale power density distribution in a selected zone. The code is used locally in a system for computation which contains the VENTURE diffusion theory neutronics code and other modules.
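The end-of-period concentrations follow from a linear depletion system dN/dt = A N over each exposure period. A minimal sketch with a toy three-nuclide chain (the rates and the matrix-exponential solver are illustrative, not BURNER's production method, which couples every tracked nuclide through decay, capture and fission yields):

```python
import numpy as np
from scipy.linalg import expm

# Toy chain: capture on nuclide 0 feeds nuclide 1, which decays to nuclide 2.
phi = 1.0e14                 # flux [n/cm^2/s] (assumed)
sig_c = 50.0e-24             # capture cross section [cm^2] (assumed)
lam1 = np.log(2) / 5.0e4     # decay constant of nuclide 1 [1/s] (assumed)

A = np.array([
    [-sig_c * phi,    0.0,  0.0],
    [ sig_c * phi,  -lam1,  0.0],
    [         0.0,   lam1,  0.0],
])                           # columns sum to zero, so total nuclei are conserved

N0 = np.array([1.0e20, 0.0, 0.0])       # start-of-period concentrations [1/cm^3]
dt = 30 * 24 * 3600.0                   # one exposure period [s]
N = expm(A * dt) @ N0                   # end-of-period concentrations
print(N)
```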
2014-06-01
User Manual and Source Code for a LAMMPS Implementation of Constant Energy Dissipative Particle Dynamics (DPD-E) by James P. Larentzos...Laboratory Aberdeen Proving Ground, MD 21005-5069 ARL-SR-290 June 2014 User Manual and Source Code for a LAMMPS Implementation of Constant...3. DATES COVERED (From - To) September 2013–February 2014 4. TITLE AND SUBTITLE User Manual and Source Code for a LAMMPS Implementation of
Transport modeling of L- and H-mode discharges with LHCD on EAST
NASA Astrophysics Data System (ADS)
Li, M. H.; Ding, B. J.; Imbeaux, F.; Decker, J.; Zhang, X. J.; Kong, E. H.; Zhang, L.; Wei, W.; Shan, J. F.; Liu, F. K.; Wang, M.; Xu, H. D.; Yang, Y.; Peysson, Y.; Basiuk, V.; Artaud, J.-F.; Yuynh, P.; Wan, B. N.
2013-04-01
High-confinement (H-mode) discharges with lower hybrid current drive (LHCD) as the only heating source are obtained on EAST. In this paper, an empirical transport model of mixed Bohm/gyro-Bohm for electron and ion heat transport was first calibrated against a database of 3 L-mode shots on EAST. The electron and ion temperature profiles are well reproduced in the predictive modeling with the calibrated model coupled to the suite of codes CRONOS. CRONOS calculations with experimental profiles are also performed for electron power balance analysis. In addition, the time evolutions of LHCD are calculated by the C3PO/LUKE code involving current diffusion, and the results are compared with experimental observations.
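A minimal sketch of the kind of predictive radial heat-transport step such modeling performs, with a prescribed diffusivity profile standing in for the calibrated Bohm/gyro-Bohm model (geometry, units, parameters and boundary conditions are all simplified assumptions, not the CRONOS implementation):

```python
import numpy as np

def heat_step(Te, r, n_e, chi, S, dt):
    """One explicit step of the 1-D cylindrical electron heat equation
        (3/2) n dTe/dt = (1/r) d/dr ( r n chi dTe/dr ) + S,
    with Te in eV, n in m^-3, chi in m^2/s and S in eV m^-3 s^-1."""
    dr = r[1] - r[0]
    rh = 0.5 * (r[1:] + r[:-1])                     # cell interfaces
    nh = 0.5 * (n_e[1:] + n_e[:-1])
    chih = 0.5 * (chi[1:] + chi[:-1])
    flux = rh * nh * chih * np.diff(Te) / dr        # r n chi dTe/dr at interfaces
    div = np.zeros_like(Te)
    div[1:-1] = (flux[1:] - flux[:-1]) / (r[1:-1] * dr)
    Te_new = Te + dt * (div + S) / (1.5 * n_e)
    Te_new[0] = Te_new[1]                           # symmetry at the magnetic axis
    Te_new[-1] = Te[-1]                             # fixed edge temperature
    return Te_new

r = np.linspace(0.0, 0.45, 46)                      # minor-radius grid [m] (assumed)
Te = 2.0e3 * (1.0 - (r / 0.45) ** 2) + 50.0         # initial profile [eV]
n_e = 3.0e19 * np.ones_like(r)                      # density [m^-3] (assumed)
chi = 1.0 * np.ones_like(r)                         # placeholder diffusivity [m^2/s]
S = np.zeros_like(r)                                # heating source placeholder
for _ in range(1000):
    Te = heat_step(Te, r, n_e, chi, S, dt=1.0e-5)
print(Te[0])
```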
Modeling of negative ion transport in a plasma source
DOE Office of Scientific and Technical Information (OSTI.GOV)
Riz, David; Département de Recherches sur la Fusion Contrôlée, CE Cadarache, 13108 St Paul lez Durance; Pamela, Jerome
1998-08-20
A code called NIETZSCHE has been developed to simulate the negative ion transport in a plasma source, from their birth place to the extraction holes. The ion trajectory is calculated by numerically solving the 3-D motion equation, while the atomic processes of destruction, of elastic collision H-/H+ and of charge exchange H-/H0 are handled at each time step by a Monte-Carlo procedure. This code can be used to calculate the extraction probability of a negative ion produced at any location inside the source. Calculations performed with NIETZSCHE have made it possible to explain, either quantitatively or qualitatively, several phenomena observed in negative ion sources, such as the isotopic H-/D- effect, and the influence of the plasma grid bias or of the magnetic filter on the negative ion extraction. The code has also shown that in the type of sources contemplated for ITER, which operate at large arc power densities (>1 W cm-3), negative ions can reach the extraction region provided they are produced at a distance lower than 2 cm from the plasma grid in the case of 'volume production' (dissociative attachment processes), or if they are produced at the plasma grid surface, in the vicinity of the extraction holes.
Modeling of negative ion transport in a plasma source (invited)
NASA Astrophysics Data System (ADS)
Riz, David; Paméla, Jérôme
1998-02-01
A code called NIETZSCHE has been developed to simulate the negative ion transport in a plasma source, from their birth place to the extraction holes. The H-/D- trajectory is calculated by numerically solving the 3D motion equation, while the atomic processes of destruction, of elastic collision with H+/D+ and of charge exchange with H0/D0 are handled at each time step by a Monte Carlo procedure. This code can be used to calculate the extraction probability of a negative ion produced at any location inside the source. Calculations performed with NIETZSCHE have made it possible to explain, either quantitatively or qualitatively, several phenomena observed in negative ion sources, such as the isotopic H-/D- effect, and the influence of the plasma grid bias or of the magnetic filter on the negative ion extraction. The code has also shown that, in the type of sources contemplated for ITER, which operate at large arc power densities (>1 W cm-3), negative ions can reach the extraction region provided they are produced at a distance lower than 2 cm from the plasma grid in the case of volume production (dissociative attachment processes), or if they are produced at the plasma grid surface, in the vicinity of the extraction holes.
Modeling of negative ion transport in a plasma source
NASA Astrophysics Data System (ADS)
Riz, David; Paméla, Jérôme
1998-08-01
A code called NIETZSCHE has been developed to simulate the negative ion transport in a plasma source, from their birth place to the extraction holes. The ion trajectory is calculated by numerically solving the 3-D motion equation, while the atomic processes of destruction, of elastic collision H-/H+ and of charge exchange H-/H0 are handled at each time step by a Monte-Carlo procedure. This code can be used to calculate the extraction probability of a negative ion produced at any location inside the source. Calculations performed with NIETZSCHE have made it possible to explain, either quantitatively or qualitatively, several phenomena observed in negative ion sources, such as the isotopic H-/D- effect, and the influence of the plasma grid bias or of the magnetic filter on the negative ion extraction. The code has also shown that in the type of sources contemplated for ITER, which operate at large arc power densities (>1 W cm-3), negative ions can reach the extraction region provided they are produced at a distance lower than 2 cm from the plasma grid in the case of «volume production» (dissociative attachment processes), or if they are produced at the plasma grid surface, in the vicinity of the extraction holes.
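A schematic of the trajectory-plus-Monte-Carlo step the three NIETZSCHE abstracts describe: push the ion through prescribed fields and test for destruction at every time step. The field maps, densities and the single lumped cross section below are placeholders, not values from the code:

```python
import numpy as np

rng = np.random.default_rng(1)
Q_ION, M_H = -1.602e-19, 1.67e-27       # H- charge [C] and mass [kg]

def track_negative_ion(x, v, E_field, B_field, n_dest, sigma_dest, dt, n_steps,
                       extraction_z):
    """Follow one H- macro-ion: explicit Lorentz push plus a Monte Carlo
    destruction test every step (caller-supplied placeholder fields/plasma)."""
    for _ in range(n_steps):
        a = (Q_ION / M_H) * (E_field(x) + np.cross(v, B_field(x)))
        v = v + a * dt
        x = x + v * dt
        speed = np.linalg.norm(v)
        p_dest = 1.0 - np.exp(-n_dest(x) * sigma_dest * speed * dt)
        if rng.random() < p_dest:
            return False, x              # destroyed before reaching extraction
        if x[2] >= extraction_z:
            return True, x               # reached the extraction region
    return False, x

# toy run: uniform accelerating E-field, weak transverse "filter" B-field
reached, x_final = track_negative_ion(
    x=np.zeros(3), v=np.array([0.0, 0.0, 2.0e4]),
    E_field=lambda x: np.array([0.0, 0.0, -50.0]),   # V/m (assumed)
    B_field=lambda x: np.array([0.0, 5.0e-3, 0.0]),  # T (assumed)
    n_dest=lambda x: 1.0e17,                         # m^-3 (assumed)
    sigma_dest=5.0e-19, dt=1.0e-9, n_steps=20000, extraction_z=0.02)
print(reached, x_final)
```

Repeating the run for many starting locations yields the extraction-probability map the abstracts refer to.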
Fostering Team Awareness in Earth System Modeling Communities
NASA Astrophysics Data System (ADS)
Easterbrook, S. M.; Lawson, A.; Strong, S.
2009-12-01
Existing Global Climate Models are typically managed and controlled at a single site, with varied levels of participation by scientists outside the core lab. As these models evolve to encompass a wider set of earth systems, this central control of the modeling effort becomes a bottleneck. But such models cannot evolve to become fully distributed open source projects unless they address the imbalance in the availability of communication channels: scientists at the core site have access to regular face-to-face communication with one another, while those at remote sites have access to only a subset of these conversations - e.g. formally scheduled teleconferences and user meetings. Because of this imbalance, critical decision making can be hidden from many participants, their code contributions can interact in unanticipated ways, and the community loses awareness of who knows what. We have documented some of these problems in a field study at one climate modeling centre, and started to develop tools to overcome these problems. We report on one such tool, TracSNAP, which analyzes the social network of the scientists contributing code to the model by extracting the data in an existing project code repository. The tool presents the results of this analysis to modelers and model users in a number of ways: recommendation for who has expertise on particular code modules, suggestions for code sections that are related to files being worked on, and visualizations of team communication patterns. The tool is currently available as a plugin for the Trac bug tracking system.
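A toy version of the repository mining TracSNAP performs: from a parsed commit log, build a file-to-author expertise table and a file co-change graph (the log format, paths and ranking heuristic here are assumptions, not TracSNAP's internals):

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical, pre-parsed commit log: (author, [files touched]).
commits = [
    ("alice", ["atmosphere/radiation.f90", "atmosphere/clouds.f90"]),
    ("bob",   ["ocean/mixing.f90", "coupler/flux.f90"]),
    ("alice", ["coupler/flux.f90", "atmosphere/radiation.f90"]),
]

expertise = defaultdict(lambda: defaultdict(int))   # file -> author -> #commits
co_change = defaultdict(int)                        # (file_a, file_b) -> count

for author, files in commits:
    for f in files:
        expertise[f][author] += 1
    for a, b in combinations(sorted(set(files)), 2):
        co_change[(a, b)] += 1

def who_knows(path):
    """Rank authors by how often they touched a module (a crude expertise proxy)."""
    return sorted(expertise[path].items(), key=lambda kv: -kv[1])

print(who_knows("atmosphere/radiation.f90"))
print(max(co_change, key=co_change.get))            # most strongly related file pair
```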
WISE Photometry for 400 million SDSS sources
Lang, Dustin; Hogg, David W.; Schlegel, David J.
2016-01-28
Here, we present photometry of images from the Wide-Field Infrared Survey Explorer (WISE) of over 400 million sources detected by the Sloan Digital Sky Survey (SDSS). We use a "forced photometry" technique, using measured SDSS source positions, star-galaxy classification, and galaxy profiles to define the sources whose fluxes are to be measured in the WISE images. We perform photometry with The Tractor image modeling code, working on our "unWISE" coadds and taking account of the WISE point-spread function and a noise model. The result is a measurement of the flux of each SDSS source in each WISE band. Many sources have little flux in the WISE bands, so often the measurements we report are consistent with zero given our uncertainties. But, for many sources we get 3σ or 4σ measurements; these sources would not be reported by the "official" WISE pipeline and will not appear in the WISE catalog, yet they can be highly informative for some scientific questions. In addition, these small-signal measurements can be used in stacking analyses at the catalog level. The forced photometry approach has the advantage that we measure a consistent set of sources between SDSS and WISE, taking advantage of the resolution and depth of the SDSS images to interpret the WISE images; objects that are resolved in SDSS but blended together in WISE still have accurate measurements in our photometry. Our results, and the code used to produce them, are publicly available at http://unwise.me.
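The core of forced photometry is fitting only a flux amplitude for a source whose position and profile are held fixed. A minimal single-source sketch under Gaussian noise follows (The Tractor fits many overlapping sources and a full PSF and noise model simultaneously; the toy image below is synthetic):

```python
import numpy as np

def forced_flux(image, invvar, unit_model):
    """Maximum-likelihood flux (and 1-sigma error) of a source with fixed
    position and profile: fit only the amplitude of a unit-flux model image."""
    num = np.sum(unit_model * image * invvar)
    den = np.sum(unit_model**2 * invvar)
    return num / den, 1.0 / np.sqrt(den)

rng = np.random.default_rng(2)
yy, xx = np.mgrid[0:25, 0:25]
psf = np.exp(-((xx - 12)**2 + (yy - 12)**2) / (2 * 3.0**2))
psf /= psf.sum()                                   # unit total flux
truth, sky_sigma = 40.0, 1.0
img = truth * psf + rng.normal(0.0, sky_sigma, psf.shape)
flux, err = forced_flux(img, np.full(psf.shape, 1.0 / sky_sigma**2), psf)
print(f"flux = {flux:.1f} +/- {err:.1f}")          # roughly a 3-4 sigma measurement
```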
SIGNUM: A Matlab, TIN-based landscape evolution model
NASA Astrophysics Data System (ADS)
Refice, A.; Giachetta, E.; Capolongo, D.
2012-08-01
Several numerical landscape evolution models (LEMs) have been developed to date, and many are available as open source codes. Most are written in efficient programming languages such as Fortran or C, but often require additional code efforts to plug in to more user-friendly data analysis and/or visualization tools to ease interpretation and scientific insight. In this paper, we present an effort to port a common core of accepted physical principles governing landscape evolution directly into a high-level language and data analysis environment such as Matlab. SIGNUM (acronym for Simple Integrated Geomorphological Numerical Model) is an independent and self-contained Matlab, TIN-based landscape evolution model, built to simulate topography development at various space and time scales. SIGNUM is presently capable of simulating hillslope processes such as linear and nonlinear diffusion, fluvial incision into bedrock, spatially varying surface uplift which can be used to simulate changes in base level, thrust and faulting, as well as effects of climate changes. Although based on accepted and well-known processes and algorithms in its present version, it is built with a modular structure, which allows users to easily modify and upgrade the simulated physical processes to suit virtually any needs. The code is conceived as an open-source project, and is thus an ideal tool for both research and didactic purposes, thanks to the high-level nature of the Matlab environment and its popularity among the scientific community. In this paper the simulation code is presented together with some simple examples of surface evolution, and guidelines for development of new modules and algorithms are proposed.
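Although SIGNUM itself is a Matlab, TIN-based code, the governing ingredients (stream-power incision plus linear hillslope diffusion under uplift) can be illustrated with a 1-D Python toy; the parameters and the drainage-area proxy are assumptions, not SIGNUM's flow-routing scheme:

```python
import numpy as np

def profile_step(z, x, dt, U=1.0e-4, K=2.0e-5, m=0.5, n=1.0, D=0.01):
    """One explicit step of dz/dt = U - K A^m S^n + D d2z/dx2 on a river
    profile, with drainage area crudely taken as A ~ x (toy flow routing)."""
    dx = x[1] - x[0]
    S = np.abs(np.gradient(z, dx))                  # local slope
    A = np.maximum(x, dx)                           # toy drainage area [m^2]
    incision = K * A**m * S**n
    diffusion = D * np.gradient(np.gradient(z, dx), dx)
    z_new = z + dt * (U - incision + diffusion)
    z_new[-1] = 0.0                                 # fixed base level at the outlet
    return z_new

x = np.linspace(0.0, 1.0e4, 201)                    # 10 km profile
z = np.zeros_like(x)
for _ in range(2000):                               # ~2 Myr at dt = 1 kyr
    z = profile_step(z, x, dt=1.0e3)
print(z.max())
```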
A Flexible Cosmic Ultraviolet Background Model
NASA Astrophysics Data System (ADS)
McQuinn, Matthew
2016-10-01
HST studies of the IGM, of the CGM, and of reionization-era galaxies are all aided by ionizing background models, which are a critical input in modeling the ionization state of diffuse, 10^4 K gas. The ionization state in turn enables the determination of densities and sizes of absorbing clouds and, when applied to the Ly-a forest, the global ionizing emissivity of sources. Unfortunately, studies that use these background models have no way of gauging the amount of uncertainty in the adopted model other than to recompute their results using previous background models with outdated observational inputs. As of yet there has been no systematic study of uncertainties in the background model and there unfortunately is no publicly available ultraviolet background code. A public code would enable users to update the calculation with the latest observational constraints, and it would allow users to experiment with varying the background model's assumptions regarding emissions and absorptions. We propose to develop a publicly available ionizing background code and, as an initial application, quantify the level of uncertainty in the ionizing background spectrum across cosmic time. As the background model improves, so does our understanding of (1) the sources that dominate ionizing emissions across cosmic time and (2) the properties of diffuse gas in the circumgalactic medium, the WHIM, and the Ly-a forest. HST is the primary telescope for studying both the highest redshift galaxies and low-redshift diffuse gas. The proposed program would benefit HST studies of the Universe at z ≈ 0 all the way up to z = 10, including of high-z galaxies observed in the HST Frontier Fields.
Astronomy education and the Astrophysics Source Code Library
NASA Astrophysics Data System (ADS)
Allen, Alice; Nemiroff, Robert J.
2016-01-01
The Astrophysics Source Code Library (ASCL) is an online registry of source codes used in refereed astrophysics research. It currently lists nearly 1,200 codes and covers all aspects of computational astrophysics. How can this resource be of use to educators and to the graduate students they mentor? The ASCL serves as a discovery tool for codes that can be used for one's own research. Graduate students can also investigate existing codes to see how common astronomical problems are approached numerically in practice, and use these codes as benchmarks for their own solutions to these problems. Further, they can deepen their knowledge of software practices and techniques through examination of others' codes.
System for the Analysis of Global Energy Markets - Vol. II, Model Documentation
2003-01-01
The second volume provides a data implementation guide that lists all naming conventions and model constraints. In addition, Volume 1 has two appendixes that provide a schematic of the System for the Analysis of Global Energy Markets (SAGE) structure and a listing of the source code, respectively.
Performance modeling codes for the QuakeSim problem solving environment
NASA Technical Reports Server (NTRS)
Parker, J. W.; Donnellan, A.; Lyzenga, G.; Rundle, J.; Tullis, T.
2003-01-01
The QuakeSim Problem Solving Environment uses a web-services approach to unify and deploy diverse remote data sources and processing services within a browser environment. Here we focus on the high-performance crustal modeling applications that will be included in this set of remote but interoperable applications.
Comparison of memory thresholds for planar qudit geometries
NASA Astrophysics Data System (ADS)
Marks, Jacob; Jochym-O'Connor, Tomas; Gheorghiu, Vlad
2017-11-01
We introduce and analyze a new type of decoding algorithm called general color clustering, based on renormalization group methods, to be used in qudit color codes. The performance of this decoder is analyzed under a generalized bit-flip error model, and is used to obtain the first memory threshold estimates for qudit 6-6-6 color codes. The proposed decoder is compared with similar decoding schemes for qudit surface codes as well as the current leading qubit decoders for both sets of codes. We find that, as with surface codes, clustering performs sub-optimally for qubit color codes, giving a threshold of 5.6 % compared to the 8.0 % obtained through surface projection decoding methods. However, the threshold rate increases by up to 112% for large qudit dimensions, plateauing around 11.9 % . All the analysis is performed using QTop, a new open-source software for simulating and visualizing topological quantum error correcting codes.
Liu, Zhongyang; Guo, Feifei; Gu, Jiangyong; Wang, Yong; Li, Yang; Wang, Dan; Lu, Liang; Li, Dong; He, Fuchu
2015-06-01
The Anatomical Therapeutic Chemical (ATC) classification system, widely applied in almost all drug utilization studies, is currently the most widely recognized classification system for drugs. Currently, new drug entries are added into the system only on users' requests, which leads to seriously incomplete drug coverage of the system, and bioinformatics prediction is helpful during this process. Here we propose a novel prediction model of drug-ATC code associations, using logistic regression to integrate multiple heterogeneous data sources including chemical structures, target proteins, gene expression, side-effects and chemical-chemical associations. The model obtains good performance for the prediction not only on ATC codes of unclassified drugs but also on new ATC codes of classified drugs assessed by cross-validation and independent test sets, and its efficacy exceeds previous methods. Further, to facilitate its use, the model has been developed into a user-friendly web service, SPACE (Similarity-based Predictor of ATC CodE), which, for each submitted compound, gives candidate ATC codes (ranked according to the decreasing probability_score predicted by the model) together with corresponding supporting evidence. This work not only contributes to understanding drugs' therapeutic, pharmacological and chemical properties, but also provides clues for drug repositioning and side-effect discovery. In addition, the construction of the prediction model also provides a general framework for similarity-based data integration which is suitable for other drug-related studies such as target, side-effect prediction etc. The web service SPACE is available at http://www.bprc.ac.cn/space. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
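A schematic of the similarity-based data-integration step: concatenate one feature per evidence source for each (drug, ATC code) pair and fit a logistic regression whose predicted probability serves as the ranking score. The features and labels below are synthetic stand-ins, not the SPACE training data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# One row per (drug, ATC code) pair; columns are similarity-based evidence
# sources: chemical structure, target proteins, gene expression, side-effects,
# chemical-chemical associations (all values synthetic).
rng = np.random.default_rng(0)
X = rng.random((500, 5))
y = (X @ np.array([1.5, 1.0, 0.5, 0.8, 1.2])
     + rng.normal(0.0, 0.5, 500) > 2.6).astype(int)   # synthetic known associations

clf = LogisticRegression(max_iter=1000).fit(X, y)
probability_score = clf.predict_proba(X)[:, 1]        # ranking score per pair
top = np.argsort(-probability_score)[:10]             # candidate codes to report
print(clf.coef_, probability_score[top])
```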
Development and Implementation of Dynamic Scripts to Execute Cycled WRF/GSI Forecasts
NASA Technical Reports Server (NTRS)
Zavodsky, Bradley; Srikishen, Jayanthi; Berndt, Emily; Li, Quanli; Watson, Leela
2014-01-01
Automating the coupling of data assimilation (DA) and modeling systems is a unique challenge in the numerical weather prediction (NWP) research community. In recent years, the Development Testbed Center (DTC) has released well-documented tools such as the Weather Research and Forecasting (WRF) model and the Gridpoint Statistical Interpolation (GSI) DA system that can be easily downloaded, installed, and run by researchers on their local systems. However, developing a coupled system in which the various preprocessing, DA, model, and postprocessing capabilities are all integrated can be labor-intensive if one has little experience with any of these individual systems. Additionally, operational modeling entities generally have specific coupling methodologies that can take time to understand and develop code to implement properly. To better enable collaborating researchers to perform modeling and DA experiments with GSI, the Short-term Prediction Research and Transition (SPoRT) Center has developed a set of Perl scripts that couple GSI and WRF in a cycling methodology consistent with the use of real-time, regional observation data from the National Centers for Environmental Prediction (NCEP)/Environmental Modeling Center (EMC). Because Perl is open source, the code can be easily downloaded and executed regardless of the user's native shell environment. This paper will provide a description of this open-source code and descriptions of a number of the use cases that have been performed by SPoRT collaborators using the scripts on different computing systems.
Numerical modeling of the SNS H− ion source
DOE Office of Scientific and Technical Information (OSTI.GOV)
Veitzer, Seth A.; Beckwith, Kristian R. C.; Kundrapu, Madhusudhan
Ion source rf antennas that produce H- ions can fail when plasma heating causes ablation of the insulating coating due to small structural defects such as cracks. Reducing antenna failures that reduce the operating capabilities of the Spallation Neutron Source (SNS) accelerator is one of the top priorities of the SNS H- Source Program at ORNL. Numerical modeling of ion sources can provide techniques for optimizing design in order to reduce antenna failures. There are a number of difficulties in developing accurate models of rf inductive plasmas. First, a large range of spatial and temporal scales must be resolved in order to accurately capture the physics of plasma motion, including the Debye length, rf frequencies on the order of tens of MHz, simulation time scales of many hundreds of rf periods, large device sizes of tens of cm, and ion motions that are thousands of times slower than electrons. This results in large simulation domains with many computational cells for solving plasma and electromagnetic equations, short time steps, and long-duration simulations. In order to reduce the computational requirements, one can develop implicit models for both fields and particle motions (e.g. divergence-preserving ADI methods), various electrostatic models, or magnetohydrodynamic models. We have performed simulations using all three of these methods and have found that fluid models have the greatest potential for giving accurate solutions while still being fast enough to perform long timescale simulations in a reasonable amount of time. We have implemented a number of fluid models with electromagnetics using the simulation tool USim and applied them to modeling the SNS H- ion source. We found that a reduced, single-fluid MHD model with an imposed magnetic field due to the rf antenna current and the confining multi-cusp field generated increased bulk plasma velocities of > 200 m/s in the region of the antenna where ablation is often observed in the SNS source. We report here on comparisons of simulated plasma parameters and code performance using more accurate physical models, such as two-temperature extended MHD models, for both a related benchmark system describing a inductively coupled plasma reactor, and for the SNS ion source. We also present results from scaling studies for mesh generation and solvers in the USim simulation code.
Alarcon, Gene M; Gamble, Rose F; Ryan, Tyler J; Walter, Charles; Jessup, Sarah A; Wood, David W; Capiola, August
2018-07-01
Computer programs are a ubiquitous part of modern society, yet little is known about the psychological processes that underlie reviewing code. We applied the heuristic-systematic model (HSM) to investigate the influence of computer code comments on perceptions of code trustworthiness. The study explored the influence of validity, placement, and style of comments in code on trustworthiness perceptions and time spent on code. Results indicated valid comments led to higher trust assessments and more time spent on the code. Properly placed comments led to lower trust assessments and had a marginal effect on time spent on code; however, the effect was no longer significant after controlling for effects of the source code. Low style comments led to marginally higher trustworthiness assessments, but high style comments led to longer time spent on the code. Several interactions were also found. Our findings suggest the relationship between code comments and perceptions of code trustworthiness is not as straightforward as previously thought. Additionally, the current paper extends the HSM to the programming literature. Copyright © 2018 Elsevier Ltd. All rights reserved.
Module-oriented modeling of reactive transport with HYTEC
NASA Astrophysics Data System (ADS)
van der Lee, Jan; De Windt, Laurent; Lagneau, Vincent; Goblet, Patrick
2003-04-01
The paper introduces HYTEC, a coupled reactive transport code currently used for groundwater pollution studies, safety assessment of nuclear waste disposals, geochemical studies and interpretation of laboratory column experiments. Based on a known permeability field, HYTEC evaluates the groundwater flow paths, and simulates the migration of mobile matter (ions, organics, colloids) subject to geochemical reactions. The code forms part of a module-oriented structure which facilitates maintenance and improves coding flexibility. In particular, using the geochemical module CHESS as a common denominator for several reactive transport models significantly facilitates the development of new geochemical features which become automatically available to all models. A first example shows how the model can be used to assess migration of uranium from a sub-surface source under the effect of an oxidation front. The model also accounts for alteration of hydrodynamic parameters (local porosity, permeability) due to precipitation and dissolution of mineral phases, which potentially modifies the migration properties in general. The second example illustrates this feature.
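The module-oriented coupling described above is, at its core, an operator-splitting loop: advance transport, then hand the concentrations to the chemistry module (CHESS plays that role in HYTEC). A 1-D sketch with a placeholder chemistry step; all parameters are assumed and the chemistry is deliberately trivial so the sketch stays self-contained:

```python
import numpy as np

def transport_step(c, v, D, dx, dt):
    """Explicit upwind advection plus central-difference dispersion (1-D)."""
    adv = -v * np.diff(c, prepend=c[0]) / dx
    disp = D * (np.roll(c, -1) - 2.0 * c + np.roll(c, 1)) / dx**2
    disp[0] = disp[-1] = 0.0
    return c + dt * (adv + disp)

def chemistry_step(c, dt):
    """Placeholder for the geochemical solver: simple first-order removal."""
    return c * np.exp(-1.0e-7 * dt)

c = np.zeros(200)                        # solute concentration along a column
for _ in range(5000):                    # sequential operator-splitting loop
    c[0] = 1.0                           # constant-concentration inlet boundary
    c = transport_step(c, v=1.0e-5, D=1.0e-9, dx=0.01, dt=200.0)
    c = chemistry_step(c, dt=200.0)
print(c.max(), c[100])
```

Porosity/permeability feedback from precipitation and dissolution, as in the first HYTEC example, would additionally update v and D between iterations.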
Steady-State Ion Beam Modeling with MICHELLE
NASA Astrophysics Data System (ADS)
Petillo, John
2003-10-01
There is a need to efficiently model ion beam physics for ion implantation, chemical vapor deposition, and ion thrusters. Common to all is the need for three-dimensional (3D) simulation of volumetric ion sources, ion acceleration, and optics, with the ability to model charge exchange of the ion beam with a background neutral gas. The two pieces of physics that stand out as significant are the modeling of the volumetric source and charge exchange. In the MICHELLE code, the method for modeling the plasma sheath in ion sources assumes that the electron distribution function is a Maxwellian function of electrostatic potential over electron temperature. Charge exchange is the process by which a neutral background gas atom exchanges its electron with a "fast" charged particle streaming through the gas. An efficient method for capturing this is essential, and the model presented is based on semi-empirical collision cross section functions. This appears to be the first steady-state 3D algorithm of its type to contain multiple generations of charge exchange, work with multiple species and multiple charge state beam/source particles simultaneously, take into account the self-consistent space charge effects, and track the subsequent fast neutral particles. The solution used by MICHELLE is to combine finite element analysis with particle-in-cell (PIC) methods. The basic physics model is based on the equilibrium steady-state application of the electrostatic PIC approximation employing a conformal computational mesh. The foundation stems from the same basic model introduced in codes such as EGUN. Here, Poisson's equation is used to self-consistently include the effects of space charge on the fields, and the relativistic Lorentz equation is used to integrate the particle trajectories through those fields. The presentation will consider the complexity of modeling ion thrusters.
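Two of the ingredients named above can be written compactly: the Boltzmann-electron closure for the sheath and a per-step charge-exchange probability built from a semi-empirical cross section. The coefficients below are illustrative placeholders, not MICHELLE's fitted functions:

```python
import numpy as np

TE_EV = 5.0          # electron temperature [eV] (assumed)
N0 = 1.0e17          # reference plasma density [m^-3] (assumed)

def boltzmann_electron_density(phi, phi_plasma=0.0):
    """Electron density under the Maxwellian/Boltzmann closure:
    n_e = n0 * exp((phi - phi_plasma) / Te), with phi in volts and Te in eV."""
    return N0 * np.exp((phi - phi_plasma) / TE_EV)

def charge_exchange_probability(energy_eV, n_gas, path_length,
                                sigma0=3.0e-19, slope=2.0e-20):
    """Probability that a fast ion charge-exchanges with the background neutral
    gas over one path segment, using a simple, slowly falling semi-empirical
    cross section (illustrative coefficients)."""
    sigma = np.maximum(sigma0 - slope * np.log(energy_eV), 1.0e-21)   # [m^2]
    return 1.0 - np.exp(-n_gas * sigma * path_length)

print(boltzmann_electron_density(phi=-10.0))                 # depleted in the sheath
print(charge_exchange_probability(5.0e3, n_gas=1.0e19, path_length=0.05))
```

Each charge-exchange event spawns a fast neutral and a slow ion, which is how multiple generations of charge exchange arise in the full code.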
Diffusive deposition of aerosols in Phebus containment during FPT-2 test
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kontautas, A.; Urbonavicius, E.
2012-07-01
At present, lumped-parameter codes are the main tool for investigating the complex response of a nuclear power plant containment in case of an accident. Continuous development and validation of the codes is required to perform realistic investigations of the processes that determine the possible source term of radioactive products to the environment. Validation of the codes is based on the comparison of calculated results with measurements performed in experimental facilities. The most extensive experimental program to investigate fission product release from the molten fuel, transport through the cooling circuit and deposition in the containment is performed in the PHEBUS test facility. Test FPT-2, performed in this facility, is considered for the analysis of processes taking place in the containment. Earlier investigations using the COCOSYS code showed that the code could be successfully used for the analysis of thermal-hydraulic processes and aerosol deposition, but it was also noticed that diffusive deposition on the vertical walls did not fit well with the measured results. A different model for diffusive deposition is implemented in the CPA module of the ASTEC code; therefore, the PHEBUS containment model was transferred from COCOSYS to ASTEC-CPA to investigate the influence of the diffusive deposition modelling. The analysis was performed using a 16-node PHEBUS containment model. The calculated thermal-hydraulic parameters are in good agreement with measured results, which gives a basis for realistic simulation of aerosol transport and deposition processes. The investigations showed that the diffusive deposition model influences the aerosol deposition distribution on different surfaces in the test facility. (authors)
MOSES: A Matlab-based open-source stochastic epidemic simulator.
Varol, Huseyin Atakan
2016-08-01
This paper presents an open-source stochastic epidemic simulator. A discrete-time Markov chain based simulator is implemented in Matlab. The simulator, capable of simulating the SEQIJR (susceptible, exposed, quarantined, infected, isolated and recovered) model, can be reduced to simpler models by setting some of the parameters (transition probabilities) to zero. Similarly, it can be extended to more complicated models by editing the source code. It is designed to be used for testing different control algorithms to contain epidemics. The simulator is also designed to be compatible with a network-based epidemic simulator and can be used in the network-based scheme for the simulation of a node. Simulations show the capability of reproducing different epidemic model behaviors successfully in a computationally efficient manner.
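A minimal discrete-time Markov-chain step in the spirit of the SEQIJR structure (the transition probabilities are illustrative, quarantined individuals are simply held in Q for brevity, and zeroing probabilities collapses the model toward SEIR/SIR, as the abstract notes):

```python
import numpy as np

rng = np.random.default_rng(0)

def seqijr_step(state, p):
    """One stochastic step over (S, E, Q, I, J, R) compartments using
    binomially sampled transitions with per-step probabilities p."""
    S, E, Q, I, J, R = state
    N = max(sum(state), 1)
    new_E = rng.binomial(S, 1.0 - np.exp(-p["beta"] * (I + p["eps"] * J) / N))
    new_Q = rng.binomial(E, p["quarantine"])
    new_I = rng.binomial(E - new_Q, p["progress"])
    new_J = rng.binomial(I, p["isolate"])
    rec_I = rng.binomial(I - new_J, p["recover"])
    rec_J = rng.binomial(J, p["recover"])
    return (S - new_E, E + new_E - new_Q - new_I, Q + new_Q,
            I + new_I - new_J - rec_I, J + new_J - rec_J, R + rec_I + rec_J)

p = dict(beta=0.3, eps=0.2, quarantine=0.05, progress=0.2, isolate=0.1, recover=0.1)
state = (990, 10, 0, 0, 0, 0)
for day in range(120):
    state = seqijr_step(state, p)
print(state)
```

Wrapping this step in an outer loop over network nodes is the kind of extension the network-based scheme mentioned above requires.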
Data processing with microcode designed with source coding
McCoy, James A; Morrison, Steven E
2013-05-07
Programming for a data processor to execute a data processing application is provided using microcode source code. The microcode source code is assembled to produce microcode that includes digital microcode instructions with which to signal the data processor to execute the data processing application.
SOCR Analyses – an Instructional Java Web-based Statistical Analysis Toolkit
Chu, Annie; Cui, Jenny; Dinov, Ivo D.
2011-01-01
The Statistical Online Computational Resource (SOCR) designs web-based tools for educational use in a variety of undergraduate courses (Dinov 2006). Several studies have demonstrated that these resources significantly improve students' motivation and learning experiences (Dinov et al. 2008). SOCR Analyses is a new component that concentrates on data modeling and analysis using parametric and non-parametric techniques supported with graphical model diagnostics. Currently implemented analyses include commonly used models in undergraduate statistics courses like linear models (Simple Linear Regression, Multiple Linear Regression, One-Way and Two-Way ANOVA). In addition, we implemented tests for sample comparisons, such as t-test in the parametric category; and Wilcoxon rank sum test, Kruskal-Wallis test, Friedman's test, in the non-parametric category. SOCR Analyses also include several hypothesis test models, such as Contingency tables, Friedman's test and Fisher's exact test. The code itself is open source (http://socr.googlecode.com/), hoping to contribute to the efforts of the statistical computing community. The code includes functionality for each specific analysis model and it has general utilities that can be applied in various statistical computing tasks. For example, concrete methods with API (Application Programming Interface) have been implemented in statistical summary, least square solutions of general linear models, rank calculations, etc. HTML interfaces, tutorials, source code, activities, and data are freely available via the web (www.SOCR.ucla.edu). Code examples for developers and demos for educators are provided on the SOCR Wiki website. In this article, the pedagogical utilization of the SOCR Analyses is discussed, as well as the underlying design framework. As the SOCR project is on-going and more functions and tools are being added to it, these resources are constantly improved. The reader is strongly encouraged to check the SOCR site for most updated information and newly added models. PMID:21546994
2010-09-01
...differentiated between source codes and input/output files. The text makes references to a REMChlor-GoldSim model. The text also refers to the REMChlor... To the extent possible, the instructions should be accurate and precise. The documentation should differentiate between describing what is actually... Windows XP operating system. Model Input Parameters: the input parameters were identical to those utilized and reported by CDM (see Table 1 from...
Finite-Length Line Source Superposition Model (FLLSSM)
NASA Astrophysics Data System (ADS)
1980-03-01
A linearized thermal conduction model was developed to economically determine media temperatures in geologic repositories for nuclear wastes. Individual canisters containing either high level waste or spent fuel assemblies were represented as finite length line sources in a continuous medium. The combined effects of multiple canisters in a representative storage pattern were established at selected points of interest by superposition of the temperature rises calculated for each canister. The methodology is outlined, and the computer code FLLSSM, which performs the required numerical integrations and superposition operations, is described.
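The superposition idea can be sketched by integrating the continuous point-source conduction solution along each canister axis and summing over canisters. Thermal properties, geometry and powers below are assumed, and the linear power is held constant rather than decaying with time:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erfc

K_ROCK, ALPHA = 2.5, 1.1e-6      # conductivity [W/m/K], diffusivity [m^2/s] (assumed)

def line_source_dT(point, canister_top, length, q_per_m, t):
    """Temperature rise at `point` from one finite-length vertical line source
    of constant linear power, by integrating the continuous point-source
    solution dT = q/(4*pi*k*R) * erfc(R / (2*sqrt(alpha*t))) along the axis."""
    x0, y0, z0 = canister_top
    rho2 = (point[0] - x0) ** 2 + (point[1] - y0) ** 2

    def integrand(zp):
        R = np.sqrt(rho2 + (point[2] - (z0 + zp)) ** 2)
        return erfc(R / (2.0 * np.sqrt(ALPHA * t))) / R

    val, _ = quad(integrand, 0.0, length)
    return q_per_m * val / (4.0 * np.pi * K_ROCK)

# superpose a small storage pattern (positions, lengths and powers illustrative)
canisters = [((x, y, 0.0), 4.5, 300.0) for x in (0.0, 10.0) for y in (0.0, 10.0)]
t = 30 * 365.25 * 24 * 3600.0                    # 30 years
point = (5.0, 5.0, 2.0)
dT = sum(line_source_dT(top, canister_top=top, length=L, q_per_m=q, t=t) == 0
         for top, L, q in [])                    # placeholder removed below
dT = sum(line_source_dT(point, top, L, q, t) for top, L, q in canisters)
print(f"temperature rise at {point}: {dT:.1f} K")
```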
NASA Astrophysics Data System (ADS)
Meléndez, A.; Korenaga, J.; Sallarès, V.; Miniussi, A.; Ranero, C. R.
2015-10-01
We present a new 3-D traveltime tomography code (TOMO3D) for the modelling of active-source seismic data that uses the arrival times of both refracted and reflected seismic phases to derive the velocity distribution and the geometry of reflecting boundaries in the subsurface. This code is based on its popular 2-D version TOMO2D from which it inherited the methods to solve the forward and inverse problems. The traveltime calculations are done using a hybrid ray-tracing technique combining the graph and bending methods. The LSQR algorithm is used to perform the iterative regularized inversion to improve the initial velocity and depth models. In order to cope with an increased computational demand due to the incorporation of the third dimension, the forward problem solver, which takes most of the run time (˜90 per cent in the test presented here), has been parallelized with a combination of multi-processing and message passing interface standards. This parallelization distributes the ray-tracing and traveltime calculations among available computational resources. The code's performance is illustrated with a realistic synthetic example, including a checkerboard anomaly and two reflectors, which simulates the geometry of a subduction zone. The code is designed to invert for a single reflector at a time. A data-driven layer-stripping strategy is proposed for cases involving multiple reflectors, and it is tested for the successive inversion of the two reflectors. Layers are bound by consecutive reflectors, and an initial velocity model for each inversion step incorporates the results from previous steps. This strategy poses simpler inversion problems at each step, allowing the recovery of strong velocity discontinuities that would otherwise be smoothened.
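A toy of the regularized LSQR update at the heart of such traveltime inversions: stack the linearized ray-path matrix with damping rows and solve for slowness perturbations. The sparse system below is synthetic; TOMO3D additionally inverts for reflector-depth nodes and uses smoothing constraints rather than pure damping:

```python
import numpy as np
from scipy.sparse import vstack, identity, random as sprandom
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(0)
n_rays, n_cells = 2000, 500
G = sprandom(n_rays, n_cells, density=0.02, random_state=0) * 1.0e3   # path lengths [m]
true_ds = np.zeros(n_cells)
true_ds[200:220] = 5.0e-5                          # slow anomaly [s/m]
d = G @ true_ds + rng.normal(0.0, 1.0e-3, n_rays)  # traveltime residuals [s]

lam = 1.0e-1                                       # damping weight (assumed)
A = vstack([G, lam * identity(n_cells)])
b = np.concatenate([d, np.zeros(n_cells)])
ds = lsqr(A, b, atol=1e-8, btol=1e-8)[0]           # one regularized inversion step
print(np.abs(ds[200:220]).mean(), np.abs(ds[:200]).mean())
```

In a real run this solve sits inside an outer loop: retrace rays through the updated model, rebuild G, and repeat until the traveltime misfit stops improving.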
MCNP (Monte Carlo Neutron Photon) capabilities for nuclear well logging calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Forster, R.A.; Little, R.C.; Briesmeister, J.F.
The Los Alamos Radiation Transport Code System (LARTCS) consists of state-of-the-art Monte Carlo and discrete ordinates transport codes and data libraries. The general-purpose continuous-energy Monte Carlo code MCNP (Monte Carlo Neutron Photon), part of the LARTCS, provides a computational predictive capability for many applications of interest to the nuclear well logging community. The generalized three-dimensional geometry of MCNP is well suited for borehole-tool models. SABRINA, another component of the LARTCS, is a graphics code that can be used to interactively create a complex MCNP geometry. Users can define many source and tally characteristics with standard MCNP features. The time-dependent capability of the code is essential when modeling pulsed sources. Problems with neutrons, photons, and electrons as either single particle or coupled particles can be calculated with MCNP. The physics of neutron and photon transport and interactions is modeled in detail using the latest available cross-section data. A rich collection of variance reduction features can greatly increase the efficiency of a calculation. MCNP is written in FORTRAN 77 and has been run on a variety of computer systems from scientific workstations to supercomputers. The next production version of MCNP will include features such as continuous-energy electron transport and a multitasking option. Areas of ongoing research of interest to the well logging community include angle biasing, adaptive Monte Carlo, improved discrete ordinates capabilities, and discrete ordinates/Monte Carlo hybrid development. Los Alamos has requested approval by the Department of Energy to create a Radiation Transport Computational Facility under their User Facility Program to increase external interactions with industry, universities, and other government organizations. 21 refs.
Coronal Physics and the Chandra Emission Line Project
NASA Technical Reports Server (NTRS)
Brickhouse, Nancy
1999-01-01
With the launch of the Chandra X-ray Observatory, high resolution X-ray spectroscopy of cosmic sources has begun. Early, deep observations of three stellar coronal sources will provide not only invaluable calibration data, but will also give us benchmarks for plasma spectral modeling codes. These codes are used to interpret data from stellar coronae, galaxies and clusters of galaxies, supernova remnants, and other astrophysical sources, but they have been called into question in recent years as problems with understanding moderate resolution ASCA and EUVE data have arisen. The Emission Line Project is a collaborative effort to improve the models, with Phase 1 being the comparison of models with observed spectra of Capella, Procyon, and HR 1099. Goals of these comparisons are (1) to determine and verify accurate and robust diagnostics and (2) to identify and prioritize issues in fundamental spectroscopy which will require further theoretical and/or laboratory work. A critical issue in exploiting the coronal data for these purposes is to understand the extent to which common simplifying assumptions (coronal equilibrium, time-independence, negligible optical depth) apply. We will discuss recent advances in our understanding of stellar coronae in this context.
Observation model and parameter partials for the JPL VLBI parameter estimation software MODEST/1991
NASA Technical Reports Server (NTRS)
Sovers, O. J.
1991-01-01
A revision is presented of MASTERFIT-1987, which it supersedes. Changes during 1988 to 1991 included introduction of the octupole component of solid Earth tides, the NUVEL tectonic motion model, partial derivatives for the precession constant and source position rates, the option to correct for source structure, a refined model for antenna offsets, modeling the unique antenna at Richmond, FL, improved nutation series due to Zhu, Groten, and Reigber, and reintroduction of the old (Woolard) nutation series for simulation purposes. Text describing the relativistic transformations and gravitational contributions to the delay model was also revised in order to reflect the computer code more faithfully.
Campbell, J R; Carpenter, P; Sneiderman, C; Cohn, S; Chute, C G; Warren, J
1997-01-01
To compare three potential sources of controlled clinical terminology (READ codes version 3.1, SNOMED International, and Unified Medical Language System (UMLS) version 1.6) relative to attributes of completeness, clinical taxonomy, administrative mapping, term definitions and clarity (duplicate coding rate). The authors assembled 1929 source concept records from a variety of clinical information taken from four medical centers across the United States. The source data included medical as well as ample nursing terminology. The source records were coded in each scheme by an investigator and checked by the coding scheme owner. The codings were then scored by an independent panel of clinicians for acceptability. Codes were checked for definitions provided with the scheme. Codes for a random sample of source records were analyzed by an investigator for "parent" and "child" codes within the scheme. Parent and child pairs were scored by an independent panel of medical informatics specialists for clinical acceptability. Administrative and billing code mapping from the published scheme were reviewed for all coded records and analyzed by independent reviewers for accuracy. The investigator for each scheme exhaustively searched a sample of coded records for duplications. SNOMED was judged to be significantly more complete in coding the source material than the other schemes (SNOMED* 70%; READ 57%; UMLS 50%; *p < .00001). SNOMED also had a richer clinical taxonomy judged by the number of acceptable first-degree relatives per coded concept (SNOMED* 4.56, UMLS 3.17; READ 2.14, *p < .005). Only the UMLS provided any definitions; these were found for 49% of records which had a coding assignment. READ and UMLS had better administrative mappings (composite score: READ* 40.6%; UMLS* 36.1%; SNOMED 20.7%, *p < .00001), and SNOMED had substantially more duplications of coding assignments (duplication rate: READ 0%; UMLS 4.2%; SNOMED* 13.9%, *p < .004) associated with a loss of clarity. No major terminology source can lay claim to being the ideal resource for a computer-based patient record. However, based upon this analysis of releases for April 1995, SNOMED International is considerably more complete, has a compositional nature and a richer taxonomy. It suffers from less clarity, resulting from a lack of syntax and evolutionary changes in its coding scheme. READ has greater clarity and better mapping to administrative schemes (ICD-10 and OPCS-4), is rapidly changing and is less complete. UMLS is a rich lexical resource, with mappings to many source vocabularies. It provides definitions for many of its terms. However, due to the varying granularities and purposes of its source schemes, it has limitations for representation of clinical concepts within a computer-based patient record.
Applying a Service-Oriented Architecture to Operational Flight Program Development
2007-09-01
...using two Java 2 Enterprise Edition (J2EE) Web servers. The weapon models were accessed using a SUN Microsystems Java Web Services Development Pack..., and Spring/Hibernate to provide the data access... ...since a major coding effort was avoided. The majority of the effort was tweaking pre-existing Java source code and editing of eXtensible Markup...
An Open-Source Bayesian Atmospheric Radiative Transfer (BART) Code, with Application to WASP-12b
NASA Astrophysics Data System (ADS)
Harrington, Joseph; Blecic, Jasmina; Cubillos, Patricio; Rojo, Patricio; Loredo, Thomas J.; Bowman, M. Oliver; Foster, Andrew S. D.; Stemm, Madison M.; Lust, Nate B.
2015-01-01
Atmospheric retrievals for solar-system planets typically fit, either with a minimizer or by eye, a synthetic spectrum to high-resolution (Δλ/λ ~ 1000-100,000) data with S/N > 100 per point. In contrast, exoplanet data often have S/N ~ 10 per point, and may have just a few points representing bandpasses larger than 1 um. To derive atmospheric constraints and robust parameter uncertainty estimates from such data requires a Bayesian approach. To date there are few investigators with the relevant codes, none of which are publicly available. We are therefore pleased to announce the open-source Bayesian Atmospheric Radiative Transfer (BART) code. BART uses a Bayesian phase-space explorer to drive a radiative-transfer model through the parameter phase space, producing the most robust estimates available for the thermal profile and chemical abundances in the atmosphere. We present an overview of the code and an initial application to Spitzer eclipse data for WASP-12b. We invite the community to use and improve BART via the open-source development site GitHub.com. This work was supported by NASA Planetary Atmospheres grant NNX12AI69G and NASA Astrophysics Data Analysis Program grant NNX13AF38G. JB holds a NASA Earth and Space Science Fellowship.
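In the same spirit, though vastly simplified relative to BART's radiative-transfer forward model and phase-space explorer, a toy Metropolis retrieval of two parameters from a few synthetic broadband eclipse depths looks like this; the forward model, priors, step sizes and data are all assumptions made for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

def forward(theta, wl):
    """Toy 'spectrum': an overall depth scaling with T plus one band feature."""
    T, a = theta
    return 1.0e-4 * (T / 1500.0) * (1.0 + a * np.exp(-(wl - 4.5) ** 2))

wl = np.array([3.6, 4.5, 5.8, 8.0])                  # IRAC-like bandpasses [um]
truth = (1800.0, 0.3)
err = np.full(wl.size, 5.0e-6)
data = forward(truth, wl) + rng.normal(0.0, err)

def log_post(theta):
    T, a = theta
    if not (500.0 < T < 3500.0 and -1.0 < a < 1.0):  # flat priors
        return -np.inf
    r = (data - forward(theta, wl)) / err
    return -0.5 * np.sum(r * r)

theta, lp = np.array([1500.0, 0.0]), None
lp = log_post(theta)
chain = []
for _ in range(20000):                               # Metropolis random walk
    prop = theta + rng.normal(0.0, [20.0, 0.02])
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta.copy())
chain = np.array(chain)[5000:]                       # discard burn-in
print(chain.mean(axis=0), chain.std(axis=0))         # posterior means and 1-sigma
```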
An Open-Source Bayesian Atmospheric Radiative Transfer (BART) Code, and Application to WASP-12b
NASA Astrophysics Data System (ADS)
Harrington, Joseph; Blecic, Jasmina; Cubillos, Patricio; Rojo, Patricio M.; Loredo, Thomas J.; Bowman, Matthew O.; Foster, Andrew S.; Stemm, Madison M.; Lust, Nate B.
2014-11-01
Atmospheric retrievals for solar-system planets typically fit, either with a minimizer or by eye, a synthetic spectrum to high-resolution (Δλ/λ ~ 1000-100,000) data with S/N > 100 per point. In contrast, exoplanet data often have S/N ~ 10 per point, and may have just a few points representing bandpasses larger than 1 um. To derive atmospheric constraints and robust parameter uncertainty estimates from such data requires a Bayesian approach. To date there are few investigators with the relevant codes, none of which are publicly available. We are therefore pleased to announce the open-source Bayesian Atmospheric Radiative Transfer (BART) code. BART uses a Bayesian phase-space explorer to drive a radiative-transfer model through the parameter phase space, producing the most robust estimates available for the thermal profile and chemical abundances in the atmosphere. We present an overview of the code and an initial application to Spitzer eclipse data for WASP-12b. We invite the community to use and improve BART via the open-source development site GitHub.com. This work was supported by NASA Planetary Atmospheres grant NNX12AI69G and NASA Astrophysics Data Analysis Program grant NNX13AF38G. JB holds a NASA Earth and Space Science Fellowship.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, L.; Cluggish, B.; Kim, J. S.
2010-02-15
A Monte Carlo charge breeding code (MCBC) is being developed by FAR-TECH, Inc. to model the capture and charge breeding of a 1+ ion beam in an electron cyclotron resonance ion source (ECRIS) device. The ECRIS plasma is simulated using the generalized ECRIS model, which has two choices of boundary settings, the free boundary condition and the Bohm condition. The charge state distribution of the extracted beam ions is calculated by solving the steady state ion continuity equations, where the profiles of the captured ions are used as source terms. MCBC simulations of the charge breeding of Rb+ showed good agreement with recent charge breeding experiments at Argonne National Laboratory (ANL). Under the free boundary condition, MCBC correctly predicted the peak of the highly charged ion state outputs; under the Bohm condition it predicted a similar charge state distribution width but a lower peak charge state. The comparisons between the simulation results and ANL experimental measurements are presented and discussed.
NASA Astrophysics Data System (ADS)
Punov, Plamen; Milkov, Nikolay; Danel, Quentin; Perilhon, Christelle; Podevin, Pierre; Evtimov, Teodossi
2017-02-01
An optimization study of the Rankine cycle as a function of diesel engine operating mode is presented. The Rankine cycle here is studied as a waste heat recovery system which uses the engine exhaust gases as the heat source. The engine exhaust gas parameters (temperature, mass flow and composition) were defined by means of numerical simulation in the advanced simulation software AVL Boost. Previously, the engine simulation model was validated and the Vibe function parameters were defined as a function of engine load. The Rankine cycle output power and efficiency were numerically estimated by means of a simulation code in Python(x,y). This code includes a discretized heat exchanger model and simplified models of the pump and the expander based on their isentropic efficiencies. The Rankine cycle simulation revealed the optimum values of working fluid mass flow and evaporation pressure according to the heat source. Thus, the optimal Rankine cycle performance was obtained over the engine operating map.
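A heavily simplified sketch of the optimization described: grid-search the working-fluid mass flow and evaporation temperature for maximum expander power, subject to the heat available from the exhaust and a pinch constraint. The fluid-property and enthalpy expressions are toy stand-ins for real property tables, so only the shape of the problem is meaningful:

```python
import numpy as np

# Exhaust-gas heat source taken from an engine model (illustrative values).
m_gas, cp_gas, T_gas_in = 0.05, 1100.0, 900.0     # kg/s, J/(kg K), K
T_cond, pinch = 360.0, 30.0                        # condensation temperature, pinch [K]
cp_liq, eta_exp = 2000.0, 0.70                     # toy liquid cp, expander isentropic eff.

def expander_power(m_wf, T_evap):
    """Power for one (mass flow, evaporation temperature) candidate.  Toy model:
    preheat + a latent heat shrinking with T_evap, and an assumed ideal
    expansion enthalpy drop proportional to (T_evap - T_cond)."""
    latent = max(0.0, 3.0e5 * (1.0 - (T_evap - T_cond) / 300.0))
    q_needed = m_wf * (cp_liq * (T_evap - T_cond) + latent)        # W
    q_avail = m_gas * cp_gas * (T_gas_in - (T_evap + pinch))       # W
    if q_avail <= 0.0 or q_needed > q_avail:
        return 0.0                                                 # infeasible point
    dh_ideal = 1.2e5 * (T_evap - T_cond) / 300.0                   # J/kg (assumed)
    return m_wf * eta_exp * dh_ideal

m_grid = np.linspace(0.005, 0.12, 60)
T_grid = np.linspace(T_cond + 20.0, T_gas_in - pinch - 1.0, 60)
P = np.array([[expander_power(m, T) for T in T_grid] for m in m_grid])
i, j = np.unravel_index(P.argmax(), P.shape)
print(f"optimum: m_wf = {m_grid[i]:.3f} kg/s, T_evap = {T_grid[j]:.0f} K, "
      f"P = {P[i, j] / 1e3:.2f} kW")
```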
NASA Astrophysics Data System (ADS)
Limić, Nedzad; Valković, Vladivoj
1996-04-01
Pollution of coastal seas with toxic substances can be efficiently detected by examining toxic materials in sediment samples. These samples contain information on the overall pollution from surrounding sources such as yacht anchorages, nearby industries, sewage systems, etc. In an efficient analysis of pollution one must determine the contribution from each individual source. In this work it is demonstrated that a modelling method can be utilized for solving this latter problem. The modelling method is based on a unique interpretation of concentrations in sediments from all sampling stations. The proposed method is a synthesis consisting of the utilization of PIXE as an efficient method of pollution concentration determination and the code ANCOPOL (N. Limic and R. Benis, The computer code ANCOPOL, SimTel/msdos/geology, 1994 [1]) for the calculation of contributions from the main polluters. The efficiency and limits of the proposed method are demonstrated by discussing trace element concentrations in sediments of Punat Bay on the island of Krk in Croatia.
Python-Based Applications for Hydrogeological Modeling
NASA Astrophysics Data System (ADS)
Khambhammettu, P.
2013-12-01
Python is a general-purpose, high-level programming language whose design philosophy emphasizes code readability. Add-on packages supporting fast array computation (numpy), plotting (matplotlib), and scientific/mathematical functions (scipy) have resulted in a powerful ecosystem for scientists interested in exploratory data analysis, high-performance computing and data visualization. Three examples are provided to demonstrate the applicability of the Python environment in hydrogeological applications. Python programs were used to model an aquifer test and estimate aquifer parameters at a Superfund site. The aquifer test conducted at a Groundwater Circulation Well was modeled with the Python/FORTRAN-based TTIM Analytic Element Code. The aquifer parameters were estimated with PEST such that a good match was produced between the simulated and observed drawdowns. Python scripts were written to interface with PEST and visualize the results. A convolution-based approach was used to estimate source concentration histories based on observed concentrations at receptor locations. Unit Response Functions (URFs) that relate the receptor concentrations to a unit release at the source were derived with the ATRANS code. The impact of any releases at the source could then be estimated by convolving the source release history with the URFs. Python scripts were written to compute and visualize receptor concentrations for user-specified source histories. The framework provided a simple and elegant way to test various hypotheses about the site. A Python/FORTRAN-based program TYPECURVEGRID-Py was developed to compute and visualize groundwater elevations and drawdown through time in response to a regional uniform hydraulic gradient and the influence of pumping wells using either the Theis solution for a fully-confined aquifer or the Hantush-Jacob solution for a leaky confined aquifer. The program supports an arbitrary number of wells that can operate according to arbitrary schedules. The Python wrapper invokes the underlying FORTRAN layer to compute transient groundwater elevations and processes this information to create time-series and 2D plots.
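Two of the building blocks mentioned in the abstract above are easy to illustrate in a few lines of Python. The sketch below (not the site-specific scripts) evaluates Theis drawdown with scipy and convolves a hypothetical source release history with a hypothetical unit response function, as in the ATRANS-based convolution workflow.

    import numpy as np
    from scipy.special import exp1

    def theis_drawdown(r, t, Q, T, S):
        """Theis drawdown (m) at radius r (m) and time t (d), pumping rate Q (m3/d),
        transmissivity T (m2/d), storativity S (-)."""
        u = r ** 2 * S / (4.0 * T * t)
        return Q / (4.0 * np.pi * T) * exp1(u)

    def receptor_concentration(source_history, urf):
        """Superpose a unit response function over a (possibly time-varying) source release."""
        return np.convolve(source_history, urf)[:len(source_history)]

    t = np.linspace(1.0, 100.0, 100)                       # days
    print(theis_drawdown(r=50.0, t=t, Q=500.0, T=250.0, S=1e-4)[-1])

    urf = 0.01 * np.exp(-0.05 * np.arange(200))            # hypothetical unit response function
    history = np.zeros(200)
    history[10:60] = 5.0                                   # hypothetical release history
    print(receptor_concentration(history, urf).max())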
NASA Astrophysics Data System (ADS)
Froger, Etienne
1993-05-01
A description of the electromagnetic behavior of a satellite subjected to an electric discharge is given using a specially developed numerical code. One of the particularities of vacuum discharges, obtained by irradiation of polymers, is the intense emission of electrons into the spacecraft environment. Electromagnetic radiation, associated with the trajectories of the particles around the spacecraft, is considered as the main source of the interference observed. In the absence of accurate orbital data and realistic ground tests, the assessment of these effects requires numerical simulation of the interaction between this electron source and the spacecraft. This is done by the GEODE particle code which is applied to characteristic configurations in order to estimate the spacecraft response to a discharge, which is simulated from a vacuum discharge model designed in laboratory. The spacecraft response to a current injection is simulated by the ALICE numerical three dimensional code. The comparison between discharge and injection effects, from the results given by the two codes, illustrates the representativity of electromagnetic susceptibility tests and the main parameters for their definition.
The Particle Accelerator Simulation Code PyORBIT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gorlov, Timofey V; Holmes, Jeffrey A; Cousineau, Sarah M
2015-01-01
The particle accelerator simulation code PyORBIT is presented. The structure, implementation, history, parallel and simulation capabilities, and future development of the code are discussed. The PyORBIT code is a new implementation and extension of algorithms of the original ORBIT code that was developed for the Spallation Neutron Source accelerator at the Oak Ridge National Laboratory. The PyORBIT code has a two-level structure. The upper level uses the Python programming language to control the flow of intensive calculations performed by the lower-level code implemented in the C++ language. The parallel capabilities are based on MPI communications. PyORBIT is an open-source code accessible to the public through the Google Open Source Projects Hosting service.
NASA Technical Reports Server (NTRS)
Bade, W. L.; Yos, J. M.
1975-01-01
The present, third volume of the final report is a programmer's manual for the code. It provides a listing of the FORTRAN 4 source program; a complete glossary of FORTRAN symbols; a discussion of the purpose and method of operation of each subroutine (including mathematical analyses of special algorithms); and a discussion of the operation of the code on IBM/360 and UNIVAC 1108 systems, including required control cards and the overlay structure used to accommodate the code to the limited core size of the 1108. In addition, similar information is provided to document the programming of the NOZFIT code, which is employed to set up nozzle profile curvefits for use in NATA.
HEMCO v1.0: A Versatile, ESMF-Compliant Component for Calculating Emissions in Atmospheric Models
NASA Technical Reports Server (NTRS)
Keller, C. A.; Long, M. S.; Yantosca, R. M.; Da Silva, A. M.; Pawson, S.; Jacob, D. J.
2014-01-01
We describe the Harvard-NASA Emission Component version 1.0 (HEMCO), a stand-alone software component for computing emissions in global atmospheric models. HEMCO determines emissions from different sources, regions, and species on a user-defined grid and can combine, overlay, and update a set of data inventories and scale factors, as specified by the user through the HEMCO configuration file. New emission inventories at any spatial and temporal resolution are readily added to HEMCO and can be accessed by the user without any preprocessing of the data files or modification of the source code. Emissions that depend on dynamic source types and local environmental variables such as wind speed or surface temperature are calculated in separate HEMCO extensions. HEMCO is fully compliant with the Earth System Modeling Framework (ESMF) environment. It is highly portable and can be deployed in a new model environment with only a few adjustments at the top-level interface. So far, we have implemented HEMCO in the NASA Goddard Earth Observing System (GEOS-5) Earth system model (ESM) and in the GEOS-Chem chemical transport model (CTM). By providing a widely applicable framework for specifying constituent emissions, HEMCO is designed to ease sensitivity studies and model comparisons, as well as inverse modeling in which emissions are adjusted iteratively. The HEMCO code, extensions, and the full set of emissions data files used in GEOS-Chem are available at http://wiki.geos-chem.org/HEMCO.
Prediction of Turbulence-Generated Noise in Unheated Jets. Part 2; JeNo Users' Manual (Version 1.0)
NASA Technical Reports Server (NTRS)
Khavaran, Abbas; Wolter, John D.; Koch, L. Danielle
2009-01-01
JeNo (Version 1.0) is a Fortran90 computer code that calculates the far-field sound spectral density produced by axisymmetric, unheated jets at a user-specified observer location and frequency range. The user must provide a structured computational grid and a mean flow solution from a Reynolds-Averaged Navier Stokes (RANS) code as input. Turbulence kinetic energy and its dissipation rate from a k-epsilon or k-omega turbulence model must also be provided. JeNo is a research code, and as such, its development is ongoing. The goal is to create a code that is able to accurately compute far-field sound pressure levels for jets at all observer angles and all operating conditions. In order to achieve this goal, current theories must be combined with the best practices in numerical modeling, all of which must be validated by experiment. Since the acoustic predictions from JeNo are based on the mean flow solutions from a RANS code, quality predictions depend on accurate aerodynamic input. This is why acoustic source modeling and turbulence modeling, together with the development of advanced measurement systems, are the leading areas of jet noise research at NASA Glenn Research Center.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Onai, M., E-mail: onai@ppl.appi.keio.ac.jp; Fujita, S.; Hatayama, A.
2016-02-15
Recently, a filament-driven multi-cusp negative ion source has been developed for proton cyclotrons in medical applications. In this study, numerical modeling of the filament arc-discharge source plasma has been performed, combining kinetic modeling of electrons in the ion source plasma by the multi-cusp arc-discharge code with zero-dimensional rate equations for hydrogen molecules and negative ions. In this paper, the main focus is placed on the effects of the arc-discharge power on the electron energy distribution function and the resultant H− production. The modelling results reasonably explain the dependence of the H− extraction current on the arc-discharge power in the experiments.
A generic framework for individual-based modelling and physical-biological interaction
2018-01-01
The increased availability of high-resolution ocean data globally has enabled more detailed analyses of physical-biological interactions and their consequences to the ecosystem. We present IBMlib, a versatile, portable and computationally effective framework for conducting Lagrangian simulations in the marine environment. The purpose of the framework is to handle complex individual-level biological models of organisms, combined with realistic 3D oceanographic models of physics and biogeochemistry describing the environment of the organisms, without assumptions about spatial or temporal scales. The open-source framework features a minimal, robust interface to facilitate the coupling between individual-level biological models and oceanographic models, and we provide application examples including forward/backward simulations, habitat connectivity calculations, assessment of ocean conditions, comparison of physical circulation models, model ensemble runs and, recently, posterior Eulerian simulations using the IBMlib framework. We present the code design ideas behind the longevity of the code, our implementation experiences, as well as code performance benchmarking. The framework may contribute substantially to progress in representing, understanding, predicting and eventually managing marine ecosystems. PMID:29351280
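IBMlib itself is a Fortran framework; purely as a sketch of the generic coupling idea described above, the Python fragment below advects particles that carry individual-level biological state through placeholder current and temperature fields (the field interfaces and the growth rule are invented for the example).

    import numpy as np

    class Particle:
        def __init__(self, lon, lat, length_mm=5.0):
            self.pos = np.array([lon, lat], dtype=float)
            self.length = length_mm                         # individual-level biological state

    def advect(particle, current_at, dt):
        """Forward-Euler advection by the (placeholder) current field, in degrees/day."""
        particle.pos += current_at(particle.pos) * dt

    def grow(particle, temperature_at, dt):
        """Hypothetical temperature-dependent growth rule, in mm/day."""
        particle.length += 0.1 * max(temperature_at(particle.pos) - 4.0, 0.0) * dt

    # Placeholder environment interface standing in for an oceanographic model
    current_at = lambda p: np.array([0.02, 0.01])
    temperature_at = lambda p: 10.0 + 0.5 * (p[1] - 56.0)

    particles = [Particle(2.0 + 0.1 * i, 56.0) for i in range(10)]
    for day in range(30):
        for p in particles:
            advect(p, current_at, dt=1.0)
            grow(p, temperature_at, dt=1.0)
    print(particles[0].pos, round(particles[0].length, 2))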
Nuclear Resonance Fluorescence for Materials Assay
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quiter, Brian; Ludewigt, Bernhard; Mozin, Vladimir
This paper discusses the use of nuclear resonance fluorescence (NRF) techniques for the isotopic and quantitative assaying of radioactive material. Potential applications include age-dating of an unknown radioactive source, pre- and post-detonation nuclear forensics, and safeguards for nuclear fuel cycles. Examples of age-dating a strong radioactive source and assaying a spent fuel pin are discussed. The modeling work has been performed with the Monte Carlo radiation transport computer code MCNPX, and the capability to simulate NRF has been added to the code. Also discussed are the limitations in MCNPX's photon transport physics for accurately describing photon scattering processes that are important contributions to the background and that impact the applicability of the NRF assay technique.
The National Transport Code Collaboration Module Library
NASA Astrophysics Data System (ADS)
Kritz, A. H.; Bateman, G.; Kinsey, J.; Pankin, A.; Onjun, T.; Redd, A.; McCune, D.; Ludescher, C.; Pletzer, A.; Andre, R.; Zakharov, L.; Lodestro, L.; Pearlstein, L. D.; Jong, R.; Houlberg, W.; Strand, P.; Wiley, J.; Valanju, P.; John, H. St.; Waltz, R.; Mandrekas, J.; Mau, T. K.; Carlsson, J.; Braams, B.
2004-12-01
This paper reports on the progress in developing a library of code modules under the auspices of the National Transport Code Collaboration (NTCC). Code modules are high-quality, fully documented software packages with a clearly defined interface. The modules provide a variety of functions, such as implementing numerical physics models; performing ancillary functions such as I/O or graphics; or providing tools for dealing with common issues in scientific programming such as portability of Fortran codes. Researchers in the plasma community submit code modules, and a review procedure is followed to ensure adherence to programming and documentation standards. The review process is designed to provide added confidence with regard to the use of the modules and to allow users and independent reviewers to validate the claims of the modules' authors. All modules include source code; clear instructions for compilation of binaries on a variety of target architectures; and test cases with well-documented input and output. All the NTCC modules and ancillary information, such as current standards and documentation, are available from the NTCC Module Library Website http://w3.pppl.gov/NTCC. The goal of the project is to develop a resource of value to builders of integrated modeling codes and to plasma physics researchers generally. Currently, there are more than 40 modules in the module library.
Versatile fusion source integrator AFSI for fast ion and neutron studies in fusion devices
NASA Astrophysics Data System (ADS)
Sirén, Paula; Varje, Jari; Äkäslompolo, Simppa; Asunta, Otto; Giroud, Carine; Kurki-Suonio, Taina; Weisen, Henri; JET Contributors, The
2018-01-01
ASCOT Fusion Source Integrator AFSI, an efficient tool for calculating fusion reaction rates and characterizing the fusion products, based on arbitrary reactant distributions, has been developed and is reported in this paper. Calculation of reactor-relevant D-D, D-T and D-3He fusion reactions has been implemented based on the Bosch-Hale fusion cross sections. The reactions can be calculated between arbitrary particle populations, including Maxwellian thermal particles and minority energetic particles. Reaction rate profiles, energy spectra and full 4D phase space distributions can be calculated for the non-isotropic reaction products. The code is especially suitable for integrated modelling in self-consistent plasma physics simulations as well as in the Serpent neutronics calculation chain. Validation of the model has been performed for neutron measurements at the JET tokamak and the code has been applied to predictive simulations in ITER.
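A minimal sketch of the underlying idea, not of AFSI itself: the rate coefficient between two arbitrary reactant velocity distributions can be estimated by Monte Carlo sampling, here with Maxwellian samplers and a placeholder cross-section standing in for the Bosch-Hale parameterization.

    import numpy as np

    def sigma_placeholder(e_rel_keV):
        """Hypothetical cross-section (m^2) vs relative energy; stands in for Bosch-Hale."""
        return 1e-28 * np.exp(-20.0 / np.sqrt(np.maximum(e_rel_keV, 1e-3)))

    def rate_coefficient(sample_v1, sample_v2, m_reduced_kg, n_samples=200_000, seed=0):
        """Monte Carlo estimate of <sigma*v> (m^3/s) between two arbitrary velocity samplers."""
        rng = np.random.default_rng(seed)
        v1 = sample_v1(rng, n_samples)             # (N, 3) velocities in m/s
        v2 = sample_v2(rng, n_samples)
        v_rel = np.linalg.norm(v1 - v2, axis=1)
        e_rel_keV = 0.5 * m_reduced_kg * v_rel ** 2 / 1.602e-16
        return np.mean(sigma_placeholder(e_rel_keV) * v_rel)

    def maxwellian(T_keV, m_kg):
        v_th = np.sqrt(1.602e-16 * T_keV / m_kg)
        return lambda rng, n: rng.normal(0.0, v_th, size=(n, 3))

    m_d = 3.344e-27                                # deuteron mass in kg
    sv = rate_coefficient(maxwellian(10.0, m_d), maxwellian(10.0, m_d), m_reduced_kg=m_d / 2.0)
    print("D-D-like volumetric rate for n = 1e19 m^-3:", 0.5 * 1e19 * 1e19 * sv, "m^-3 s^-1")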
Phase II Evaluation of Clinical Coding Schemes
Campbell, James R.; Carpenter, Paul; Sneiderman, Charles; Cohn, Simon; Chute, Christopher G.; Warren, Judith
1997-01-01
Abstract Objective: To compare three potential sources of controlled clinical terminology (READ codes version 3.1, SNOMED International, and Unified Medical Language System (UMLS) version 1.6) relative to attributes of completeness, clinical taxonomy, administrative mapping, term definitions and clarity (duplicate coding rate). Methods: The authors assembled 1929 source concept records from a variety of clinical information taken from four medical centers across the United States. The source data included medical as well as ample nursing terminology. The source records were coded in each scheme by an investigator and checked by the coding scheme owner. The codings were then scored by an independent panel of clinicians for acceptability. Codes were checked for definitions provided with the scheme. Codes for a random sample of source records were analyzed by an investigator for “parent” and “child” codes within the scheme. Parent and child pairs were scored by an independent panel of medical informatics specialists for clinical acceptability. Administrative and billing code mapping from the published scheme were reviewed for all coded records and analyzed by independent reviewers for accuracy. The investigator for each scheme exhaustively searched a sample of coded records for duplications. Results: SNOMED was judged to be significantly more complete in coding the source material than the other schemes (SNOMED* 70%; READ 57%; UMLS 50%; *p <.00001). SNOMED also had a richer clinical taxonomy judged by the number of acceptable first-degree relatives per coded concept (SNOMED* 4.56; UMLS 3.17; READ 2.14, *p <.005). Only the UMLS provided any definitions; these were found for 49% of records which had a coding assignment. READ and UMLS had better administrative mappings (composite score: READ* 40.6%; UMLS* 36.1%; SNOMED 20.7%, *p <. 00001), and SNOMED had substantially more duplications of coding assignments (duplication rate: READ 0%; UMLS 4.2%; SNOMED* 13.9%, *p <. 004) associated with a loss of clarity. Conclusion: No major terminology source can lay claim to being the ideal resource for a computer-based patient record. However, based upon this analysis of releases for April 1995, SNOMED International is considerably more complete, has a compositional nature and a richer taxonomy. It suffers from less clarity, resulting from a lack of syntax and evolutionary changes in its coding scheme. READ has greater clarity and better mapping to administrative schemes (ICD-10 and OPCS-4), is rapidly changing and is less complete. UMLS is a rich lexical resource, with mappings to many source vocabularies. It provides definitions for many of its terms. However, due to the varying granularities and purposes of its source schemes, it has limitations for representation of clinical concepts within a computer-based patient record. PMID:9147343
SPIDERMAN: an open-source code to model phase curves and secondary eclipses
NASA Astrophysics Data System (ADS)
Louden, Tom; Kreidberg, Laura
2018-06-01
We present SPIDERMAN (Secondary eclipse and Phase curve Integrator for 2D tempERature MAppiNg), a fast code for calculating exoplanet phase curves and secondary eclipses with arbitrary surface brightness distributions in two dimensions. Using a geometrical algorithm, the code solves exactly the area of sections of the disc of the planet that are occulted by the star. The code is written in C with a user-friendly Python interface, and is optimized to run quickly, with no loss in numerical precision. Approximately 1000 models can be generated per second in typical use, making Markov Chain Monte Carlo analyses practicable. The modular nature of the code allows easy comparison of the effect of multiple different brightness distributions for the data set. As a test case, we apply the code to archival data on the phase curve of WASP-43b using a physically motivated analytical model for the two-dimensional brightness map. The model provides a good fit to the data; however, it overpredicts the temperature of the nightside. We speculate that this could be due to the presence of clouds on the nightside of the planet, or additional reflected light from the dayside. When testing a simple cloud model, we find that the best-fitting model has a geometric albedo of 0.32 ± 0.02 and does not require a hot nightside. We also test for variation of the map parameters as a function of wavelength and find no statistically significant correlations. SPIDERMAN is available for download at https://github.com/tomlouden/spiderman.
An experimental MOSFET approach to characterize (192)Ir HDR source anisotropy.
Toye, W C; Das, K R; Todd, S P; Kenny, M B; Franich, R D; Johnston, P N
2007-09-07
The dose anisotropy around a (192)Ir HDR source in a water phantom has been measured using MOSFETs as relative dosimeters. In addition, modeling using the EGSnrc code has been performed to provide a complete dose distribution consistent with the MOSFET measurements. Doses around the Nucletron 'classic' (192)Ir HDR source were measured for a range of radial distances from 5 to 30 mm within a 40 x 30 x 30 cm(3) water phantom, using a TN-RD-50 MOSFET dosimetry system with an active area of 0.2 mm by 0.2 mm. For each successive measurement a linear stepper capable of movement in intervals of 0.0125 mm re-positioned the MOSFET at the required radial distance, while a rotational stepper enabled angular displacement of the source at intervals of 0.9 degrees . The source-dosimeter arrangement within the water phantom was modeled using the standardized cylindrical geometry of the DOSRZnrc user code. In general, the measured relative anisotropy at each radial distance from 5 mm to 30 mm is in good agreement with the EGSnrc simulations, benchmark Monte Carlo simulation and TLD measurements where they exist. The experimental approach employing a MOSFET detection system of small size, high spatial resolution and fast read out capability allowed a practical approach to the determination of dose anisotropy around a HDR source.
NASA Astrophysics Data System (ADS)
Hackstein, S.; Vazza, F.; Brüggen, M.; Sorce, J. G.; Gottlöber, S.
2018-04-01
We simulate the propagation of cosmic rays at ultra-high energies, ≳10^18 eV, in models of extragalactic magnetic fields in constrained simulations of the local Universe. We use constrained initial conditions with the cosmological magnetohydrodynamics code ENZO. The resulting models of the distribution of magnetic fields in the local Universe are used in the CRPROPA code to simulate the propagation of ultra-high energy cosmic rays. We investigate the impact of six different magneto-genesis scenarios, both primordial and astrophysical, on the propagation of cosmic rays over cosmological distances. Moreover, we study the influence of different source distributions around the Milky Way. Our study shows that different scenarios of magneto-genesis do not have a large impact on the anisotropy measurements of ultra-high energy cosmic rays. However, at high energies above the Greisen-Zatsepin-Kuzmin (GZK)-limit, there is anisotropy caused by the distribution of nearby sources, independent of the magnetic field model. This provides a chance to identify cosmic ray sources with future full-sky measurements and high number statistics at the highest energies. Finally, we compare our results to the dipole signal measured by the Pierre Auger Observatory. All our source models and magnetic field models could reproduce the observed dipole amplitude with a pure iron injection composition. Our results indicate that the dipole is observed due to clustering of secondary nuclei in the direction of nearby sources of heavy nuclei. A light injection composition is disfavoured, since the increase in dipole angular power from 4 to 8 EeV is too slow compared to observations by the Pierre Auger Observatory.
Model fitting data from syllogistic reasoning experiments.
Hattori, Masasi
2016-12-01
The data presented in this article are related to the research article entitled "Probabilistic representation in syllogistic reasoning: A theory to integrate mental models and heuristics" (M. Hattori, 2016) [1]. This article presents data predicted by three signature probabilistic models of syllogistic reasoning and model fitting results for each of a total of 12 experiments (N = 404) in the literature. Models are implemented in R, and their source code is also provided.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alexandrov, Boian S.; Lliev, Filip L.; Stanev, Valentin G.
This code is a toy (short) version of CODE-2016-83. From a general perspective, the code represents an unsupervised adaptive machine learning algorithm that allows efficient, high-performance de-mixing and feature extraction of a multitude of non-negative signals mixed and recorded by a network of uncorrelated sensor arrays. The code identifies the number of the mixed original signals and their locations. Further, the code also allows deciphering of signals that have been delayed with respect to the mixing process in each sensor. This code is highly customizable and can be used efficiently for fast macro-analyses of data. The code is applicable to a plethora of distinct problems: chemical decomposition, pressure transient decomposition, unknown source/signal allocation, and EM signal decomposition. An additional procedure for allocation of the unknown sources is incorporated in the code.
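The released code is not reproduced here; as a hedged illustration of the general de-mixing problem it addresses (separating non-negative mixed signals recorded by several sensors), a standard non-negative matrix factorization does the job on a toy example:

    import numpy as np
    from sklearn.decomposition import NMF

    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 10.0, 500)

    # Two hypothetical non-negative source signals and a random non-negative mixing matrix
    sources = np.vstack([np.exp(-0.5 * (t - 3.0) ** 2), 0.5 * (1.0 + np.sin(t)) ** 2])
    mixing = rng.uniform(0.1, 1.0, size=(6, 2))            # 6 sensors observing 2 sources
    observed = mixing @ sources + 0.01 * rng.uniform(size=(6, t.size))

    # De-mix: observed (sensors x time) ~ W (sensors x k) @ H (k x time), all non-negative
    model = NMF(n_components=2, init="nndsvda", max_iter=1000)
    W = model.fit_transform(observed)
    H = model.components_
    print("reconstruction error:", model.reconstruction_err_)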
A simple stochastic weather generator for ecological modeling
A.G. Birt; M.R. Valdez-Vivas; R.M. Feldman; C.W. Lafon; D. Cairns; R.N. Coulson; M. Tchakerian; W. Xi; Jim Guldin
2010-01-01
Stochastic weather generators are useful tools for exploring the relationship between organisms and their environment. This paper describes a simple weather generator that can be used in ecological modeling projects. We provide a detailed description of methodology, and links to full C++ source code (http://weathergen.sourceforge.net) required to implement or modify...
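The paper links to full C++ source; for language consistency with the other sketches in this collection, the following Python fragment illustrates one common weather-generator design, a first-order Markov chain for wet/dry occurrence with exponential rain amounts and an AR(1) temperature anomaly. All parameter values are hypothetical.

    import numpy as np

    def generate_weather(n_days, p_wet_given_dry=0.25, p_wet_given_wet=0.6,
                         mean_rain_mm=6.0, t_mean=12.0, t_ar1=0.7, t_sigma=3.0, seed=0):
        """Daily precipitation (mm) and temperature (deg C) from a first-order Markov
        occurrence model, exponential rain amounts, and an AR(1) temperature anomaly."""
        rng = np.random.default_rng(seed)
        rain = np.zeros(n_days)
        temp = np.zeros(n_days)
        wet, anomaly = False, 0.0
        for d in range(n_days):
            p_wet = p_wet_given_wet if wet else p_wet_given_dry
            wet = rng.uniform() < p_wet
            rain[d] = rng.exponential(mean_rain_mm) if wet else 0.0
            anomaly = t_ar1 * anomaly + rng.normal(0.0, t_sigma) * np.sqrt(1.0 - t_ar1 ** 2)
            temp[d] = t_mean + anomaly - (1.5 if wet else 0.0)   # wet days slightly cooler
        return rain, temp

    rain, temp = generate_weather(365)
    print(rain.sum(), temp.mean())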
Flexible configuration-interaction shell-model many-body solver
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Calvin W.; Ormand, W. Erich; McElvain, Kenneth S.
BIGSTICK is a flexible, configuration-interaction, open-source shell-model code for the many-fermion problem in a shell-model (occupation representation) framework. BIGSTICK can generate energy spectra, static and transition one-body densities, and expectation values of scalar operators. Using the built-in Lanczos algorithm, one can compute transition probability distributions and decompose wave functions into components defined by group theory.
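BIGSTICK is a Fortran code; as a small illustration of the built-in Lanczos idea mentioned above, the sketch below tridiagonalizes a random symmetric stand-in "Hamiltonian" and extracts approximate extremal eigenvalues (full reorthogonalization is used for simplicity, which a production shell-model code handles far more carefully).

    import numpy as np

    def lanczos_ritz_values(A, k=40, seed=0):
        """Ritz values from a k-step Lanczos tridiagonalization of the symmetric matrix A."""
        rng = np.random.default_rng(seed)
        n = A.shape[0]
        Q = np.zeros((n, k + 1))
        alpha, beta = np.zeros(k), np.zeros(k)
        q = rng.normal(size=n)
        Q[:, 0] = q / np.linalg.norm(q)
        for j in range(k):
            w = A @ Q[:, j]
            alpha[j] = Q[:, j] @ w
            w = w - alpha[j] * Q[:, j] - (beta[j - 1] * Q[:, j - 1] if j > 0 else 0.0)
            w = w - Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)   # full reorthogonalization (simplest choice)
            beta[j] = np.linalg.norm(w)
            if beta[j] < 1e-12:                           # invariant subspace found; stop early
                alpha, beta = alpha[:j + 1], beta[:j + 1]
                break
            Q[:, j + 1] = w / beta[j]
        m = len(alpha)
        T = np.diag(alpha) + np.diag(beta[:m - 1], 1) + np.diag(beta[:m - 1], -1)
        return np.linalg.eigvalsh(T)

    # Random symmetric matrix as a stand-in for a shell-model Hamiltonian
    rng = np.random.default_rng(1)
    H = rng.normal(size=(400, 400))
    H = 0.5 * (H + H.T)
    print(lanczos_ritz_values(H, k=60)[:3])   # lowest Ritz values (extremes converge first)
    print(np.linalg.eigvalsh(H)[:3])          # exact lowest eigenvalues for comparison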
Assume-Guarantee Verification of Source Code with Design-Level Assumptions
NASA Technical Reports Server (NTRS)
Giannakopoulou, Dimitra; Pasareanu, Corina S.; Cobleigh, Jamieson M.
2004-01-01
Model checking is an automated technique that can be used to determine whether a system satisfies certain required properties. To address the 'state explosion' problem associated with this technique, we propose to integrate assume-guarantee verification at different phases of system development. During design, developers build abstract behavioral models of the system components and use them to establish key properties of the system. To increase the scalability of model checking at this level, we have developed techniques that automatically decompose the verification task by generating component assumptions for the properties to hold. The design-level artifacts are subsequently used to guide the implementation of the system, but also to enable more efficient reasoning at the source-code level. In particular, we propose to use design-level assumptions to similarly decompose the verification of the actual system implementation. We demonstrate our approach on a significant NASA application, where design-level models were used to identify and correct a safety property violation, and design-level assumptions allowed us to check successfully that the property was preserved by the implementation.
Evaluation of Proteus as a Tool for the Rapid Development of Models of Hydrologic Systems
NASA Astrophysics Data System (ADS)
Weigand, T. M.; Farthing, M. W.; Kees, C. E.; Miller, C. T.
2013-12-01
Models of modern hydrologic systems can be complex and involve a variety of operators with varying character. The goal is to implement approximations of such models that are both efficient for the developer and computationally efficient, which is a set of naturally competing objectives. Proteus is a Python-based toolbox that supports prototyping of model formulations as well as a wide variety of modern numerical methods and parallel computing. We used Proteus to develop numerical approximations for three models: Richards' equation, a brine flow model derived using the Thermodynamically Constrained Averaging Theory (TCAT), and a multiphase TCAT-based tumor growth model. For Richards' equation, we investigated discontinuous Galerkin solutions with higher order time integration based on the backward difference formulas. The TCAT brine flow model was implemented using Proteus and a variety of numerical methods were compared to hand coded solutions. Finally, an existing tumor growth model was implemented in Proteus to introduce more advanced numerics and allow the code to be run in parallel. From these three example models, Proteus was found to be an attractive open-source option for rapidly developing high quality code for solving existing and evolving computational science models.
Documenting AUTOGEN and APGEN Model Files
NASA Technical Reports Server (NTRS)
Gladden, Roy E.; Khanampompan, Teerapat; Fisher, Forest W.; DelGuericio, Chris c.
2008-01-01
A computer program called "autogen hypertext map generator" satisfies a need for documenting and assisting in visualization of, and navigation through, model files used in the AUTOGEN and APGEN software mentioned in the two immediately preceding articles. This program parses autogen script files, autogen model files, PERL scripts, and apgen activity-definition files and produces a hypertext map of the files to aid in the navigation of the model. This program also provides a facility for adding notes and descriptions, beyond what is in the source model represented by the hypertext map. Further, this program provides access to a summary of the model through variable, function, subroutine, activity and resource declarations, as well as providing full access to the source model and source code. The use of the tool enables easy access to the declarations and the ability to traverse routines and calls while analyzing the model.
Alberti, Luca; Colombo, Loris; Formentin, Giovanni
2018-04-15
The Lombardy Region in Italy is one of the most urbanized and industrialized areas in Europe. The presence of countless sources of groundwater pollution is therefore a matter of environmental concern. The sources of groundwater contamination can be classified into two different categories: 1) Point Sources (PS), which correspond to areas releasing plumes of high concentrations (i.e. hot-spots) and 2) Multiple-Point Sources (MPS) consisting in a series of unidentifiable small sources clustered within large areas, generating an anthropogenic diffuse contamination. The latter category frequently predominates in European Functional Urban Areas (FUA) and cannot be managed through standard remediation techniques, mainly because detecting the many different source areas releasing small contaminant mass in groundwater is unfeasible. A specific legislative action has been recently enacted at Regional level (DGR IX/3510-2012), in order to identify areas prone to anthropogenic diffuse pollution and their level of contamination. With a view to defining a management plan, it is necessary to find where MPS are most likely positioned. This paper describes a methodology devised to identify the areas with the highest likelihood to host potential MPS. A groundwater flow model was implemented for a pilot area located in the Milan FUA and through the PEST code, a Null-Space Monte Carlo method was applied in order to generate a suite of several hundred hydraulic conductivity field realizations, each maintaining the model in a calibrated state and each consistent with the modelers' expert-knowledge. Thereafter, the MODPATH code was applied to generate back-traced advective flowpaths for each of the models built using the conductivity field realizations. Maps were then created displaying the number of backtracked particles that crossed each model cell in each stochastic calibrated model. The result is considered to be representative of the FUAs areas with the highest likelihood to host MPS responsible for diffuse contamination. Copyright © 2017 Elsevier B.V. All rights reserved.
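A minimal sketch of the final mapping step described above: given backtracked pathlines from an ensemble of stochastic calibrated models, count how many pathlines cross each model cell to build a relative-likelihood map of potential MPS locations. The grid and the random-walk "pathlines" below are hypothetical stand-ins for MODPATH output.

    import numpy as np

    def crossing_count_map(paths, x_edges, y_edges):
        """Count, for each grid cell, how many backtracked pathlines cross it at least once."""
        counts = np.zeros((len(x_edges) - 1, len(y_edges) - 1), dtype=int)
        for path in paths:                                  # path: (n_points, 2) array of x, y
            ix = np.digitize(path[:, 0], x_edges) - 1
            iy = np.digitize(path[:, 1], y_edges) - 1
            ok = (ix >= 0) & (ix < counts.shape[0]) & (iy >= 0) & (iy < counts.shape[1])
            for cell in set(zip(ix[ok], iy[ok])):           # count each cell once per pathline
                counts[cell] += 1
        return counts

    # Hypothetical ensemble: random walks standing in for MODPATH backtracked pathlines
    rng = np.random.default_rng(0)
    paths = [np.cumsum(rng.normal(0.0, 25.0, size=(200, 2)), axis=0) + [500.0, 500.0]
             for _ in range(300)]
    x_edges = y_edges = np.linspace(0.0, 1000.0, 51)        # 50 x 50 cells of 20 m
    likelihood_map = crossing_count_map(paths, x_edges, y_edges)
    print(likelihood_map.max(), np.unravel_index(likelihood_map.argmax(), likelihood_map.shape))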
NASA Astrophysics Data System (ADS)
Jenkins, Thomas G.; Held, Eric D.
2015-09-01
Neoclassical tearing modes are macroscopic (L ∼ 1 m) instabilities in magnetic fusion experiments; if unchecked, these modes degrade plasma performance and may catastrophically destroy plasma confinement by inducing a disruption. Fortunately, the use of properly tuned and directed radiofrequency waves (λ ∼ 1 mm) can eliminate these modes. Numerical modeling of this difficult multiscale problem requires the integration of separate mathematical models for each length and time scale (Jenkins and Kruger, 2012 [21]); the extended MHD model captures macroscopic plasma evolution while the RF model tracks the flow and deposition of injected RF power through the evolving plasma profiles. The scale separation enables use of the eikonal (ray-tracing) approximation to model the RF wave propagation. In this work we demonstrate a technique, based on methods of computational geometry, for mapping the ensuing RF data (associated with discrete ray trajectories) onto the finite-element/pseudospectral grid that is used to model the extended MHD physics. In the new representation, the RF data can then be used to construct source terms in the equations of the extended MHD model, enabling quantitative modeling of RF-induced tearing mode stabilization. Though our specific implementation uses the NIMROD extended MHD (Sovinec et al., 2004 [22]) and GENRAY RF (Smirnov et al., 1994 [23]) codes, the approach presented can be applied more generally to any code coupling requiring the mapping of ray tracing data onto Eulerian grids.
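The paper's computational-geometry machinery is not reproduced here; the 2-D Python sketch below only illustrates the basic mapping task, depositing power carried by sample points along discrete ray trajectories into cells of a structured (R, Z) grid to form a source term. The ray bundle and decay rate are invented for the example.

    import numpy as np

    def deposit_ray_power(ray_points, ray_power, r_edges, z_edges):
        """Accumulate power deposited at sample points along rays into (R, Z) grid cells."""
        source = np.zeros((len(r_edges) - 1, len(z_edges) - 1))
        ir = np.digitize(ray_points[:, 0], r_edges) - 1
        iz = np.digitize(ray_points[:, 1], z_edges) - 1
        inside = (ir >= 0) & (ir < source.shape[0]) & (iz >= 0) & (iz < source.shape[1])
        np.add.at(source, (ir[inside], iz[inside]), ray_power[inside])
        return source

    # Hypothetical ray bundle: points along straight trajectories, power decaying as exp(-3 s)
    rng = np.random.default_rng(0)
    s = np.linspace(0.0, 1.0, 200)
    points, powers = [], []
    for _ in range(50):
        start = np.array([2.2, 0.6]) + 0.05 * rng.normal(size=2)
        direction = np.array([-0.5, -0.6]) + 0.05 * rng.normal(size=2)
        points.append(start + np.outer(s, direction))
        powers.append(3.0 * np.exp(-3.0 * s) * (s[1] - s[0]))   # deposition per sample point
    points, powers = np.vstack(points), np.concatenate(powers)
    grid = deposit_ray_power(points, powers, np.linspace(1.2, 2.4, 61), np.linspace(-0.8, 0.8, 81))
    print(grid.sum(), grid.max())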
Implementation and Testing of Turbulence Models for the F18-HARV Simulation
NASA Technical Reports Server (NTRS)
Yeager, Jessie C.
1998-01-01
This report presents three methods of implementing the Dryden power spectral density model for atmospheric turbulence. Included are the equations which define the three methods and computer source code written in Advanced Continuous Simulation Language to implement the equations. Time-history plots and sample statistics of simulated turbulence results from executing the code in a test program are also presented. Power spectral densities were computed for sample sequences of turbulence and are plotted for comparison with the Dryden spectra. The three model implementations were installed in a nonlinear six-degree-of-freedom simulation of the High Alpha Research Vehicle airplane. Aircraft simulation responses to turbulence generated with the three implementations are presented as plots.
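The report's three implementations are not reproduced here; as a hedged sketch of the longitudinal component only, the fragment below exploits the fact that the Dryden u-spectrum corresponds to an exponential autocorrelation, so an exactly discretized first-order Gauss-Markov (AR(1)) process driven by Gaussian noise reproduces it. The airspeed, scale length and intensity are illustrative values.

    import numpy as np

    def dryden_u_gust(n_steps, dt, V, L_u, sigma_u, seed=0):
        """Longitudinal gust series from the exact AR(1) discretization of a first-order
        Gauss-Markov process with scale length L_u (m), airspeed V (m/s), std sigma_u (m/s)."""
        rng = np.random.default_rng(seed)
        a = np.exp(-V * dt / L_u)                  # per-step correlation coefficient
        u = np.zeros(n_steps)
        for k in range(1, n_steps):
            u[k] = a * u[k - 1] + sigma_u * np.sqrt(1.0 - a * a) * rng.normal()
        return u

    # Illustrative values: 100 m/s airspeed, 533 m scale length, 1.5 m/s turbulence intensity
    gust = dryden_u_gust(n_steps=20000, dt=0.01, V=100.0, L_u=533.0, sigma_u=1.5)
    print(gust.std())                              # should come out close to sigma_u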
Multidimensional incremental parsing for universal source coding.
Bae, Soo Hyun; Juang, Biing-Hwang
2008-10-01
A multidimensional incremental parsing algorithm (MDIP) for multidimensional discrete sources, as a generalization of the Lempel-Ziv coding algorithm, is investigated. It consists of three essential component schemes: maximum decimation matching, hierarchical structure of multidimensional source coding, and dictionary augmentation. As a counterpart of the longest match search in the Lempel-Ziv algorithm, two classes of maximum decimation matching are studied. Also, an underlying behavior of the dictionary augmentation scheme for estimating the source statistics is examined. For an m-dimensional source, m augmentative patches are appended into the dictionary at each coding epoch, thus requiring the transmission of a substantial amount of information to the decoder. The property of the hierarchical structure of the source coding algorithm resolves this issue by successively incorporating lower dimensional coding procedures in the scheme. In regard to universal lossy source coders, we propose two distortion functions, the local average distortion and the local minimax distortion with a set of threshold levels for each source symbol. For performance evaluation, we implemented three image compression algorithms based upon the MDIP; one is lossless and the others are lossy. The lossless image compression algorithm does not perform better than Lempel-Ziv-Welch coding, but experimentally shows efficiency in capturing the source structure. The two lossy image compression algorithms are implemented using the two distortion functions, respectively. The algorithm based on the local average distortion is efficient at minimizing the signal distortion, but the images produced by the one with the local minimax distortion have good perceptual fidelity compared with the other compression algorithms. Our insights inspire future research on feature extraction of multidimensional discrete sources.
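For orientation, the one-dimensional ancestor of the scheme, incremental LZ78-style parsing with dictionary augmentation, fits in a few lines; the multidimensional algorithm generalizes the longest-match step to maximum decimation matching, which is not reproduced here.

    def lz78_parse(data):
        """Incrementally parse a sequence into (dictionary index, new symbol) phrases."""
        dictionary = {(): 0}                       # phrase -> index; the empty phrase is 0
        phrases, current = [], ()
        for symbol in data:
            candidate = current + (symbol,)
            if candidate in dictionary:
                current = candidate                # keep extending the longest match
            else:
                phrases.append((dictionary[current], symbol))
                dictionary[candidate] = len(dictionary)   # dictionary augmentation
                current = ()
        if current:                                # flush a trailing partial phrase
            phrases.append((dictionary[current[:-1]], current[-1]))
        return phrases

    print(lz78_parse("abababbabbba"))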
Comparison of the thermal neutron scattering treatment in MCNP6 and GEANT4 codes
NASA Astrophysics Data System (ADS)
Tran, H. N.; Marchix, A.; Letourneau, A.; Darpentigny, J.; Menelle, A.; Ott, F.; Schwindling, J.; Chauvin, N.
2018-06-01
To ensure the reliability of simulation tools, verification and comparison should be performed regularly. This paper describes the work performed to compare the neutron transport treatment in MCNP6.1 and GEANT4-10.3 in the thermal energy range. This work focuses on the thermal neutron scattering processes for several potential materials which would be involved in the neutron source designs of Compact Accelerator-based Neutron Sources (CANS), such as beryllium metal, beryllium oxide, polyethylene, graphite, para-hydrogen, light water, heavy water, aluminium and iron. Both the thermal scattering law and the free gas model, coming from the evaluated data library ENDF/B-VII, were considered. It was observed that the GEANT4.10.03-patch2 version was not able to properly account for the coherent elastic process occurring in crystal lattices. This bug is addressed in this work, and the fix should be included in the next release of the code. Cross-section sampling and integral tests have been performed for both simulation codes, showing a fair agreement between the two codes for most of the materials except for iron and aluminium.
A soft X-ray source based on a low divergence, high repetition rate ultraviolet laser
NASA Astrophysics Data System (ADS)
Crawford, E. A.; Hoffman, A. L.; Milroy, R. D.; Quimby, D. C.; Albrecht, G. F.
The CORK code is utilized to evaluate the applicability of low divergence ultraviolet lasers for efficient production of soft X-rays. The use of the axial hydrodynamic code with a one-zone radial expansion to estimate radial motion and laser energy is examined. The calculation of ionization levels of the plasma and radiation rates by employing the atomic physics and radiation model included in the CORK code is described. Computations using the hydrodynamic code to determine the effect of laser intensity, spot size, and wavelength on plasma electron temperature are provided. The X-ray conversion efficiencies of the lasers are analyzed. It is observed that for a 1 GW laser power the X-ray conversion efficiency is a function of spot size, only weakly dependent on pulse length for time scales exceeding 100 psec, and better conversion efficiencies are obtained at shorter wavelengths. It is concluded that these small lasers focused to 30 micron spot sizes and 10^14 W/sq cm intensities are useful sources of 1-2 keV radiation.
NASA Astrophysics Data System (ADS)
Woolsey, L. N.; Cranmer, S. R.
2013-12-01
The study of solar wind acceleration has made several important advances recently due to improvements in modeling techniques. Existing code and simulations test the competing theories for coronal heating, which include reconnection/loop-opening (RLO) models and wave/turbulence-driven (WTD) models. In order to compare and contrast the validity of these theories, we need flexible tools that predict the emergent solar wind properties from a wide range of coronal magnetic field structures such as coronal holes, pseudostreamers, and helmet streamers. ZEPHYR (Cranmer et al. 2007) is a one-dimensional magnetohydrodynamics code that includes Alfven wave generation and reflection and the resulting turbulent heating to accelerate solar wind in open flux tubes. We present the ZEPHYR output for a wide range of magnetic field geometries to show the effect of the magnetic field profiles on wind properties. We also investigate the competing acceleration mechanisms found in ZEPHYR to determine the relative importance of increased gas pressure from turbulent heating and the separate pressure source from the Alfven waves. To do so, we developed a code that will become publicly available for solar wind prediction. This code, TEMPEST, provides an outflow solution based on only one input: the magnetic field strength as a function of height above the photosphere. It uses correlations found in ZEPHYR between the magnetic field strength at the source surface and the temperature profile of the outflow solution to compute the wind speed profile based on the increased gas pressure from turbulent heating. With this initial solution, TEMPEST then adds in the Alfven wave pressure term to the modified Parker equation and iterates to find a stable solution for the wind speed. This code, therefore, can make predictions of the wind speeds that will be observed at 1 AU based on extrapolations from magnetogram data, providing a useful tool for empirical forecasting of the solar wind.
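TEMPEST itself is only announced above, so the sketch below merely illustrates the kind of iteration involved: solving the classical isothermal Parker wind relation for the outflow speed at each radius by root finding. The temperature and mean molecular weight are illustrative, and the wave-pressure term that TEMPEST adds to the modified Parker equation is omitted.

    import numpy as np
    from scipy.optimize import brentq

    G, M_SUN, AU = 6.674e-11, 1.989e30, 1.496e11

    def parker_wind_speed(r, T=1.5e6, mu=0.6, m_p=1.673e-27, k_B=1.381e-23):
        """Isothermal Parker wind speed (m/s) at heliocentric distance r (m) via root finding."""
        c_s = np.sqrt(k_B * T / (mu * m_p))        # isothermal sound speed
        r_c = G * M_SUN / (2.0 * c_s ** 2)         # critical (sonic) radius

        def f(v):                                  # transcendental Parker relation
            w = (v / c_s) ** 2
            return w - np.log(w) - 4.0 * np.log(r / r_c) - 4.0 * r_c / r + 3.0

        if r > r_c:                                # supersonic branch outside the sonic point
            return brentq(f, 1.0001 * c_s, 20.0 * c_s)
        return brentq(f, 1e-4 * c_s, 0.9999 * c_s) # subsonic branch inside it

    print(parker_wind_speed(1.0 * AU) / 1e3, "km/s at 1 AU")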
An Efficient Variable Length Coding Scheme for an IID Source
NASA Technical Reports Server (NTRS)
Cheung, K. -M.
1995-01-01
A scheme is examined for using two alternating Huffman codes to encode a discrete independent and identically distributed source with a dominant symbol. This combined strategy, or alternating runlength Huffman (ARH) coding, was found to be more efficient than ordinary coding in certain circumstances.
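The report's analysis is not reproduced here; purely as a reminder of the building block, a minimal Huffman code constructor is sketched below. In an ARH-style scheme, one such code would be applied to run lengths of the dominant symbol and a second to the remaining symbols, alternating between the two.

    import heapq
    from collections import Counter

    def huffman_code(frequencies):
        """Build a binary prefix code (symbol -> bit string) from a {symbol: count} map."""
        heap = [(count, i, [symbol]) for i, (symbol, count) in enumerate(frequencies.items())]
        heapq.heapify(heap)
        codes = {symbol: "" for symbol in frequencies}
        next_id = len(heap)
        while len(heap) > 1:
            c1, _, group1 = heapq.heappop(heap)     # two least frequent groups
            c2, _, group2 = heapq.heappop(heap)
            for s in group1:
                codes[s] = "0" + codes[s]
            for s in group2:
                codes[s] = "1" + codes[s]
            heapq.heappush(heap, (c1 + c2, next_id, group1 + group2))
            next_id += 1
        return codes

    # Example: a source with a dominant symbol 'a', as assumed by the ARH strategy
    message = "aaaabaaacaaaabaaaab"
    print(huffman_code(Counter(message)))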
Development Of A Parallel Performance Model For The THOR Neutral Particle Transport Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yessayan, Raffi; Azmy, Yousry; Schunert, Sebastian
The THOR neutral particle transport code enables simulation of complex geometries for various problems from reactor simulations to nuclear non-proliferation. It is undergoing a thorough V&V requiring computational efficiency. This has motivated various improvements including angular parallelization, outer iteration acceleration, and development of peripheral tools. For guiding future improvements to the code’s efficiency, better characterization of its parallel performance is useful. A parallel performance model (PPM) can be used to evaluate the benefits of modifications and to identify performance bottlenecks. Using INL’s Falcon HPC, the PPM development incorporates an evaluation of network communication behavior over heterogeneous links and a functional characterization of the per-cell/angle/group runtime of each major code component. After evaluating several possible sources of variability, this resulted in a communication model and a parallel portion model. The former’s accuracy is bounded by the variability of communication on Falcon while the latter has an error on the order of 1%.
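A generic sketch of the kind of two-part model described above: predicted runtime is the sum of a per-cell/angle/group compute term and a latency-plus-bandwidth communication term. All coefficients and problem sizes below are hypothetical and would in practice be fitted to measured data, as in the abstract.

    def predicted_runtime(n_cells, n_angles, n_groups, n_ranks, n_sweeps=200,
                          t_per_unknown=2.0e-7,      # fitted compute cost per cell-angle-group (s)
                          latency=5.0e-6,            # per-message latency on the slowest link (s)
                          inv_bandwidth=1.0e-9,      # seconds per byte on the slowest link
                          bytes_per_unknown=8, faces_per_rank=4):
        """Toy compute-plus-communication runtime model for a sweep-based transport solve."""
        work_per_rank = n_cells * n_angles * n_groups / n_ranks
        compute = n_sweeps * work_per_rank * t_per_unknown
        boundary_unknowns = (n_cells / n_ranks) ** (2.0 / 3.0) * n_angles * n_groups
        message_bytes = boundary_unknowns * bytes_per_unknown
        communicate = n_sweeps * faces_per_rank * (latency + inv_bandwidth * message_bytes)
        return compute + communicate

    for ranks in (16, 64, 256, 1024):
        print(ranks, round(predicted_runtime(2_000_000, 80, 27, ranks), 2), "seconds")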
GeoFramework: A Modeling Framework for Solid Earth Geophysics
NASA Astrophysics Data System (ADS)
Gurnis, M.; Aivazis, M.; Tromp, J.; Tan, E.; Thoutireddy, P.; Liu, Q.; Choi, E.; Dicaprio, C.; Chen, M.; Simons, M.; Quenette, S.; Appelbe, B.; Aagaard, B.; Williams, C.; Lavier, L.; Moresi, L.; Law, H.
2003-12-01
As data sets in geophysics become larger and of greater relevance to other earth science disciplines, and as earth science becomes more interdisciplinary in general, modeling tools are being driven in new directions. There is now a greater need to link modeling codes to one another, link modeling codes to multiple datasets, and to make modeling software available to non-modeling specialists. Coupled with rapid progress in computer hardware (including the computational speed afforded by massively parallel computers), progress in numerical algorithms, and the introduction of software frameworks, these lofty goals of merging software in geophysics are now possible. The GeoFramework project, a collaboration between computer scientists and geoscientists, is a response to these needs and opportunities. GeoFramework is based on and extends Pyre, a Python-based modeling framework, recently developed to link solid (Lagrangian) and fluid (Eulerian) models, as well as mesh generators, visualization packages, and databases, with one another for engineering applications. The utility and generality of Pyre as a general purpose framework in science is now being recognized. Besides its use in engineering and geophysics, it is also being used in particle physics and astronomy. Geology and geophysics impose their own unique requirements on software frameworks which are not generally available in existing frameworks and so there is a need for research in this area. One of the special requirements is the way Lagrangian and Eulerian codes will need to be linked in time and space within a plate tectonics context. GeoFramework has grown beyond its initial goal of linking a limited number of existing codes together. The following codes are now being reengineered within the context of Pyre: Tecton, a 3-D FE visco-elastic code for lithospheric relaxation; CitComS, a code for spherical mantle convection; SpecFEM3D, a SEM code for global and regional seismic waves; eqsim, a FE code for dynamic earthquake rupture; SNAC, a developing 3-D code based on the FLAC method for visco-elastoplastic deformation; SNARK, a 3-D FE-PIC method for viscoplastic deformation; and gPLATES, an open-source paleogeographic/plate tectonics modeling package. We will demonstrate how codes can be linked with themselves, such as a regional and global model of mantle convection and a visco-elastoplastic representation of the crust within viscous mantle flow. Finally, we will describe how http://GeoFramework.org has become a distribution site for a suite of modeling software in geophysics.
WISE PHOTOMETRY FOR 400 MILLION SDSS SOURCES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lang, Dustin; Hogg, David W.; Schlegel, David J., E-mail: dstndstn@gmail.com
2016-02-15
We present photometry of images from the Wide-Field Infrared Survey Explorer (WISE) of over 400 million sources detected by the Sloan Digital Sky Survey (SDSS). We use a “forced photometry” technique, using measured SDSS source positions, star–galaxy classification, and galaxy profiles to define the sources whose fluxes are to be measured in the WISE images. We perform photometry with The Tractor image modeling code, working on our “unWISE” coadds and taking account of the WISE point-spread function and a noise model. The result is a measurement of the flux of each SDSS source in each WISE band. Many sources have little flux in the WISE bands, so often the measurements we report are consistent with zero given our uncertainties. However, for many sources we get 3σ or 4σ measurements; these sources would not be reported by the “official” WISE pipeline and will not appear in the WISE catalog, yet they can be highly informative for some scientific questions. In addition, these small-signal measurements can be used in stacking analyses at the catalog level. The forced photometry approach has the advantage that we measure a consistent set of sources between SDSS and WISE, taking advantage of the resolution and depth of the SDSS images to interpret the WISE images; objects that are resolved in SDSS but blended together in WISE still have accurate measurements in our photometry. Our results, and the code used to produce them, are publicly available at http://unwise.me.
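A minimal sketch of the forced-photometry idea rather than of The Tractor itself: with source positions and profiles held fixed from the deeper survey, each source contributes a known unit-flux model image, and the only free parameters are the fluxes, which follow from a linear least-squares fit to the lower-resolution image. The blended pair below is synthetic.

    import numpy as np

    def gaussian_model_image(shape, x0, y0, fwhm):
        """Unit-flux circular Gaussian source model at a fixed (x0, y0) position."""
        sigma = fwhm / 2.3548
        y, x = np.mgrid[:shape[0], :shape[1]]
        img = np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2.0 * sigma ** 2))
        return img / img.sum()

    def forced_photometry(image, positions, fwhm, pixel_sigma):
        """Solve for per-source fluxes with positions held fixed (linear least squares)."""
        A = np.column_stack([gaussian_model_image(image.shape, x, y, fwhm).ravel()
                             for x, y in positions])
        fluxes, *_ = np.linalg.lstsq(A / pixel_sigma, image.ravel() / pixel_sigma, rcond=None)
        return fluxes

    # Synthetic blended pair: two known positions, one bright source and one faint one
    rng = np.random.default_rng(0)
    positions = [(20.0, 22.0), (24.0, 22.0)]               # blended at the coarser resolution
    true_fluxes = np.array([50.0, 3.0])
    image = sum(f * gaussian_model_image((45, 45), x, y, fwhm=6.0)
                for f, (x, y) in zip(true_fluxes, positions))
    image = image + rng.normal(0.0, 0.05, size=image.shape)
    print(forced_photometry(image, positions, fwhm=6.0, pixel_sigma=0.05))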
NASA Astrophysics Data System (ADS)
van Dijk, Jan; Hartgers, Bart; van der Mullen, Joost
2006-10-01
Self-consistent modelling of plasma sources requires a simultaneous treatment of multiple physical phenomena. As a result, plasma codes have a high degree of complexity. And with the growing interest in time-dependent modelling of non-equilibrium plasma in three dimensions, codes tend to become increasingly hard to explain and maintain. As a result of these trends, there has been an increased interest in the software-engineering and implementation aspects of plasma modelling in our group at Eindhoven University of Technology. In this contribution we will present modern object-oriented techniques in C++ to solve an old problem: that of the discretisation of coupled linear(ized) equations involving multiple field variables on ortho-curvilinear meshes. The `LinSys' code has been tailored to the transport equations that occur in transport physics. The implementation has been made both efficient and user-friendly by using modern idioms like expression templates and template meta-programming. Live demonstrations will be given. The code is available to interested parties; please visit www.dischargemodelling.org.
IPOLE - semi-analytic scheme for relativistic polarized radiative transport
NASA Astrophysics Data System (ADS)
Mościbrodzka, M.; Gammie, C. F.
2018-03-01
We describe IPOLE, a new public ray-tracing code for covariant, polarized radiative transport. The code extends the IBOTHROS scheme for covariant, unpolarized transport using two representations of the polarized radiation field: In the coordinate frame, it parallel transports the coherency tensor; in the frame of the plasma it evolves the Stokes parameters under emission, absorption, and Faraday conversion. The transport step is implemented to be as spacetime- and coordinate- independent as possible. The emission, absorption, and Faraday conversion step is implemented using an analytic solution to the polarized transport equation with constant coefficients. As a result, IPOLE is stable, efficient, and produces a physically reasonable solution even for a step with high optical depth and Faraday depth. We show that the code matches analytic results in flat space, and that it produces results that converge to those produced by Dexter's GRTRANS polarized transport code on a complicated model problem. We expect IPOLE will mainly find applications in modelling Event Horizon Telescope sources, but it may also be useful in other relativistic transport problems such as modelling for the IXPE mission.
Modeling carbon production and transport during ELMs in DIII-D
NASA Astrophysics Data System (ADS)
Hogan, J.; Wade, M.; Coster, D.; Lasnier, C.
2004-11-01
Large-scale Type I ELM events could provide a significant C source in ITER, and C production rates depend on incident D flux density and surface temperature, quantities which can vary significantly during an ELM event. Recent progress on DIII-D has improved opportunities for code comparison. Fast time-scale measurements of divertor CIII evolution [1] and fast edge CER measurements of C profile evolution during low-density DIII-D LSN ELMy H-modes (type I) [2] have been modeled using the solps5.0/Eirene99 coupled edge code and time dependent thermal analysis codes. An ELM model based on characteristics of MHD peeling-ballooning modes reproduces the pedestal evolution. Qualitative agreement for the CIII evolution during an ELM event is found using the Roth et al annealing model for chemical sputtering and the sensitivity to other models is described. Significant ELM-to-ELM variations in observed maximum divertor target IR temperature during nominally identical ELMs are investigated with models for C emission from micron-scale dust particles. [1] M Groth, M Fenstermacher et al J Nucl Mater 2003, [2] M Wade, K Burrell et al PSI-16
Source Code Plagiarism--A Student Perspective
ERIC Educational Resources Information Center
Joy, M.; Cosma, G.; Yau, J. Y.-K.; Sinclair, J.
2011-01-01
This paper considers the problem of source code plagiarism by students within the computing disciplines and reports the results of a survey of students in Computing departments in 18 institutions in the U.K. This survey was designed to investigate how well students understand the concept of source code plagiarism and to discover what, if any,…
Saint: a lightweight integration environment for model annotation.
Lister, Allyson L; Pocock, Matthew; Taschuk, Morgan; Wipat, Anil
2009-11-15
Saint is a web application which provides a lightweight annotation integration environment for quantitative biological models. The system enables modellers to rapidly mark up models with biological information derived from a range of data sources. Saint is freely available for use on the web at http://www.cisban.ac.uk/saint. The web application is implemented in Google Web Toolkit and Tomcat, with all major browsers supported. The Java source code is freely available for download at http://saint-annotate.sourceforge.net. The Saint web server requires an installation of libSBML and has been tested on Linux (32-bit Ubuntu 8.10 and 9.04).
Time-dependent jet flow and noise computations
NASA Technical Reports Server (NTRS)
Berman, C. H.; Ramos, J. I.; Karniadakis, G. E.; Orszag, S. A.
1990-01-01
Methods for computing jet turbulence noise based on the time-dependent solution of Lighthill's (1952) differential equation are demonstrated. A key element in this approach is a flow code for solving the time-dependent Navier-Stokes equations at relatively high Reynolds numbers. Jet flow results at Re = 10,000 are presented here. This code combines a computationally efficient spectral element technique and a new self-consistent turbulence subgrid model to supply values for Lighthill's turbulence noise source tensor.
MELCOR computer code manuals: Primer and user's guides, Version 1.8.3, September 1994. Volume 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Summers, R.M.; Cole, R.K. Jr.; Smith, R.C.
1995-03-01
MELCOR is a fully integrated, engineering-level computer code that models the progression of severe accidents in light water reactor nuclear power plants. MELCOR is being developed at Sandia National Laboratories for the US Nuclear Regulatory Commission as a second-generation plant risk assessment tool and the successor to the Source Term Code Package. A broad spectrum of severe accident phenomena in both boiling and pressurized water reactors is treated in MELCOR in a unified framework. These include: thermal-hydraulic response in the reactor coolant system, reactor cavity, containment, and confinement buildings; core heatup, degradation, and relocation; core-concrete attack; hydrogen production, transport, and combustion; fission product release and transport; and the impact of engineered safety features on thermal-hydraulic and radionuclide behavior. Current uses of MELCOR include estimation of severe accident source terms and their sensitivities and uncertainties in a variety of applications. This publication of the MELCOR computer code manuals corresponds to MELCOR 1.8.3, released to users in August, 1994. Volume 1 contains a primer that describes MELCOR's phenomenological scope, organization (by package), and documentation. The remainder of Volume 1 contains the MELCOR Users' Guides, which provide the input instructions and guidelines for each package. Volume 2 contains the MELCOR Reference Manuals, which describe the phenomenological models that have been implemented in each package.
Development of MCNPX-ESUT computer code for simulation of neutron/gamma pulse height distribution
NASA Astrophysics Data System (ADS)
Abolfazl Hosseini, Seyed; Vosoughi, Naser; Zangian, Mehdi
2015-05-01
In this paper, the development of the MCNPX-ESUT (MCNPX-Energy Engineering of Sharif University of Technology) computer code for simulation of neutron/gamma pulse height distribution is reported. Since liquid organic scintillators like NE-213 are well suited and routinely used for spectrometry in mixed neutron/gamma fields, this type of detectors is selected for simulation in the present study. The proposed algorithm for simulation includes four main steps. The first step is the modeling of the neutron/gamma particle transport and their interactions with the materials in the environment and detector volume. In the second step, the number of scintillation photons due to charged particles such as electrons, alphas, protons and carbon nuclei in the scintillator material is calculated. In the third step, the transport of scintillation photons in the scintillator and lightguide is simulated. Finally, the resolution corresponding to the experiment is considered in the last step of the simulation. Unlike the similar computer codes like SCINFUL, NRESP7 and PHRESP, the developed computer code is applicable to both neutron and gamma sources. Hence, the discrimination of neutron and gamma in the mixed fields may be performed using the MCNPX-ESUT computer code. The main feature of MCNPX-ESUT computer code is that the neutron/gamma pulse height simulation may be performed without needing any sort of post processing. In the present study, the pulse height distributions due to a monoenergetic neutron/gamma source in NE-213 detector using MCNPX-ESUT computer code is simulated. The simulated neutron pulse height distributions are validated through comparing with experimental data (Gohil et al. Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 664 (2012) 304-309.) and the results obtained from similar computer codes like SCINFUL, NRESP7 and Geant4. The simulated gamma pulse height distribution for a 137Cs source is also compared with the experimental data.
Recent advances in coding theory for near error-free communications
NASA Technical Reports Server (NTRS)
Cheung, K.-M.; Deutsch, L. J.; Dolinar, S. J.; Mceliece, R. J.; Pollara, F.; Shahshahani, M.; Swanson, L.
1991-01-01
Channel and source coding theories are discussed. The following subject areas are covered: large constraint length convolutional codes (the Galileo code); decoder design (the big Viterbi decoder); Voyager's and Galileo's data compression scheme; current research in data compression for images; neural networks for soft decoding; neural networks for source decoding; finite-state codes; and fractals for data compression.
NASA Astrophysics Data System (ADS)
Watanabe, Yukinobu; Kin, Tadahiro; Araki, Shouhei; Nakayama, Shinsuke; Iwamoto, Osamu
2017-09-01
A comprehensive research program on deuteron nuclear data motivated by development of accelerator-based neutron sources is being executed. It is composed of measurements of neutron and gamma-ray yields and production cross sections, modelling of deuteron-induced reactions and code development, nuclear data evaluation and benchmark test, and its application to medical radioisotopes production. The goal of this program is to develop a state-of-the-art deuteron nuclear data library up to 200 MeV which will be useful for the design of future (d,xn) neutron sources. The current status and future plan are reviewed.
NASA Astrophysics Data System (ADS)
Liu, G.; Wu, C.; Li, X.; Song, P.
2013-12-01
The 3D urban geological information system has been a major part of the national urban geological survey project of China Geological Survey in recent years. Large amount of multi-source and multi-subject data are to be stored in the urban geological databases. There are various models and vocabularies drafted and applied by industrial companies in urban geological data. The issues such as duplicate and ambiguous definition of terms and different coding structure increase the difficulty of information sharing and data integration. To solve this problem, we proposed a national standard-driven information classification and coding method to effectively store and integrate urban geological data, and we applied the data dictionary technology to achieve structural and standard data storage. The overall purpose of this work is to set up a common data platform to provide information sharing service. Research progresses are as follows: (1) A unified classification and coding method for multi-source data based on national standards. Underlying national standards include GB 9649-88 for geology and GB/T 13923-2006 for geography. Current industrial models are compared with national standards to build a mapping table. The attributes of various urban geological data entity models are reduced to several categories according to their application phases and domains. Then a logical data model is set up as a standard format to design data file structures for a relational database. (2) A multi-level data dictionary for data standardization constraint. Three levels of data dictionary are designed: model data dictionary is used to manage system database files and enhance maintenance of the whole database system; attribute dictionary organizes fields used in database tables; term and code dictionary is applied to provide a standard for urban information system by adopting appropriate classification and coding methods; comprehensive data dictionary manages system operation and security. (3) An extension to system data management function based on data dictionary. Data item constraint input function is making use of the standard term and code dictionary to get standard input result. Attribute dictionary organizes all the fields of an urban geological information database to ensure the consistency of term use for fields. Model dictionary is used to generate a database operation interface automatically with standard semantic content via term and code dictionary. The above method and technology have been applied to the construction of Fuzhou Urban Geological Information System, South-East China with satisfactory results.
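The term-and-code dictionary idea described above can be illustrated with a minimal sketch: industrial synonyms are resolved to a national-standard term and code at input time, so only standardized values reach the database. All terms and codes below are hypothetical placeholders, not entries from GB 9649-88 or GB/T 13923-2006.

```python
# Hypothetical standard term-and-code dictionary (placeholder codes).
STANDARD_TERMS = {
    "silty clay": {"code": "GEO-0101", "category": "lithology"},
    "fine sand":  {"code": "GEO-0102", "category": "lithology"},
    "borehole":   {"code": "SUR-0201", "category": "survey"},
}

# Duplicate or ambiguous industrial terms mapped to the standard vocabulary.
INDUSTRY_SYNONYMS = {
    "silt clay": "silty clay",
    "clayey silt (fine)": "silty clay",
    "drill hole": "borehole",
}

def standardize(term: str) -> dict:
    """Resolve an industrial term to its standard code, enforcing the
    data-dictionary constraint at data-input time."""
    canonical = INDUSTRY_SYNONYMS.get(term.lower(), term.lower())
    if canonical not in STANDARD_TERMS:
        raise ValueError(f"term '{term}' has no standard code; review required")
    return {"term": canonical, **STANDARD_TERMS[canonical]}

print(standardize("drill hole"))  # {'term': 'borehole', 'code': 'SUR-0201', 'category': 'survey'}
```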
Hybrid concatenated codes and iterative decoding
NASA Technical Reports Server (NTRS)
Divsalar, Dariush (Inventor); Pollara, Fabrizio (Inventor)
2000-01-01
Several improved turbo code apparatuses and methods. The invention encompasses several classes: (1) A data source is applied to two or more encoders with an interleaver between the source and each of the second and subsequent encoders. Each encoder outputs a code element which may be transmitted or stored. A parallel decoder provides the ability to decode the code elements to derive the original source information d without use of a received data signal corresponding to d. The output may be coupled to a multilevel trellis-coded modulator (TCM). (2) A data source d is applied to two or more encoders with an interleaver between the source and each of the second and subsequent encoders. Each of the encoders outputs a code element. In addition, the original data source d is output from the encoder. All of the output elements are coupled to a TCM. (3) At least two data sources are applied to two or more encoders with an interleaver between each source and each of the second and subsequent encoders. The output may be coupled to a TCM. (4) At least two data sources are applied to two or more encoders with at least two interleavers between each source and each of the second and subsequent encoders. (5) At least one data source is applied to one or more serially linked encoders through at least one interleaver. The output may be coupled to a TCM. The invention includes a novel way of terminating a turbo coder.
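A minimal sketch of the parallel-concatenation structure described in class (1) follows, assuming a toy recursive (accumulator) convolutional encoder and a pseudorandom interleaver. The systematic bits are kept here for clarity, whereas the class (1) arrangement can omit the direct data signal; this illustrates the structure only, not the patented apparatus.

```python
import numpy as np

def rsc_parity(bits):
    """Parity stream of a toy recursive (accumulator) convolutional encoder."""
    state, out = 0, []
    for b in bits:
        state ^= int(b)          # feedback: running mod-2 sum
        out.append(state)
    return np.array(out, dtype=np.uint8)

def turbo_encode(data, rng=None):
    """Parallel concatenation: systematic bits, parity of the data, and parity
    of an interleaved copy of the data fed to the second encoder."""
    rng = np.random.default_rng(0) if rng is None else rng
    data = np.asarray(data, dtype=np.uint8)
    interleaver = rng.permutation(data.size)       # pseudorandom interleaver
    return data, rsc_parity(data), rsc_parity(data[interleaver]), interleaver

d = np.random.randint(0, 2, 16).astype(np.uint8)
systematic, parity1, parity2, pi = turbo_encode(d)
```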
Design Pattern Mining Using Distributed Learning Automata and DNA Sequence Alignment
Esmaeilpour, Mansour; Naderifar, Vahideh; Shukur, Zarina
2014-01-01
Context Over the last decade, design patterns have been used extensively to provide reusable solutions to frequently encountered problems in software engineering and object-oriented programming. A design pattern is a repeatable software design solution that provides a template for solving various instances of a general problem. Objective This paper describes a new method for pattern mining that isolates design patterns and the relationships between them, together with a related tool, DLA-DNA, applied to all implemented patterns and all projects used for evaluation. Based on distributed learning automata (DLA) and deoxyribonucleic acid (DNA) sequence alignment, DLA-DNA achieves acceptable precision and recall compared with the other evaluated tools. Method The proposed method mines structural design patterns in object-oriented source code and extracts the strong and weak relationships between them, enabling analysts and programmers to determine the dependency rate of each object, component, and other section of the code for parameter passing and modular programming. The proposed model detects design patterns, and the strengths of their relationships, better than the other available tools Pinot, PTIDEJ, and DPJF. Results The results demonstrate that whether the source code is built in a standard or non-standard way with respect to the design patterns, the proposed method performs close to DPJF and better than Pinot and PTIDEJ. The proposed model was tested on several source codes and compared with related models and available tools; on average, its precision and recall are 20% and 9.6% higher than Pinot, 27% and 31% higher than PTIDEJ, and 3.3% and 2% higher than DPJF, respectively. Conclusion The proposed method is organized in two steps: in the first step, elemental design patterns are identified; in the second step, these are composed to recognize actual design patterns. PMID:25243670
Low-energy electron dose-point kernel simulations using new physics models implemented in Geant4-DNA
NASA Astrophysics Data System (ADS)
Bordes, Julien; Incerti, Sébastien; Lampe, Nathanael; Bardiès, Manuel; Bordage, Marie-Claude
2017-05-01
When low-energy electrons, such as Auger electrons, interact with liquid water, they induce highly localized ionizing energy depositions over ranges comparable to cell diameters. Monte Carlo track structure (MCTS) codes are suitable tools for performing dosimetry at this level. One of the main MCTS codes, Geant4-DNA, is equipped with only two sets of cross section models for low-energy electron interactions in liquid water ("option 2" and its improved version, "option 4"). To provide Geant4-DNA users with new alternative physics models, a set of cross sections, extracted from the CPA100 MCTS code, has been added to Geant4-DNA. This new version is hereafter referred to as "Geant4-DNA-CPA100". In this study, "Geant4-DNA-CPA100" was used to calculate low-energy electron dose-point kernels (DPKs) between 1 keV and 200 keV. Such kernels represent the radial energy deposited by an isotropic point source, a parameter that is useful for dosimetry calculations in nuclear medicine. In order to assess the influence of different physics models on DPK calculations, DPKs were calculated using the existing Geant4-DNA models ("option 2" and "option 4"), the newly integrated CPA100 models, and the PENELOPE Monte Carlo code used in step-by-step mode for monoenergetic electrons. Additionally, a comparison was performed of two sets of DPKs that were simulated with "Geant4-DNA-CPA100": the first set using Geant4's default settings, and the second using CPA100's original code default settings. A maximum difference of 9.4% was found between the Geant4-DNA-CPA100 and PENELOPE DPKs. Between the two existing Geant4-DNA models, slight differences between 1 keV and 10 keV were observed. It was highlighted that the DPKs simulated with the two existing Geant4-DNA models were always broader than those generated with "Geant4-DNA-CPA100". The discrepancies observed between the DPKs generated using Geant4-DNA's existing models and "Geant4-DNA-CPA100" were caused solely by their different cross sections. The different scoring and interpolation methods used in CPA100 and Geant4 to calculate DPKs showed differences close to 3.0% near the source.
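For orientation, a commonly used scaled form of the dose-point kernel for a monoenergetic isotropic point source is sketched below; the notation is generic and not necessarily that used in the paper.

```latex
% Scaled dose-point kernel (a common convention; symbols generic):
%   D(r)  : absorbed dose per decay at radius r from the point source
%   E_0   : initial electron energy,  \rho : medium density,
%   r_0   : scaling range (e.g. the CSDA range)
F\!\left(\frac{r}{r_0}\right) \;=\; \frac{4\pi\,\rho\, r^{2}\, D(r)\, r_0}{E_0},
\qquad \int_{0}^{\infty} F(x)\,\mathrm{d}x \;\le\; 1
```

The integral gives the fraction of the emitted energy absorbed in the medium, which is why DPKs are convenient building blocks for dosimetry calculations in nuclear medicine.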
Russ, Daniel E; Ho, Kwan-Yuet; Colt, Joanne S; Armenti, Karla R; Baris, Dalsu; Chow, Wong-Ho; Davis, Faith; Johnson, Alison; Purdue, Mark P; Karagas, Margaret R; Schwartz, Kendra; Schwenn, Molly; Silverman, Debra T; Johnson, Calvin A; Friesen, Melissa C
2016-06-01
Mapping job titles to standardised occupation classification (SOC) codes is an important step in identifying occupational risk factors in epidemiological studies. Because manual coding is time-consuming and has moderate reliability, we developed an algorithm called SOCcer (Standardized Occupation Coding for Computer-assisted Epidemiologic Research) to assign SOC-2010 codes based on free-text job description components. Job title and task-based classifiers were developed by comparing job descriptions to multiple sources linking job and task descriptions to SOC codes. An industry-based classifier was developed based on the SOC prevalence within an industry. These classifiers were used in a logistic model trained using 14 983 jobs with expert-assigned SOC codes to obtain empirical weights for an algorithm that scored each SOC/job description. We assigned the highest scoring SOC code to each job. SOCcer was validated in 2 occupational data sources by comparing SOC codes obtained from SOCcer to expert assigned SOC codes and lead exposure estimates obtained by linking SOC codes to a job-exposure matrix. For 11 991 case-control study jobs, SOCcer-assigned codes agreed with 44.5% and 76.3% of manually assigned codes at the 6-digit and 2-digit level, respectively. Agreement increased with the score, providing a mechanism to identify assignments needing review. Good agreement was observed between lead estimates based on SOCcer and manual SOC assignments (κ 0.6-0.8). Poorer performance was observed for inspection job descriptions, which included abbreviations and worksite-specific terminology. Although some manual coding will remain necessary, using SOCcer may improve the efficiency of incorporating occupation into large-scale epidemiological studies.
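A minimal sketch of the scoring step is shown below: title, task, and industry classifier outputs for each candidate SOC code are combined with logistic-regression weights, and the highest-scoring code is assigned while its score can flag low-confidence assignments for review. The feature values and weights are hypothetical, not those trained in SOCcer.

```python
import numpy as np

# Hypothetical empirical weights (intercept, title, task, industry); illustrative only.
WEIGHTS = np.array([-2.0, 3.1, 1.7, 0.9])

def score(soc_features):
    """Logistic score that a candidate SOC code matches a job description.
    soc_features = (title_similarity, task_similarity, industry_prevalence)."""
    z = WEIGHTS[0] + WEIGHTS[1:] @ np.asarray(soc_features, dtype=float)
    return 1.0 / (1.0 + np.exp(-z))

def assign_soc(candidates):
    """Pick the highest-scoring SOC code; return the score so low-confidence
    assignments can be routed to manual review."""
    best_code, best_feats = max(candidates.items(), key=lambda kv: score(kv[1]))
    return best_code, score(best_feats)

job_candidates = {               # candidate SOC codes with illustrative features
    "47-2111": (0.82, 0.60, 0.30),   # electricians
    "49-9071": (0.40, 0.55, 0.10),   # maintenance and repair workers
}
print(assign_soc(job_candidates))
```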
Coding Response to a Case-Mix Measurement System Based on Multiple Diagnoses
Preyra, Colin
2004-01-01
Objective To examine the hospital coding response to a payment model using a case-mix measurement system based on multiple diagnoses and the resulting impact on a hospital cost model. Data Sources Financial, clinical, and supplementary data for all Ontario short stay hospitals from years 1997 to 2002. Study Design Disaggregated trends in hospital case-mix growth are examined for five years following the adoption of an inpatient classification system making extensive use of combinations of secondary diagnoses. Hospital case mix is decomposed into base and complexity components. The longitudinal effects of coding variation on a standard hospital payment model are examined in terms of payment accuracy and impact on adjustment factors. Principal Findings Introduction of the refined case-mix system provided incentives for hospitals to increase reporting of secondary diagnoses and resulted in growth in highest complexity cases that were not matched by increased resource use over time. Despite a pronounced coding response on the part of hospitals, the increase in measured complexity and case mix did not reduce the unexplained variation in hospital unit cost nor did it reduce the reliance on the teaching adjustment factor, a potential proxy for case mix. The main implication was changes in the size and distribution of predicted hospital operating costs. Conclusions Jurisdictions introducing extensive refinements to standard diagnostic related group (DRG)-type payment systems should consider the effects of induced changes to hospital coding practices. Assessing model performance should include analysis of the robustness of classification systems to hospital-level variation in coding practices. Unanticipated coding effects imply that case-mix models hypothesized to perform well ex ante may not meet expectations ex post. PMID:15230940
DLRS: gene tree evolution in light of a species tree.
Sjöstrand, Joel; Sennblad, Bengt; Arvestad, Lars; Lagergren, Jens
2012-11-15
PrIME-DLRS (or colloquially: 'Delirious') is a phylogenetic software tool to simultaneously infer and reconcile a gene tree given a species tree. It accounts for duplication and loss events, a relaxed molecular clock and is intended for the study of homologous gene families, for example in a comparative genomics setting involving multiple species. PrIME-DLRS uses a Bayesian MCMC framework, where the input is a known species tree with divergence times and a multiple sequence alignment, and the output is a posterior distribution over gene trees and model parameters. PrIME-DLRS is available for Java SE 6+ under the New BSD License, and JAR files and source code can be downloaded from http://code.google.com/p/jprime/. There is also a slightly older C++ version available as a binary package for Ubuntu, with download instructions at http://prime.sbc.su.se. The C++ source code is available upon request. joel.sjostrand@scilifelab.se or jens.lagergren@scilifelab.se. PrIME-DLRS is based on a sound probabilistic model (Åkerborg et al., 2009) and has been thoroughly validated on synthetic and biological datasets (Supplementary Material online).
TOWARD THE DEVELOPMENT OF A CONSENSUS MATERIALS DATABASE FOR PRESSURE TECHNOLOGY APPLICATIONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Swindeman, Robert W; Ren, Weiju
The ASME construction code books specify materials and fabrication procedures that are acceptable for pressure technology applications. However, with few exceptions, the materials properties provided in the ASME code books provide no statistics or other information pertaining to material variability. Such information is central to the prediction and prevention of failure events. Many sources of materials data exist that provide variability information, but such sources do not necessarily represent a consensus of experts with respect to the reported trends that are represented. Such a need has been identified by the ASME Standards Technology, LLC, and initial steps have been taken to address these needs; however, these steps are limited to project-specific applications only, such as the joint DOE-ASME project on materials for Generation IV nuclear reactors. In contrast to light-water reactor technology, the experience base for the Generation IV nuclear reactors is somewhat lacking and heavy reliance must be placed on model development and predictive capability. The database for model development is being assembled and includes existing code alloys such as alloy 800H and 9Cr-1Mo-V steel. Ownership and use rights are potential barriers that must be addressed.
Gschwind, Michael K
2013-07-23
Mechanisms for aggressively optimizing computer code are provided. With these mechanisms, a compiler determines an optimization to apply to a portion of source code and determines if the optimization as applied to the portion of source code will result in unsafe optimized code that introduces a new source of exceptions being generated by the optimized code. In response to a determination that the optimization is an unsafe optimization, the compiler generates an aggressively compiled code version, in which the unsafe optimization is applied, and a conservatively compiled code version in which the unsafe optimization is not applied. The compiler stores both versions and provides them for execution. Mechanisms are provided for switching between these versions during execution in the event of a failure of the aggressively compiled code version. Moreover, predictive mechanisms are provided for predicting whether such a failure is likely.
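The run-time switching idea can be sketched as follows, with ordinary Python functions standing in for the aggressively and conservatively compiled versions of the same source region; the mechanisms described in the patent operate on compiler-generated code, so this is only a conceptual illustration.

```python
def run_with_fallback(aggressive, conservative, *args, **kwargs):
    """Execute the aggressively optimized version first; on a failure caused by
    the unsafe optimization (modelled here as any raised exception), fall back
    to the conservatively compiled version of the same code region."""
    try:
        return aggressive(*args, **kwargs)
    except Exception:
        return conservative(*args, **kwargs)

# Illustrative stand-ins for the two code versions of one source region.
def sum_inverse_aggressive(xs):
    # "unsafe" variant: assumes no zero entries, so no guard is emitted
    return sum(1.0 / x for x in xs)

def sum_inverse_conservative(xs):
    # safe variant: keeps the guard the optimizer would like to remove
    return sum(1.0 / x for x in xs if x != 0)

print(run_with_fallback(sum_inverse_aggressive, sum_inverse_conservative, [1, 2, 0, 4]))
```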
GRAYSKY-A new gamma-ray skyshine code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Witts, D.J.; Twardowski, T.; Watmough, M.H.
1993-01-01
This paper describes a new prototype gamma-ray skyshine code GRAYSKY (Gamma-RAY SKYshine) that has been developed at BNFL, as part of an industrially based master of science course, to overcome the problems encountered with SKYSHINEII and RANKERN. GRAYSKY is a point kernel code based on the use of a skyshine response function. The scattering within source or shield materials is accounted for by the use of buildup factors. This is an approximate method of solution but one that has been shown to produce results that are acceptable for dose rate predictions on operating plants. The novel features of GRAYSKY are as follows: 1. The code is fully integrated with a semianalytical point kernel shielding code, currently under development at BNFL, which offers powerful solid-body modeling capabilities. 2. The geometry modeling also allows the skyshine response function to be used in a manner that accounts for the shielding of air-scattered radiation. 3. Skyshine buildup factors calculated using the skyshine response function have been used as well as dose buildup factors.
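For reference, the generic point-kernel dose expression with a buildup factor that codes of this type evaluate is sketched below; the notation is generic rather than GRAYSKY's own.

```latex
% Generic point-kernel dose rate from an isotropic point source of strength S
% behind a shield of optical thickness \mu t, with buildup factor B:
\dot{D}(r) \;=\; k \,\frac{S}{4\pi r^{2}}\; B(E,\mu t)\; e^{-\mu t}
```

Here k is a fluence-to-dose conversion factor; the skyshine contribution is then obtained by folding such kernels with the skyshine response function.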
High-resolution, far-infrared observations of NGC 2071
NASA Technical Reports Server (NTRS)
Butner, Harold M.; Evans, Neal J., II; Harvey, Paul M.; Mundy, Lee G.; Natta, Antonella
1990-01-01
The far-IR emission of the visible reflection nebula NGC 2071 has been resolved at both 50 and 100 microns along several directions. The observations reveal an extended, roughly spherical source with an average source diameter of about 12 arcsec or 4700 AU at 50 microns and about 16 arcsec or 6200 AU at 100 microns. The source is modeled using a radiative transport code to match scans of the source and previous photometry. The luminosity of the source is 520 solar at a distance of 390 pc. The optical depth at 100 microns is 0.20, implying a mass of 1.2-10 solar within a radius of 5900 AU. The density gradient is in good agreement with theoretical models for infalling envelopes around protostars and in reasonable agreement with other observational determinations.
NASA Astrophysics Data System (ADS)
Sandalski, Stou
Smooth particle hydrodynamics is an efficient method for modeling the dynamics of fluids. It is commonly used to simulate astrophysical processes such as binary mergers. We present a newly developed GPU accelerated smooth particle hydrodynamics code for astrophysical simulations. The code is named
Simulation of Shear Alfvén Waves in LAPD using the BOUT++ code
NASA Astrophysics Data System (ADS)
Wei, Di; Friedman, B.; Carter, T. A.; Umansky, M. V.
2011-10-01
The linear and nonlinear physics of shear Alfvén waves is investigated using the 3D Braginskii fluid code BOUT++. The code has been verified against analytical calculations for the dispersion of kinetic and inertial Alfvén waves. Various mechanisms for forcing Alfvén waves in the code are explored, including introducing localized current sources similar to physical antennas used in experiments. Using this foundation, the code is used to model nonlinear interactions among shear Alfvén waves in a cylindrical magnetized plasma, such as that found in the Large Plasma Device (LAPD) at UCLA. In the future this investigation will allow for examination of the nonlinear interactions between shear Alfvén waves in both laboratory and space plasmas in order to compare to predictions of MHD turbulence.
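The analytical dispersion relations typically used for such verification are, in their standard kinetic and inertial limits (standard notation; not quoted from the abstract):

```latex
% Kinetic Alfven wave (warm-electron regime):
\omega^{2} \;=\; k_{\parallel}^{2}\, v_{A}^{2}\left(1 + k_{\perp}^{2}\rho_{s}^{2}\right)
% Inertial Alfven wave (cold-electron regime):
\omega^{2} \;=\; \frac{k_{\parallel}^{2}\, v_{A}^{2}}{1 + k_{\perp}^{2}\,\delta_{e}^{2}}
```

with v_A the Alfvén speed, ρ_s the ion sound gyroradius, and δ_e = c/ω_pe the electron inertial length.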
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zehtabian, M; Zaker, N; Sina, S
2015-06-15
Purpose: Different versions of the MCNP code are widely used for dosimetry purposes. The purpose of this study is to compare different versions of the MCNP codes in dosimetric evaluation of different brachytherapy sources. Methods: The TG-43 parameters such as dose rate constant, radial dose function, and anisotropy function of different brachytherapy sources, i.e. Pd-103, I-125, Ir-192, and Cs-137, were calculated in a water phantom. The results obtained by three versions of Monte Carlo codes (MCNP4C, MCNPX, MCNP5) were compared for low and high energy brachytherapy sources. Then the cross section library of the MCNP4C code was changed to ENDF/B-VI release 8, which is used in the MCNP5 and MCNPX codes. Finally, the TG-43 parameters obtained using the MCNP4C-revised code were compared with the other codes. Results: The results of these investigations indicate that for high energy sources, the differences in TG-43 parameters between the codes are less than 1% for Ir-192 and less than 0.5% for Cs-137. However, for low energy sources like I-125 and Pd-103, large discrepancies are observed in the g(r) values obtained by MCNP4C and the two other codes. The differences between g(r) values calculated using MCNP4C and MCNP5 at the distance of 6 cm were found to be about 17% and 28% for I-125 and Pd-103, respectively. The results obtained with MCNP4C-revised and MCNPX were similar. However, the maximum difference between the results obtained with the MCNP5 and MCNP4C-revised codes was 2% at 6 cm. Conclusion: The results indicate that using the MCNP4C code for dosimetry of low energy brachytherapy sources can cause large errors in the results. Therefore it is recommended not to use this code for low energy sources, unless its cross section library is changed. Since the results obtained with MCNP4C-revised and MCNPX were similar, it is concluded that the difference between MCNP4C and MCNPX is their cross section libraries.
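For context, the TG-43 parameters being compared enter the standard AAPM dose-rate formalism, written here in its line-source form:

```latex
% AAPM TG-43 dose-rate formalism (line-source approximation),
% reference point (r_0 = 1\,\mathrm{cm}, \theta_0 = \pi/2):
\dot{D}(r,\theta) \;=\; S_{K}\,\Lambda\;
\frac{G_{L}(r,\theta)}{G_{L}(r_{0},\theta_{0})}\; g_{L}(r)\; F(r,\theta)
```

with S_K the air-kerma strength, Λ the dose-rate constant, G_L the geometry function, g_L(r) the radial dose function, and F(r,θ) the 2D anisotropy function.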
NASA Astrophysics Data System (ADS)
Hosseini, S. A.; Zangian, M.; Aghabozorgi, S.
2018-03-01
In the present paper, the light output distribution due to poly-energetic neutron/gamma (neutron or gamma) source was calculated using the developed MCNPX-ESUT-PE (MCNPX-Energy engineering of Sharif University of Technology-Poly Energetic version) computational code. The simulation of light output distribution includes the modeling of the particle transport, the calculation of scintillation photons induced by charged particles, simulation of the scintillation photon transport and considering the light resolution obtained from the experiment. The developed computational code is able to simulate the light output distribution due to any neutron/gamma source. In the experimental step of the present study, the neutron-gamma discrimination based on the light output distribution was performed using the zero crossing method. As a case study, 241Am-9Be source was considered and the simulated and measured neutron/gamma light output distributions were compared. There is an acceptable agreement between the discriminated neutron/gamma light output distributions obtained from the simulation and experiment.
Nuclear Physics Meets the Sources of the Ultra-High Energy Cosmic Rays.
Boncioli, Denise; Fedynitch, Anatoli; Winter, Walter
2017-07-07
The determination of the injection composition of cosmic ray nuclei within astrophysical sources requires sufficiently accurate descriptions of the source physics and the propagation - apart from controlling astrophysical uncertainties. We therefore study the implications of nuclear data and models for cosmic ray astrophysics, which involves the photo-disintegration of nuclei up to iron in astrophysical environments. We demonstrate that the impact of nuclear model uncertainties is potentially larger in environments with non-thermal radiation fields than in the cosmic microwave background. We also study the impact of nuclear models on the nuclear cascade in a gamma-ray burst radiation field, simulated at a level of complexity comparable to the most precise cosmic ray propagation code. We conclude with an isotope chart describing which information is in principle necessary to describe nuclear interactions in cosmic ray sources and propagation.
Młynarski, Wiktor
2015-05-01
In mammalian auditory cortex, sound source position is represented by a population of broadly tuned neurons whose firing is modulated by sounds located at all positions surrounding the animal. Peaks of their tuning curves are concentrated at lateral positions, while their slopes are steepest at the interaural midline, allowing for maximum localization accuracy in that area. These experimental observations contradict initial assumptions that the auditory space is represented as a topographic cortical map. It has been suggested that a "panoramic" code has evolved to match specific demands of the sound localization task. This work provides evidence suggesting that the properties of spatial auditory neurons identified experimentally follow from a general design principle: learning a sparse, efficient representation of natural stimuli. Natural binaural sounds were recorded and served as input to a hierarchical sparse-coding model. In the first layer, left and right ear sounds were separately encoded by a population of complex-valued basis functions which separated phase and amplitude. Both parameters are known to carry information relevant for spatial hearing. Monaural input converged in the second layer, which learned a joint representation of amplitude and interaural phase difference. Spatial selectivity of each second-layer unit was measured by exposing the model to natural sound sources recorded at different positions. The obtained tuning curves match well the tuning characteristics of neurons in the mammalian auditory cortex. This study connects neuronal coding of the auditory space with natural stimulus statistics and generates new experimental predictions. Moreover, the results presented here suggest that cortical regions with seemingly different functions may implement the same computational strategy: efficient coding.
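The learning principle invoked above can be summarized by the generic sparse-coding objective below; the model in the paper uses complex-valued basis functions separating phase and amplitude, so this real-valued form is only a schematic statement of the same idea.

```latex
% Generic sparse-coding objective for input frames x_t, dictionary \Phi and
% sparse coefficients a_t (\lambda sets the sparsity penalty):
\min_{\Phi,\{a_t\}} \sum_{t}\left( \tfrac{1}{2}\,\lVert x_t - \Phi a_t \rVert_2^{2}
 \;+\; \lambda\,\lVert a_t \rVert_1 \right)
\quad \text{subject to } \lVert \phi_i \rVert_2 \le 1
```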
Peterson, M.D.; Mueller, C.S.
2011-01-01
The USGS National Seismic Hazard Maps are updated about every six years by incorporating newly vetted science on earthquakes and ground motions. The 2008 hazard maps for the central and eastern United States region (CEUS) were updated by using revised New Madrid and Charleston source models, an updated seismicity catalog and an estimate of magnitude uncertainties, a distribution of maximum magnitudes, and several new ground-motion prediction equations. The new models resulted in significant ground-motion changes at 5 Hz and 1 Hz spectral acceleration with 5% damping compared to the 2002 version of the hazard maps. The 2008 maps have now been incorporated into the 2009 NEHRP Recommended Provisions, the 2010 ASCE-7 Standard, and the 2012 International Building Code. The USGS is now planning the next update of the seismic hazard maps, which will be provided to the code committees in December 2013. Science issues that will be considered for introduction into the CEUS maps include: 1) updated recurrence models for New Madrid sources, including new geodetic models and magnitude estimates; 2) new earthquake sources and techniques considered in the 2010 model developed by the nuclear industry; 3) new NGA-East ground-motion models (currently under development); and 4) updated earthquake catalogs. We will hold a regional workshop in late 2011 or early 2012 to discuss these and other issues that will affect the seismic hazard evaluation in the CEUS.
Toward an automated parallel computing environment for geosciences
NASA Astrophysics Data System (ADS)
Zhang, Huai; Liu, Mian; Shi, Yaolin; Yuen, David A.; Yan, Zhenzhen; Liang, Guoping
2007-08-01
Software for geodynamic modeling has not kept up with the fast growing computing hardware and network resources. In the past decade supercomputing power has become available to most researchers in the form of affordable Beowulf clusters and other parallel computer platforms. However, to take full advantage of such computing power requires developing parallel algorithms and associated software, a task that is often too daunting for geoscience modelers whose main expertise is in geosciences. We introduce here an automated parallel computing environment built on open-source algorithms and libraries. Users interact with this computing environment by specifying the partial differential equations, solvers, and model-specific properties using an English-like modeling language in the input files. The system then automatically generates the finite element codes that can be run on distributed or shared memory parallel machines. This system is dynamic and flexible, allowing users to address different problems in geosciences. It is capable of providing web-based services, enabling users to generate source codes online. This unique feature will facilitate high-performance computing to be integrated with distributed data grids in the emerging cyber-infrastructures for geosciences. In this paper we discuss the principles of this automated modeling environment and provide examples to demonstrate its versatility.
NASA Astrophysics Data System (ADS)
Zhirkin, A. V.; Alekseev, P. N.; Batyaev, V. F.; Gurevich, M. I.; Dudnikov, A. A.; Kuteev, B. V.; Pavlov, K. V.; Titarenko, Yu. E.; Titarenko, A. Yu.
2017-06-01
In this report the calculation accuracy requirements of the main parameters of the fusion neutron source, and the thermonuclear blankets with a DT fusion power of more than 10 MW, are formulated. To conduct the benchmark experiments the technical documentation and calculation models were developed for two blanket micro-models: the molten salt and the heavy water solid-state blankets. The calculations of the neutron spectra, and 37 dosimetric reaction rates that are widely used for the registration of thermal, resonance and threshold (0.25-13.45 MeV) neutrons, were performed for each blanket micro-model. The MCNP code and the neutron data library ENDF/B-VII were used for the calculations. All the calculations were performed for two kinds of neutron source: source I is the fusion source, source II is the source of neutrons generated by the 7Li target irradiated by protons with energy 24.6 MeV. The spectral indexes ratios were calculated to describe the spectrum variations from different neutron sources. The obtained results demonstrate the advantage of using the fusion neutron source in future experiments.
1988-05-01
Figure 2. Original limited-capacity channel model (From Broadbent, 1958). Synthesis by Analysis: analysis-synthesis methods electronically model the human voice, drawing on an unlimited variety of human voices for digital recording sources.
40 CFR 51.50 - What definitions apply to this subpart?
Code of Federal Regulations, 2010 CFR
2010-07-01
... accuracy description (MAD) codes means a set of six codes used to define the accuracy of latitude/longitude data for point sources. The six codes and their definitions are: (1) Coordinate Data Source Code: The... physical piece of or a closely related set of equipment. The EPA's reporting format for a given inventory...
JETSPIN: A specific-purpose open-source software for simulations of nanofiber electrospinning
NASA Astrophysics Data System (ADS)
Lauricella, Marco; Pontrelli, Giuseppe; Coluzza, Ivan; Pisignano, Dario; Succi, Sauro
2015-12-01
We present the open-source computer program JETSPIN, specifically designed to simulate the electrospinning process of nanofibers. Its capabilities are shown with proper reference to the underlying model, as well as a description of the relevant input variables and associated test-case simulations. The various interactions included in the electrospinning model implemented in JETSPIN are discussed in detail. The code is designed to exploit different computational architectures, from single to parallel processor workstations. This paper provides an overview of JETSPIN, focusing primarily on its structure, parallel implementations, functionality, performance, and availability.
Assessment of Current Jet Noise Prediction Capabilities
NASA Technical Reports Server (NTRS)
Hunter, Craig A.; Bridges, James E.; Khavaran, Abbas
2008-01-01
An assessment was made of the capability of jet noise prediction codes over a broad range of jet flows, with the objective of quantifying current capabilities and identifying areas requiring future research investment. Three separate codes in NASA's possession, representative of two classes of jet noise prediction codes, were evaluated, one empirical and two statistical. The empirical code is the Stone Jet Noise Module (ST2JET) contained within the ANOPP aircraft noise prediction code. It is well documented, and represents the state of the art in semi-empirical acoustic prediction codes where virtual sources are attributed to various aspects of noise generation in each jet. These sources, in combination, predict the spectral directivity of a jet plume. A total of 258 jet noise cases were examined on the ST2JET code, each run requiring only fractions of a second to complete. Two statistical jet noise prediction codes were also evaluated, JeNo v1 and Jet3D. Fewer cases were run for the statistical prediction methods because they require substantially more resources, typically a Reynolds-Averaged Navier-Stokes solution of the jet, volume integration of the source statistical models over the entire plume, and a numerical solution of the governing propagation equation within the jet. In the evaluation process, substantial justification of the experimental datasets used in the evaluations was made. In the end, none of the current codes can predict jet noise within experimental uncertainty. The empirical code came within 2 dB on a 1/3 octave spectral basis for a wide range of flows. The statistical code Jet3D was within experimental uncertainty at broadside angles for hot supersonic jets, but errors in peak frequency and amplitude put it out of experimental uncertainty at cooler, lower speed conditions. Jet3D did not predict changes in directivity in the downstream angles. The statistical code JeNo v1 was within experimental uncertainty predicting noise from cold subsonic jets at all angles, but did not predict changes with heating of the jet and did not account for directivity changes at supersonic conditions. Shortcomings addressed here give direction for future work relevant to the statistical-based prediction methods. A full report will be released as a chapter in a NASA publication assessing the state of the art in aircraft noise prediction.
The Astrophysics Source Code Library by the numbers
NASA Astrophysics Data System (ADS)
Allen, Alice; Teuben, Peter; Berriman, G. Bruce; DuPrie, Kimberly; Mink, Jessica; Nemiroff, Robert; Ryan, PW; Schmidt, Judy; Shamir, Lior; Shortridge, Keith; Wallin, John; Warmels, Rein
2018-01-01
The Astrophysics Source Code Library (ASCL, ascl.net) was founded in 1999 by Robert Nemiroff and John Wallin. ASCL editors seek both new and old peer-reviewed papers that describe methods or experiments that involve the development or use of source code, and add entries for the found codes to the library. Software authors can submit their codes to the ASCL as well. This ensures a comprehensive listing covering a significant number of the astrophysics source codes used in peer-reviewed studies. The ASCL is indexed by both NASA’s Astrophysics Data System (ADS) and Web of Science, making software used in research more discoverable. This presentation covers the growth in the ASCL’s number of entries, the number of citations to its entries, and in which journals those citations appear. It also discusses what changes have been made to the ASCL recently, and what its plans are for the future.
Astrophysics Source Code Library: Incite to Cite!
NASA Astrophysics Data System (ADS)
DuPrie, K.; Allen, A.; Berriman, B.; Hanisch, R. J.; Mink, J.; Nemiroff, R. J.; Shamir, L.; Shortridge, K.; Taylor, M. B.; Teuben, P.; Wallin, J. F.
2014-05-01
The Astrophysics Source Code Library (ASCL, http://ascl.net/) is an on-line registry of over 700 source codes that are of interest to astrophysicists, with more being added regularly. The ASCL actively seeks out codes as well as accepting submissions from the code authors, and all entries are citable and indexed by ADS. All codes have been used to generate results published in or submitted to a refereed journal and are available either via a download site or from an identified source. In addition to being the largest directory of scientist-written astrophysics programs available, the ASCL is also an active participant in the reproducible research movement with presentations at various conferences, numerous blog posts and a journal article. This poster provides a description of the ASCL and the changes that we are starting to see in the astrophysics community as a result of the work we are doing.
Astrophysics Source Code Library
NASA Astrophysics Data System (ADS)
Allen, A.; DuPrie, K.; Berriman, B.; Hanisch, R. J.; Mink, J.; Teuben, P. J.
2013-10-01
The Astrophysics Source Code Library (ASCL), founded in 1999, is a free on-line registry for source codes of interest to astronomers and astrophysicists. The library is housed on the discussion forum for Astronomy Picture of the Day (APOD) and can be accessed at http://ascl.net. The ASCL has a comprehensive listing that covers a significant number of the astrophysics source codes used to generate results published in or submitted to refereed journals and continues to grow. The ASCL currently has entries for over 500 codes; its records are citable and are indexed by ADS. The editors of the ASCL and members of its Advisory Committee were on hand at a demonstration table in the ADASS poster room to present the ASCL, accept code submissions, show how the ASCL is starting to be used by the astrophysics community, and take questions on and suggestions for improving the resource.
Generating code adapted for interlinking legacy scalar code and extended vector code
Gschwind, Michael K
2013-06-04
Mechanisms for intermixing code are provided. Source code is received for compilation using an extended Application Binary Interface (ABI) that extends a legacy ABI and uses a different register configuration than the legacy ABI. First compiled code is generated based on the source code, the first compiled code comprising code for accommodating the difference in register configurations used by the extended ABI and the legacy ABI. The first compiled code and second compiled code are intermixed to generate intermixed code, the second compiled code being compiled code that uses the legacy ABI. The intermixed code comprises at least one call instruction that is one of a call from the first compiled code to the second compiled code or a call from the second compiled code to the first compiled code. The code for accommodating the difference in register configurations is associated with the at least one call instruction.
Optimal power allocation and joint source-channel coding for wireless DS-CDMA visual sensor networks
NASA Astrophysics Data System (ADS)
Pandremmenou, Katerina; Kondi, Lisimachos P.; Parsopoulos, Konstantinos E.
2011-01-01
In this paper, we propose a scheme for the optimal allocation of power, source coding rate, and channel coding rate for each of the nodes of a wireless Direct Sequence Code Division Multiple Access (DS-CDMA) visual sensor network. The optimization is quality-driven, i.e. the received quality of the video that is transmitted by the nodes is optimized. The scheme takes into account the fact that the sensor nodes may be imaging scenes with varying levels of motion. Nodes that image low-motion scenes will require a lower source coding rate, so they will be able to allocate a greater portion of the total available bit rate to channel coding. Stronger channel coding will mean that such nodes will be able to transmit at lower power. This will both increase battery life and reduce interference to other nodes. Two optimization criteria are considered. One that minimizes the average video distortion of the nodes and one that minimizes the maximum distortion among the nodes. The transmission powers are allowed to take continuous values, whereas the source and channel coding rates can assume only discrete values. Thus, the resulting optimization problem lies in the field of mixed-integer optimization tasks and is solved using Particle Swarm Optimization. Our experimental results show the importance of considering the characteristics of the video sequences when determining the transmission power, source coding rate and channel coding rate for the nodes of the visual sensor network.
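A minimal sketch of the mixed-integer handling is given below: particle coordinates for transmit power stay continuous, while source and channel coding rates are snapped to the nearest allowed discrete value inside the fitness evaluation. The distortion model, rate sets, and PSO coefficients are placeholders, not the paper's.

```python
import numpy as np

POWER_BOUNDS  = (0.01, 1.0)                 # continuous transmit power (illustrative)
SOURCE_RATES  = np.array([0.25, 0.5, 1.0])  # allowed discrete source-coding rates
CHANNEL_RATES = np.array([1/3, 1/2, 2/3])   # allowed discrete channel-coding rates

def snap(x, allowed):
    """Round continuous particle coordinates to the nearest allowed discrete value."""
    return allowed[np.argmin(np.abs(allowed[:, None] - x), axis=0)]

def distortion(power, src_rate, chn_rate):
    """Placeholder per-node end-to-end distortion model (not the paper's):
    distortion falls with source rate and with power * channel rate."""
    return 1.0 / (src_rate + 1e-3) + 1.0 / (10.0 * power * chn_rate + 1e-3)

def fitness(particle, n_nodes, criterion="max"):
    p  = np.clip(particle[:n_nodes], *POWER_BOUNDS)
    rs = snap(particle[n_nodes:2 * n_nodes], SOURCE_RATES)
    rc = snap(particle[2 * n_nodes:], CHANNEL_RATES)
    d = distortion(p, rs, rc)
    return d.max() if criterion == "max" else d.mean()   # minimax or average criterion

def pso(n_nodes=4, n_particles=30, iters=200, seed=1):
    rng, dim = np.random.default_rng(seed), 3 * n_nodes
    x, v = rng.uniform(0, 1, (n_particles, dim)), np.zeros((n_particles, dim))
    pbest, pbest_f = x.copy(), np.array([fitness(p, n_nodes) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.uniform(size=(2, n_particles, dim))
        v = 0.72 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = x + v
        f = np.array([fitness(p, n_nodes) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

best_particle, best_value = pso()
```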
Nuclear Resonance Fluorescence for Materials Assay
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quiter, Brian J.; Ludewigt, Bernhard; Mozin, Vladimir
This paper discusses the use of nuclear resonance fluorescence (NRF) techniques for the isotopic and quantitative assaying of radioactive material. Potential applications include age-dating of an unknown radioactive source, pre- and post-detonation nuclear forensics, and safeguards for nuclear fuel cycles. Examples of age-dating a strong radioactive source and assaying a spent fuel pin are discussed. The modeling work has been performed with the Monte Carlo radiation transport computer code MCNPX, and the capability to simulate NRF has been added to the code. Discussed are the limitations in MCNPX's photon transport physics for accurately describing photon scattering processes that are important contributions to the background and impact the applicability of the NRF assay technique.
SKIRT: The design of a suite of input models for Monte Carlo radiative transfer simulations
NASA Astrophysics Data System (ADS)
Baes, M.; Camps, P.
2015-09-01
The Monte Carlo method is the most popular technique to perform radiative transfer simulations in a general 3D geometry. The algorithms behind and acceleration techniques for Monte Carlo radiative transfer are discussed extensively in the literature, and many different Monte Carlo codes are publicly available. On the contrary, the design of a suite of components that can be used for the distribution of sources and sinks in radiative transfer codes has received very little attention. The availability of such models, with different degrees of complexity, has many benefits. For example, they can serve as toy models to test new physical ingredients, or as parameterised models for inverse radiative transfer fitting. For 3D Monte Carlo codes, this requires algorithms to efficiently generate random positions from 3D density distributions. We describe the design of a flexible suite of components for the Monte Carlo radiative transfer code SKIRT. The design is based on a combination of basic building blocks (which can be either analytical toy models or numerical models defined on grids or a set of particles) and the extensive use of decorators that combine and alter these building blocks to more complex structures. For a number of decorators, e.g. those that add spiral structure or clumpiness, we provide a detailed description of the algorithms that can be used to generate random positions. Advantages of this decorator-based design include code transparency, the avoidance of code duplication, and an increase in code maintainability. Moreover, since decorators can be chained without problems, very complex models can easily be constructed out of simple building blocks. Finally, based on a number of test simulations, we demonstrate that our design using customised random position generators is superior to a simpler design based on a generic black-box random position generator.
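A minimal Python sketch of the decorator idea follows: a basic building block draws random positions from an analytical toy model, and a clumpiness decorator wraps it without modifying it. SKIRT itself is written in C++ and its class names and algorithms differ; everything below is illustrative.

```python
import numpy as np

class PlummerGeometry:
    """Basic building block: draws random positions from a spherical
    Plummer density profile (an analytical toy model)."""
    def __init__(self, scale=1.0, rng=None):
        self.a, self.rng = scale, rng or np.random.default_rng()
    def random_position(self):
        # inverse-transform sampling of the Plummer cumulative mass profile
        u = self.rng.uniform()
        r = self.a * (u ** (-2.0 / 3.0) - 1.0) ** -0.5
        v = self.rng.normal(size=3)
        return r * v / np.linalg.norm(v)

class ClumpyDecorator:
    """Decorator: relocates a fraction of the mass of any wrapped geometry
    into randomly placed clumps, leaving the rest untouched."""
    def __init__(self, base, clump_fraction=0.3, n_clumps=20, clump_size=0.05, rng=None):
        self.base, self.f, self.size = base, clump_fraction, clump_size
        self.rng = rng or np.random.default_rng()
        # clump centres are themselves drawn from the wrapped geometry
        self.centres = np.array([base.random_position() for _ in range(n_clumps)])
    def random_position(self):
        if self.rng.uniform() < self.f:
            centre = self.centres[self.rng.integers(len(self.centres))]
            return centre + self.rng.normal(scale=self.size, size=3)
        return self.base.random_position()

# Decorators chain without problems: clumps added on top of a smooth model.
geometry = ClumpyDecorator(PlummerGeometry(scale=1.0))
positions = np.array([geometry.random_position() for _ in range(10_000)])
```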
mdFoam+: Advanced molecular dynamics in OpenFOAM
NASA Astrophysics Data System (ADS)
Longshaw, S. M.; Borg, M. K.; Ramisetti, S. B.; Zhang, J.; Lockerby, D. A.; Emerson, D. R.; Reese, J. M.
2018-03-01
This paper introduces mdFoam+, which is an MPI parallelised molecular dynamics (MD) solver implemented entirely within the OpenFOAM software framework. It is open-source and released under the same GNU General Public License (GPL) as OpenFOAM. The source code is released as a publicly open software repository that includes detailed documentation and tutorial cases. Since mdFoam+ is designed entirely within the OpenFOAM C++ object-oriented framework, it inherits a number of key features. The code is designed for extensibility and flexibility, so it is aimed first and foremost as an MD research tool, in which new models and test cases can be developed and tested rapidly. Implementing mdFoam+ in OpenFOAM also enables easier development of hybrid methods that couple MD with continuum-based solvers. Setting up MD cases follows the standard OpenFOAM format, as mdFoam+ also relies upon the OpenFOAM dictionary-based directory structure. This ensures that useful pre- and post-processing capabilities provided by OpenFOAM remain available even though the fully Lagrangian nature of an MD simulation is not typical of most OpenFOAM applications. Results show that mdFoam+ compares well to another well-known MD code (e.g. LAMMPS) in terms of benchmark problems, although it also has additional functionality that does not exist in other open-source MD codes.
Update and evaluation of decay data for spent nuclear fuel analyses
NASA Astrophysics Data System (ADS)
Simeonov, Teodosi; Wemple, Charles
2017-09-01
Studsvik's approach to spent nuclear fuel analyses combines isotopic concentrations and multi-group cross-sections, calculated by the CASMO5 or HELIOS2 lattice transport codes, with core irradiation history data from the SIMULATE5 reactor core simulator and tabulated isotopic decay data. These data sources are used and processed by the code SNF to predict spent nuclear fuel characteristics. Recent advances in the generation procedure for the SNF decay data are presented. The SNF decay data includes basic data, such as decay constants, atomic masses and nuclide transmutation chains; radiation emission spectra for photons from radioactive decay, alpha-n reactions, bremsstrahlung, and spontaneous fission, electrons and alpha particles from radioactive decay, and neutrons from radioactive decay, spontaneous fission, and alpha-n reactions; decay heat production; and electro-atomic interaction data for bremsstrahlung production. These data are compiled from fundamental (ENDF, ENSDF, TENDL) and processed (ESTAR) sources for nearly 3700 nuclides. A rigorous evaluation procedure of internal consistency checks and comparisons to measurements and benchmarks, and code-to-code verifications is performed at the individual isotope level and using integral characteristics on a fuel assembly level (e.g., decay heat, radioactivity, neutron and gamma sources). Significant challenges are presented by the scope and complexity of the data processing, a dearth of relevant detailed measurements, and reliance on theoretical models for some data.
Building and Vegetation Rasterization for the Three-dimensional Wind Field (3DWF) Model
2010-12-01
... Maps API. By design, JavaScript limits access to local resources; this is done to protect against the execution of malicious code. However, ActiveX ... to only use these types of objects (ActiveX or XPCOM) from a trusted source in order to minimize the exposure of a computer system to malware ... Microsoft ActiveX. There is also a need to restructure and rethink the implementation of the JavaScript code. It would be desirable to save the digitized ...
Travelling Wave Concepts for the Modeling and Control of Space Structures
1988-01-31
... at the Jet Propulsion Laboratories, and is writing two further papers for journal publication based on his PhD dissertation. In the winter of 1987 ...
Data compression for satellite images
NASA Technical Reports Server (NTRS)
Chen, P. H.; Wintz, P. A.
1976-01-01
An efficient data compression system is presented for satellite pictures and two grey level pictures derived from satellite pictures. The compression techniques take advantages of the correlation between adjacent picture elements. Several source coding methods are investigated. Double delta coding is presented and shown to be the most efficient. Both predictive differential quantizing technique and double delta coding can be significantly improved by applying a background skipping technique. An extension code is constructed. This code requires very little storage space and operates efficiently. Simulation results are presented for various coding schemes and source codes.
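Reading double delta coding as second-order differencing of scan-line pixels, a minimal sketch of the encode/decode pair is shown below; the paper's full scheme additionally applies background skipping and an extension code, which are not reproduced here.

```python
import numpy as np

def double_delta_encode(line):
    """Second-order differences of one scan line: exploits the correlation
    between adjacent picture elements twice over."""
    line = np.asarray(line, dtype=np.int64)
    first = np.diff(line, prepend=0)     # delta (first element kept as-is)
    return np.diff(first, prepend=0)     # delta of delta

def double_delta_decode(codes):
    """Invert the two differencing passes with two cumulative sums."""
    return np.cumsum(np.cumsum(np.asarray(codes, dtype=np.int64)))

pixels = np.array([120, 121, 123, 126, 130, 135])
codes = double_delta_encode(pixels)      # small residuals after the first two values
assert np.array_equal(double_delta_decode(codes), pixels)
```

The small residuals produced for smoothly varying scan lines are what make the subsequent entropy coding efficient.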
NASA Astrophysics Data System (ADS)
Yang, Xinmai; Cleveland, Robin O.
2005-01-01
A time-domain numerical code (the so-called Texas code) that solves the Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation has been extended from an axis-symmetric coordinate system to a three-dimensional (3D) Cartesian coordinate system. The code accounts for diffraction (in the parabolic approximation), nonlinearity and absorption and dispersion associated with thermoviscous and relaxation processes. The 3D time domain code was shown to be in agreement with benchmark solutions for circular and rectangular sources, focused and unfocused beams, and linear and nonlinear propagation. The 3D code was used to model the nonlinear propagation of diagnostic ultrasound pulses through tissue. The prediction of the second-harmonic field was sensitive to the choice of frequency-dependent absorption: a frequency squared f2 dependence produced a second-harmonic field which peaked closer to the transducer and had a lower amplitude than that computed for an f1.1 dependence. In comparing spatial maps of the harmonics we found that the second harmonic had dramatically reduced amplitude in the near field and also lower amplitude side lobes in the focal region than the fundamental. These findings were consistent for both uniform and apodized sources and could be contributing factors in the improved imaging reported with clinical scanners using tissue harmonic imaging. .
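For reference, the thermoviscous form of the KZK equation solved by such codes can be written as follows (standard notation; relaxation contributes additional absorption-dispersion terms of similar form):

```latex
% KZK equation for the acoustic pressure p(x, y, z, \tau), thermoviscous form:
\frac{\partial^{2} p}{\partial z\,\partial\tau}
 \;=\; \frac{c_{0}}{2}\,\nabla_{\!\perp}^{2} p
 \;+\; \frac{\delta}{2 c_{0}^{3}}\,\frac{\partial^{3} p}{\partial \tau^{3}}
 \;+\; \frac{\beta}{2 \rho_{0} c_{0}^{3}}\,\frac{\partial^{2} p^{2}}{\partial \tau^{2}}
```

with τ = t − z/c0 the retarded time, c0 the small-signal sound speed, δ the sound diffusivity, β the coefficient of nonlinearity, and ρ0 the ambient density.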
Distributed Joint Source-Channel Coding in Wireless Sensor Networks
Zhu, Xuqi; Liu, Yu; Zhang, Lin
2009-01-01
Considering the fact that sensors are energy-limited and the wireless channel conditions in wireless sensor networks, there is an urgent need for a low-complexity coding method with high compression ratio and noise-resisted features. This paper reviews the progress made in distributed joint source-channel coding which can address this issue. The main existing deployments, from the theory to practice, of distributed joint source-channel coding over the independent channels, the multiple access channels and the broadcast channels are introduced, respectively. To this end, we also present a practical scheme for compressing multiple correlated sources over the independent channels. The simulation results demonstrate the desired efficiency. PMID:22408560
Practices in source code sharing in astrophysics
NASA Astrophysics Data System (ADS)
Shamir, Lior; Wallin, John F.; Allen, Alice; Berriman, Bruce; Teuben, Peter; Nemiroff, Robert J.; Mink, Jessica; Hanisch, Robert J.; DuPrie, Kimberly
2013-02-01
While software and algorithms have become increasingly important in astronomy, the majority of authors who publish computational astronomy research do not share the source code they develop, making it difficult to replicate and reuse the work. In this paper we discuss the importance of sharing scientific source code with the entire astrophysics community, and propose that journals require authors to make their code publicly available when a paper is published. That is, we suggest that a paper that involves a computer program not be accepted for publication unless the source code becomes publicly available. The adoption of such a policy by editors, editorial boards, and reviewers will improve the ability to replicate scientific results, and will also make computational astronomy methods more available to other researchers who wish to apply them to their data.
Analyzing and modeling gravity and magnetic anomalies using the SPHERE program and Magsat data
NASA Technical Reports Server (NTRS)
Braile, L. W.; Hinze, W. J.; Vonfrese, R. R. B. (Principal Investigator)
1981-01-01
Computer codes were completed, tested, and documented for analyzing magnetic anomaly vector components by equivalent point dipole inversion. The codes are intended for use in inverting the magnetic anomaly due to a spherical prism in a horizontal geomagnetic field and for recomputing the anomaly in a vertical geomagnetic field. Modeling of potential fields at satellite elevations that are derived from three dimensional sources by program SPHERE was made significantly more efficient by improving the input routines. A preliminary model of the Andean subduction zone was used to compute the anomaly at satellite elevations using both actual geomagnetic parameters and vertical polarization. Program SPHERE is also being used to calculate satellite level magnetic and gravity anomalies from the Amazon River Aulacogen.
NASA Astrophysics Data System (ADS)
Lin, J. W. B.
2015-12-01
Historically, climate models have been developed incrementally and in compiled languages like Fortran. While the use of legacy compiled languages results in fast, time-tested code, the resulting model is limited in its modularity and cannot take advantage of functionality available with modern computer languages. Here we describe an effort at using the open-source, object-oriented language Python to create more flexible climate models: the package qtcm, a Python implementation of the intermediate-level Neelin-Zeng Quasi-Equilibrium Tropical Circulation model (QTCM1) of the atmosphere. The qtcm package retains the core numerics of QTCM1, written in Fortran, to optimize model performance but uses Python structures and utilities to wrap the QTCM1 Fortran routines and manage model execution. The resulting "mixed language" modeling package allows the order and choice of subroutine execution to be altered at run time, and model analysis and visualization to be integrated interactively with model execution at run time. This flexibility facilitates more complex scientific analysis using less complex code than would be possible using traditional languages alone and provides tools to transform the traditional "formulate hypothesis → write and test code → run model → analyze results" sequence into a feedback loop that can be executed automatically by the computer.
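The run-time flexibility described above can be sketched schematically as below, with plain Python functions standing in for wrapped Fortran kernels (e.g. as produced by f2py); all names are hypothetical and do not reflect the actual qtcm interface.

```python
# Schematic of the "mixed language" idea: compiled kernels are registered by
# name so the order and choice of subroutines can be changed at run time and
# analysis hooks can be interleaved with model execution.
def advection(state):  state["T"] -= 0.10; return state
def convection(state): state["q"] *= 0.95; return state
def radiation(state):  state["T"] += 0.02; return state

KERNELS = {"advection": advection, "convection": convection, "radiation": radiation}

def run(state, sequence, nsteps, hooks=()):
    for step in range(nsteps):
        for name in sequence:        # subroutine order chosen at run time
            state = KERNELS[name](state)
        for hook in hooks:           # in-line analysis/visualization hooks
            hook(step, state)
    return state

log = []
run({"T": 300.0, "q": 0.02},
    sequence=["advection", "convection", "radiation"],
    nsteps=10,
    hooks=[lambda step, s: log.append((step, s["T"]))])
```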
Palkowski, Marek; Bielecki, Wlodzimierz
2017-06-02
RNA secondary structure prediction is a compute-intensive task that lies at the core of several search algorithms in bioinformatics. Fortunately, RNA folding approaches, such as Nussinov base pair maximization, involve mathematical operations over affine control loops whose iteration space can be represented by the polyhedral model. Polyhedral compilation techniques have proven to be a powerful tool for the optimization of dense array codes. However, the classical affine loop nest transformations used with these techniques do not effectively optimize the dynamic programming codes used for RNA structure prediction. The purpose of this paper is to present a novel approach allowing for the generation of a parallel tiled Nussinov RNA loop nest exposing significantly higher performance than that of known related code. This effect is achieved by improving code locality and parallelizing calculations. In order to improve code locality, we apply our previously published technique of automatic loop nest tiling to all three loops of the Nussinov loop nest. This approach first forms original rectangular 3D tiles and then corrects them to establish their validity by applying the transitive closure of a dependence graph. To produce parallel code, we apply the loop skewing technique to the tiled Nussinov loop nest. The technique is implemented as part of the publicly available polyhedral source-to-source TRACO compiler. The generated code was run on modern Intel multi-core processors and coprocessors. We present the speed-up factor of the generated Nussinov RNA parallel code and demonstrate that it is considerably faster than related codes in which only the two outer loops of the Nussinov loop nest are tiled.
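For reference, the Nussinov base-pair maximization recurrence that the tiled code accelerates can be written as a small, untiled dynamic program. The sketch below is a plain sequential Python version intended only to show the triply nested loop structure, including the innermost "bifurcation" loop, that polyhedral tiling targets; the TRACO-generated code operates on an equivalent C loop nest.

```python
def nussinov(seq, pairs=frozenset({"AU", "UA", "GC", "CG", "GU", "UG"})):
    """Maximum number of nested base pairs (Nussinov, no minimum loop length)."""
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for length in range(1, n):                    # subsequence length - 1
        for i in range(n - length):
            j = i + length
            best = dp[i + 1][j]                   # i left unpaired
            if seq[i] + seq[j] in pairs:
                best = max(best, dp[i + 1][j - 1] + 1)     # pair (i, j)
            for k in range(i + 1, j):             # bifurcation: split at k
                best = max(best, dp[i][k] + dp[k + 1][j])
            dp[i][j] = best
    # the innermost k-loop makes the nest O(n^3) and is the target of tiling
    return dp[0][n - 1] if n else 0

print(nussinov("GGGAAAUCC"))   # small example; prints the optimal pair count
```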
NASA Astrophysics Data System (ADS)
Takahashi, Y. O.; Takehiro, S.; Sugiyama, K.; Odaka, M.; Ishiwatari, M.; Sasaki, Y.; Nishizawa, S.; Ishioka, K.; Nakajima, K.; Hayashi, Y.
2012-12-01
Toward understanding the fluid motions of planetary atmospheres and planetary interiors by performing multiple numerical experiments with multiple models, we are now proceeding with the ``dcmodel project'', in which a series of hierarchical numerical models of varying complexity is developed and maintained. In the ``dcmodel project'', the numerical models are developed with attention to the following points: 1) a common ``style'' of program code assuring readability of the software, 2) release of the models' source code to the public, 3) scalability of the models assuring execution on various scales of computational resources, and 4) an emphasis on documentation and a method for writing reference manuals. The lineup of the models and utility programs of the project is as follows: Gtool5, ISPACK/SPML, SPMODEL, Deepconv, Dcpam, and Rdoc-f95. In the following, the features of each component are briefly described. Gtool5 (Ishiwatari et al., 2012) is a Fortran90 library which provides data input/output interfaces and various utilities commonly used in the models of the dcmodel project. The self-descriptive data format netCDF is adopted as the IO format of Gtool5. The interfaces of the gtool5 library reduce the number of operation steps for data IO in the program code of the models compared with the interfaces of the raw netCDF library. Further, by use of the gtool5 library, procedures for data IO and the addition of metadata for post-processing can easily be implemented in the program codes in a consolidated form independent of the size and complexity of the models. ``ISPACK'' is a spectral transformation library and ``SPML (SPMODEL library)'' (Takehiro et al., 2006) is its wrapper library. The most prominent feature of SPML is a series of array-handling functions with systematic function naming rules, which enables us to write code in a form easily deduced from the mathematical expressions of the governing equations. ``SPMODEL'' (Takehiro et al., 2006) is a collection of various sample programs using ``SPML''. These sample programs provide a base kit for simple numerical experiments in geophysical fluid dynamics. For example, SPMODEL includes a 1-dimensional KdV equation model, 2-dimensional barotropic, shallow water, and Boussinesq models, and 3-dimensional MHD dynamo models in rotating spherical shells. These models are written in the common style in harmony with SPML functions. ``Deepconv'' (Sugiyama et al., 2010) and ``Dcpam'' are a cloud-resolving model and a general circulation model, respectively, intended for application to planetary atmospheres. ``Deepconv'' includes several physical processes appropriate for simulations of the Jupiter and Mars atmospheres, while ``Dcpam'' includes those for simulations of Earth-, Mars-, and Venus-like atmospheres. ``Rdoc-f95'' is an automatic generator of reference manuals for Fortran90/95 programs, which is an extension of the ruby documentation tool kit ``rdoc''. It analyzes the dependencies of modules, functions, and subroutines across multiple program source files. At the same time, it can list the namelist variables in the programs.
Technical Support Document for Version 3.4.0 of the COMcheck Software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bartlett, Rosemarie; Connell, Linda M.; Gowri, Krishnan
2007-09-14
COMcheck provides an optional way to demonstrate compliance with commercial and high-rise residential building energy codes. Commercial buildings include all use groups except single family and multifamily not over three stories in height. COMcheck was originally based on ANSI/ASHRAE/IES Standard 90.1-1989 (Standard 90.1-1989) requirements and is intended for use with various codes based on Standard 90.1, including the Codification of ASHRAE/IES Standard 90.1-1989 (90.1-1989 Code) (ASHRAE 1989a, 1993b) and ASHRAE/IESNA Standard 90.1-1999 (Standard 90.1-1999). This includes jurisdictions that have adopted the 90.1-1989 Code, Standard 90.1-1989, Standard 90.1-1999, or their own code based on one of these. We view Standard 90.1-1989 and the 90.1-1989 Code as having equivalent technical content and have used both as source documents in developing COMcheck. This technical support document (TSD) is designed to explain the technical basis for the COMcheck software as originally developed based on the ANSI/ASHRAE/IES Standard 90.1-1989 (Standard 90.1-1989). Documentation for other national model codes and standards and specific state energy codes supported in COMcheck has been added to this report as appendices. These appendices are intended to provide technical documentation for features specific to the supported codes and for any changes made for state-specific codes that differ from the standard features that support compliance with the national model codes and standards.
Description and availability of the SMARTS spectral model for photovoltaic applications
NASA Astrophysics Data System (ADS)
Myers, Daryl R.; Gueymard, Christian A.
2004-11-01
The limited spectral response range of photovoltaic (PV) devices requires that device performance be characterized with respect to widely varying terrestrial solar spectra. The FORTRAN code "Simple Model for Atmospheric Transmission of Sunshine" (SMARTS) was developed for various clear-sky solar renewable energy applications. The model is partly based on parameterizations of transmittance functions in the MODTRAN/LOWTRAN band model family of radiative transfer codes. SMARTS computes spectra with a resolution of 0.5 nanometers (nm) below 400 nm, 1.0 nm from 400 nm to 1700 nm, and 5 nm from 1700 nm to 4000 nm. Fewer than 20 input parameters are required to compute spectral irradiance distributions, including spectral direct beam, total, and diffuse hemispherical radiation, and up to 30 other spectral parameters. A spreadsheet-based graphical user interface can be used to simplify the construction of input files for the model. The model is the basis for new terrestrial reference spectra developed by the American Society for Testing and Materials (ASTM) for photovoltaic and materials degradation applications. We describe the model's accuracy, functionality, and the availability of source and executable code. Applications to PV rating and efficiency and the combined effects of spectral selectivity and varying atmospheric conditions are briefly discussed.
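The stated spectral sampling implies a fixed wavelength grid. A small sketch reconstructing such a grid from the quoted resolutions is given below; the 280 nm lower limit is an assumption for illustration and is not taken from the abstract.

```python
import numpy as np

def smarts_like_grid(start_nm=280.0):
    """Wavelength grid implied by the stated SMARTS sampling:
    0.5 nm steps below 400 nm, 1 nm from 400-1700 nm, 5 nm from 1700-4000 nm.
    The 280 nm starting point is an assumption, not a value from the abstract."""
    bands = [(start_nm, 400.0, 0.5), (400.0, 1700.0, 1.0), (1700.0, 4000.0, 5.0)]
    parts = [np.arange(lo, hi, step) for lo, hi, step in bands]
    return np.concatenate(parts + [np.array([4000.0])])   # include the end point

grid = smarts_like_grid()
print(len(grid), grid[:3], grid[-3:])
```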
Macroeconomic Activity Module - NEMS Documentation
2016-01-01
Documents the objectives, analytical approach, and development of the National Energy Modeling System (NEMS) Macroeconomic Activity Module (MAM) used to develop the Annual Energy Outlook for 2016 (AEO2016). The report catalogues and describes the module assumptions, computations, methodology, parameter estimation techniques, and mainframe source code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palmer, M.E.
1997-12-05
This V and V Report includes analysis of two revisions of the DMS [data management system] System Requirements Specification (SRS) and the Preliminary System Design Document (PSDD); the source code for the DMS Communication Module (DMSCOM) messages; the source code for selected DMS screens; and the code for the BWAS simulator. BDM Federal analysts used a series of matrices to: compare the requirements in the System Requirements Specification (SRS) to the specifications found in the System Design Document (SDD), to ensure the design supports the business functions; compare the discrete parts of the SDD with each other, to ensure that the design is consistent and cohesive; compare the source code of the DMS Communication Module with the specifications, to ensure that the resultant messages will support the design; compare the source code of selected screens with the specifications, to ensure that the resultant system screens will support the design; and compare the source code of the BWAS simulator with the requirements to interface with DMS messages and data transfers relating to BWAS operations.
Unfolding the neutron spectrum of a NE213 scintillator using artificial neural networks.
Sharghi Ido, A; Bonyadi, M R; Etaati, G R; Shahriari, M
2009-10-01
Artificial neural network technology has been applied to unfold neutron spectra from the pulse height distribution measured with an NE213 liquid scintillator. Here, both single- and multi-layer perceptron neural network models have been implemented to unfold the neutron spectrum from an Am-Be neutron source. The activation function and the connectivity of the neurons have been investigated and the results analyzed in terms of the network's performance. The simulation results show that the neural network that utilizes the Satlins transfer function has the best performance. In addition, omitting the bias connection of the neurons improves the performance of the network. The SCINFUL code is used for generating the response functions in the training phase of the process. Finally, the results of the neural network simulation have been compared with those of the FORIST unfolding code for both (241)Am-Be and (252)Cf neutron sources. The results of the neural network are in good agreement with the FORIST code.
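"Satlins" here refers to the symmetric saturating linear transfer function. The sketch below is only a toy, bias-free single-hidden-layer network using that activation, with random weights standing in for the trained unfolding network; it is meant solely to make the two reported modelling choices (Satlins activation, no bias connections) concrete.

```python
import numpy as np

def satlins(x):
    """Symmetric saturating linear transfer function: clips to [-1, 1]."""
    return np.clip(x, -1.0, 1.0)

rng = np.random.default_rng(0)
n_channels, n_hidden, n_energy_bins = 64, 20, 30   # illustrative sizes only

# Bias-free layers, mirroring the reported choice of omitting bias connections.
W1 = rng.normal(scale=0.1, size=(n_hidden, n_channels))
W2 = rng.normal(scale=0.1, size=(n_energy_bins, n_hidden))

def unfold(pulse_height_spectrum):
    """Forward pass: pulse-height distribution -> (untrained) neutron spectrum."""
    hidden = satlins(W1 @ pulse_height_spectrum)
    return W2 @ hidden

spectrum = unfold(rng.random(n_channels))
print(spectrum.shape)
```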
qtcm 0.1.2: A Python Implementation of the Neelin-Zeng Quasi-Equilibrium Tropical Circulation model
NASA Astrophysics Data System (ADS)
Lin, J. W.-B.
2008-10-01
Historically, climate models have been developed incrementally and in compiled languages like Fortran. While the use of legacy compiled languages results in fast, time-tested code, the resulting model is limited in its modularity and cannot take advantage of functionality available with modern computer languages. Here we describe an effort at using the open-source, object-oriented language Python to create more flexible climate models: the package qtcm, a Python implementation of the intermediate-level Neelin-Zeng Quasi-Equilibrium Tropical Circulation model (QTCM1) of the atmosphere. The qtcm package retains the core numerics of QTCM1, written in Fortran to optimize model performance, but uses Python structures and utilities to wrap the QTCM1 Fortran routines and manage model execution. The resulting "mixed language" modeling package allows order and choice of subroutine execution to be altered at run time, and model analysis and visualization to be integrated interactively with model execution at run time. This flexibility facilitates more complex scientific analysis using less complex code than would be possible using traditional languages alone, and provides tools to transform the traditional "formulate hypothesis → write and test code → run model → analyze results" sequence into a feedback loop that can be executed automatically by the computer.
qtcm 0.1.2: a Python implementation of the Neelin-Zeng Quasi-Equilibrium Tropical Circulation Model
NASA Astrophysics Data System (ADS)
Lin, J. W.-B.
2009-02-01
Historically, climate models have been developed incrementally and in compiled languages like Fortran. While the use of legacy compiled languages results in fast, time-tested code, the resulting model is limited in its modularity and cannot take advantage of functionality available with modern computer languages. Here we describe an effort at using the open-source, object-oriented language Python to create more flexible climate models: the package qtcm, a Python implementation of the intermediate-level Neelin-Zeng Quasi-Equilibrium Tropical Circulation model (QTCM1) of the atmosphere. The qtcm package retains the core numerics of QTCM1, written in Fortran to optimize model performance, but uses Python structures and utilities to wrap the QTCM1 Fortran routines and manage model execution. The resulting "mixed language" modeling package allows order and choice of subroutine execution to be altered at run time, and model analysis and visualization to be integrated interactively with model execution at run time. This flexibility facilitates more complex scientific analysis using less complex code than would be possible using traditional languages alone, and provides tools to transform the traditional "formulate hypothesis → write and test code → run model → analyze results" sequence into a feedback loop that can be executed automatically by the computer.
Coupling geodynamic with thermodynamic modelling for reconstructions of magmatic systems
NASA Astrophysics Data System (ADS)
Rummel, Lisa; Kaus, Boris J. P.; White, Richard
2016-04-01
Coupling geodynamic with petrological models is fundamental for understanding magmatic systems from the melting source in the mantle to the point of magma crystallisation in the upper crust. Most geodynamic codes use very simplified petrological models consisting of a single, fixed chemistry. Here, we develop a method to better track the petrological evolution of the source rock and the corresponding volcanic and plutonic rocks by combining a geodynamic code with a thermodynamic model for magma generation and evolution. For the geodynamic modelling, a finite element code (MVEP2) solves the conservation of mass, momentum and energy equations. The thermodynamic modelling of phase equilibria in magmatic systems is performed with pMELTS for mantle-like bulk compositions. The thermodynamically dependent properties calculated by pMELTS are density, melt fraction and the composition of the liquid and solid phases in the chemical system SiO2-TiO2-Al2O3-Fe2O3-Cr2O3-FeO-MgO-CaO-Na2O-K2O-P2O5-H2O. In order to take into account the chemical depletion of the source rock with increasing melt extraction events, the calculation of phase diagrams is performed in two steps: 1) with an initial rock composition, density, melt fraction and liquid and solid compositions are computed over the full upper-mantle P-T range; 2) once the residual rock composition (equivalent to the solid composition after melt extraction) is significantly different from the initial rock composition and the melt fraction is lower than a critical value, the residual composition is used for the next calculations with pMELTS. The implementation of several melt extraction events takes the change in chemistry into account until the solidus is shifted to such high temperatures that the rock can no longer be melted under upper-mantle conditions. An advantage of this approach is that we can track the change of melt chemistry with time, which can be compared with natural constraints. In the thermo-mechanical code, the thermodynamically dependent properties from pre-computed phase diagrams are carried by each particle using a marker-in-cell method. Thus the physical and chemical properties can change locally as a function of previous melt extraction events and pressure and temperature conditions. After each melt extraction event, the residual rock composition is compared with the bulk compositions of previously computed phase diagrams, so that the phase diagram in use is replaced by the phase diagram with the closest bulk chemistry. In the thermo-mechanical code, the melt is extracted directly to the surface as volcanites and within the crust as plutonites. The density of the crust and of newly generated crust is calculated with the thermodynamic modelling tool Perple_X. We have investigated the influence of several input parameters on the magma composition in order to compare it with real rock samples from the Eifel (western Germany). To take the very inhomogeneous chemistry of the European mantle into account, we include not only primitive mantle but also metasomatised mantle fragments in the melting source of a plume (Eifel plume).
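The two-step phase-diagram update described above reduces to a small decision rule. The sketch below mirrors only that logic; pMELTS itself is replaced by a pre-computed lookup, and the oxide list, compositions and both thresholds are illustrative assumptions rather than values from the study.

```python
import numpy as np

# Order of entries in the composition vectors below (truncated oxide list, wt%).
OXIDES = ["SiO2", "Al2O3", "FeO", "MgO", "CaO", "Na2O"]

def should_recompute(residual, current_bulk, melt_fraction,
                     crit_melt=0.05, tol_wt_percent=2.0):
    """Recompute the phase diagram once the residual composition has drifted
    far enough from the bulk composition it was computed for and the melt
    fraction has dropped below a critical value (both thresholds illustrative)."""
    drift = np.abs(np.asarray(residual) - np.asarray(current_bulk)).max()
    return melt_fraction < crit_melt and drift > tol_wt_percent

def closest_precomputed(residual, bulk_library):
    """Pick the pre-computed phase diagram whose bulk chemistry is closest."""
    keys = list(bulk_library)
    dists = [np.linalg.norm(np.asarray(residual) - np.asarray(bulk_library[k]))
             for k in keys]
    return keys[int(np.argmin(dists))]

# Illustrative usage with made-up compositions (wt%):
library = {"primitive": [45, 4, 8, 38, 3, 0.4], "depleted": [44, 2, 8, 42, 2, 0.1]}
residual = [44.5, 2.5, 8, 41, 2.2, 0.15]
if should_recompute(residual, library["primitive"], melt_fraction=0.02):
    print("switch to:", closest_precomputed(residual, library))
```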
Xu, Guoai; Li, Qi; Guo, Yanhui; Zhang, Miao
2017-01-01
Authorship attribution is the task of identifying the most likely author of a given sample from a set of candidate known authors. It can be applied not only to discover the original author of plain text, such as novels, blogs, emails, and posts, but also to identify source code programmers. Authorship attribution of source code is required in diverse applications, ranging from malicious code tracking to resolving authorship disputes and software plagiarism detection. This paper aims to propose a new method to identify the programmer of Java source code samples with higher accuracy. To this end, it first introduces a back propagation (BP) neural network based on particle swarm optimization (PSO) into authorship attribution of source code. It begins by computing a set of defined feature metrics, including lexical and layout metrics and structure and syntax metrics, 19 dimensions in total. These metrics are then input to a neural network for supervised learning, whose weights are produced by the hybrid PSO-BP algorithm. The effectiveness of the proposed method is evaluated on a collected dataset of 3,022 Java files belonging to 40 authors. Experimental results show that the proposed method achieves 91.060% accuracy, and a comparison with previous work on authorship attribution of Java source code illustrates that the proposed method outperforms the others overall, with an acceptable overhead. PMID:29095934
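As a rough illustration of the kind of lexical and layout metrics such a pipeline computes, the sketch below extracts a few simple style features from a Java source string. These example features are hypothetical and are not the 19 metrics defined in the paper; in the actual method the resulting vectors feed a BP neural network whose weights are tuned by PSO.

```python
import re

JAVA_KEYWORDS = ("if", "for", "while", "switch", "try", "class", "static")

def style_features(java_source):
    """A handful of lexical/layout metrics (illustrative, not the paper's 19)."""
    lines = java_source.splitlines() or [""]
    tokens = re.findall(r"[A-Za-z_]\w*", java_source)
    return {
        "mean_line_length": sum(map(len, lines)) / len(lines),
        "blank_line_ratio": sum(not l.strip() for l in lines) / len(lines),
        "tab_indent_ratio": sum(l.startswith("\t") for l in lines) / len(lines),
        "comment_ratio": java_source.count("//") / max(len(lines), 1),
        **{f"kw_{k}": tokens.count(k) / max(len(tokens), 1) for k in JAVA_KEYWORDS},
    }

sample = "class A {\n\t// demo\n\tstatic int f(int x) { if (x > 0) return x; return 0; }\n}\n"
print(style_features(sample))
```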
Study on induced radioactivity of China Spallation Neutron Source
NASA Astrophysics Data System (ADS)
Wu, Qing-Biao; Wang, Qing-Bin; Wu, Jing-Min; Ma, Zhong-Jian
2011-06-01
China Spallation Neutron Source (CSNS) is the first High Energy Intense Proton Accelerator planned to be constructed in China during the State Eleventh Five-Year Plan period, whose induced radioactivity is very important for occupational disease hazard assessment and environmental impact assessment. Adopting the FLUKA code, the authors have constructed a cylinder-tunnel geometric model and a line-source sampling physical model, deduced proper formulas to calculate air activation, and analyzed various issues with regard to the activation of different tunnel parts. The results show that the environmental impact resulting from induced activation is negligible, whereas the residual radiation in the tunnels has a great influence on maintenance personnel, so strict measures should be adopted.
The efficiency of geophysical adjoint codes generated by automatic differentiation tools
NASA Astrophysics Data System (ADS)
Vlasenko, A. V.; Köhl, A.; Stammer, D.
2016-02-01
The accuracy of numerical models that describe complex physical or chemical processes depends on the choice of model parameters. Estimating an optimal set of parameters by optimization algorithms requires knowledge of the sensitivity of the process of interest to model parameters. Typically the sensitivity computation involves differentiation of the model, which can be performed by applying algorithmic differentiation (AD) tools to the underlying numerical code. However, existing AD tools differ substantially in design, legibility and computational efficiency. In this study we show that, for geophysical data assimilation problems of varying complexity, the performance of adjoint codes generated by the existing AD tools (i) Open_AD, (ii) Tapenade, (iii) NAGWare and (iv) Transformation of Algorithms in Fortran (TAF) can be vastly different. Based on simple test problems, we evaluate the efficiency of each AD tool with respect to computational speed, accuracy of the adjoint, the efficiency of memory usage, and the capability of each AD tool to handle modern FORTRAN 90-95 elements such as structures and pointers, which are new elements that either combine groups of variables or provide aliases to memory addresses, respectively. We show that, while operator overloading tools are the only ones suitable for modern codes written in object-oriented programming languages, their computational efficiency lags behind source transformation by orders of magnitude, rendering the application of these modern tools to practical assimilation problems prohibitive. In contrast, the application of source transformation tools appears to be the most efficient choice, allowing handling even large geophysical data assimilation problems. However, they can only be applied to numerical models written in earlier generations of programming languages. Our study indicates that applying existing AD tools to realistic geophysical problems faces limitations that urgently need to be solved to allow the continuous use of AD tools for solving geophysical problems on modern computer architectures.
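As a language-neutral reminder of what any of these tools computes, here is a toy forward-mode differentiator built on dual numbers. It is conceptually closer to operator overloading than to source transformation, and real adjoint codes for data assimilation work in reverse mode over Fortran models, so this is only a conceptual sketch.

```python
import math

class Dual:
    """Toy forward-mode AD value: carries f(x) and df/dx together."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

def sin(x):
    # chain rule applied alongside the value computation
    return Dual(math.sin(x.val), math.cos(x.val) * x.dot)

# d/dx [ x * sin(x) + 3x ] at x = 2, via one forward sweep with seed dx = 1
x = Dual(2.0, 1.0)
y = x * sin(x) + 3 * x
print(y.val, y.dot)   # value and exact derivative sin(2) + 2*cos(2) + 3
```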
The mathematical theory of signal processing and compression-designs
NASA Astrophysics Data System (ADS)
Feria, Erlan H.
2006-05-01
The mathematical theory of signal processing, named processor coding, will be shown to arise inherently as the computational-time dual of Shannon's mathematical theory of communication, which is also known as source coding. Source coding is concerned with signal source memory space compression, while processor coding deals with signal processor computational time compression. Their combination is named compression-designs and is referred to as Conde for short. A compelling and pedagogically appealing diagram will be discussed, highlighting Conde's remarkably successful application to real-world knowledge-aided (KA) airborne moving target indicator (AMTI) radar.
Quantum Mechanical Modeling of Ballistic MOSFETs
NASA Technical Reports Server (NTRS)
Svizhenko, Alexei; Anantram, M. P.; Govindan, T. R.; Biegel, Bryan (Technical Monitor)
2001-01-01
The objective of this project was to develop theory, approximations, and computer code to model quasi 1D structures such as nanotubes, DNA, and MOSFETs: (1) Nanotubes: Influence of defects on ballistic transport, electro-mechanical properties, and metal-nanotube coupling; (2) DNA: Model electron transfer (biochemistry) and transport experiments, and sequence dependence of conductance; and (3) MOSFETs: 2D doping profiles, polysilicon depletion, source to drain and gate tunneling, understand ballistic limit.
NASA Astrophysics Data System (ADS)
Alexander, K.; Easterbrook, S. M.
2015-01-01
We analyse the source code of eight coupled climate models, selected from those that participated in the CMIP5 (Taylor et al., 2012) or EMICAR5 (Eby et al., 2013; Zickfeld et al., 2013) intercomparison projects. For each model, we sort the preprocessed code into components and subcomponents based on dependency structure. We then create software architecture diagrams which show the relative sizes of these components/subcomponents and the flow of data between them. The diagrams also illustrate several major classes of climate model design; the distribution of complexity between components, which depends on historical development paths as well as the conscious goals of each institution; and the sharing of components between different modelling groups. These diagrams offer insights into the similarities and differences between models, and have the potential to be useful tools for communication between scientists, scientific institutions, and the public.
NASA Astrophysics Data System (ADS)
Alexander, K.; Easterbrook, S. M.
2015-04-01
We analyze the source code of eight coupled climate models, selected from those that participated in the CMIP5 (Taylor et al., 2012) or EMICAR5 (Eby et al., 2013; Zickfeld et al., 2013) intercomparison projects. For each model, we sort the preprocessed code into components and subcomponents based on dependency structure. We then create software architecture diagrams that show the relative sizes of these components/subcomponents and the flow of data between them. The diagrams also illustrate several major classes of climate model design; the distribution of complexity between components, which depends on historical development paths as well as the conscious goals of each institution; and the sharing of components between different modeling groups. These diagrams offer insights into the similarities and differences in structure between climate models, and have the potential to be useful tools for communication between scientists, scientific institutions, and the public.
Benchmarking the MCNP Monte Carlo code with a photon skyshine experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Olsher, R.H.; Hsu, Hsiao Hua; Harvey, W.F.
1993-07-01
The MCNP Monte Carlo transport code is used by the Los Alamos National Laboratory Health and Safety Division for a broad spectrum of radiation shielding calculations. One such application involves the determination of skyshine dose for a variety of photon sources. To verify the accuracy of the code, it was benchmarked with the Kansas State Univ. (KSU) photon skyshine experiment of 1977. The KSU experiment for the unshielded source geometry was simulated in great detail to include the contribution of groundshine, in-silo photon scatter, and the effect of spectral degradation in the source capsule. The standard deviation of the KSU experimental data was stated to be 7%, while the statistical uncertainty of the simulation was kept at or under 1%. The results of the simulation agreed closely with the experimental data, generally to within 6%. At distances of under 100 m from the silo, the modeling of the in-silo scatter was crucial to achieving close agreement with the experiment. Specifically, scatter off the top layer of the source cask accounted for approximately 12% of the dose at 50 m. At distances greater than 300 m, using the (60)Co line spectrum led to a dose overresponse as great as 19% at 700 m. It was necessary to use the actual source spectrum, which includes a Compton tail from photon collisions in the source capsule, to achieve close agreement with experimental data. These results highlight the importance of using Monte Carlo transport techniques to account for the nonideal features of even simple experiments.
Algorithms and physical parameters involved in the calculation of model stellar atmospheres
NASA Astrophysics Data System (ADS)
Merlo, D. C.
This contribution summarizes the Doctoral Thesis presented at the Facultad de Matemática, Astronomía y Física, Universidad Nacional de Córdoba, for the degree of PhD in Astronomy. We analyze some algorithms and physical parameters involved in the calculation of model stellar atmospheres, such as atomic partition functions, functional relations connecting gaseous and electronic pressure, molecular formation, temperature distribution, chemical compositions, Gaunt factors, atomic cross-sections and scattering sources, as well as computational codes for calculating models. Special attention is paid to the integration of the hydrostatic equation. We compare our results with those obtained by other authors, finding reasonable agreement. We make efforts to implement methods that modify the originally adopted temperature distribution in the atmosphere, in order to obtain a constant energy flux throughout. We identify limitations and correct numerical instabilities. We integrate the transfer equation by solving directly the integral equation involving the source function. As a by-product, we calculate updated atomic partition functions of the light elements. We also discuss and enumerate carefully selected formulae for the monochromatic absorption and dispersion of some atomic and molecular species. Finally, we obtain a flexible code to calculate model stellar atmospheres.
NASA Astrophysics Data System (ADS)
Pandremmenou, Katerina; Kondi, Lisimachos P.; Parsopoulos, Konstantinos E.
2012-01-01
Surveillance applications usually require high levels of video quality, resulting in high power consumption. The existence of a well-behaved scheme to balance video quality and power consumption is crucial for the system's performance. In the present work, we adopt the game-theoretic approach of Kalai-Smorodinsky Bargaining Solution (KSBS) to deal with the problem of optimal resource allocation in a multi-node wireless visual sensor network (VSN). In our setting, the Direct Sequence Code Division Multiple Access (DS-CDMA) method is used for channel access, while a cross-layer optimization design, which employs a central processing server, accounts for the overall system efficacy through all network layers. The task assigned to the central server is the communication with the nodes and the joint determination of their transmission parameters. The KSBS is applied to non-convex utility spaces, efficiently distributing the source coding rate, channel coding rate and transmission powers among the nodes. In the underlying model, the transmission powers assume continuous values, whereas the source and channel coding rates can take only discrete values. Experimental results are reported and discussed to demonstrate the merits of KSBS over competing policies.
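For a finite, possibly non-convex utility set, one common way to realize the Kalai-Smorodinsky idea is to select the feasible point that maximizes the smallest normalized gain over the disagreement point. The sketch below implements that generic rule on a toy two-node example; it is not the paper's cross-layer formulation, and the candidate utilities are made up.

```python
import numpy as np

def ks_point(utilities, disagreement):
    """Select the alternative maximizing the minimum normalized gain
    (u_i - d_i) / (ideal_i - d_i): a common discrete surrogate for the KSBS."""
    u = np.asarray(utilities, float)        # shape: (n_alternatives, n_players)
    d = np.asarray(disagreement, float)
    ideal = u.max(axis=0)                   # best achievable utility per player
    gains = (u - d) / (ideal - d)           # normalized gains, ideally in [0, 1]
    return int(np.argmax(gains.min(axis=1)))

# Toy example: three candidate transmission-parameter settings for two camera
# nodes, utilities being made-up video quality scores.
candidates = [[0.6, 0.9], [0.8, 0.7], [0.95, 0.4]]
print(ks_point(candidates, disagreement=[0.2, 0.2]))   # picks the balanced setting
```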
Context-Sensitive Ethics in School Psychology
ERIC Educational Resources Information Center
Lasser, Jon; Klose, Laurie McGarry; Robillard, Rachel
2013-01-01
Ethical codes and licensing rules provide foundational guidance for practicing school psychologists, but these sources fall short in their capacity to facilitate effective decision-making. When faced with ethical dilemmas, school psychologists can turn to decision-making models, but step-wise decision trees frequently lack the situation…
Full-Process Computer Model of Magnetron Sputter, Part I: Test Existing State-of-Art Components
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walton, C C; Gilmer, G H; Wemhoff, A P
2007-09-26
This work is part of a larger project to develop a modeling capability for magnetron sputter deposition. The process is divided into four steps: plasma transport, target sputter, neutral gas and sputtered atom transport, and film growth, shown schematically in Fig. 1. Each of these is simulated separately in this Part 1 of the project, which is jointly funded between CMLS and Engineering. The Engineering portion is the plasma modeling, in step 1. The plasma modeling was performed using the Object-Oriented Particle-In-Cell code (OOPIC) from UC Berkeley [1]. Figure 2 shows the electron density in the simulated region, using magnetic field strength input from experiments by Bohlmark [2], where a scale of 1% is used. Figures 3 and 4 depict the magnetic field components that were generated using two-dimensional linear interpolation of Bohlmark's experimental data. The goal of the overall modeling tool is to understand, and later predict, relationships between parameters of film deposition we can change (such as gas pressure, gun voltage, and target-substrate distance) and key properties of the results (such as film stress, density, and stoichiometry). The simulation must use existing codes, either open-source or low-cost, not develop new codes. In part 1 (FY07) we identified and tested the best available code for each process step, then determined if it can cover the size and time scales we need in reasonable computation times. We also had to determine if the process steps are sufficiently decoupled that they can be treated separately, and identify any research-level issues preventing practical use of these codes. Part 2 will consider whether the codes can be (or need to be) made to talk to each other and integrated into a whole.
The openEHR Java reference implementation project.
Chen, Rong; Klein, Gunnar
2007-01-01
The openEHR foundation has developed an innovative design for interoperable and future-proof Electronic Health Record (EHR) systems based on a dual model approach with a stable reference information model complemented by archetypes for specific clinical purposes. A team from Sweden has implemented all the stable specifications in the Java programming language and donated the source code to the openEHR foundation. It was adopted as the openEHR Java Reference Implementation in March 2005 and released under open source licenses. This encourages early EHR implementation projects around the world and a number of groups have already started to use this code. The early Java implementation experience has also led to the publication of the openEHR Java Implementation Technology Specification. A number of design changes to the specifications and important minor corrections have been directly initiated by the implementation project over the last two years. The Java Implementation has been important for the validation and improvement of the openEHR design specifications and provides building blocks for future EHR systems.
Simulating a transmon implementation of the surface code, Part I
NASA Astrophysics Data System (ADS)
Tarasinski, Brian; O'Brien, Thomas; Rol, Adriaan; Bultink, Niels; Dicarlo, Leo
Current experimental efforts aim to realize Surface-17, a distance-3 surface-code logical qubit, using transmon qubits in a circuit QED architecture. Following experimental proposals for this device, and currently achieved fidelities on physical qubits, we define a detailed error model that takes experimentally relevant error sources into account, such as amplitude and phase damping, imperfect gate pulses, and coherent errors due to low-frequency flux noise. Using the GPU-accelerated software package 'quantumsim', we simulate the density matrix evolution of the logical qubit under this error model. Combining the simulation results with a minimum-weight matching decoder, we obtain predictions for the error rate of the resulting logical qubit when used as a quantum memory, and estimate the contribution of different error sources to the logical error budget. Research funded by the Foundation for Fundamental Research on Matter (FOM), the Netherlands Organization for Scientific Research (NWO/OCW), IARPA, an ERC Synergy Grant, the China Scholarship Council, and Intel Corporation.
Radiolytic Model for Chemical Composition of Europa's Atmosphere and Surface
NASA Technical Reports Server (NTRS)
Cooper, John F.
2004-01-01
The overall objective of the present effort is to produce models for major and selected minor components of Europa's neutral atmosphere in 1-D versus altitude and in 2-D versus altitude and longitude or latitude. A 3-D model versus all three coordinates (alt, long, lat) will be studied, but development on this is at present limited by the computing facilities available to the investigation team. In this first year we have focused on 1-D modeling with Co-I Valery Shematovich's Direct Simulation Monte Carlo (DSMC) code for water group species (H2O, O2, O, OH) and on 2-D with Co-I Mau Wong's version of a similar code for O2, O, CO, CO2, and Na. Surface source rates of H2O and O2 from sputtering and radiolysis are used in the 1-D model, while observations of CO2 at the Europa surface and of Na detected in a neutral cloud ejected from Europa are used, along with the O2 sputtering rate, to constrain source rates in the 2-D version. With these separate approaches we are investigating a range of processes important to the eventual implementation of a comprehensive 3-D atmospheric model which could be used to understand present observations and develop science requirements for future observations, e.g. from Earth and in Europa orbit. Within the second year we expect to merge the full water group calculations into the 2-D version of the DSMC code, which can then be extended to 3-D, pending availability of computing resources. Another important goal in the second year is the inclusion of sulfur and its more volatile oxides (SO, SO2).
An Open-source Community Web Site To Support Ground-Water Model Testing
NASA Astrophysics Data System (ADS)
Kraemer, S. R.; Bakker, M.; Craig, J. R.
2007-12-01
A community wiki web site has been created as a resource to support ground-water model development and testing. The Groundwater Gourmet wiki is a repository for user-supplied analytical and numerical recipes, how-tos, and examples. Members are encouraged to submit analytical solutions, including source code and documentation. A diversity of code snippets is sought in a variety of languages, including Fortran, C, C++, Matlab, and Python. In the spirit of a wiki, all contributions may be edited and altered by other users, and open source licensing is promoted. Community-accepted contributions are graduated into the library of analytic solutions and organized into either a Strack (Groundwater Mechanics, 1989) or Bruggeman (Analytical Solutions of Geohydrological Problems, 1999) classification. The examples section of the wiki is meant to include laboratory experiments (e.g., Hele-Shaw), classical benchmark problems (e.g., the Henry problem), and controlled field experiments (e.g., the Borden landfill and Cape Cod tracer tests). Although this work was reviewed by EPA and approved for publication, it may not necessarily reflect official Agency policy. Mention of trade names or commercial products does not constitute endorsement or recommendation for use.
Kim, Daehee; Kim, Dongwan; An, Sunshin
2016-07-09
Code dissemination in wireless sensor networks (WSNs) is a procedure for distributing a new code image over the air in order to update programs. Due to the fact that WSNs are mostly deployed in unattended and hostile environments, secure code dissemination ensuring authenticity and integrity is essential. Recent works on dynamic packet size control in WSNs allow enhancing the energy efficiency of code dissemination by dynamically changing the packet size on the basis of link quality. However, the authentication tokens attached by the base station become useless in the next hop where the packet size can vary according to the link quality of the next hop. In this paper, we propose three source authentication schemes for code dissemination supporting dynamic packet size. Compared to traditional source authentication schemes such as μTESLA and digital signatures, our schemes provide secure source authentication under the environment, where the packet size changes in each hop, with smaller energy consumption.
Kim, Daehee; Kim, Dongwan; An, Sunshin
2016-01-01
Code dissemination in wireless sensor networks (WSNs) is a procedure for distributing a new code image over the air in order to update programs. Due to the fact that WSNs are mostly deployed in unattended and hostile environments, secure code dissemination ensuring authenticity and integrity is essential. Recent works on dynamic packet size control in WSNs allow enhancing the energy efficiency of code dissemination by dynamically changing the packet size on the basis of link quality. However, the authentication tokens attached by the base station become useless in the next hop where the packet size can vary according to the link quality of the next hop. In this paper, we propose three source authentication schemes for code dissemination supporting dynamic packet size. Compared to traditional source authentication schemes such as μTESLA and digital signatures, our schemes provide secure source authentication under the environment, where the packet size changes in each hop, with smaller energy consumption. PMID:27409616
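Schemes in the μTESLA family rest on one-way hash chains. The sketch below shows only that basic primitive (building a chain, publishing the final element as a commitment, and verifying a later disclosed key against it); it is a generic illustration, not any of the three hop-adaptive schemes proposed in the paper.

```python
import hashlib

def sha(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def make_chain(seed: bytes, length: int):
    """Build a one-way hash chain; the last element is the public commitment."""
    chain = [seed]
    for _ in range(length):
        chain.append(sha(chain[-1]))
    return chain

def verify(disclosed_key: bytes, commitment: bytes, max_steps: int) -> bool:
    """A key is accepted if hashing it at most max_steps times reaches the
    commitment that was distributed in advance."""
    h = disclosed_key
    for _ in range(max_steps):
        h = sha(h)
        if h == commitment:
            return True
    return False

chain = make_chain(b"base-station-secret", 10)
commitment = chain[-1]
print(verify(chain[7], commitment, max_steps=10))        # True: key from the chain
print(verify(b"forged-key", commitment, max_steps=10))   # False
```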
Advanced Pellet-Cladding Interaction Modeling using the US DOE CASL Fuel Performance Code: Peregrine
DOE Office of Scientific and Technical Information (OSTI.GOV)
Montgomery, Robert O.; Capps, Nathan A.; Sunderland, Dion J.
The US DOE’s Consortium for Advanced Simulation of LWRs (CASL) program has undertaken an effort to enhance and develop modeling and simulation tools for a virtual reactor application, including high fidelity neutronics, fluid flow/thermal hydraulics, and fuel and material behavior. The fuel performance analysis efforts aim to provide 3-dimensional capabilities for single and multiple rods to assess safety margins and the impact of plant operation and fuel rod design on the fuel thermo-mechanical-chemical behavior, including Pellet-Cladding Interaction (PCI) failures and CRUD-Induced Localized Corrosion (CILC) failures in PWRs. [1-3] The CASL fuel performance code, Peregrine, is an engineering scale code that is built upon the MOOSE/ELK/FOX computational FEM framework, which is also common to the fuel modeling framework, BISON [4,5]. Peregrine uses both 2-D and 3-D geometric fuel rod representations and contains a materials properties and fuel behavior model library for the UO2 and Zircaloy system common to PWR fuel derived from both open literature sources and the FALCON code [6]. The primary purpose of Peregrine is to accurately calculate the thermal, mechanical, and chemical processes active throughout a single fuel rod during operation in a reactor, for both steady state and off-normal conditions.
Design Aspects of the Rayleigh Convection Code
NASA Astrophysics Data System (ADS)
Featherstone, N. A.
2017-12-01
Understanding the long-term generation of planetary or stellar magnetic fields requires complementary knowledge of the large-scale fluid dynamics pervading large fractions of the object's interior. Such large-scale motions are sensitive to the system's geometry which, in planets and stars, is spherical to a good approximation. As a result, computational models designed to study such systems often solve the MHD equations in spherical geometry, frequently employing a spectral approach involving spherical harmonics. We present computational and user-interface design aspects of one such modeling tool, the Rayleigh convection code, which is suitable for deployment on desktop and petascale HPC architectures alike. In this poster, we will present an overview of this code's parallel design and its built-in diagnostics-output package. Rayleigh has been developed with NSF support through the Computational Infrastructure for Geodynamics and is expected to be released as open-source software in winter 2017/2018.
Streamlined Genome Sequence Compression using Distributed Source Coding
Wang, Shuang; Jiang, Xiaoqian; Chen, Feng; Cui, Lijuan; Cheng, Samuel
2014-01-01
We aim at developing a streamlined genome sequence compression algorithm to support alternative miniaturized sequencing devices, which have limited communication, storage, and computation power. Existing techniques that require a heavy client (encoder side) cannot be applied. To tackle this challenge, we carefully examined distributed source coding theory and developed a customized reference-based genome compression protocol to meet the low-complexity needs of the client side. Based on the variation between source and reference, our protocol adaptively picks either syndrome coding or hash coding to compress subsequences of changing code length. Our experimental results showed promising performance of the proposed method when compared with the state-of-the-art algorithm (GRS). PMID:25520552
Maintaining Quality and Confidence in Open-Source, Evolving Software: Lessons Learned with PFLOTRAN
NASA Astrophysics Data System (ADS)
Frederick, J. M.; Hammond, G. E.
2017-12-01
Software evolution in an open-source framework poses a major challenge to a geoscientific simulator, but when properly managed, the pay-off can be enormous for both the developers and the community at large. Developers must juggle implementing new scientific process models, adopting increasingly efficient numerical methods and programming paradigms, and changing funding sources (or a total lack of funding), while also ensuring that legacy code remains functional and reported bugs are fixed in a timely manner. With robust software engineering and a plan for long-term maintenance, a simulator can evolve over time, incorporating and leveraging many advances in the computational and domain sciences. In this positive light, what practices in software engineering and code maintenance can be employed within open-source development to maximize the positive aspects of software evolution and community contributions while minimizing their negative side effects? This presentation discusses steps taken in the development of PFLOTRAN (www.pflotran.org), an open source, massively parallel subsurface simulator for multiphase, multicomponent, and multiscale reactive flow and transport processes in porous media. As PFLOTRAN's user base and development team continues to grow, it has become increasingly important to implement strategies which ensure sustainable software development while maintaining software quality and community confidence. In this presentation, we will share our experiences and "lessons learned" within the context of our open-source development framework and community engagement efforts. Topics discussed will include how we have leveraged both standard software engineering principles, such as coding standards, version control, and automated testing, as well as the unique advantages of object-oriented design in process model coupling, to ensure software quality and confidence. We will also be prepared to discuss the major challenges faced by most open-source software teams, such as on-boarding new developers or one-time contributions, dealing with competitors or lookie-loos, and other downsides of complete transparency, as well as our approach to community engagement, including a user group email list, hosting short courses and workshops for new users, and maintaining a website. SAND2017-8174A
Simulation of image formation in x-ray coded aperture microscopy with polycapillary optics.
Korecki, P; Roszczynialski, T P; Sowa, K M
2015-04-06
In x-ray coded aperture microscopy with polycapillary optics (XCAMPO), the microstructure of focusing polycapillary optics is used as a coded aperture and enables depth-resolved x-ray imaging at a resolution better than the focal spot dimensions. Improvements in resolution and the development of 3D encoding procedures require a simulation model that can predict the outcome of XCAMPO experiments. In this work we introduce a model of image formation in XCAMPO which enables calculation of XCAMPO datasets for arbitrary positions of the object relative to the focal plane and incorporates optics imperfections. In the model, the exit surface of the optics is treated as a micro-structured x-ray source that illuminates a periodic object. This makes it possible to express the intensity of XCAMPO images as a convolution series and to perform simulations by means of fast Fourier transforms. For non-periodic objects, the model can be applied by enforcing artificial periodicity and setting the spatial period larger than the field of view. Simulations are verified by comparison with experimental data.
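Expressing image formation as a convolution series means each term can be evaluated with fast Fourier transforms. The generic sketch below performs a circular 2-D convolution of a stand-in aperture pattern with a toy object via numpy FFTs; it is not the authors' XCAMPO-specific series, and the arrays are arbitrary.

```python
import numpy as np

def fft_convolve2d(source_pattern, transmission):
    """Circular 2-D convolution via FFTs; both arrays must share a shape.
    (Artificial periodicity, as in the paper, makes circular convolution
    acceptable when the spatial period exceeds the field of view.)"""
    return np.real(np.fft.ifft2(np.fft.fft2(source_pattern) *
                                np.fft.fft2(transmission)))

rng = np.random.default_rng(1)
aperture = rng.random((256, 256))            # stand-in for the polycapillary pattern
obj = np.ones((256, 256)); obj[100:130, 100:130] = 0.3   # toy periodic object
image = fft_convolve2d(aperture, obj)
print(image.shape, image.mean())
```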
The radiation fields around a proton therapy facility: A comparison of Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Ottaviano, G.; Picardi, L.; Pillon, M.; Ronsivalle, C.; Sandri, S.
2014-02-01
A proton therapy test facility with a beam current lower than 10 nA on average and an energy up to 150 MeV is planned to be sited at the Frascati ENEA Research Center in Italy. The accelerator is composed of a sequence of linear sections. The first one is a commercial 7 MeV proton linac, from which the beam is injected into a SCDTL (Side Coupled Drift Tube Linac) structure reaching an energy of 52 MeV. A conventional CCL (Coupled Cavity Linac) with side coupling cavities then completes the accelerator. The linear structure has the important advantage that the main radiation losses during the acceleration process occur for protons with energy below 20 MeV, with a consequently low production of neutrons and secondary radiation. From the radiation protection point of view, the source of radiation for this facility is then almost completely located at the final target. Physical and geometrical models of the device have been developed and implemented in radiation transport computer codes based on the Monte Carlo method. The aim is the assessment of the radiation field around the main source in support of the safety analysis. For this assessment, independent researchers used two different Monte Carlo computer codes, FLUKA (FLUktuierende KAskade) and MCNPX (Monte Carlo N-Particle eXtended). Both are general purpose tools for calculations of particle transport and interactions with matter, covering an extended range of applications including proton beam analysis. Nevertheless, each one utilizes its own nuclear cross section libraries and uses specific physics models for particle types and energies. The models implemented in the codes are described and the results are presented. The differences between the two calculations are reported and discussed, pointing out the advantages and disadvantages of each code in this specific application.
Accelerated GPU based SPECT Monte Carlo simulations.
Garcia, Marie-Paule; Bert, Julien; Benoit, Didier; Bardiès, Manuel; Visvikis, Dimitris
2016-06-07
Monte Carlo (MC) modelling is widely used in the field of single photon emission computed tomography (SPECT) as it is a reliable technique to simulate very high quality scans. This technique provides very accurate modelling of the radiation transport and particle interactions in a heterogeneous medium. Various MC codes exist for nuclear medicine imaging simulations. Recently, new strategies exploiting the computing capabilities of graphical processing units (GPU) have been proposed. This work aims at evaluating the accuracy of such GPU implementation strategies in comparison to standard MC codes in the context of SPECT imaging. GATE was considered the reference MC toolkit and used to evaluate the performance of newly developed GPU Geant4-based Monte Carlo simulation (GGEMS) modules for SPECT imaging. Radioisotopes with different photon energies were used with these various CPU and GPU Geant4-based MC codes in order to assess the best strategy for each configuration. Three different isotopes were considered: (99m)Tc, (111)In and (131)I, using a low energy high resolution (LEHR) collimator, a medium energy general purpose (MEGP) collimator and a high energy general purpose (HEGP) collimator, respectively. Point source, uniform source, cylindrical phantom and anthropomorphic phantom acquisitions were simulated using a model of the GE Infinia II 3/8" gamma camera. Both simulation platforms yielded a similar system sensitivity and image statistical quality for the various combinations. The overall acceleration factor between the GATE and GGEMS platforms derived from the same cylindrical phantom acquisition was between 18 and 27 for the different radioisotopes. In addition, a full MC simulation using an anthropomorphic phantom showed the full potential of the GGEMS platform, with a resulting acceleration factor of up to 71. The good agreement with reference codes and the acceleration factors obtained support the use of GPU implementation strategies for improving the computational efficiency of SPECT imaging simulations.
The FORTRAN static source code analyzer program (SAP) system description
NASA Technical Reports Server (NTRS)
Decker, W.; Taylor, W.; Merwarth, P.; Oneill, M.; Goorevich, C.; Waligora, S.
1982-01-01
A source code analyzer program (SAP) designed to assist personnel in conducting studies of FORTRAN programs is described. SAP scans FORTRAN source code and produces reports that present statistics and measures of the statements and structures that make up a module. The processing performed by SAP and the routines, COMMON blocks, and files used by SAP are described. The system generation procedure for SAP is also presented.
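A toy version of the kind of per-module statistics such a scanner produces might look like the following; it is far simpler than SAP and the chosen keywords are only an illustrative subset.

```python
import re
from collections import Counter

STATEMENT_KEYWORDS = ("SUBROUTINE", "FUNCTION", "COMMON", "IF", "DO",
                      "GOTO", "CALL", "RETURN", "FORMAT")

def scan_fortran(source: str) -> Counter:
    """Count occurrences of a few FORTRAN statement keywords (toy version;
    SAP's real statistics are far richer)."""
    counts = Counter()
    for line in source.upper().splitlines():
        if line[:1] in ("C", "*", "!"):              # classic comment lines
            continue
        body = line[6:] if len(line) > 6 else line   # skip label/continuation columns
        for kw in STATEMENT_KEYWORDS:
            if re.search(r"\b" + kw + r"\b", body):
                counts[kw] += 1
    return counts

demo = """      SUBROUTINE DEMO(N)
      COMMON /BLK/ X(10)
      DO 10 I = 1, N
   10 CALL WORK(I)
      RETURN
      END
"""
print(scan_fortran(demo))
```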
JGrass-NewAge hydrological system: an open-source platform for the replicability of science.
NASA Astrophysics Data System (ADS)
Bancheri, Marialaura; Serafin, Francesco; Formetta, Giuseppe; Rigon, Riccardo; David, Olaf
2017-04-01
JGrass-NewAge is an open-source, semi-distributed hydrological modelling system. It is based on the Object Modeling System (OMS version 3), on the JGrasstools and on the Geotools. OMS3 allows the creation of independent packages of software which can be connected at run time into a working modelling solution. These components are available as a library/dependency or as a repository to fork in order to add further features. Different tools are adopted to ease the integration, interoperability and use of each package. Most of the components are Gradle-integrated, since Gradle represents the state of the art of build systems, especially for Java projects. Continuous integration provides a further layer between the local source code (client side) and the remote repository (server side) and ensures the building and testing of the source code at each commit. Finally, the use of Zenodo makes the code hosted on GitHub unique, citable and traceable, with a defined DOI. Following these standards, each part of the hydrological cycle is implemented in JGrass-NewAge as a component that can be selected, adopted, and connected to obtain a user-"customized" hydrological model. A variety of modelling solutions are possible, allowing a complete hydrological analysis. Moreover, thanks to the JGrasstools and the Geotools, visualization of the data and of the results in a selected GIS is possible. After the geomorphological analysis of the watershed, the spatial interpolation of the meteorological inputs can be performed using both deterministic (IDW) and geostatistical (kriging) algorithms. For the radiation balance, the shortwave and longwave radiation can be estimated, which are, in turn, inputs for the simulation of evapotranspiration according to the Priestley-Taylor and Penman-Monteith formulas. Three degree-day models are implemented for snowmelt and SWE. Runoff production can be simulated using two different components, "Adige" and "Embedded Reservoirs". The travel time theory has recently been integrated for a coupled analysis of solute transport. Finally, each component can be connected to different calibration tools such as LUCA and PSO. Further information about the actual implementation can be found at (https://github.com/geoframecomponents), while the OMS projects with the examples, data and results are available at (https://github.com/GEOframeOMSProjects).
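Of the two interpolation options mentioned above, inverse distance weighting (IDW) is simple enough to show in full. The sketch below is a generic IDW routine, not the JGrass-NewAge component itself; the gauge coordinates and values are arbitrary.

```python
import numpy as np

def idw(xy_stations, values, xy_targets, power=2.0):
    """Inverse distance weighted interpolation of station values onto targets."""
    xy_stations = np.asarray(xy_stations, float)
    values = np.asarray(values, float)
    out = []
    for p in np.asarray(xy_targets, float):
        d = np.linalg.norm(xy_stations - p, axis=1)
        if np.any(d == 0):                  # target coincides with a station
            out.append(values[np.argmin(d)])
            continue
        w = 1.0 / d**power
        out.append(np.sum(w * values) / np.sum(w))
    return np.array(out)

# Toy example: three rain gauges interpolated onto one grid cell centre.
print(idw([[0, 0], [10, 0], [0, 10]], [5.0, 7.0, 6.0], [[3, 3]]))
```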
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nutaro, James J
The purpose of this model was to facilitate the design of a control system that uses fine-grained control of residential and small commercial HVAC loads to counterbalance voltage swings caused by intermittent solar power sources (e.g., rooftop panels) installed in the distribution circuit. Included are the source code and a pre-compiled 64-bit DLL for adding building HVAC loads to an OpenDSS distribution circuit. As written, the Makefile assumes you are using the Microsoft C++ development tools.
Młynarski, Wiktor
2015-01-01
In mammalian auditory cortex, sound source position is represented by a population of broadly tuned neurons whose firing is modulated by sounds located at all positions surrounding the animal. The peaks of their tuning curves are concentrated at lateral positions, while their slopes are steepest at the interaural midline, allowing for maximum localization accuracy in that area. These experimental observations contradict initial assumptions that auditory space is represented as a topographic cortical map. It has been suggested that a "panoramic" code has evolved to match specific demands of the sound localization task. This work provides evidence suggesting that the properties of spatial auditory neurons identified experimentally follow from a general design principle: learning a sparse, efficient representation of natural stimuli. Natural binaural sounds were recorded and served as input to a hierarchical sparse-coding model. In the first layer, left- and right-ear sounds were separately encoded by a population of complex-valued basis functions which separated phase and amplitude, both parameters known to carry information relevant for spatial hearing. Monaural input converged in the second layer, which learned a joint representation of amplitude and interaural phase difference. The spatial selectivity of each second-layer unit was measured by exposing the model to natural sound sources recorded at different positions. The obtained tuning curves match well the tuning characteristics of neurons in the mammalian auditory cortex. This study connects neuronal coding of auditory space with natural stimulus statistics and generates new experimental predictions. Moreover, the results presented here suggest that cortical regions with seemingly different functions may implement the same computational strategy: efficient coding. PMID:25996373
Branson: A Mini-App for Studying Parallel IMC, Version 1.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Long, Alex
This code solves the gray thermal radiative transfer (TRT) equations in parallel using simple opacities and Cartesian meshes. Although Branson solves the TRT equations, it is not designed to model radiation transport: Branson contains simple physics and does not have a multigroup treatment, nor can it use physical material data. The opacities are simple polynomials in temperature, and there is only a limited ability to specify complex geometries and sources. Branson was designed only to capture the computational demands of production IMC codes, especially in large parallel runs. It was also intended to foster collaboration with vendors, universities, and other DOE partners. Branson is similar in character to the neutron transport proxy-app Quicksilver from LLNL, which was recently open-sourced.
Advanced capabilities for materials modelling with Quantum ESPRESSO
NASA Astrophysics Data System (ADS)
Giannozzi, P.; Andreussi, O.; Brumme, T.; Bunau, O.; Buongiorno Nardelli, M.; Calandra, M.; Car, R.; Cavazzoni, C.; Ceresoli, D.; Cococcioni, M.; Colonna, N.; Carnimeo, I.; Dal Corso, A.; de Gironcoli, S.; Delugas, P.; DiStasio, R. A., Jr.; Ferretti, A.; Floris, A.; Fratesi, G.; Fugallo, G.; Gebauer, R.; Gerstmann, U.; Giustino, F.; Gorni, T.; Jia, J.; Kawamura, M.; Ko, H.-Y.; Kokalj, A.; Küçükbenli, E.; Lazzeri, M.; Marsili, M.; Marzari, N.; Mauri, F.; Nguyen, N. L.; Nguyen, H.-V.; Otero-de-la-Roza, A.; Paulatto, L.; Poncé, S.; Rocca, D.; Sabatini, R.; Santra, B.; Schlipf, M.; Seitsonen, A. P.; Smogunov, A.; Timrov, I.; Thonhauser, T.; Umari, P.; Vast, N.; Wu, X.; Baroni, S.
2017-11-01
Quantum ESPRESSO is an integrated suite of open-source computer codes for quantum simulations of materials using state-of-the-art electronic-structure techniques, based on density-functional theory, density-functional perturbation theory, and many-body perturbation theory, within the plane-wave pseudopotential and projector-augmented-wave approaches. Quantum ESPRESSO owes its popularity to the wide variety of properties and processes it allows users to simulate, to its performance on an increasingly broad array of hardware architectures, and to a community of researchers that rely on its capabilities as a core open-source development platform to implement their ideas. In this paper we describe recent extensions and improvements, covering new methodologies and property calculators, improved parallelization, code modularization, and extended interoperability both within the distribution and with external software.
Advanced capabilities for materials modelling with Quantum ESPRESSO.
Giannozzi, P; Andreussi, O; Brumme, T; Bunau, O; Buongiorno Nardelli, M; Calandra, M; Car, R; Cavazzoni, C; Ceresoli, D; Cococcioni, M; Colonna, N; Carnimeo, I; Dal Corso, A; de Gironcoli, S; Delugas, P; DiStasio, R A; Ferretti, A; Floris, A; Fratesi, G; Fugallo, G; Gebauer, R; Gerstmann, U; Giustino, F; Gorni, T; Jia, J; Kawamura, M; Ko, H-Y; Kokalj, A; Küçükbenli, E; Lazzeri, M; Marsili, M; Marzari, N; Mauri, F; Nguyen, N L; Nguyen, H-V; Otero-de-la-Roza, A; Paulatto, L; Poncé, S; Rocca, D; Sabatini, R; Santra, B; Schlipf, M; Seitsonen, A P; Smogunov, A; Timrov, I; Thonhauser, T; Umari, P; Vast, N; Wu, X; Baroni, S
2017-10-24
Quantum ESPRESSO is an integrated suite of open-source computer codes for quantum simulations of materials using state-of-the-art electronic-structure techniques, based on density-functional theory, density-functional perturbation theory, and many-body perturbation theory, within the plane-wave pseudopotential and projector-augmented-wave approaches. Quantum ESPRESSO owes its popularity to the wide variety of properties and processes it allows users to simulate, to its performance on an increasingly broad array of hardware architectures, and to a community of researchers that rely on its capabilities as a core open-source development platform to implement their ideas. In this paper we describe recent extensions and improvements, covering new methodologies and property calculators, improved parallelization, code modularization, and extended interoperability both within the distribution and with external software.
Advanced capabilities for materials modelling with Quantum ESPRESSO.
Andreussi, Oliviero; Brumme, Thomas; Bunau, Oana; Buongiorno Nardelli, Marco; Calandra, Matteo; Car, Roberto; Cavazzoni, Carlo; Ceresoli, Davide; Cococcioni, Matteo; Colonna, Nicola; Carnimeo, Ivan; Dal Corso, Andrea; de Gironcoli, Stefano; Delugas, Pietro; DiStasio, Robert; Ferretti, Andrea; Floris, Andrea; Fratesi, Guido; Fugallo, Giorgia; Gebauer, Ralph; Gerstmann, Uwe; Giustino, Feliciano; Gorni, Tommaso; Jia, Junteng; Kawamura, Mitsuaki; Ko, Hsin-Yu; Kokalj, Anton; Küçükbenli, Emine; Lazzeri, Michele; Marsili, Margherita; Marzari, Nicola; Mauri, Francesco; Nguyen, Ngoc Linh; Nguyen, Huy-Viet; Otero-de-la-Roza, Alberto; Paulatto, Lorenzo; Poncé, Samuel; Giannozzi, Paolo; Rocca, Dario; Sabatini, Riccardo; Santra, Biswajit; Schlipf, Martin; Seitsonen, Ari Paavo; Smogunov, Alexander; Timrov, Iurii; Thonhauser, Timo; Umari, Paolo; Vast, Nathalie; Wu, Xifan; Baroni, Stefano
2017-09-27
Quantum ESPRESSO is an integrated suite of open-source computer codes for quantum simulations of materials using state-of-the-art electronic-structure techniques, based on density-functional theory, density-functional perturbation theory, and many-body perturbation theory, within the plane-wave pseudo-potential and projector-augmented-wave approaches. Quantum ESPRESSO owes its popularity to the wide variety of properties and processes it allows users to simulate, to its performance on an increasingly broad array of hardware architectures, and to a community of researchers that rely on its capabilities as a core open-source development platform to implement their ideas. In this paper we describe recent extensions and improvements, covering new methodologies and property calculators, improved parallelization, code modularization, and extended interoperability both within the distribution and with external software. © 2017 IOP Publishing Ltd.
Modeling activities on the negative-ion-based Neutral Beam Injectors of the Large Helical Device
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agostinetti, P.; Antoni, V.; Chitarin, G.
2011-09-26
At the National Institute for Fusion Science (NIFS), large-scale negative ion sources have been widely used for the Neutral Beam Injectors (NBIs) mounted on the Large Helical Device (LHD), which is the world's largest superconducting helical system. These injectors have achieved outstanding performance in terms of beam energy, negative-ion current, and optics, and represent a reference for the development of heating and current drive NBIs for ITER. In the framework of the support activities for the ITER NBIs, the PRIMA test facility, which includes an RF-driven ion source with a 100 keV accelerator (SPIDER) and a complete 1 MeV neutral beam system (MITICA), is under construction at Consorzio RFX in Padova. An experimental validation of the codes has been undertaken in order to prove the accuracy of the simulations and the soundness of the SPIDER and MITICA designs. For this purpose, the whole set of codes has been applied to the LHD NBIs in a joint activity between Consorzio RFX and NIFS, with the goal of comparing and benchmarking the codes against the experimental data. A description of these modeling activities and a discussion of the main results obtained are reported in this paper.
NASA Technical Reports Server (NTRS)
Youngblut, C.
1984-01-01
Orography and geographically fixed heat sources which force a zonally asymmetric motion field are examined. An extensive space-time spectral analysis compares the GLAS climate model (D130) response with observations. An updated version of the model (D150) showed a remarkable improvement in the simulation of the standing waves. The main differences in the model code are an improved boundary-layer flux computation and a more realistic specification of the global boundary conditions.
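Space-time spectral analyses of standing waves start from a decomposition of fields on latitude circles into zonal wavenumbers. The sketch below is a generic FFT-based illustration of that step on a synthetic wave-1 plus wave-3 pattern; it is not the GLAS analysis code, and the field values are made up.

```python
import numpy as np

def zonal_wave_amplitudes(field, n_waves=4):
    """Amplitude of the first few zonal wavenumbers of a field sampled
    around a latitude circle (1-D array over longitude)."""
    field = np.asarray(field, dtype=float)
    n = field.size
    spec = np.fft.rfft(field) / n
    # one-sided amplitudes: factor 2 for wavenumbers k >= 1
    return {k: 2.0 * np.abs(spec[k]) for k in range(1, n_waves + 1)}

lons = np.linspace(0.0, 2 * np.pi, 144, endpoint=False)
# synthetic 500 hPa height anomaly: wave-1 and wave-3 standing pattern
z = 120.0 * np.cos(lons) + 40.0 * np.cos(3 * lons + 0.5)
print(zonal_wave_amplitudes(z))
```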
NASA Astrophysics Data System (ADS)
Gao, Shanghua; Fu, Guangyu; Liu, Tai; Zhang, Guoqing
2017-03-01
Tanaka et al. (Geophys J Int 164:273-289, 2006; Geophys J Int 170:1031-1052, 2007) proposed the spherical dislocation theory (SDT) for a spherically symmetric, self-gravitating visco-elastic earth model. However, to date there have been no reports of easily adopted, widely used software that utilizes Tanaka's theory. In this study we introduce a new code to compute post-seismic deformations (PSD), including displacements as well as geoid and gravity changes, caused by a seismic source at any position. This new code is based on the above-mentioned SDT. The code consists of two parts. The first part is the numerical frame of the dislocation Green function (DGF), which contains a set of two-dimensional discrete numerical frames of DGFs on a symmetric earth model. The second part is an integration function, which performs bi-quadratic spline interpolation operations on the frame of DGFs. The inputs are the information on the seismic fault models and the information on the observation points. After the user prepares the inputs in a file with the given format, the code automatically computes the PSD. As an example, we use the new code to calculate the co-seismic displacements caused by the Tohoku-Oki Mw 9.0 earthquake. We compare the result with observations and with the result from a fully elastic SDT, and we find that the root-mean-square error between the calculated and observed results is 7.4 cm. This verifies the suitability of our new code. Finally, we discuss several issues that require attention when using the code, which should be helpful for users.
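The second part of the code is described as performing bi-quadratic spline interpolation on a two-dimensional frame of dislocation Green functions. A generic sketch of that operation, using SciPy's RectBivariateSpline with quadratic order on a synthetic placeholder table (not the released DGF frames), might look like this:

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# illustrative DGF frame: a response quantity tabulated on a
# (source depth, epicentral distance) grid; values are synthetic placeholders
depths = np.linspace(5.0, 60.0, 12)        # km
distances = np.linspace(10.0, 500.0, 50)   # km
green = np.exp(-distances[None, :] / 200.0) / (1.0 + depths[:, None] / 30.0)

# kx=ky=2 gives the bi-quadratic behaviour described in the abstract
spline = RectBivariateSpline(depths, distances, green, kx=2, ky=2)

# evaluate the Green function for an arbitrary source depth / station distance
print(spline(23.7, 137.2)[0, 0])
```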
Entropy-Based Bounds On Redundancies Of Huffman Codes
NASA Technical Reports Server (NTRS)
Smyth, Padhraic J.
1992-01-01
Report presents extension of theory of redundancy of binary prefix code of Huffman type which includes derivation of variety of bounds expressed in terms of entropy of source and size of alphabet. Recent developments yielded bounds on redundancy of Huffman code in terms of probabilities of various components in source alphabet. In practice, redundancies of optimal prefix codes often closer to 0 than to 1.
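The redundancy studied here is the gap between the expected codeword length of a Huffman code and the source entropy. The sketch below builds an optimal binary prefix code for an assumed five-symbol distribution and evaluates that gap directly; the probabilities are illustrative only.

```python
import heapq
import math

def huffman_code_lengths(probs):
    """Codeword lengths of an optimal binary prefix (Huffman) code."""
    # heap items: (probability, unique counter, {symbol: current depth})
    heap = [(p, i, {i: 0}) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    counter = len(probs)
    while len(heap) > 1:
        p1, _, d1 = heapq.heappop(heap)
        p2, _, d2 = heapq.heappop(heap)
        merged = {s: d + 1 for s, d in {**d1, **d2}.items()}  # deepen both subtrees
        heapq.heappush(heap, (p1 + p2, counter, merged))
        counter += 1
    return [heap[0][2][s] for s in range(len(probs))]

probs = [0.45, 0.25, 0.15, 0.10, 0.05]          # assumed source distribution
lengths = huffman_code_lengths(probs)
entropy = -sum(p * math.log2(p) for p in probs)
avg_len = sum(p * l for p, l in zip(probs, lengths))
print("H =", round(entropy, 4), " L =", avg_len,
      " redundancy =", round(avg_len - entropy, 4))
```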
40 CFR Appendix A to Subpart A of... - Tables
Code of Federal Regulations, 2010 CFR
2010-07-01
... phone number ✓ ✓ (6) FIPS code ✓ ✓ (7) Facility ID codes ✓ ✓ (8) Unit ID code ✓ ✓ (9) Process ID code... for Reporting on Emissions From Nonpoint Sources and Nonroad Mobile Sources, Where Required by 40 CFR... start date ✓ ✓ (3) Inventory end date ✓ ✓ (4) Contact name ✓ ✓ (5) Contact phone number ✓ ✓ (6) FIPS...
SOURCELESS STARTUP. A MACHINE CODE FOR COMPUTING LOW-SOURCE REACTOR STARTUPS
DOE Office of Scientific and Technical Information (OSTI.GOV)
MacMillan, D.B.
1960-06-01
A revision to the sourceless start-up code is presented. The code solves a system of differential equations encountered in computing the probability distribution of activity at an observed power level during reactor start-up from a very low source level. (J.R.D.)
48 CFR 252.227-7013 - Rights in technical data-Noncommercial items.
Code of Federal Regulations, 2011 CFR
2011-10-01
... causing a computer to perform a specific operation or series of operations. (3) Computer software means computer programs, source code, source code listings, object code listings, design details, algorithms... or will be developed exclusively with Government funds; (ii) Studies, analyses, test data, or similar...
48 CFR 252.227-7013 - Rights in technical data-Noncommercial items.
Code of Federal Regulations, 2012 CFR
2012-10-01
... causing a computer to perform a specific operation or series of operations. (3) Computer software means computer programs, source code, source code listings, object code listings, design details, algorithms... or will be developed exclusively with Government funds; (ii) Studies, analyses, test data, or similar...
48 CFR 252.227-7013 - Rights in technical data-Noncommercial items.
Code of Federal Regulations, 2014 CFR
2014-10-01
... causing a computer to perform a specific operation or series of operations. (3) Computer software means computer programs, source code, source code listings, object code listings, design details, algorithms... or will be developed exclusively with Government funds; (ii) Studies, analyses, test data, or similar...
48 CFR 252.227-7013 - Rights in technical data-Noncommercial items.
Code of Federal Regulations, 2010 CFR
2010-10-01
... causing a computer to perform a specific operation or series of operations. (3) Computer software means computer programs, source code, source code listings, object code listings, design details, algorithms... developed exclusively with Government funds; (ii) Studies, analyses, test data, or similar data produced for...
Multipacting studies in elliptic SRF cavities
NASA Astrophysics Data System (ADS)
Prakash, Ram; Jana, Arup Ratan; Kumar, Vinit
2017-09-01
Multipacting is a resonant process in which the number of unwanted electrons resulting from a parasitic discharge rapidly grows to a large value at some specific locations in a radio-frequency cavity. This results in a degradation of the cavity performance indicators (e.g., the quality factor Q and the maximum achievable accelerating gradient Eacc), and in the case of a superconducting radio-frequency (SRF) cavity, it leads to a quenching of superconductivity. Numerical simulations are essential to pre-empt the possibility of multipacting in SRF cavities, so that the design can be suitably refined to avoid this performance-limiting phenomenon. Readily available computer codes (e.g., FishPact, MultiPac, CST-PIC, etc.) are widely used to simulate the phenomenon of multipacting in such cases. Most of the contemporary two-dimensional (2D) codes, such as FishPact and MultiPac, are unable to detect multipacting in elliptic cavities because they use a simplistic secondary emission model, where it is assumed that all the secondary electrons are emitted with the same energy. Some three-dimensional (3D) codes such as CST-PIC, which use a more realistic secondary emission model (the Furman model) by following a probability distribution for the emission energy of secondary electrons, are able to correctly predict the occurrence of multipacting. These 3D codes, however, require large data handling and are slower than the 2D codes. In this paper, we report a detailed analysis of the multipacting phenomenon in elliptic SRF cavities and the development of a 2D code to numerically simulate this phenomenon by employing the Furman model for the secondary emission process. Since our code is 2D, it is faster than the 3D codes. It is, however, as accurate as the contemporary 3D codes since it uses the Furman model for secondary emission. We have also explored the possibility of further simplifying the Furman model, which enables us to quickly estimate the growth rate of multipacting without performing any multi-particle simulation. This methodology has been employed along with the computer code for a detailed analysis of multipacting in the βg = 0.61 and βg = 0.9, 650 MHz elliptic SRF cavities that we have recently designed for the medium- and high-energy sections of the proposed Indian Spallation Neutron Source (ISNS) project.
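Multipacting simulations hinge on a secondary-emission yield curve that depends on impact energy. As a hedged illustration (not the Furman model implementation of the paper), the sketch below evaluates a Furman-Pivi-style true-secondary yield curve with placeholder, roughly niobium-like parameters; multipacting becomes possible where the yield exceeds unity and the resonance condition on transit time is met.

```python
import numpy as np

def true_secondary_yield(E, delta_max=2.2, E_max=300.0, s=1.35):
    """Furman-Pivi-style true-secondary emission yield delta(E).

    D(x) = s*x / (s - 1 + x**s), delta = delta_max * D(E / E_max).
    Parameter values are illustrative placeholders, not the calibrated
    constants used in the paper's code.
    """
    x = np.asarray(E, dtype=float) / E_max
    return delta_max * s * x / (s - 1.0 + x ** s)

impact_energies = np.array([50.0, 150.0, 300.0, 600.0, 1200.0])  # eV
print(true_secondary_yield(impact_energies))
# multipacting growth is possible wherever delta(E) > 1 and the electron
# transit time satisfies the resonance condition with the RF period
```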
Recent Improvements of Particle and Heavy Ion Transport code System: PHITS
NASA Astrophysics Data System (ADS)
Sato, Tatsuhiko; Niita, Koji; Iwamoto, Yosuke; Hashimoto, Shintaro; Ogawa, Tatsuhiko; Furuta, Takuya; Abe, Shin-ichiro; Kai, Takeshi; Matsuda, Norihiro; Okumura, Keisuke; Kai, Tetsuya; Iwase, Hiroshi; Sihver, Lembit
2017-09-01
The Particle and Heavy Ion Transport code System, PHITS, has been developed under the collaboration of several research institutes in Japan and Europe. This system can simulate the transport of most particles with energies up to 1 TeV (per nucleon for ions) using various nuclear reaction models and data libraries. More than 2,500 registered researchers and technicians have used this system for applications such as accelerator design, radiation shielding and protection, medical physics, and space and geosciences. This paper summarizes the physics models and functions recently implemented in PHITS between versions 2.52 and 2.88, especially those related to source generation useful for simulating brachytherapy and internal exposure to radioisotopes.
NASA Astrophysics Data System (ADS)
Watanabe, Y.; Abe, S.
2014-06-01
Terrestrial neutron-induced soft errors in MOSFETs from a 65 nm down to a 25 nm design rule are analyzed by means of multi-scale Monte Carlo simulation using the PHITS-HyENEXSS code system. Nuclear reaction models implemented in the PHITS code are validated by comparisons with experimental data. From the analysis of the calculated soft error rates, it is clarified that secondary He and H ions have a major impact on soft errors as the critical charge decreases. It is also found that the high-energy component, from 10 MeV up to several hundreds of MeV, in secondary cosmic-ray neutrons is the most significant source of soft errors regardless of the design rule.
NASA Astrophysics Data System (ADS)
Schumacher, Florian; Friederich, Wolfgang; Lamara, Samir; Gutt, Phillip; Paffrath, Marcel
2015-04-01
We present a seismic full waveform inversion concept for applications ranging from seismological to engineering contexts, based on sensitivity kernels for full waveforms. The kernels are derived from Born scattering theory as the Fréchet derivatives of linearized frequency-domain full waveform data functionals, quantifying the influence of elastic earth model parameters and density on the data values. For a specific source-receiver combination, the kernel is computed from the displacement and strain field spectrum originating from the source evaluated throughout the inversion domain, as well as the Green function spectrum and its strains originating from the receiver. By storing the wavefield spectra of specific sources/receivers, they can be re-used for kernel computation for different source-receiver combinations, optimizing the total number of required forward simulations. In the iterative inversion procedure, the solution of the forward problem, the computation of sensitivity kernels, and the derivation of a model update are kept completely separate. In particular, the model description for the forward problem and the description of the inverted model update are kept independent. Hence, the resolution of the inverted model as well as the complexity of solving the forward problem can be iteratively increased (with increasing frequency content of the inverted data subset). This may regularize the overall inverse problem and optimize the computational effort of both solving the forward problem and computing the model update. The required interconnection of arbitrary unstructured volume and point grids is realized by generalized high-order integration rules and 3D unstructured interpolation methods. The model update is inferred by solving a minimization problem in a least-squares sense, resulting in Gauss-Newton convergence of the overall inversion process. The inversion method was implemented in the modularized software package ASKI (Analysis of Sensitivity and Kernel Inversion), which provides a generalized interface to arbitrary external forward modelling codes. So far, the 3D spectral-element code SPECFEM3D (Tromp, Komatitsch and Liu, 2008) and the 1D semi-analytical code GEMINI (Friederich and Dalkolmo, 1995), in both Cartesian and spherical frameworks, are supported. The creation of interfaces to further forward codes is planned in the near future. ASKI is freely available under the terms of the GPL at www.rub.de/aski. Since the independent modules of ASKI must communicate via file output/input, large storage capacities need to be conveniently accessible. Storing the complete sensitivity matrix to file, however, permits the scientist full manual control over each step in a customized procedure of sensitivity/resolution analysis and full waveform inversion. In the presentation, we will show some aspects of the theory behind the full waveform inversion method and its practical realization by the software package ASKI, as well as synthetic and real-data applications at different scales and geometries.
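The model update described is obtained by solving a least-squares minimization with Gauss-Newton convergence. The sketch below shows one damped Gauss-Newton step from a generic sensitivity-kernel matrix on a toy linear problem; it is an assumption-laden illustration of the update formula, not code from ASKI.

```python
import numpy as np

def gauss_newton_update(K, d_obs, d_syn, damping=1e-2):
    """One damped Gauss-Newton model update.

    K     : (n_data, n_model) matrix of sensitivity kernels (Frechet derivatives)
    d_obs : observed data; d_syn : synthetics for the current model
    Solves (K^T K + damping*I) dm = K^T (d_obs - d_syn).
    """
    residual = d_obs - d_syn
    lhs = K.T @ K + damping * np.eye(K.shape[1])
    rhs = K.T @ residual
    return np.linalg.solve(lhs, rhs)

rng = np.random.default_rng(0)
K = rng.normal(size=(40, 10))            # toy kernel matrix
m_true = rng.normal(size=10)
d_obs = K @ m_true                       # noise-free synthetic "observations"
m = np.zeros(10)
for _ in range(5):
    m += gauss_newton_update(K, d_obs, K @ m)
print("remaining data misfit:", np.linalg.norm(d_obs - K @ m))
```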
GIS-MODFLOW: A small open-source tool for linking GIS data to MODFLOW
NASA Astrophysics Data System (ADS)
Gossel, Wolfgang
2013-06-01
The numerical model MODFLOW (Harbaugh 2005) is an efficient and up-to-date tool for groundwater flow modelling. On the other hand, Geographic Information Systems (GIS) provide useful tools for data preparation and visualization that can also be incorporated into numerical groundwater modelling. An interface between the two would therefore be useful for many hydrogeological investigations. To date, several integrated stand-alone tools have been developed that rely on MODFLOW, MODPATH, and transport modelling tools. Simultaneously, several open-source GIS codes were developed to improve functionality and ease of use. These GIS tools can be used as pre- and post-processors of the numerical model MODFLOW via a suitable interface. Here we present GIS-MODFLOW as an open-source tool that provides a new universal interface by using the ESRI ASCII GRID data format, which can be converted into MODFLOW input data. This tool can also treat MODFLOW results. Such a combination of MODFLOW and open-source GIS opens new possibilities for making groundwater flow modelling and simulation results available to a wider circle of hydrogeologists.
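The interface is based on the ESRI ASCII GRID format, whose header-plus-values layout is well documented. A minimal reader for that format is sketched below (assuming the common six-line header); it is not the GIS-MODFLOW code, and the sample grid and the suggested MODFLOW use are illustrative.

```python
import io
import numpy as np

SAMPLE = """ncols 3
nrows 2
xllcorner 0.0
yllcorner 0.0
cellsize 100.0
NODATA_value -9999
1.0 2.0 -9999
4.0 5.0 6.0
"""

def read_esri_ascii_grid(f):
    """Read an ESRI ASCII GRID (assumes the common 6-line header)."""
    header = {}
    for _ in range(6):                         # ncols, nrows, xllcorner, ...
        key, value = f.readline().split()
        header[key.lower()] = float(value)
    data = np.loadtxt(f)                       # remaining lines are cell values
    data = np.where(data == header["nodata_value"], np.nan, data)
    return header, data.reshape(int(header["nrows"]), int(header["ncols"]))

header, grid = read_esri_ascii_grid(io.StringIO(SAMPLE))
print(header["cellsize"], grid)
# the resulting array could then be written row by row as a MODFLOW
# layer-property input (e.g., a hydraulic conductivity array)
```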
Project MANTIS: A MANTle Induction Simulator for coupling geodynamic and electromagnetic modeling
NASA Astrophysics Data System (ADS)
Weiss, C. J.
2009-12-01
A key component of testing geodynamic hypotheses resulting from 3D mantle convection simulations is the ability to easily translate the predicted physicochemical state into the model space relevant for an independent geophysical observation, such as the earth's seismic, geodetic, or electromagnetic response. In this contribution, a new parallel code for simulating low-frequency, global-scale electromagnetic induction phenomena is introduced that uses the same Earth discretization as the popular CitcomS mantle convection code. Hence, projection of the CitcomS model into the model space of electrical conductivity is greatly simplified and focuses solely on the node-to-node, physics-based relationship between these Earth parameters, without the need for "upscaling", "downscaling", averaging, or harmonizing with some other model basis such as spherical harmonics. Preliminary performance tests of the MANTIS code on shared and distributed memory parallel compute platforms show favorable scaling (>70% efficiency) for up to 500 processors. As with CitcomS, an OpenDX visualization widget (VISMAN) is also provided for 3D rendering and interactive interrogation of model results. Details of the MANTIS code will be briefly discussed here, focusing on compatibility with CitcomS modeling, as will preliminary results in which the electromagnetic response of a CitcomS model is evaluated. Figure caption: VISMAN rendering of an electrical-tomography-derived electrical conductivity model overlain by a 1x1 deg crustal conductivity map. Grey scale represents the log_10 magnitude of conductivity [S/m]. Arrows are horizontal components of a hypothetical magnetospheric source field used to electromagnetically excite the conductivity model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Joon H.; Siegel, Malcolm Dean; Arguello, Jose Guadalupe, Jr.
2011-03-01
This report describes a gap analysis performed in the process of developing the Waste Integrated Performance and Safety Codes (IPSC) in support of the U.S. Department of Energy (DOE) Office of Nuclear Energy Advanced Modeling and Simulation (NEAMS) Campaign. The goal of the Waste IPSC is to develop an integrated suite of computational modeling and simulation capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive waste storage or disposal system. The Waste IPSC will provide this simulation capability (1) for a range of disposal concepts, waste form types, engineered repository designs, and geologic settings, (2) for a range of time scales and distances, (3) with appropriate consideration of the inherent uncertainties, and (4) in accordance with rigorous verification, validation, and software quality requirements. The gap analyses documented in this report were performed during an initial gap analysis to identify candidate codes and tools to support the development and integration of the Waste IPSC, and during follow-on activities that delved into more detailed assessments of the various codes that were acquired, studied, and tested. The current Waste IPSC strategy is to acquire and integrate the necessary Waste IPSC capabilities wherever feasible, and to develop only those capabilities that cannot be acquired or suitably integrated, verified, or validated. The gap analysis indicates that significant capabilities may already exist in the existing THC codes, although no single code is able to fully account for all physical and chemical processes involved in a waste disposal system. Large gaps exist in modeling chemical processes and their couplings with other processes. The coupling of chemical processes with flow transport and mechanical deformation remains challenging. The data for extreme environments (e.g., for elevated temperature and high ionic strength media) that are needed for repository modeling are severely lacking. In addition, most of the existing reactive transport codes were developed for non-radioactive contaminants, and they need to be adapted to account for radionuclide decay and in-growth. Access to the source codes is generally limited. Because the problems of interest for the Waste IPSC are likely to result in relatively large computational models, a compact memory-usage footprint and a fast, robust solution procedure will be needed. A robust massively parallel processing (MPP) capability will also be required to provide reasonable turnaround times on the analyses that will be performed with the code. A performance assessment (PA) calculation for a waste disposal system generally requires a large number (hundreds to thousands) of model simulations to quantify the effect of model parameter uncertainties on the predicted repository performance. A set of codes for a PA calculation must be sufficiently robust and fast in terms of code execution. A PA system as a whole must be able to provide multiple alternative models for a specific set of physical/chemical processes, so that users can choose various levels of modeling complexity based on their modeling needs. This requires PA codes, preferably, to be highly modularized. Most of the existing codes have difficulties meeting these requirements.
Based on the gap analysis results, we have made the following recommendations for the code selection and code development for the NEAMS waste IPSC: (1) build fully coupled high-fidelity THCMBR codes using the existing SIERRA codes (e.g., ARIA and ADAGIO) and platform, (2) use DAKOTA to build an enhanced performance assessment system (EPAS), and build a modular code architecture and key code modules for performance assessments. The key chemical calculation modules will be built by expanding the existing CANTERA capabilities as well as by extracting useful components from other existing codes.
flexCloud: Deployment of the FLEXPART Atmospheric Transport Model as a Cloud SaaS Environment
NASA Astrophysics Data System (ADS)
Morton, Don; Arnold, Dèlia
2014-05-01
FLEXPART (FLEXible PARTicle dispersion model) is a Lagrangian transport and dispersion model used by a growing international community. We have used it to simulate and forecast the atmospheric transport of wildfire smoke, volcanic ash, and radionuclides. Additionally, FLEXPART may be run in backwards mode to provide information for the determination of emission sources such as nuclear emissions and greenhouse gases. This open source software is distributed in source code form and has several compiler and library dependencies that users need to address. Although well documented, getting it compiled, set up, running, and post-processed is often tedious, making it difficult for the inexperienced user. Our interest is in moving scientific modeling and simulation activities from site-specific clusters and supercomputers to a cloud model-as-a-service paradigm. Choosing FLEXPART for our prototyping, our vision is to construct customised IaaS images containing fully compiled and configured FLEXPART codes, including pre-processing, execution, and post-processing components. In addition, with the inclusion of a small web server in the image, we introduce a web-accessible graphical user interface that drives the system. A further initiative being pursued is the deployment of multiple, simultaneous FLEXPART ensembles in the cloud. A single front-end web interface is used to define the ensemble members, and separate cloud instances are launched, on demand, to run the individual models and to combine the outputs into a unified display. The outcome of this work is a Software as a Service (SaaS) deployment whereby the details of the underlying modeling systems are hidden, allowing modelers to perform their science activities without the burden of considering implementation details.
LDPC-based iterative joint source-channel decoding for JPEG2000.
Pu, Lingling; Wu, Zhenyu; Bilgin, Ali; Marcellin, Michael W; Vasic, Bane
2007-02-01
A framework is proposed for iterative joint source-channel decoding of JPEG2000 codestreams. At the encoder, JPEG2000 is used to perform source coding with certain error-resilience (ER) modes, and LDPC codes are used to perform channel coding. During decoding, the source decoder uses the ER modes to identify corrupt sections of the codestream and provides this information to the channel decoder. Decoding is carried out jointly in an iterative fashion. Experimental results indicate that the proposed method requires fewer iterations and improves overall system performance.
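The decoding loop described alternates between the LDPC channel decoder and the JPEG2000 source decoder, which flags corrupt codestream sections through its error-resilience modes. The sketch below is a purely schematic rendering of that control flow with toy stand-ins for both decoders; it contains no real LDPC or JPEG2000 code, and all names are placeholders.

```python
def iterative_jscd(received, channel_decode, source_check, max_iters=5):
    """Schematic iterative joint source-channel decoding loop.

    channel_decode(received, erasures) -> decoded bits
    source_check(decoded)              -> set of positions the source decoder
                                          flags as corrupt (via ER modes)
    Both callables are placeholders for real LDPC / JPEG2000 components.
    """
    erasures = set()
    decoded = channel_decode(received, erasures)
    for _ in range(max_iters):
        corrupt = source_check(decoded)
        if not corrupt or corrupt <= erasures:
            break                      # converged: no new corrupt sections
        erasures |= corrupt
        decoded = channel_decode(received, erasures)
    return decoded

# toy stand-ins so the loop runs end to end
noisy = [1, 0, 1, 1, 0, 0, 1, 0]
def channel_decode(received, erasures):
    # pretend the channel decoder "repairs" erased positions to 0
    return [0 if i in erasures else b for i, b in enumerate(received)]
def source_check(decoded):
    # pretend positions 2 and 5 fail the error-resilience checks when nonzero
    return {i for i in (2, 5) if decoded[i] != 0}

print(iterative_jscd(noisy, channel_decode, source_check))
```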
Coronal Magnetism and Forward Solarsoft Idl Package
NASA Astrophysics Data System (ADS)
Gibson, S. E.
2014-12-01
The FORWARD suite of Solar Soft IDL codes is a community resource for model-data comparison, with a particular emphasis on analyzing coronal magnetic fields. FORWARD may be used both to synthesize a broad range of coronal observables, and to access and compare to existing data. FORWARD works with numerical model datacubes, interfaces with the web-served Predictive Science Inc MAS simulation datacubes and the Solar Soft IDL Potential Field Source Surface (PFSS) package, and also includes several analytic models (more can be added). It connects to the Virtual Solar Observatory and other web-served observations to download data in a format directly comparable to model predictions. It utilizes the CHIANTI database in modeling UV/EUV lines, and links to the CLE polarimetry synthesis code for forbidden coronal lines. FORWARD enables "forward-fitting" of specific observations, and helps to build intuition into how the physical properties of coronal magnetic structures translate to observable properties.
Orfanidis, Leonidas; Bamidis, Panagiotis; Eaglestone, Barry
2006-01-01
This paper is concerned with modelling national approaches towards electronic health record systems (NEHRS) development. A model framework is produced stepwise that allows for the characterisation of the preparedness and readiness of a country to develop an NEHRS. Secondary data from published reports are considered for the creation of the model. Such sources are identified as mostly originating from a sample of five developed countries. Factors arising from these sources are identified, coded, and scaled, so as to allow for a quantitative application of the model. Instantiation of the latter for the case of the five developed countries is contrasted with a set of countries from South East Europe (SEE). The likely importance and validity of this modelling approach is discussed, using the Delphi method.
NASA Astrophysics Data System (ADS)
Hirakawa, E. T.; Ezzedine, S. M.; Petersson, A.; Sjogreen, B.; Vorobiev, O.; Pitarka, A.; Antoun, T.; Walter, W. R.
2016-12-01
Motions from underground explosions are governed by the non-linear hydrodynamic response of the material. However, the numerical calculation of this non-linear constitutive behavior is computationally intensive compared with elastic and acoustic linear wave propagation solvers. Here, we develop a hybrid modeling approach with one-way hydrodynamic-to-elastic coupling in three dimensions in order to propagate explosion-generated ground motions from the non-linear near-source region to the far field. Near-source motions are computed using GEODYN-L, a Lagrangian hydrodynamics code for high-energy loading of earth materials. Motions on a dense grid of points sampled on two nested shells located beyond the non-linear damaged zone are saved and then passed to SW4, an anelastic anisotropic fourth-order finite difference code for seismic wave modeling. Our coupling strategy is based on the decomposition and uniqueness theorems, where motions are introduced into SW4 as a boundary source and continue to propagate as elastic waves at a much lower computational cost than by using GEODYN-L to cover the entire near- and far-field domain. The accuracy of the numerical calculations and the coupling strategy is demonstrated in cases with a purely elastic medium as well as a non-linear medium. Our hybrid modeling approach is applied to SPE-4' and SPE-5, the most recent underground chemical explosions conducted at the Nevada National Security Site (NNSS), where the Source Physics Experiments (SPE) are performed. Our strategy by design is capable of incorporating complex non-linear effects near the source as well as volumetric and topographic material heterogeneity along the propagation path to the receiver, and it provides new prospects for modeling and understanding explosion-generated seismic waveforms. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-ABS-698608.
The Current Status of Behaviorism and Neurofeedback
ERIC Educational Resources Information Center
Fultz, Dwight E.
2009-01-01
There appears to be no dominant conceptual model for the process and outcomes of neurofeedback among practitioners or manufacturers. Behaviorists are well-positioned to develop a neuroscience-based source code in which neural activity is described in behavioral terms, providing a basis for behavioral conceptualization and education of…
Multispectral Terrain Background Simulation Techniques For Use In Airborne Sensor Evaluation
NASA Astrophysics Data System (ADS)
Weinberg, Michael; Wohlers, Ronald; Conant, John; Powers, Edward
1988-08-01
A background simulation code developed at Aerodyne Research, Inc., called AERIE, is designed to reflect the major sources of clutter that are of concern to staring and scanning sensors of the type being considered for various airborne threat warning applications (both aircraft and missiles). The code is a first-principles model that can be used to produce a consistent image of the terrain for various spectral bands, i.e., provide the proper scene correlation both spectrally and spatially. The code utilizes both topographic and cultural features to model terrain, typically from DMA data, with a statistical overlay of the critical underlying surface properties (reflectance, emittance, and thermal factors) to simulate the resulting texture in the scene. Strong solar scattering from water surfaces is included, with allowance for wind-driven surface roughness. Clouds can be superimposed on the scene using physical cloud models and an analytical representation of the reflectivity obtained from scattering off spherical particles. The scene generator is augmented by collateral codes that allow for the generation of images at finer resolution. These codes provide interpolation of the basic DMA databases using fractal procedures that preserve the high-frequency power spectral density behavior of the original scene. Scenes are presented illustrating variations in altitude, radiance, resolution, material, thermal factors, and emissivities. The basic models utilized for the simulation of the various scene components are presented, along with various "engineering level" approximations incorporated to reduce the computational complexity of the simulation.
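The collateral codes are said to refine DMA terrain with fractal procedures that preserve the high-frequency power-spectral-density behavior of the scene. One common way to do this, sketched below as an assumption about the general approach rather than the AERIE implementation, is spectral synthesis: upsample the coarse grid and add zero-mean detail with a power-law spectrum.

```python
import numpy as np

def fractal_refine(coarse, factor=4, beta=3.0, seed=0):
    """Upsample a terrain patch and add synthetic power-law detail.

    The added field has power ~ k**(-beta), a common fractal-terrain
    assumption; beta, factor, and the detail amplitude are illustrative.
    """
    rng = np.random.default_rng(seed)
    ny, nx = coarse.shape
    fine_shape = (ny * factor, nx * factor)

    # simple nearest-neighbour upsampling of the coarse grid
    fine = np.kron(coarse, np.ones((factor, factor)))

    # synthesize zero-mean detail with a power-law amplitude spectrum
    ky = np.fft.fftfreq(fine_shape[0])[:, None]
    kx = np.fft.fftfreq(fine_shape[1])[None, :]
    k = np.sqrt(kx ** 2 + ky ** 2)
    k[0, 0] = np.inf                                 # suppress the DC term
    amp = k ** (-beta / 2.0)
    phase = np.exp(2j * np.pi * rng.random(fine_shape))
    detail = np.real(np.fft.ifft2(amp * phase))
    detail *= 0.05 * coarse.std() / detail.std()     # small relative amplitude
    return fine + detail

coarse = np.add.outer(np.linspace(0, 100, 16), np.linspace(0, 50, 16))
print(fractal_refine(coarse).shape)
```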
Numerical simulation of the baking of porous anode carbon in a vertical flue ring furnace
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jacobsen, M.; Melaaen, M.C.
The interaction of pitch pyrolysis in porous anode carbon during heating and volatiles combustion in the flue gas channel has been analyzed to gain insight into the anode baking process. A two-dimensional geometry of a flue gas channel adjacent to a porous flue gas wall, packing coke, and an anode was used to study the effect of heating rate on temperature gradients and internal gas pressure in the anodes. The mathematical model included porous heat and mass transfer, pitch pyrolysis, combustion of volatiles, radiation, and turbulent channel flow. The mathematical model was developed through source code modification of the computational fluid dynamics code FLUENT. The model was useful for studying the effects of heating rate, geometry, and anode properties.
Modeling the Galaxy-Halo Connection: An open-source approach with Halotools
NASA Astrophysics Data System (ADS)
Hearin, Andrew
2016-03-01
Although the modern form of galaxy-halo modeling has been in place for over ten years, there exists no common code base for carrying out large-scale structure calculations. Considering, for example, the advances in CMB science made possible by Boltzmann-solvers such as CMBFast, CAMB and CLASS, there are clear precedents for how theorists working in a well-defined subfield can mutually benefit from such a code base. Motivated by these and other examples, I present Halotools: an open-source, object-oriented python package for building and testing models of the galaxy-halo connection. Halotools is community-driven, and already includes contributions from over a dozen scientists spread across numerous universities. Designed with high-speed performance in mind, the package generates mock observations of synthetic galaxy populations with sufficient speed to conduct expansive MCMC likelihood analyses over a diverse and highly customizable set of models. The package includes an automated test suite and extensive web-hosted documentation and tutorials (halotools.readthedocs.org). I conclude the talk by describing how Halotools can be used to analyze existing datasets to obtain robust and novel constraints on galaxy evolution models, and by outlining the Halotools program to prepare the field of cosmology for the arrival of Stage IV dark energy experiments.
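As a hedged illustration of the kind of ingredient a galaxy-halo model contains (not a Halotools API call), the sketch below evaluates the widely used Zheng et al. (2007) mean central-galaxy occupation as a function of halo mass; the parameter values are illustrative, not fitted constraints.

```python
import math

def mean_ncen(log10_mhalo, log10_mmin=12.0, sigma_logm=0.25):
    """Zheng et al. (2007) mean central occupation:
       <Ncen|M> = 0.5 * [1 + erf((log10 M - log10 Mmin) / sigma_logM)]
    The parameter values here are placeholders, not published constraints.
    """
    return 0.5 * (1.0 + math.erf((log10_mhalo - log10_mmin) / sigma_logm))

for logm in (11.5, 12.0, 12.5, 13.0):
    print(logm, round(mean_ncen(logm), 3))
```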
48 CFR 252.227-7013 - Rights in technical data-Noncommercial items.
Code of Federal Regulations, 2013 CFR
2013-10-01
... causing a computer to perform a specific operation or series of operations. (3) Computer software means computer programs, source code, source code listings, object code listings, design details, algorithms... funds; (ii) Studies, analyses, test data, or similar data produced for this contract, when the study...
Technical Support Document for Version 3.9.0 of the COMcheck Software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bartlett, Rosemarie; Connell, Linda M.; Gowri, Krishnan
2011-09-01
COMcheck provides an optional way to demonstrate compliance with commercial and high-rise residential building energy codes. Commercial buildings include all use groups except single family and multifamily not over three stories in height. COMcheck was originally based on ANSI/ASHRAE/IES Standard 90.1-1989 (Standard 90.1-1989) requirements and is intended for use with various codes based on Standard 90.1, including the Codification of ASHRAE/IES Standard 90.1-1989 (90.1-1989 Code) (ASHRAE 1989a, 1993b) and ASHRAE/IESNA Standard 90.1-1999 (Standard 90.1-1999). This includes jurisdictions that have adopted the 90.1-1989 Code, Standard 90.1-1989, Standard 90.1-1999, or their own code based on one of these. We view Standard 90.1-1989 and the 90.1-1989 Code as having equivalent technical content and have used both as source documents in developing COMcheck. This technical support document (TSD) is designed to explain the technical basis for the COMcheck software as originally developed based on the ANSI/ASHRAE/IES Standard 90.1-1989 (Standard 90.1-1989). Documentation for other national model codes and standards and specific state energy codes supported in COMcheck has been added to this report as appendices. These appendices are intended to provide technical documentation for features specific to the supported codes and for any changes made for state-specific codes that differ from the standard features that support compliance with the national model codes and standards. Beginning with COMcheck version 3.8.0, support for 90.1-1989, 90.1-1999, and the 1998 IECC is no longer included, but those sections remain in this document for reference purposes.
Technical Support Document for Version 3.9.1 of the COMcheck Software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bartlett, Rosemarie; Connell, Linda M.; Gowri, Krishnan
2012-09-01
COMcheck provides an optional way to demonstrate compliance with commercial and high-rise residential building energy codes. Commercial buildings include all use groups except single family and multifamily not over three stories in height. COMcheck was originally based on ANSI/ASHRAE/IES Standard 90.1-1989 (Standard 90.1-1989) requirements and is intended for use with various codes based on Standard 90.1, including the Codification of ASHRAE/IES Standard 90.1-1989 (90.1-1989 Code) (ASHRAE 1989a, 1993b) and ASHRAE/IESNA Standard 90.1-1999 (Standard 90.1-1999). This includes jurisdictions that have adopted the 90.1-1989 Code, Standard 90.1-1989, Standard 90.1-1999, or their own code based on one of these. We view Standard 90.1-1989 and the 90.1-1989 Code as having equivalent technical content and have used both as source documents in developing COMcheck. This technical support document (TSD) is designed to explain the technical basis for the COMcheck software as originally developed based on the ANSI/ASHRAE/IES Standard 90.1-1989 (Standard 90.1-1989). Documentation for other national model codes and standards and specific state energy codes supported in COMcheck has been added to this report as appendices. These appendices are intended to provide technical documentation for features specific to the supported codes and for any changes made for state-specific codes that differ from the standard features that support compliance with the national model codes and standards. Beginning with COMcheck version 3.8.0, support for 90.1-1989, 90.1-1999, and the 1998 IECC is no longer included, and beginning with version 3.9.0, support for the 2000 and 2001 IECC is no longer included; those sections remain in this document for reference purposes.
Oak Ridge Spallation Neutron Source (ORSNS) target station design integration
DOE Office of Scientific and Technical Information (OSTI.GOV)
McManamy, T.; Booth, R.; Cleaves, J.
1996-06-01
The conceptual design for a 1- to 3-MW short-pulse spallation source with a liquid mercury target has been started recently. The design tools and methods being developed to define requirements, integrate the work, and provide early cost guidance are presented, together with a summary of the current target station design status. The initial design point was selected with performance and cost estimate projections from a systems code. This code was developed recently using cost estimates from the Brookhaven Pulsed Spallation Neutron Source study and experience from the Advanced Neutron Source Project's conceptual design. It will be updated and improved as the design develops. Performance was characterized by a simplified figure of merit based on a ratio of neutron production to costs. A work breakdown structure was developed, with simplified systems diagrams used to define interfaces and system responsibilities. A risk assessment method was used to identify potential problems, to identify required research and development (R&D), and to aid contingency development. Preliminary 3-D models of the target station are being used to develop remote maintenance concepts and to estimate costs.
SolTrace | Concentrating Solar Power | NREL
SolTrace is available as an NREL packaged distribution or from source code at the SolTrace open source project website. The code uses Monte-Carlo ray-tracing methodology. With the release of the SolTrace open source project, the software has adopted …
Gamma-ray spectroscopy: The diffuse galactic glow
NASA Technical Reports Server (NTRS)
Hartmann, Dieter H.
1991-01-01
The goal of this project is the development of a numerical code that provides statistical models of the sky distribution of gamma-ray lines due to the production of radioactive isotopes by ongoing Galactic nucleosynthesis. We are particularly interested in quasi-steady emission from novae, supernovae, and stellar winds, but continuum radiation and transient sources must also be considered. We have made significant progress during the first half period of this project and expect the timely completion of a code that can be applied to Oriented Scintillation Spectrometer Experiment (OSSE) Galactic plane survey data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miyamoto, K.; Okuda, S.; Hatayama, A.
2013-01-14
To understand the physical mechanism of beam halo formation in negative ion beams, a two-dimensional particle-in-cell code for simulating the trajectories of negative ions created via surface production has been developed. The simulation code reproduces a beam halo observed in an actual negative ion beam. The negative ions extracted from the periphery of the plasma meniscus (an electrostatic lens in the source plasma) are over-focused in the extractor due to the large curvature of the meniscus.
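Trajectory tracking of extracted ions is a core ingredient of such particle-in-cell codes. The sketch below is a generic leapfrog push of negative hydrogen ions through a prescribed, uniform electric field; it illustrates the integrator only, with assumed field strength and time step, and is not the simulation code described here.

```python
import numpy as np

QE, M_H = 1.602e-19, 1.673e-27      # elementary charge [C], hydrogen mass [kg]

def push_leapfrog(x, v, e_field, q=-QE, m=M_H, dt=1e-10, steps=200):
    """Advance particle positions/velocities with the leapfrog scheme.

    x, v    : (n, 2) arrays of positions [m] and velocities [m/s]
    e_field : callable x -> (n, 2) electric field [V/m] at those positions
    """
    v = v + 0.5 * dt * (q / m) * e_field(x)        # half-step velocity kick
    for _ in range(steps):
        x = x + dt * v
        v = v + dt * (q / m) * e_field(x)
    return x, v

# assumed uniform 100 kV/m extraction field pointing along -x
uniform_E = lambda x: np.tile([-1.0e5, 0.0], (len(x), 1))
x0 = np.zeros((5, 2))
v0 = np.zeros((5, 2))
xf, vf = push_leapfrog(x0, v0, uniform_E)
print(vf[0])    # negative ions gain velocity along +x in this field
```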