Sample records for code library ascl

  1. Astrophysics Source Code Library

    NASA Astrophysics Data System (ADS)

    Allen, A.; DuPrie, K.; Berriman, B.; Hanisch, R. J.; Mink, J.; Teuben, P. J.

    2013-10-01

    The Astrophysics Source Code Library (ASCL), founded in 1999, is a free on-line registry for source codes of interest to astronomers and astrophysicists. The library is housed on the discussion forum for Astronomy Picture of the Day (APOD) and can be accessed at http://ascl.net. The ASCL has a comprehensive listing that covers a significant number of the astrophysics source codes used to generate results published in or submitted to refereed journals and continues to grow. The ASCL currently has entries for over 500 codes; its records are citable and are indexed by ADS. The editors of the ASCL and members of its Advisory Committee were on hand at a demonstration table in the ADASS poster room to present the ASCL, accept code submissions, show how the ASCL is starting to be used by the astrophysics community, and take questions on and suggestions for improving the resource.

  2. The Astrophysics Source Code Library by the numbers

    NASA Astrophysics Data System (ADS)

    Allen, Alice; Teuben, Peter; Berriman, G. Bruce; DuPrie, Kimberly; Mink, Jessica; Nemiroff, Robert; Ryan, PW; Schmidt, Judy; Shamir, Lior; Shortridge, Keith; Wallin, John; Warmels, Rein

    2018-01-01

    The Astrophysics Source Code Library (ASCL, ascl.net) was founded in 1999 by Robert Nemiroff and John Wallin. ASCL editors seek both new and old peer-reviewed papers that describe methods or experiments that involve the development or use of source code, and add entries for the found codes to the library. Software authors can submit their codes to the ASCL as well. This ensures a comprehensive listing covering a significant number of the astrophysics source codes used in peer-reviewed studies. The ASCL is indexed by both NASA’s Astrophysics Data System (ADS) and Web of Science, making software used in research more discoverable. This presentation covers the growth in the ASCL’s number of entries, the number of citations to its entries, and in which journals those citations appear. It also discusses what changes have been made to the ASCL recently, and what its plans are for the future.

  3. Practices in Code Discoverability: Astrophysics Source Code Library

    NASA Astrophysics Data System (ADS)

    Allen, A.; Teuben, P.; Nemiroff, R. J.; Shamir, L.

    2012-09-01

    Here we describe the Astrophysics Source Code Library (ASCL), which takes an active approach to sharing astrophysics source code. ASCL's editor seeks out both new and old peer-reviewed papers that describe methods or experiments that involve the development or use of source code, and adds entries for the found codes to the library. This approach ensures that source codes are added without requiring authors to actively submit them, resulting in a comprehensive listing that covers a significant number of the astrophysics source codes used in peer-reviewed studies. The ASCL now has over 340 codes and continues to grow; in 2011, it added an average of 19 codes per month. An advisory committee has been established to provide input and guide the development and expansion of the new site, and a marketing plan has been developed and is being executed. All ASCL source codes have been used to generate results published in or submitted to a refereed journal and are freely available either via a download site or from an identified source. This paper provides the history and description of the ASCL. It lists the requirements for including codes, examines the advantages of the ASCL, and outlines some of its future plans.

  4. The Astrophysics Source Code Library: An Update

    NASA Astrophysics Data System (ADS)

    Allen, Alice; Nemiroff, R. J.; Shamir, L.; Teuben, P. J.

    2012-01-01

    The Astrophysics Source Code Library (ASCL), founded in 1999, takes an active approach to sharing astrophysical source code. ASCL's editor seeks out both new and old peer-reviewed papers that describe methods or experiments that involve the development or use of source code, and adds entries for the found codes to the library. This approach ensures that source codes are added without requiring authors to actively submit them, resulting in a comprehensive listing that covers a significant number of the astrophysics source codes used in peer-reviewed studies. The ASCL moved to a new location in 2010, now contains over 300 codes, and continues to grow. In 2011, the ASCL (http://asterisk.apod.com/viewforum.php?f=35) added an average of 19 new codes per month; we encourage scientists to submit their codes for inclusion. An advisory committee has been established to provide input and guide the development and expansion of its new site, and a marketing plan has been developed and is being executed. All ASCL source codes have been used to generate results published in or submitted to a refereed journal and are freely available either via a download site or from an identified source. This presentation covers the history of the ASCL, examines its current state and benefits and the means of and requirements for including codes, and outlines its future plans.

  5. Astrophysics Source Code Library Enhancements

    NASA Astrophysics Data System (ADS)

    Hanisch, R. J.; Allen, A.; Berriman, G. B.; DuPrie, K.; Mink, J.; Nemiroff, R. J.; Schmidt, J.; Shamir, L.; Shortridge, K.; Taylor, M.; Teuben, P. J.; Wallin, J.

    2015-09-01

    The Astrophysics Source Code Library (ASCL) is a free online registry of codes used in astronomy research; it currently contains over 900 codes and is indexed by ADS. The ASCL has recently moved a new infrastructure into production. The new site provides a true database for the code entries and integrates the WordPress news and information pages and the discussion forum into one site. Previous capabilities are retained and permalinks to ascl.net continue to work. This improvement offers more functionality and flexibility than the previous site, is easier to maintain, and offers new possibilities for collaboration. This paper covers these recent changes to the ASCL.

  6. Using the Astrophysics Source Code Library

    NASA Astrophysics Data System (ADS)

    Allen, Alice; Teuben, P. J.; Berriman, G. B.; DuPrie, K.; Hanisch, R. J.; Mink, J. D.; Nemiroff, R. J.; Shamir, L.; Wallin, J. F.

    2013-01-01

    The Astrophysics Source Code Library (ASCL) is a free on-line registry of source codes that are of interest to astrophysicists; with over 500 codes, it is the largest collection of scientist-written astrophysics programs in existence. All ASCL source codes have been used to generate results published in or submitted to a refereed journal and are available either via a download site or from an identified source. An advisory committee formed in 2011 provides input and guides the development and expansion of the ASCL, and since January 2012, all accepted ASCL entries are indexed by ADS. Though software is increasingly important for the advancement of science in astrophysics, these methods are still often hidden from view or difficult to find. The ASCL (ascl.net/) seeks to improve the transparency and reproducibility of research by making these vital methods discoverable, and to provide recognition and incentive to those who write and release programs useful for astrophysics research. This poster provides a description of the ASCL, an update on recent additions, and the changes in the astrophysics community we are starting to see because of the ASCL.

  7. Top ten reasons to register your code with the Astrophysics Source Code Library

    NASA Astrophysics Data System (ADS)

    Allen, Alice; DuPrie, Kimberly; Berriman, G. Bruce; Mink, Jessica D.; Nemiroff, Robert J.; Robitaille, Thomas; Schmidt, Judy; Shamir, Lior; Shortridge, Keith; Teuben, Peter J.; Wallin, John F.; Warmels, Rein

    2017-01-01

    With 1,400 codes, the Astrophysics Source Code Library (ASCL, ascl.net) is the largest indexed resource for codes used in astronomy research in existence. This free online registry was established in 1999, is indexed by Web of Science and ADS, and is citable, with citations to its entries tracked by ADS. Registering your code with the ASCL is easy with our online submissions system. Making your software available for examination shows confidence in your research and makes your research more transparent, reproducible, and falsifiable. ASCL registration allows your software to be cited on its own merits and provides a citation that is trackable and accepted by all astronomy journals and journals such as Science and Nature. Registration also allows others to find your code more easily. This presentation covers the benefits of registering astronomy research software with the ASCL.

  8. Astrophysics Source Code Library: Incite to Cite!

    NASA Astrophysics Data System (ADS)

    DuPrie, K.; Allen, A.; Berriman, B.; Hanisch, R. J.; Mink, J.; Nemiroff, R. J.; Shamir, L.; Shortridge, K.; Taylor, M. B.; Teuben, P.; Wallin, J. F.

    2014-05-01

    The Astrophysics Source Code Library (ASCL, http://ascl.net/) is an on-line registry of over 700 source codes that are of interest to astrophysicists, with more being added regularly. The ASCL actively seeks out codes as well as accepting submissions from the code authors, and all entries are citable and indexed by ADS. All codes have been used to generate results published in or submitted to a refereed journal and are available either via a download site or from an identified source. In addition to being the largest directory of scientist-written astrophysics programs available, the ASCL is also an active participant in the reproducible research movement with presentations at various conferences, numerous blog posts and a journal article. This poster provides a description of the ASCL and the changes that we are starting to see in the astrophysics community as a result of the work we are doing.

  9. The Astrophysics Source Code Library: Supporting software publication and citation

    NASA Astrophysics Data System (ADS)

    Allen, Alice; Teuben, Peter

    2018-01-01

    The Astrophysics Source Code Library (ASCL, ascl.net), established in 1999, is a free online registry for source codes used in research that has appeared in, or been submitted to, peer-reviewed publications. The ASCL is indexed by the SAO/NASA Astrophysics Data System (ADS) and Web of Science and is citable by using the unique ascl ID assigned to each code. In addition to registering codes, the ASCL can house archive files for download and assign them DOIs. The ASCL advocates for software citation on par with article citation, participates in multidisciplinary events such as Force11, OpenCon, and the annual Workshop on Sustainable Software for Science, works with journal publishers, and organizes Special Sessions and Birds of a Feather meetings at national and international conferences such as Astronomical Data Analysis Software and Systems (ADASS), European Week of Astronomy and Space Science, and AAS meetings. In this presentation, I will discuss some of the challenges of gathering credit for publishing software and ideas and efforts from other disciplines that may be useful to astronomy.

  10. Making your code citable with the Astrophysics Source Code Library

    NASA Astrophysics Data System (ADS)

    Allen, Alice; DuPrie, Kimberly; Schmidt, Judy; Berriman, G. Bruce; Hanisch, Robert J.; Mink, Jessica D.; Nemiroff, Robert J.; Shamir, Lior; Shortridge, Keith; Taylor, Mark B.; Teuben, Peter J.; Wallin, John F.

    2016-01-01

    The Astrophysics Source Code Library (ASCL, ascl.net) is a free online registry of codes used in astronomy research. With nearly 1,200 codes, it is the largest indexed resource for astronomy codes in existence. Established in 1999, it offers software authors a path to citation of their research codes even without publication of a paper describing the software, and offers scientists a way to find codes used in refereed publications, thus improving the transparency of the research. It also provides a method to quantify the impact of source codes in a fashion similar to the science metrics of journal articles. Citations using ASCL IDs are accepted by major astronomy journals and if formatted properly are tracked by ADS and other indexing services. The number of citations to ASCL entries increased sharply from 110 citations in January 2014 to 456 citations in September 2015. The percentage of code entries in ASCL that were cited at least once rose from 7.5% in January 2014 to 17.4% in September 2015. The ASCL's mid-2014 infrastructure upgrade added an easy entry submission form, more flexible browsing, search capabilities, and an RSS feeder for updates. A Changes/Additions form added this past fall lets authors submit links for papers that use their codes for addition to the ASCL entry even if those papers don't formally cite the codes, thus increasing the transparency of that research and capturing the value of their software to the community.

  11. Astrophysics Source Code Library -- Now even better!

    NASA Astrophysics Data System (ADS)

    Allen, Alice; Schmidt, Judy; Berriman, Bruce; DuPrie, Kimberly; Hanisch, Robert J.; Mink, Jessica D.; Nemiroff, Robert J.; Shamir, Lior; Shortridge, Keith; Taylor, Mark B.; Teuben, Peter J.; Wallin, John F.

    2015-01-01

    The Astrophysics Source Code Library (ASCL, ascl.net) is a free online registry of codes used in astronomy research. Indexed by ADS, it now contains nearly 1,000 codes and with recent major changes, is better than ever! The resource has a new infrastructure that offers greater flexibility and functionality for users, including an easier submission process, better browsing, one-click author search, and an RSS feeder for news. The new database structure is easier to maintain and offers new possibilities for collaboration. Come see what we've done!

  12. Astronomy education and the Astrophysics Source Code Library

    NASA Astrophysics Data System (ADS)

    Allen, Alice; Nemiroff, Robert J.

    2016-01-01

    The Astrophysics Source Code Library (ASCL) is an online registry of source codes used in refereed astrophysics research. It currently lists nearly 1,200 codes and covers all aspects of computational astrophysics. How can this resource be of use to educators and to the graduate students they mentor? The ASCL serves as a discovery tool for codes that can be used for one's own research. Graduate students can also investigate existing codes to see how common astronomical problems are approached numerically in practice, and use these codes as benchmarks for their own solutions to these problems. Further, they can deepen their knowledge of software practices and techniques through examination of others' codes.

  13. You've Written a Cool Astronomy Code! Now What Do You Do with It?

    NASA Astrophysics Data System (ADS)

    Allen, Alice; Accomazzi, A.; Berriman, G. B.; DuPrie, K.; Hanisch, R. J.; Mink, J. D.; Nemiroff, R. J.; Shamir, L.; Shortridge, K.; Taylor, M. B.; Teuben, P. J.; Wallin, J. F.

    2014-01-01

    Now that you've written a useful astronomy code for your soon-to-be-published research, you have to figure out what you want to do with it. Our suggestion? Share it! This presentation highlights the means and benefits of sharing your code. Make your code citable -- submit it to the Astrophysics Source Code Library and have it indexed by ADS! The Astrophysics Source Code Library (ASCL) is a free online registry of source codes of interest to astronomers and astrophysicists. With over 700 codes, it is continuing its rapid growth, with an average of 17 new codes a month. The editors seek out codes for inclusion; indexing by ADS improves the discoverability of codes and provides a way to cite codes as separate entries, especially codes without papers that describe them.

  14. AST: World Coordinate Systems in Astronomy

    NASA Astrophysics Data System (ADS)

    Berry, David S.; Warren-Smith, Rodney F.

    2014-04-01

    The AST library provides a comprehensive range of facilities for attaching world coordinate systems to astronomical data, for retrieving and interpreting that information in a variety of formats, including FITS-WCS, and for generating graphical output based on it. Core projection algorithms are provided by WCSLIB (ascl:1108.003) and astrometry is provided by the PAL (ascl:1606.002) and SOFA (ascl:1403.026) libraries. AST bindings are available in Python (pyast), Java (JNIAST) and Perl (Starlink::AST). AST is used as the plotting and astrometry library in DS9 and GAIA, and is distributed separately and as part of the Starlink software collection.
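
    For illustration, the snippet below performs the core task this entry describes, interpreting FITS-WCS headers and converting pixel positions to sky coordinates, but does so with Astropy's WCS module rather than AST's own pyast binding; the file name is a placeholder and a 2D image with standard WCS keywords is assumed.

        # Pixel-to-sky conversion from FITS-WCS keywords (Astropy, not AST's API).
        from astropy.io import fits
        from astropy.wcs import WCS

        with fits.open("image.fits") as hdul:      # placeholder file name
            wcs = WCS(hdul[0].header)              # parse the FITS-WCS keywords

        # Convert a pixel position to sky coordinates (RA, Dec in degrees).
        ra, dec = wcs.wcs_pix2world([[100.0, 200.0]], 0)[0]
        print(ra, dec)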

  15. Bring out your codes! Bring out your codes! (Increasing Software Visibility and Re-use)

    NASA Astrophysics Data System (ADS)

    Allen, A.; Berriman, B.; Brunner, R.; Burger, D.; DuPrie, K.; Hanisch, R. J.; Mann, R.; Mink, J.; Sandin, C.; Shortridge, K.; Teuben, P.

    2013-10-01

    Progress is being made in code discoverability and preservation, but as discussed at ADASS XXI, many codes still remain hidden from public view. With the Astrophysics Source Code Library (ASCL) now indexed by the SAO/NASA Astrophysics Data System (ADS), the introduction of a new journal, Astronomy & Computing, focused on astrophysics software, and the increasing success of education efforts such as Software Carpentry and SciCoder, the community has the opportunity to set a higher standard for its science by encouraging the release of software for examination and possible reuse. We assembled representatives of the community to present issues inhibiting code release and sought suggestions for tackling these factors. The session began with brief statements by panelists; the floor was then opened for discussion and ideas. Comments covered a diverse range of related topics and points of view, with apparent support for the propositions that algorithms should be readily available, code used to produce published scientific results should be made available, and there should be discovery mechanisms to allow these to be found easily. With increased use of resources such as GitHub (for code availability), ASCL (for code discovery), and a stated strong preference from the new journal Astronomy & Computing for code release, we expect to see additional progress over the next few years.

  16. AGAMA: Action-based galaxy modeling framework

    NASA Astrophysics Data System (ADS)

    Vasiliev, Eugene

    2018-05-01

    The AGAMA library models galaxies. It computes gravitational potential and forces, performs orbit integration and analysis, and can convert between position/velocity and action/angle coordinates. It offers a framework for finding best-fit parameters of a model from data and self-consistent multi-component galaxy models, and contains useful auxiliary utilities such as various mathematical routines. The core of the library is written in C++, and there are Python and Fortran interfaces. AGAMA may be used as a plugin for the stellar-dynamical software packages galpy (ascl:1411.008), AMUSE (ascl:1107.007), and NEMO (ascl:1010.051).
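
    As a sketch of the workflow the AGAMA Python interface supports, the snippet below builds a simple potential and converts a phase-space point to actions; the constructor keywords and method names are assumptions based on AGAMA's documentation and should be checked against the package docs.

        # Hedged sketch of AGAMA's Python interface (names assumed from its docs).
        import agama

        # A Plummer potential with unit mass and scale radius.
        pot = agama.Potential(type="Plummer", mass=1.0, scaleRadius=1.0)
        print(pot.potential([1.0, 0.0, 0.0]))      # potential at (x, y, z)

        # Action finder: maps (x, y, z, vx, vy, vz) to actions (Jr, Jz, Jphi).
        af = agama.ActionFinder(pot)
        print(af([1.0, 0.0, 0.0, 0.0, 0.5, 0.0]))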

  17. VisiOmatic: Celestial image viewer

    NASA Astrophysics Data System (ADS)

    Bertin, Emmanuel; Marmo, Chiara; Pillay, Ruven

    2014-08-01

    VisiOmatic is a web client for IIPImage (ascl:1408.009) and is used to visualize and navigate through large science images from remote locations. It requires STIFF (ascl:1110.006), is based on the Leaflet Javascript library, and works on both touch-based and mouse-based devices.

  18. AstroBlend: Visualization package for use with Blender

    NASA Astrophysics Data System (ADS)

    Naiman, J. P.

    2015-12-01

    AstroBlend is a visualization package for use in the three dimensional animation and modeling software, Blender. It reads data in via a text file or can use pre-fab isosurface files stored as OBJ or Wavefront files. AstroBlend supports a variety of codes such as FLASH (ascl:1010.082), Enzo (ascl:1010.072), and Athena (ascl:1010.014), and combines artistic 3D models with computational astrophysics datasets to create models and animations.

  19. TOASTing Your Images With Montage

    NASA Astrophysics Data System (ADS)

    Berriman, G. Bruce; Good, John

    2017-01-01

    The Montage image mosaic engine is a scalable toolkit for creating science-grade mosaics of FITS files, according to the user's specifications of coordinates, projection, sampling, and image rotation. It is written in ANSI-C and runs on all common *nix-based platforms. The code is freely available and is released with a BSD 3-clause license. Version 5 is a major upgrade to Montage, and provides support for creating images that can be consumed by the World Wide Telescope (WWT). Montage treats the TOAST sky tessellation scheme, used by the WWT, as a spherical projection like those in the WCStools library. Thus images in any projection can be converted to the TOAST projection by Montage’s reprojection services. These reprojections can be performed at scale on high-performance platforms and on desktops. WWT consumes PNG or JPEG files, organized according to WWT’s tiling and naming scheme. Montage therefore provides a set of dedicated modules to create the required files from FITS images that contain the TOAST projection. There are two other major features of Version 5: it supports processing of HEALPix files to any projection in the WCStools library, and it can be built as a library that can be called from other languages, primarily Python. Website: http://montage.ipac.caltech.edu. GitHub download page: https://github.com/Caltech-IPAC/Montage. ASCL record: ascl:1010.036. DOI: dx.doi.org/10.5281/zenodo.49418. Montage is funded by the National Science Foundation under Grant Number ACI-1440620.
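
    One commonly used way to drive Montage from Python is the separate montage-wrapper package, which shells out to the Montage command-line modules; the sketch below assumes that package and an installed Montage toolkit, and is not the Version 5 library interface mentioned in the entry. Directory names are placeholders.

        # Hedged sketch: build a mosaic by calling Montage through montage-wrapper.
        import montage_wrapper as montage

        # Reproject and co-add all FITS images found in raw_dir into one mosaic.
        montage.mosaic("raw_dir", "mosaic_dir")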

  20. A Positive Regulatory Loop between a Wnt-Regulated Non-coding RNA and ASCL2 Controls Intestinal Stem Cell Fate.

    PubMed

    Giakountis, Antonis; Moulos, Panagiotis; Zarkou, Vasiliki; Oikonomou, Christina; Harokopos, Vaggelis; Hatzigeorgiou, Artemis G; Reczko, Martin; Hatzis, Pantelis

    2016-06-21

    The canonical Wnt pathway plays a central role in stem cell maintenance, differentiation, and proliferation in the intestinal epithelium. Constitutive, aberrant activity of the TCF4/β-catenin transcriptional complex is the primary transforming factor in colorectal cancer. We identify a nuclear long non-coding RNA, termed WiNTRLINC1, as a direct target of TCF4/β-catenin in colorectal cancer cells. WiNTRLINC1 positively regulates the expression of its genomic neighbor ASCL2, a transcription factor that controls intestinal stem cell fate. WiNTRLINC1 interacts with TCF4/β-catenin to mediate the juxtaposition of its promoter with the regulatory regions of ASCL2. ASCL2, in turn, regulates WiNTRLINC1 transcriptionally, closing a feedforward regulatory loop that controls stem cell-related gene expression. This regulatory circuitry is highly amplified in colorectal cancer and correlates with increased metastatic potential and decreased patient survival. Our results uncover the interplay between non-coding RNA-mediated regulation and Wnt signaling and point to the diagnostic and therapeutic potential of WiNTRLINC1. Copyright © 2016 The Author(s). Published by Elsevier Inc. All rights reserved.

  1. 3D-PDR: Three-dimensional photodissociation region code

    NASA Astrophysics Data System (ADS)

    Bisbas, T. G.; Bell, T. A.; Viti, S.; Yates, J.; Barlow, M. J.

    2018-03-01

    3D-PDR is a three-dimensional photodissociation region code written in Fortran. It uses the Sundials package (written in C) to solve the set of ordinary differential equations and is the successor of the one-dimensional PDR code UCL_PDR (ascl:1303.004). Using the HEALPix ray-tracing scheme (ascl:1107.018), 3D-PDR applies a three-dimensional escape probability routine and evaluates the attenuation of the far-ultraviolet radiation in the PDR and the propagation of FIR/submm emission lines out of the PDR. The code is parallelized (OpenMP) and can be applied to 1D and 3D problems.
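
    As a conceptual illustration of the HEALPix-based ray tracing mentioned above (not 3D-PDR's own Fortran code), the snippet below uses healpy to generate the evenly distributed set of ray directions that such a scheme shoots from each grid point.

        # Generate HEALPix ray directions with healpy (conceptual sketch).
        import numpy as np
        import healpy as hp

        nside = 2                                     # HEALPix resolution parameter
        npix = hp.nside2npix(nside)                   # 12 * nside**2 = 48 rays
        x, y, z = hp.pix2vec(nside, np.arange(npix))  # unit vector per pixel centre
        rays = np.column_stack([x, y, z])
        print(rays.shape)                             # (48, 3)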

  2. Shadowfax: Moving mesh hydrodynamical integration code

    NASA Astrophysics Data System (ADS)

    Vandenbroucke, Bert

    2016-05-01

    Shadowfax simulates galaxy evolution. Written in object-oriented modular C++, it evolves a mixture of gas, subject to the laws of hydrodynamics and gravity, and any collisionless fluid only subject to gravity, such as cold dark matter or stars. For the hydrodynamical integration, it makes use of a (co-) moving Lagrangian mesh. The code has a 2D and 3D version, contains utility programs to generate initial conditions and visualize simulation snapshots, and its input/output is compatible with a number of other simulation codes, e.g. Gadget2 (ascl:0003.001) and GIZMO (ascl:1410.003).

  3. libprofit: Image creation from luminosity profiles

    NASA Astrophysics Data System (ADS)

    Robotham, A. S. G.; Taranu, D.; Tobar, R.

    2016-12-01

    libprofit is a C++ library for image creation based on different luminosity profiles. It offers fast and accurate two-dimensional integration for a useful number of profiles, including Sersic, Core-Sersic, broken-exponential, Ferrer, Moffat, empirical King, point-source and sky, with a simple mechanism for adding new profiles. libprofit provides a utility to read the model and profile parameters from the command-line and generate the corresponding image. It can output the resulting image as text values, a binary stream, or as a simple FITS file. It also provides a shared library exposing an API that can be used by any third-party application. R and Python interfaces are available: ProFit (ascl:1612.004) and PyProfit (ascl:1612.005).
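
    To make "image creation from a luminosity profile" concrete, the sketch below evaluates a circular Sersic profile on a pixel grid with NumPy; it is a conceptual illustration only, not libprofit's C++/R/Python API, and it uses the common b_n ~ 2n - 1/3 approximation.

        # Conceptual sketch: render a circular Sersic profile as an image.
        import numpy as np

        def sersic_image(size=101, n=4.0, r_e=10.0, I_e=1.0):
            """Return a size x size image of a circular Sersic profile."""
            b_n = 2.0 * n - 1.0 / 3.0                 # standard approximation to b_n
            yy, xx = np.mgrid[:size, :size]
            r = np.hypot(xx - size // 2, yy - size // 2)
            return I_e * np.exp(-b_n * ((r / r_e) ** (1.0 / n) - 1.0))

        img = sersic_image()
        print(img.sum())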

  4. GIZMO: Multi-method magneto-hydrodynamics+gravity code

    NASA Astrophysics Data System (ADS)

    Hopkins, Philip F.

    2014-10-01

    GIZMO is a flexible, multi-method magneto-hydrodynamics+gravity code that solves the hydrodynamic equations using a variety of different methods. It introduces new Lagrangian Godunov-type methods that allow solving the fluid equations with a moving particle distribution that is automatically adaptive in resolution and avoids the advection errors, angular momentum conservation errors, and excessive diffusion problems that seriously limit the applicability of “adaptive mesh” (AMR) codes, while simultaneously avoiding the low-order errors inherent to simpler methods like smoothed-particle hydrodynamics (SPH). GIZMO also allows the use of SPH either in “traditional” form or “modern” (more accurate) forms, or use of a mesh. Self-gravity is solved quickly with a BH-Tree (optionally a hybrid PM-Tree for periodic boundaries) and on-the-fly adaptive gravitational softenings. The code is descended from P-GADGET, itself descended from GADGET-2 (ascl:0003.001), and many of the naming conventions remain (for the sake of compatibility with the large library of GADGET work and analysis software).

  5. BinMag: Widget for comparing stellar observed with theoretical spectra

    NASA Astrophysics Data System (ADS)

    Kochukhov, O.

    2018-05-01

    BinMag examines theoretical stellar spectra computed with the Synth/SynthMag/Synmast/Synth3/SME spectrum synthesis codes and compares them to observations. An IDL widget program, BinMag applies radial velocity shift and broadening to the theoretical spectra to account for the effects of stellar rotation, radial-tangential macroturbulence, and instrumental smearing. The code can also simulate spectra of spectroscopic binary stars by appropriate coaddition of two synthetic spectra. Additionally, BinMag can be used to measure equivalent widths, fit line profile shapes with analytical functions, and automatically determine radial velocity and broadening parameters. BinMag interfaces with the Synth3 (ascl:1212.010) and SME (ascl:1202.013) codes, allowing the user to determine chemical abundances and stellar atmospheric parameters from the observed spectra.

  6. galstreams: Milky Way streams footprint library and toolkit

    NASA Astrophysics Data System (ADS)

    Mateu, Cecilia

    2017-11-01

    galstreams provides a compilation of spatial information for known stellar streams and overdensities in the Milky Way and includes Python tools for visualizing them. ASCII tables are also provided for quick viewing of the stream's footprints using TOPCAT (ascl:1101.010).

  7. Libpsht: Algorithms for Efficient Spherical Harmonic Transforms

    NASA Astrophysics Data System (ADS)

    Reinecke, Martin

    2010-10-01

    Libpsht (or "library for Performing Spherical Harmonic Transforms") is a collection of algorithms for efficient conversion between spatial-domain and spectral-domain representations of data defined on the sphere. The package supports transforms of scalars as well as spin-1 and spin-2 quantities, and can be used for a wide range of pixelisations (including HEALPix, GLESP and ECP). It will take advantage of hardware features like multiple processor cores and floating-point vector operations, if available. Even without this additional acceleration, the employed algorithms are among the most efficient (in terms of CPU time as well as memory consumption) currently being used in the astronomical community. The library is written in strictly standard-conforming C90, ensuring portability to many different hardware and software platforms, and allowing straightforward integration with codes written in various programming languages like C, C++, Fortran, Python etc. Libpsht is distributed under the terms of the GNU General Public License (GPL) version 2. Development on this project has ended; its successor is libsharp (ascl:1402.033).
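
    The snippet below illustrates the spatial-to-spectral transforms this entry describes, using healpy, which is built on libsharp, the successor to libpsht named above; the input map is random toy data.

        # Spherical harmonic analysis and synthesis of a HEALPix map via healpy.
        import numpy as np
        import healpy as hp

        nside, lmax = 64, 128
        m = np.random.standard_normal(hp.nside2npix(nside))   # toy scalar map

        alm = hp.map2alm(m, lmax=lmax)    # spatial -> spectral coefficients
        m_back = hp.alm2map(alm, nside)   # spectral -> spatial map
        print(alm.shape, m_back.shape)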

  8. VizieR Online Data Catalog: 33 RR Lyrae observed in Pisces with K2-E2 (Molnar+, 2015)

    NASA Astrophysics Data System (ADS)

    Molnar, L.; Szabo, R.; Moskalik, P. A.; Nemec, J. M.; Guggenberger, E.; Smolec, R.; Poleski, R.; Plachy, E.; Kolenberg, K.; Kollath, Z.

    2016-03-01

    Kepler observed a stellar field around the vernal equinox point in Pisces (centre coordinates: RA=359°, DE=-2°) between 2014 February 04 and 13. The primary goal of this K2 Two-Wheel Concept Engineering Test (hereafter K2-E2) was to test the performance of the telescope in fine guidance mode. As well, the observations of nearly 2000 targets were made available for the scientific community. We identified 33 potential RR Lyrae stars in the K2-E2 sample and extracted their photometric data with the pyke software, developed for the Kepler mission by the Kepler Guest Observer Office (Still & Barclay, 2012, Astrophysics Source Code Library record ascl:1208.004). (6 data files).

  9. nanopipe: Calibration and data reduction pipeline for pulsar timing

    NASA Astrophysics Data System (ADS)

    Demorest, Paul B.

    2018-03-01

    nanopipe is a data reduction pipeline for calibration, RFI removal, and pulse time-of-arrival measurement from radio pulsar data. It was developed primarily for use by the NANOGrav project. nanopipe is written in Python, and depends on the PSRCHIVE (ascl:1105.014) library.
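
    As a minimal sketch of the PSRCHIVE Python bindings that nanopipe builds on, the snippet below loads a folded pulsar archive and pulls out its data array; the file name is a placeholder and the call names are assumptions based on PSRCHIVE's documented Python interface.

        # Hedged sketch of loading pulsar data with the PSRCHIVE Python bindings.
        import psrchive

        arch = psrchive.Archive_load("pulsar_obs.ar")   # placeholder file name
        arch.dedisperse()                               # remove dispersive delays
        arch.remove_baseline()                          # subtract off-pulse baseline
        data = arch.get_data()                          # (subint, pol, chan, bin) array
        print(data.shape)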

  10. LSDCat: Detection and cataloguing of emission-line sources in integral-field spectroscopy datacubes

    NASA Astrophysics Data System (ADS)

    Herenz, Edmund Christian; Wisotzki, Lutz

    2017-06-01

    We present a robust, efficient, and user-friendly algorithm for detecting faint emission-line sources in large integral-field spectroscopic datacubes together with the public release of the software package Line Source Detection and Cataloguing (LSDCat). LSDCat uses a three-dimensional matched filter approach, combined with thresholding in signal-to-noise, to build a catalogue of individual line detections. In a second pass, the detected lines are grouped into distinct objects, and positions, spatial extents, and fluxes of the detected lines are determined. LSDCat requires only a small number of input parameters, and we provide guidelines for choosing appropriate values. The software is coded in Python and capable of processing very large datacubes in a short time. We verify the implementation with a source insertion and recovery experiment utilising a real datacube taken with the MUSE instrument at the ESO Very Large Telescope. The LSDCat software is available for download at http://muse-vlt.eu/science/tools and via the Astrophysics Source Code Library at http://ascl.net/1612.002
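
    The sketch below illustrates the matched-filter-plus-thresholding idea described in the abstract on a toy datacube; it is a conceptual example using SciPy, not LSDCat's implementation, and the noise normalisation is deliberately simplistic.

        # Conceptual 3D matched filter and S/N threshold on a toy datacube.
        import numpy as np
        from scipy.ndimage import gaussian_filter

        cube = np.random.standard_normal((200, 50, 50))   # (wavelength, y, x) voxels

        # Cross-correlate with a 3D Gaussian emission-line template
        # (sigma in voxels: spectral, spatial y, spatial x).
        filtered = gaussian_filter(cube, sigma=(2.0, 1.5, 1.5))

        # Rough S/N normalisation for this toy example, then thresholding.
        snr = filtered / filtered.std()
        detections = np.argwhere(snr > 5.0)
        print(detections.shape)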

  11. rfpipe: Radio interferometric transient search pipeline

    NASA Astrophysics Data System (ADS)

    Law, Casey J.

    2017-10-01

    rfpipe supports Python-based analysis of radio interferometric data (especially from the Very Large Array) and searches for fast radio transients. It extends the rtpipe library (ascl:1706.002) with new approaches to parallelization and acceleration, and with more portable data products. rfpipe can run in standalone mode or be run in a cluster environment.

  12. R-spondin1/Wnt-enhanced Ascl2 autoregulation controls the self-renewal of colorectal cancer progenitor cells.

    PubMed

    Ye, Jun; Liu, Shanxi; Shang, Yangyang; Chen, Haoyuan; Wang, Rongquan

    2018-06-25

    The Wnt signaling pathway controls stem cell identity in the intestinal epithelium and cancer stem cells (CSCs). The transcription factor Ascl2 (a Wnt target gene) is the fate decider of intestinal crypt stem cells and colon cancer stem cells. It is unclear how Wnt signaling is translated into Ascl2 expression and into maintenance of the self-renewal of CRC progenitor cells. We showed that exogenous Ascl2 in colorectal cancer (CRC) cells activated endogenous Ascl2 expression via a direct autoactivatory loop, including Ascl2 binding to its own promoter and further transcriptional activation. Higher Ascl2 expression in human CRC cancerous tissues led to greater enrichment of Ascl2-immunoprecipitated DNA within the Ascl2 promoter in the cancerous sample than in the peri-cancerous mucosa. Ascl2 binding to its own promoter and inducing further transcriptional activation of the Ascl2 gene was predominant in the CD133+CD44+ CRC population. R-spondin1/Wnt activated Ascl2 expression dose-dependently in the CD133+CD44+ CRC population, but not in the CD133-CD44- CRC population, which was caused by differences in Ascl2 autoregulation under R-spondin1/Wnt activation. R-spondin1/Wnt treatment of the CD133+CD44+ or CD133-CD44- CRC populations exerted a different pattern of stemness maintenance, which was defined by alterations of the mRNA levels of stemness-associated genes, the protein expression levels (Bmi1, C-myc, Oct-4 and Nanog) and tumorsphere formation. The results indicated that Ascl2 autoregulation formed a transcriptional switch that was enhanced by Wnt signaling in the CD133+CD44+ CRC population, thus conferring their self-renewal.

  13. What to do with a Dead Research Code

    NASA Astrophysics Data System (ADS)

    Nemiroff, Robert J.

    2016-01-01

    The project has ended -- should all of the computer codes that enabled the project be deleted? No. Like research papers, research codes typically carry valuable information past project end dates. Several possible end states to the life of research codes are reviewed. Historically, codes are typically left dormant on an increasingly obscure local disk directory until forgotten. These codes will likely become any or all of: lost, impossible to compile and run, and difficult to decipher, and are likely to be deleted when the code's proprietor moves on or dies. It is argued here, though, that it would be better for both code authors and astronomy generally if project codes were archived after use in some way. Archiving is advantageous for code authors because archived codes might increase the author's ADS citable publications, while astronomy as a science gains transparency and reproducibility. Paper-specific codes should be included in the publication of the journal papers they support, just like figures and tables. General codes that support multiple papers, possibly written by multiple authors, including their supporting websites, should be registered with a code registry such as the Astrophysics Source Code Library (ASCL). Codes developed on GitHub can be archived with a third party service such as, currently, BackHub. An important code version might be uploaded to a web archiving service like, currently, Zenodo or Figshare, so that this version receives a Digital Object Identifier (DOI), enabling it to be found at a stable address into the future. Similar archiving services that are not DOI-dependent include perma.cc and the Internet Archive Wayback Machine at archive.org. Perhaps most simply, copies of important codes with lasting value might be kept on a cloud service like, for example, Google Drive, while activating Google's Inactive Account Manager.

  14. Ascl1 promotes tangential migration and confines migratory routes by induction of Ephb2 in the telencephalon

    PubMed Central

    Liu, Yuan-Hsuan; Tsai, Jin-Wu; Chen, Jia-Long; Yang, Wan-Shan; Chang, Pei-Ching; Cheng, Pei-Lin; Turner, David L.; Yanagawa, Yuchio; Wang, Tsu-Wei; Yu, Jenn-Yah

    2017-01-01

    During development, cortical interneurons generated from the ventral telencephalon migrate tangentially into the dorsal telencephalon. Although Achaete-scute family bHLH transcription factor 1 (Ascl1) plays important roles in the developing telencephalon, whether Ascl1 regulates tangential migration remains unclear. Here, we found that Ascl1 promoted tangential migration along the ventricular zone/subventricular zone (VZ/SVZ) and intermediate zone (IZ) of the dorsal telencephalon. Distal-less homeobox 2 (Dlx2) acted downstream of Ascl1 in promoting tangential migration along the VZ/SVZ but not IZ. We further identified Eph receptor B2 (Ephb2) as a direct target of Ascl1. Knockdown of EphB2 disrupted the separation of the VZ/SVZ and IZ migratory routes. Ephrin-A5, a ligand of EphB2, was sufficient to repel both Ascl1-expressing cells in vitro and tangentially migrating cortical interneurons in vivo. Together, our results demonstrate that Ascl1 induces expression of Dlx2 and Ephb2 to maintain distinct tangential migratory routes in the dorsal telencephalon. PMID:28276447

  15. System analysis identifies distinct and common functional networks governed by transcription factor ASCL1, in glioma and small cell lung cancer.

    PubMed

    Donakonda, Sainitin; Sinha, Swati; Dighe, Shrinivas Nivrutti; Rao, Manchanahalli R Satyanarayana

    2017-07-25

    ASCL1 is a basic Helix-Loop-Helix transcription factor (TF), which is involved in various cellular processes like neuronal development and signaling pathways. Transcriptome profiling has shown that ASCL1 overexpression plays an important role in the development of glioma and Small Cell Lung Carcinoma (SCLC), but distinct and common molecular mechanisms regulated by ASCL1 in these cancers are unknown. In order to understand how it drives the cellular functional network in these two tumors, we generated a gene expression profile in a glioma cell line (U87MG) to identify ASCL1 gene targets by an siRNA silencing approach and then compared this with a publicly available dataset of similarly silenced SCLC (NCI-H1618 cells). We constructed TF-TF and gene-gene interaction networks, as well as protein interaction networks, of ASCL1-regulated genes in glioma and SCLC cells. Detailed network analysis uncovered various biological processes governed by ASCL1 target genes in these two tumor cell lines. We find that novel ASCL1 functions related to mitosis and signaling pathways influencing development and tumor growth are affected in both glioma and SCLC cells. In addition, we also observed ASCL1-governed functional networks that are distinct to glioma and SCLC.

  16. PpASCL, the Physcomitrella patens Anther-Specific Chalcone Synthase-Like Enzyme Implicated in Sporopollenin Biosynthesis, Is Needed for Integrity of the Moss Spore Wall and Spore Viability

    PubMed Central

    Daku, Rhys M.; Rabbi, Fazle; Buttigieg, Josef; Coulson, Ian M.; Horne, Derrick; Martens, Garnet; Ashton, Neil W.; Suh, Dae-Yeon

    2016-01-01

    Sporopollenin is the main constituent of the exine layer of spore and pollen walls. The anther-specific chalcone synthase-like (ASCL) enzyme of Physcomitrella patens, PpASCL, has previously been implicated in the biosynthesis of sporopollenin, the main constituent of exine and perine, the two outermost layers of the moss spore cell wall. We made targeted knockouts of the corresponding gene, PpASCL, and phenotypically characterized ascl sporophytes and spores at different developmental stages. Ascl plants developed normally until late in sporophytic development, when the spores produced were structurally aberrant and inviable. The development of the ascl spore cell wall appeared to be arrested early in microspore development, resulting in small, collapsed spores with altered surface morphology. The typical stratification of the spore cell wall was absent with only an abnormal perine recognisable above an amorphous layer possibly representing remnants of compromised intine and/or exine. Equivalent resistance of the spore walls of ascl mutants and the control strain to acetolysis suggests the presence of chemically inert, defective sporopollenin in the mutants. Anatomical abnormalities of late-stage ascl sporophytes include a persistent large columella and an air space incompletely filled with spores. Our results indicate that the evolutionarily conserved PpASCL gene is needed for proper construction of the spore wall and for normal maturation and viability of moss spores. PMID:26752629

  17. voevent-parse: Parse, manipulate, and generate VOEvent XML packets

    NASA Astrophysics Data System (ADS)

    Staley, Tim D.

    2014-11-01

    voevent-parse, written in Python, parses, manipulates, and generates VOEvent XML packets; it is built atop lxml.objectify. Details of transients detected by many projects, including Fermi, Swift, and the Catalina Sky Survey, are currently made available as VOEvents, which is also the standard alert format adopted by future facilities such as LSST and SKA. However, working with XML and adhering to the sometimes lengthy VOEvent schema can be a tricky process. voevent-parse provides convenience routines for common tasks, while allowing the user to utilise the full power of the lxml library when required. An earlier version of voevent-parse was part of the pysovo (ascl:1411.002) library.
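
    A minimal sketch of reading a packet with voevent-parse is shown below; the file name is a placeholder, and element access works through lxml.objectify as the entry notes.

        # Parse a VOEvent XML packet and inspect a few fields with voevent-parse.
        import voeventparse as vp

        with open("alert.xml", "rb") as f:      # placeholder packet file
            v = vp.load(f)

        print(v.attrib["role"])                 # e.g. "observation" or "test"
        print(vp.prettystr(v.Who))              # pretty-print the <Who> block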

  18. FAST: Fitting and Assessment of Synthetic Templates

    NASA Astrophysics Data System (ADS)

    Kriek, Mariska; van Dokkum, Pieter G.; Labbé, Ivo; Franx, Marijn; Illingworth, Garth D.; Marchesini, Danilo; Quadri, Ryan F.; Aird, James; Coil, Alison L.; Georgakakis, Antonis

    2018-03-01

    FAST (Fitting and Assessment of Synthetic Templates) fits stellar population synthesis templates to broadband photometry and/or spectra. FAST is compatible with the photometric redshift code EAzY (ascl:1010.052) when fitting broadband photometry; it uses the photometric redshifts derived by EAzY, and the input files (for example, the photometric catalog and master filter file) are the same. FAST fits spectra in combination with broadband photometric data points or simultaneously fits two components, allowing for an AGN contribution in addition to the host galaxy light. Depending on the input parameters, FAST outputs the best-fit redshift, age, dust content, star formation timescale, metallicity, stellar mass, star formation rate (SFR), and their confidence intervals. Though some of FAST's functions overlap with those of HYPERZ (ascl:1108.010), it differs by fitting fluxes instead of magnitudes, allows the user to completely define the grid of input stellar population parameters and easily input photometric redshifts and their confidence intervals, and calculates calibrated confidence intervals for all parameters. Note that FAST is not a photometric redshift code, though it can be used as one.
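
    FAST itself is an IDL code configured through a parameter file rather than called as a library, but the sketch below shows, conceptually and with toy numbers, the fitting step it performs: choose the synthetic template that minimises chi-square against the observed fluxes given their errors.

        # Conceptual chi-square template fit to broadband fluxes (toy data).
        import numpy as np

        fluxes = np.array([1.2, 2.3, 3.1, 2.8])        # observed broadband fluxes
        errors = np.array([0.1, 0.2, 0.2, 0.3])        # flux uncertainties
        templates = np.random.random((500, 4))         # toy grid of model fluxes

        # Best-fit normalisation and chi-square for every template in the grid.
        scale = (templates * fluxes / errors**2).sum(1) / (templates**2 / errors**2).sum(1)
        chi2 = (((fluxes - scale[:, None] * templates) / errors) ** 2).sum(1)

        best = np.argmin(chi2)
        print(best, chi2[best])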

  19. Functionalizing Ascl1 with Novel Intracellular Protein Delivery Technology for Promoting Neuronal Differentiation of Human Induced Pluripotent Stem Cells.

    PubMed

    Robinson, Meghan; Chapani, Parv; Styan, Tara; Vaidyanathan, Ranjani; Willerth, Stephanie Michelle

    2016-08-01

    Pluripotent stem cells can become any cell type found in the body. Accordingly, one of the major challenges when working with pluripotent stem cells is producing a highly homogenous population of differentiated cells, which can then be used for downstream applications such as cell therapies or drug screening. The transcription factor Ascl1 plays a key role in neural development and previous work has shown that Ascl1 overexpression using viral vectors can reprogram fibroblasts directly into neurons. Here we report on how a recombinant version of the Ascl1 protein functionalized with intracellular protein delivery technology (Ascl1-IPTD) can be used to rapidly differentiate human induced pluripotent stem cells (hiPSCs) into neurons. We first evaluated a range of Ascl1-IPTD concentrations to determine the most effective amount for generating neurons from hiPSCs cultured in serum free media. Next, we looked at the frequency of Ascl1-IPTD supplementation in the media on differentiation and found that one time supplementation is sufficient enough to trigger the neural differentiation process. Ascl1-IPTD was efficiently taken up by the hiPSCs and enabled rapid differentiation into TUJ1-positive and NeuN-positive populations with neuronal morphology after 8 days. After 12 days of culture, hiPSC-derived neurons produced by Ascl1-IPTD treatment exhibited greater neurite length and higher numbers of branch points compared to neurons derived using a standard neural progenitor differentiation protocol. This work validates Ascl1-IPTD as a powerful tool for engineering neural tissue from pluripotent stem cells.

  20. Ascl1 controls the number and distribution of astrocytes and oligodendrocytes in the gray matter and white matter of the spinal cord

    PubMed Central

    Vue, Tou Yia; Kim, Euiseok J.; Parras, Carlos M.; Guillemot, Francois; Johnson, Jane E.

    2014-01-01

    Glia constitute the majority of cells in the mammalian central nervous system and are crucial for neurological function. However, there is an incomplete understanding of the molecular control of glial cell development. We find that the transcription factor Ascl1 (Mash1), which is best known for its role in neurogenesis, also functions in both astrocyte and oligodendrocyte lineages arising in the mouse spinal cord at late embryonic stages. Clonal fate mapping in vivo reveals heterogeneity in Ascl1-expressing glial progenitors and shows that Ascl1 defines cells that are restricted to either gray matter (GM) or white matter (WM) as astrocytes or oligodendrocytes. Conditional deletion of Ascl1 post-neurogenesis shows that Ascl1 is required during oligodendrogenesis for generating the correct numbers of WM but not GM oligodendrocyte precursor cells, whereas during astrocytogenesis Ascl1 functions in balancing the number of dorsal GM protoplasmic astrocytes with dorsal WM fibrous astrocytes. Thus, in addition to its function in neurogenesis, Ascl1 marks glial progenitors and controls the number and distribution of astrocytes and oligodendrocytes in the GM and WM of the spinal cord. PMID:25249462

  1. Ascl1 controls the number and distribution of astrocytes and oligodendrocytes in the gray matter and white matter of the spinal cord.

    PubMed

    Vue, Tou Yia; Kim, Euiseok J; Parras, Carlos M; Guillemot, Francois; Johnson, Jane E

    2014-10-01

    Glia constitute the majority of cells in the mammalian central nervous system and are crucial for neurological function. However, there is an incomplete understanding of the molecular control of glial cell development. We find that the transcription factor Ascl1 (Mash1), which is best known for its role in neurogenesis, also functions in both astrocyte and oligodendrocyte lineages arising in the mouse spinal cord at late embryonic stages. Clonal fate mapping in vivo reveals heterogeneity in Ascl1-expressing glial progenitors and shows that Ascl1 defines cells that are restricted to either gray matter (GM) or white matter (WM) as astrocytes or oligodendrocytes. Conditional deletion of Ascl1 post-neurogenesis shows that Ascl1 is required during oligodendrogenesis for generating the correct numbers of WM but not GM oligodendrocyte precursor cells, whereas during astrocytogenesis Ascl1 functions in balancing the number of dorsal GM protoplasmic astrocytes with dorsal WM fibrous astrocytes. Thus, in addition to its function in neurogenesis, Ascl1 marks glial progenitors and controls the number and distribution of astrocytes and oligodendrocytes in the GM and WM of the spinal cord. © 2014. Published by The Company of Biologists Ltd.

  2. MontePython 3: Parameter inference code for cosmology

    NASA Astrophysics Data System (ADS)

    Brinckmann, Thejs; Lesgourgues, Julien; Audren, Benjamin; Benabed, Karim; Prunet, Simon

    2018-05-01

    MontePython 3 provides numerous ways to explore parameter space using Monte Carlo Markov Chain (MCMC) sampling, including Metropolis-Hastings, Nested Sampling, Cosmo Hammer, and a Fisher sampling method. This improved version of the Monte Python (ascl:1307.002) parameter inference code for cosmology offers new ingredients that improve the performance of Metropolis-Hastings sampling, speeding up convergence and offering significant time improvements in difficult runs. Additional likelihoods and plotting options are available, as are post-processing algorithms such as importance sampling and the addition of derived parameters.

  3. Valproic acid silencing of ascl1b/Ascl1 results in the failure of serotonergic differentiation in a zebrafish model of fetal valproate syndrome

    PubMed Central

    Jacob, John; Ribes, Vanessa; Moore, Steven; Constable, Sean C.; Sasai, Noriaki; Gerety, Sebastian S.; Martin, Darren J.; Sergeant, Chris P.; Wilkinson, David G.; Briscoe, James

    2014-01-01

    Fetal valproate syndrome (FVS) is caused by in utero exposure to the drug sodium valproate. Valproate is used worldwide for the treatment of epilepsy, as a mood stabiliser and for its pain-relieving properties. In addition to birth defects, FVS is associated with an increased risk of autism spectrum disorder (ASD), which is characterised by abnormal behaviours. Valproate perturbs multiple biochemical pathways and alters gene expression through its inhibition of histone deacetylases. Which, if any, of these mechanisms is relevant to the genesis of its behavioural side effects is unclear. Neuroanatomical changes associated with FVS have been reported and, among these, altered serotonergic neuronal differentiation is a consistent finding. Altered serotonin homeostasis is also associated with autism. Here we have used a chemical-genetics approach to investigate the underlying molecular defect in a zebrafish FVS model. Valproate causes the selective failure of zebrafish central serotonin expression. It does so by downregulating the proneural gene ascl1b, an ortholog of mammalian Ascl1, which is a known determinant of serotonergic identity in the mammalian brainstem. ascl1b is sufficient to rescue serotonin expression in valproate-treated embryos. Chemical and genetic blockade of the histone deacetylase Hdac1 downregulates ascl1b, consistent with the Hdac1-mediated silencing of ascl1b expression by valproate. Moreover, tonic Notch signalling is crucial for ascl1b repression by valproate. Concomitant blockade of Notch signalling restores ascl1b expression and serotonin expression in both valproate-exposed and hdac1 mutant embryos. Together, these data provide a molecular explanation for serotonergic defects in FVS and highlight an epigenetic mechanism for genome-environment interaction in disease. PMID:24135485

  4. GASOLINE: Smoothed Particle Hydrodynamics (SPH) code

    NASA Astrophysics Data System (ADS)

    N-Body Shop

    2017-10-01

    Gasoline solves the equations of gravity and hydrodynamics in astrophysical problems, including simulations of planets, stars, and galaxies. It uses an SPH method that features correct mixing behavior in multiphase fluids and minimal artificial viscosity. This method is identical to the SPH method used in the ChaNGa code (ascl:1105.005), allowing users to extend results to problems requiring >100,000 cores. Gasoline uses a fast, memory-efficient O(N log N) KD-Tree to solve Poisson's Equation for gravity and avoids artificial viscosity in non-shocking compressive flows.

  5. Upregulation of ASCL1 and inhibition of Notch signaling pathway characterize progressive astrocytoma.

    PubMed

    Somasundaram, Kumaravel; Reddy, Sreekanth P; Vinnakota, Katyayni; Britto, Ramona; Subbarayan, Madhavan; Nambiar, Sandeep; Hebbar, Aparna; Samuel, Cini; Shetty, Mitesh; Sreepathi, Hari Kishore; Santosh, Vani; Hegde, Alangar Sathyaranjandas; Hegde, Sridevi; Kondaiah, Paturu; Rao, M R S

    2005-10-27

    Astrocytoma is the most common type of brain cancer constituting more than half of all brain tumors. With an aim to identify markers describing astrocytoma progression, we have carried out microarray analysis of astrocytoma samples of different grades using cDNA microarray containing 1152 cancer-specific genes. Data analysis identified several differentially regulated genes between normal brain tissue and astrocytoma as well as between grades II/III astrocytoma and glioblastoma multiforme (GBM; grade IV). We found several genes known to be involved in malignancy including Achaete-scute complex-like 1 (Drosophila) (ASCL1; Hash 1). As ASCL has been implicated in neuroendocrine, medullary thyroid and small-cell lung cancers, we chose to examine the role of ASCL1 in the astrocytoma development. Our data revealed that ASCL1 is overexpressed in progressive astrocytoma as evidenced by increased levels of ASCL1 transcripts in 85.71% (6/7) of grade II diffuse astrocytoma (DA), 90% (9/10) of grade III anaplastic astrocytoma (AA) and 87.5% (7/8) of secondary GBMs, while the majority of primary de novo GBMs expressed similar to or less than normal brain levels (66.67%; 8/12). ASCL1 upregulation in progressive astrocytoma is accompanied by inhibition of Notch signaling as seen by uninduced levels of HES1, a transcriptional target of Notch1, increased levels of HES6, a dominant-negative inhibitor of HES1-mediated repression of ASCL1, and increased levels of Notch ligand Delta1, which is capable of inhibiting Notch signaling by forming intracellular Notch ligand autonomous complexes. Our results imply that inhibition of Notch signaling may be an important early event in the development of grade II DA and subsequent progression to grade III AA and secondary GBM. Furthermore, ASCL1 appears to be a putative marker to distinguish primary GBM from secondary GBM.

  6. Ascl1 as a Novel Player in the Ptf1a Transcriptional Network for GABAergic Cell Specification in the Retina

    PubMed Central

    Parlier, Damien; Pretto, Silvia; Hamdache, Johanna; Vernier, Philippe; Locker, Morgane; Bellefroid, Eric; Perron, Muriel

    2014-01-01

    In contrast with the wealth of data involving bHLH and homeodomain transcription factors in retinal cell type determination, the molecular bases underlying neurotransmitter subtype specification is far less understood. Using both gain and loss of function analyses in Xenopus, we investigated the putative implication of the bHLH factor Ascl1 in this process. We found that in addition to its previously characterized proneural function, Ascl1 also contributes to the specification of the GABAergic phenotype. We showed that it is necessary for retinal GABAergic cell genesis and sufficient in overexpression experiments to bias a subset of retinal precursor cells towards a GABAergic fate. We also analysed the relationships between Ascl1 and a set of other bHLH factors using an in vivo ectopic neurogenic assay. We demonstrated that Ascl1 has unique features as a GABAergic inducer and is epistatic over factors endowed with glutamatergic potentialities such as Neurog2, NeuroD1 or Atoh7. This functional specificity is conferred by the basic DNA binding domain of Ascl1 and involves a specific genetic network, distinct from that underlying its previously demonstrated effects on catecholaminergic differentiation. Our data show that GABAergic inducing activity of Ascl1 requires the direct transcriptional regulation of Ptf1a, providing therefore a new piece of the network governing neurotransmitter subtype specification during retinogenesis. PMID:24643195

  7. Shwirl: Meaningful coloring of spectral cube data with volume rendering

    NASA Astrophysics Data System (ADS)

    Vohl, Dany

    2017-04-01

    Shwirl visualizes spectral data cubes with meaningful coloring methods. The program was developed to investigate transfer functions, which combine volumetric elements (voxels) to set the color, and graphics shaders, functions used to compute several properties of the final image such as color, depth, and/or transparency, as enablers for scientific visualization of astronomical data. The program uses Astropy (ascl:1304.002) to handle FITS files and the World Coordinate System, Qt (and PyQt) for the user interface, and VisPy, an object-oriented Python visualization library built on OpenGL.
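
    As an illustration of the transfer-function idea described above, the sketch below maps normalized voxel intensities to RGBA values with a simple linear colour ramp and an opacity ramp. It is a toy NumPy example written for this summary, not Shwirl's shader code; the function name and colour choices are arbitrary.

    ```python
    import numpy as np

    def toy_transfer_function(voxels, vmin, vmax, opacity_gamma=1.5):
        """Map voxel intensities to RGBA values with a linear colour ramp.

        Illustration only: Shwirl's actual shaders implement richer,
        data-aware colouring schemes on the GPU.
        """
        voxels = np.asarray(voxels, dtype=float)
        t = np.clip((voxels - vmin) / (vmax - vmin), 0.0, 1.0)
        rgba = np.empty(voxels.shape + (4,))
        rgba[..., 0] = t                    # red increases with intensity
        rgba[..., 1] = 0.2                  # constant green channel
        rgba[..., 2] = 1.0 - t              # blue decreases with intensity
        rgba[..., 3] = t ** opacity_gamma   # faint voxels stay transparent
        return rgba
    ```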

  8. FSFE: Fake Spectra Flux Extractor

    NASA Astrophysics Data System (ADS)

    Bird, Simeon

    2017-10-01

    The fake spectra flux extractor generates simulated quasar absorption spectra from a particle- or adaptive-mesh-based hydrodynamic simulation. It is implemented as a Python module. It can produce both hydrogen and metal line spectra if the simulation includes metals; a CLOUDY table for metal ionization fractions is included. Unlike earlier spectral generation codes, it produces absorption from each particle close to the sight-line individually, rather than first producing an average density in each spectral pixel, thus preserving substantially more of the small-scale velocity structure of the gas. The code supports both Gadget (ascl:0003.001) and AREPO.
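
    A minimal sketch of the per-particle approach described above: each particle deposits its own Gaussian (thermally broadened) absorption profile centred on its line-of-sight velocity, rather than being binned into an averaged density field first. This is generic NumPy written for illustration, not the package's API; units and normalisation are left schematic.

    ```python
    import numpy as np

    def tau_from_particles(v_grid, v_part, column, b_part):
        """Accumulate optical depth on a velocity grid, one particle at a time.

        Each particle contributes a Gaussian profile of width b_part centred on
        its line-of-sight velocity. Columns are in arbitrary units; a real code
        multiplies by the transition cross-section and handles periodic wrapping.
        """
        tau = np.zeros_like(v_grid, dtype=float)
        for v0, col, b in zip(v_part, column, b_part):
            profile = np.exp(-((v_grid - v0) / b) ** 2) / (np.sqrt(np.pi) * b)
            tau += col * profile
        return tau

    # transmitted flux: F = exp(-tau)
    ```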

  9. galstep: Initial conditions for spiral galaxy simulations

    NASA Astrophysics Data System (ADS)

    Ruggiero, Rafael

    2017-11-01

    galstep generates initial conditions for disk galaxy simulations with GADGET-2 (ascl:0003.001), RAMSES (ascl:1011.007) and GIZMO (ascl:1410.003), including a stellar disk, a gaseous disk, a dark matter halo and a stellar bulge. The first two components follow an exponential density profile, and the last two a Dehnen density profile with gamma=1 by default, corresponding to a Hernquist profile.
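
    For reference, the two density laws named above are standard and easy to write down; the short sketch below evaluates the Dehnen profile (which reduces to the Hernquist profile for gamma=1) and the exponential disk surface density. Function and parameter names are illustrative, not galstep's interface.

    ```python
    import numpy as np

    def dehnen_density(r, mass, a, gamma=1.0):
        """Dehnen (1993) profile; gamma=1 is the Hernquist case, gamma=0 is cored."""
        return (3.0 - gamma) * mass * a / (
            4.0 * np.pi * r**gamma * (r + a) ** (4.0 - gamma))

    def exponential_disk_surface_density(R, disk_mass, R_d):
        """Exponential surface-density law used for the stellar and gaseous disks."""
        return disk_mass / (2.0 * np.pi * R_d**2) * np.exp(-R / R_d)
    ```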

  10. Loss of ascl1a prevents secretory cell differentiation within the zebrafish intestinal epithelium resulting in a loss of distal intestinal motility

    PubMed Central

    Roach, Gillian; Wallace, Rachel Heath; Cameron, Amy; Ozel, Rifat Emrah; Hongay, Cintia F.; Baral, Reshica; Andreescu, Silvana; Wallace, Kenneth N.

    2013-01-01

    The vertebrate intestinal epithelium is renewed continuously from stem cells at the base of the crypt in mammals or base of the fold in fish over the life of the organism. As stem cells divide, newly formed epithelial cells make an initial choice between a secretory or enterocyte fate. This choice has previously been demonstrated to involve Notch signaling as well as Atonal and Her transcription factors in both embryogenesis and adults. Here, we demonstrate that in contrast to the atoh1 in mammals, ascl1a is responsible for formation of secretory cells in zebrafish. ascl1a−/− embryos lack all intestinal epithelial secretory cells and instead differentiate into enterocytes. ascl1a−/− embryos also fail to induce intestinal epithelial expression of deltaD suggesting that ascl1a plays a role in initiation of Notch signaling. Inhibition of Notch signaling increases the number of ascl1a and deltaD expressing intestinal epithelial cells as well as the number of developing secretory cells during two specific time periods: between 30 and 34 hpf and again between 64 and 74 hpf. Loss of enteroendocrine products results in loss of anterograde motility in ascl1a−/− embryos. 5HT produced by enterochromaffin cells is critical in motility and secretion within the intestine. We find that addition of exogenous 5HT to ascl1a−/− embryos at near physiological levels (measured by differential pulse voltammetry) induce anterograde motility at similar levels to wild type velocity, distance, and frequency. Removal or doubling the concentration of 5HT in WT embryos does not significantly affect anterograde motility, suggesting that the loss of additional enteroendocrine products in ascl1a−/− embryos also contributes to intestinal motility. Thus, zebrafish intestinal epithelial cells appear to have a common secretory progenitor from which all subtypes form. Loss of enteroendocrine cells reveals the critical need for enteroendocrine products in maintenance of normal intestinal motility. PMID:23353550

  11. Co-delivery of doxorubicin and siRNA using octreotide-conjugated gold nanorods for targeted neuroendocrine cancer therapy

    NASA Astrophysics Data System (ADS)

    Xiao, Yuling; Jaskula-Sztul, Renata; Javadi, Alireza; Xu, Wenjin; Eide, Jacob; Dammalapati, Ajitha; Kunnimalaiyaan, Muthusamy; Chen, Herbert; Gong, Shaoqin

    2012-10-01

    A multifunctional gold (Au) nanorod (NR)-based nanocarrier capable of co-delivering small interfering RNA (siRNA) against achaete-scute complex-like 1 (ASCL1) and an anticancer drug (doxorubicin (DOX)) specifically to neuroendocrine (NE) cancer cells was developed and characterized for combined chemotherapy and siRNA-mediated gene silencing. The Au NR was conjugated with (1) DOX, an anticancer drug, via a pH-labile hydrazone linkage to enable pH-controlled drug release, (2) polyarginine, a cationic polymer for complexing siRNA, and (3) octreotide (OCT), a tumor-targeting ligand, to specifically target NE cancer cells with overexpressed somatostatin receptors. The Au NR-based nanocarriers exhibited a uniform size distribution as well as pH-sensitive drug release. The OCT-conjugated Au NR-based nanocarriers (Au-DOX-OCT, targeted) exhibited a much higher cellular uptake in a human carcinoid cell line (BON cells) than non-targeted Au NR-based nanocarriers (Au-DOX) as measured by both flow cytometry and confocal laser scanning microscopy (CLSM). Moreover, Au-DOX-OCT-ASCL1 siRNA (Au-DOX-OCT complexed with ASCL1 siRNA) resulted in significantly higher gene silencing in NE cancer cells than Au-DOX-ASCL1 siRNA (non-targeted Au-DOX complexed with ASCL1 siRNA) as measured by an immunoblot analysis. Additionally, Au-DOX-OCT-ASCL1 siRNA was the most efficient nanocarrier at altering the NE phenotype of NE cancer cells and showed the strongest anti-proliferative effect. Thus, combined chemotherapy and RNA silencing using NE tumor-targeting Au NR-based nanocarriers could potentially enhance the therapeutic outcomes in treating NE cancers. Electronic supplementary information (ESI) available: Additional flow cytometry histogram profiles of DOX fluorescence and ASCL1 knockdown results. See DOI: 10.1039/c2nr31853a

  12. feets: feATURE eXTRACTOR for tIME sERIES

    NASA Astrophysics Data System (ADS)

    Cabral, Juan; Sanchez, Bruno; Ramos, Felipe; Gurovich, Sebastián; Granitto, Pablo; VanderPlas, Jake

    2018-06-01

    feets characterizes and analyzes light-curves from astronomical photometric databases for modelling, classification, data cleaning, outlier detection and data analysis. It uses machine learning algorithms to determine the numerical descriptors that characterize and distinguish the different variability classes of light-curves; these range from basic statistical measures such as the mean or standard deviation to complex time-series characteristics such as the autocorrelation function. The library is not restricted to the astronomical field and could also be applied to any kind of time series. This project is a derivative work of FATS (ascl:1711.017).
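
    A hedged sketch of what such descriptors look like in practice: the function below computes three of the simple features mentioned above (mean, standard deviation, lag-1 autocorrelation) for a light curve with plain NumPy. It does not use the feets API; it only illustrates the kind of quantity the library extracts.

    ```python
    import numpy as np

    def basic_features(mag):
        """A few simple light-curve descriptors (illustration, not the feets API)."""
        mean, std = mag.mean(), mag.std()
        x = mag - mean
        acf1 = np.sum(x[:-1] * x[1:]) / np.sum(x * x)   # lag-1 autocorrelation
        return {"Mean": mean, "Std": std, "AutocorLag1": acf1}
    ```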

  13. TRIPPy: Python-based Trailed Source Photometry

    NASA Astrophysics Data System (ADS)

    Fraser, Wesley C.; Alexandersen, Mike; Schwamb, Megan E.; Marsset, Michael E.; Pike, Rosemary E.; Kavelaars, JJ; Bannister, Michele T.; Benecchi, Susan; Delsanti, Audrey

    2016-05-01

    TRIPPy (TRailed Image Photometry in Python) uses a pill-shaped aperture, a rectangle capped with semicircular ends and described by three parameters (trail length, angle, and radius), to improve photometry of moving sources over that done with circular apertures. It can generate accurate model and trailed point-spread functions from stationary background sources in sidereally tracked images. Appropriate aperture correction provides accurate, unbiased flux measurement. TRIPPy requires numpy, scipy, matplotlib, Astropy (ascl:1304.002), and stsci.numdisplay; emcee (ascl:1303.002) and SExtractor (ascl:1010.064) are optional.
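
    The pill aperture itself is simple geometry: a point lies inside it when its distance to the trail segment is at most the aperture radius. The sketch below tests pixel membership in this way; it illustrates the shape only and is not TRIPPy's photometry code, and all names are made up.

    ```python
    import numpy as np

    def in_pill_aperture(x, y, x0, y0, length, angle, radius):
        """True where (x, y) lies within `radius` of the trail segment.

        The segment has the given `length` and position `angle` (radians) and
        is centred on (x0, y0); geometry illustration only.
        """
        dx, dy = x - x0, y - y0
        # rotate into the trail frame so the segment lies along the u-axis
        u = dx * np.cos(angle) + dy * np.sin(angle)
        v = -dx * np.sin(angle) + dy * np.cos(angle)
        du = np.clip(np.abs(u) - 0.5 * length, 0.0, None)  # distance beyond the caps
        return np.hypot(du, v) <= radius
    ```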

  14. HENDRICS: High ENergy Data Reduction Interface from the Command Shell

    NASA Astrophysics Data System (ADS)

    Bachetti, Matteo

    2018-05-01

    HENDRICS, a rewrite and update of MaLTPyNT (ascl:1502.021), contains command-line scripts based on Stingray (ascl:1608.001) to perform a quick-look (spectral-)timing analysis of X-ray data, properly treating gaps in the data due, e.g., to occultation by the Earth or passages through the SAA. Despite its original focus on NuSTAR, HENDRICS can perform standard aperiodic timing analysis on X-ray data from, in principle, any other satellite; its features include power density and cross spectra, time lags, pulsar searches with epoch folding and the Z_n^2 statistic, and color-color and color-intensity diagrams. The periodograms produced by HENDRICS (such as a power density spectrum or a cospectrum) can be saved in a format compatible with XSPEC (ascl:9910.005) or ISIS (ascl:1302.002).
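
    As an example of one of the statistics listed above, the Z_n^2 statistic for a trial frequency has a compact standard definition (Buccheri et al. 1983); below is a minimal NumPy version written for illustration, not the Stingray/HENDRICS implementation, and it ignores refinements such as frequency derivatives.

    ```python
    import numpy as np

    def z_n_squared(times, frequency, n=2):
        """Z_n^2 statistic (Buccheri et al. 1983) at a trial frequency."""
        phases = 2.0 * np.pi * np.mod(times * frequency, 1.0)
        z2 = 0.0
        for k in range(1, n + 1):
            z2 += np.cos(k * phases).sum() ** 2 + np.sin(k * phases).sum() ** 2
        return 2.0 * z2 / len(times)
    ```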

  15. COCOA: Simulating Observations of Star Cluster Simulations

    NASA Astrophysics Data System (ADS)

    Askar, Abbas; Giersz, Mirek; Pych, Wojciech; Dalessandro, Emanuele

    2017-03-01

    COCOA (Cluster simulatiOn Comparison with ObservAtions) creates idealized mock photometric observations using results from numerical simulations of star cluster evolution. COCOA is able to present the output of realistic numerical simulations of star clusters carried out using Monte Carlo or N-body codes in a way that is useful for direct comparison with photometric observations. The code can simulate optical observations from simulation snapshots in which positions and magnitudes of objects are known. The parameters for simulating the observations can be adjusted to mimic telescopes of various sizes. COCOA also has a photometry pipeline that can use standalone versions of DAOPHOT (ascl:1104.011) and ALLSTAR to produce photometric catalogs for all observed stars.

  16. DPPP: Default Pre-Processing Pipeline

    NASA Astrophysics Data System (ADS)

    van Diepen, Ger; Dijkema, Tammo Jan

    2018-04-01

    DPPP (Default Pre-Processing Pipeline, also referred to as NDPPP) reads and writes radio-interferometric data in the form of Measurement Sets, mainly those that are created by the LOFAR telescope. It goes through visibilities in time order and contains standard operations like averaging, phase-shifting and flagging bad stations. Between the steps in a pipeline, the data is not written to disk, making this tool suitable for operations where I/O dominates. More advanced procedures such as gain calibration are also included. Other computing steps can be provided by loading a shared library; currently supported external steps are the AOFlagger (ascl:1010.017) and a bridge that enables loading python steps.
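
    To make the averaging step concrete, here is a small NumPy sketch that averages visibilities over blocks of time samples while honouring flags; it is only illustrative and omits the weight handling, UVW recomputation and on-the-fly buffering that the real DPPP step performs.

    ```python
    import numpy as np

    def average_in_time(vis, flags, nchunk):
        """Average visibilities over blocks of `nchunk` time samples.

        Flagged samples are excluded; an output sample is flagged only when
        every input in its block was flagged.
        """
        nt = (vis.shape[0] // nchunk) * nchunk
        shape = (nt // nchunk, nchunk) + vis.shape[1:]
        v = vis[:nt].reshape(shape)
        f = flags[:nt].reshape(shape)
        ngood = (~f).sum(axis=1)
        summed = np.where(f, 0.0, v).sum(axis=1)
        averaged = summed / np.maximum(ngood, 1)
        return averaged, ngood == 0
    ```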

  17. Ascl1 (Mash1) Knockout Perturbs Differentiation of Nonneuronal Cells in Olfactory Epithelium

    PubMed Central

    Jang, Woochan; Wildner, Hendrik; Schwob, James E.

    2012-01-01

    The embryonic olfactory epithelium (OE) generates only a very few olfactory sensory neurons when the basic helix-loop-helix transcription factor, ASCL1 (previously known as MASH1) is eliminated by gene mutation. We have closely examined the structure and composition of the OE of knockout mice and found that the absence of neurons dramatically affects the differentiation of multiple other epithelial cell types as well. The most prominent effect is observed within the two known populations of stem and progenitor cells of the epithelium. The emergence of horizontal basal cells, a multipotent progenitor population in the adult epithelium, is anomalous in the Ascl1 knockout mice. The differentiation of globose basal cells, another multipotent progenitor population in the adult OE, is also aberrant. All of the persisting globose basal cells are marked by SOX2 expression, suggesting a prominent role for SOX2 in progenitors upstream of Ascl1. However, NOTCH1-expressing basal cells are absent from the knockout; since NOTCH1 signaling normally acts to suppress Ascl1 via HES1 and drives sustentacular (Sus) cell differentiation during adult epithelial regeneration, its absence suggests reciprocity between neurogenesis and the differentiation of Sus cells. Indeed, the Sus cells of the mutant mice express a markedly lower level of HES1, strengthening that notion of reciprocity. Duct/gland development appears normal. Finally, the expression of cKIT by basal cells is also undetectable, except in those small patches where neurogenesis escapes the effects of Ascl1 knockout and neurons are born. Thus, persistent neurogenic failure distorts the differentiation of multiple other cell types in the olfactory epithelium. PMID:23284756

  18. RM-CLEAN: RM spectra cleaner

    NASA Astrophysics Data System (ADS)

    Heald, George

    2017-08-01

    RM-CLEAN reads in dirty Q and U cubes, generates the rotation measure transfer function (RMTF) based on the frequencies given in an ASCII file, and cleans the RM spectra following the algorithm given by Brentjens (2007). The output cubes contain the clean model components and the CLEANed RM spectra. The input cubes must be reordered with mode=312, and the output cubes will have the same ordering and thus must be reordered after being written to disk. RM-CLEAN runs as a MIRIAD (ascl:1106.007) task, and a Python wrapper is included with the code.
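
    The quantity being cleaned is the Faraday dispersion spectrum obtained by RM synthesis; the direct (slow) transform is only a few lines, as sketched below. This computes the "dirty" spectrum, and the same sum with unit amplitudes gives the RMTF; the deconvolution itself is what RM-CLEAN adds. Names and normalisation here are illustrative, not the code's.

    ```python
    import numpy as np

    def dirty_faraday_spectrum(q, u, lam2, phi):
        """'Dirty' Faraday dispersion spectrum via direct RM synthesis.

        F(phi) = (1/N) * sum_j P_j exp(-2i * phi * (lambda_j^2 - lambda_0^2)),
        with P = Q + iU. Normalisation conventions vary between codes.
        """
        P = q + 1j * u
        lam2_0 = lam2.mean()
        kernel = np.exp(-2j * np.outer(phi, lam2 - lam2_0))
        return kernel @ P / len(lam2)
    ```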

  19. xspec_emcee: XSPEC-friendly interface for the emcee package

    NASA Astrophysics Data System (ADS)

    Sanders, Jeremy

    2018-05-01

    XSPEC_EMCEE is an XSPEC-friendly interface for emcee (ascl:1303.002). It carries out MCMC analyses of X-ray spectra in the X-ray spectral fitting program XSPEC (ascl:9910.005). It can run multiple xspec processes simultaneously, speeding up the analysis, and can switch to parameterizing norm parameters in log space.
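
    For readers unfamiliar with emcee, the basic calling pattern it wraps looks like the sketch below, with a toy Gaussian log-probability standing in for the XSPEC fit statistic that xspec_emcee evaluates; walker counts and step numbers are arbitrary.

    ```python
    import numpy as np
    import emcee  # assumes the emcee package is installed

    def log_prob(theta):
        """Toy Gaussian log-probability; xspec_emcee instead evaluates the
        XSPEC fit statistic for the current parameter vector."""
        return -0.5 * np.sum(theta ** 2)

    ndim, nwalkers, nsteps = 3, 32, 500          # arbitrary illustrative values
    p0 = np.random.randn(nwalkers, ndim)         # initial walker positions
    sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
    sampler.run_mcmc(p0, nsteps)
    samples = sampler.get_chain(discard=100, flat=True)   # flattened posterior draws
    ```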

  20. The Afghan symptom checklist: a culturally grounded approach to mental health assessment in a conflict zone.

    PubMed

    Miller, Kenneth E; Omidian, Patricia; Quraishy, Abdul Samad; Quraishy, Naseema; Nasiry, Mohammed Nader; Nasiry, Seema; Karyar, Nazar Mohammed; Yaqubi, Abdul Aziz

    2006-10-01

    This article describes a methodology for developing culturally grounded assessment measures in conflict and postconflict situations. A mixed-method design was used in Kabul, Afghanistan, to identify local indicators of distress and develop the 22-item Afghan Symptom Checklist (ASCL). The ASCL contains several indigenous items and items familiar to Western mental health professionals. The ASCL was pilot tested and subsequently administered to 324 adults in 8 districts of Kabul. It demonstrated excellent reliability (alpha=.93) and good construct validity, correlating strongly with a measure of exposure to war-related violence and loss (r=.70). Results of the survey indicate moderate levels of distress among Afghan men and markedly higher levels of distress and impaired functioning among women (and widows in particular). (c) 2007 APA, all rights reserved

  1. Ascl1-induced neuronal differentiation of P19 cells requires expression of a specific inhibitor protein of cAMP-dependent protein kinase

    PubMed Central

    Huang, Holly S.; Turner, David L.; Thompson, Robert C.; Uhler, Michael D.

    2011-01-01

    cAMP-dependent protein kinase (PKA) plays a critical role in nervous system development by modulating sonic hedgehog and bone morphogenetic protein signaling. In the current studies, P19 embryonic carcinoma cells were neuronally differentiated by expression of the proneural basic helix-loop-helix transcription factor Ascl1. After expression of Ascl1, but prior to expression of neuronal markers such as microtubule-associated protein 2 and neuronal β-tubulin, P19 cells demonstrated a large, transient increase in both mRNA and protein for the endogenous protein kinase inhibitor (PKI)β. PKIβ-targeted shRNA constructs both reduced the levels of PKIβ expression and blocked the neuronal differentiation of P19 cells. This inhibition of differentiation was rescued by transfection of a shRNA-resistant expression vector for the PKIβ protein, and this rescue required the PKA-specific inhibitory sequence of the PKIβ protein. PKIβ played a very specific role in the Ascl1-mediated differentiation process since other PKI isoforms were unable to rescue the deficit conferred by shRNA-mediated knockdown of PKIβ. Our results define a novel requirement for PKIβ and its inhibition of PKA during neuronal differentiation of P19 cells. PMID:21623794

  2. streamgap-pepper: Effects of peppering streams with many small impacts

    NASA Astrophysics Data System (ADS)

    Bovy, Jo; Erkal, Denis; Sanders, Jason

    2017-02-01

    streamgap-pepper computes the effect of subhalo fly-bys on cold tidal streams based on the action-angle representation of streams. A line-of-parallel-angle approach is used to calculate the perturbed distribution function of a given stream segment by undoing the effect of all impacts. This approach allows one to compute the perturbed stream density and track in any coordinate system in minutes for realizations of the subhalo distribution down to 10^5 Msun, accounting for the stream's internal dispersion and overlapping impacts. This code uses galpy (ascl:1411.008) and the streampepperdf.py galpy extension, which implements the fast calculation of the perturbed stream structure.

  3. Ramses-GPU: Second-order MUSCL-Hancock finite volume fluid solver

    NASA Astrophysics Data System (ADS)

    Kestener, Pierre

    2017-10-01

    RamsesGPU is a reimplementation of RAMSES (ascl:1011.007) that drops the adaptive mesh refinement (AMR) features to optimize 3D uniform-grid algorithms for modern graphics processing units (GPUs), providing an efficient software package for astrophysics applications that do not need AMR but do require a very large number of integration time steps. RamsesGPU provides a very efficient C++/CUDA/MPI implementation of a second-order MUSCL-Hancock finite volume fluid solver for compressible hydrodynamics, as well as a magnetohydrodynamics solver based on the constrained transport technique. Other useful modules include static gravity, dissipative terms (viscosity, resistivity), and a forcing source term for turbulence studies; special care was taken to enhance parallel input/output performance by using state-of-the-art libraries such as HDF5 and parallel-netcdf.
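
    The "MUSCL" part of the scheme refers to piecewise-linear reconstruction with a limited slope; a minimal 1D scalar sketch of that reconstruction is shown below. The Hancock predictor half-step, the Riemann solver and the constrained-transport MHD machinery of RamsesGPU are far more involved; this only shows the core idea, with made-up function names.

    ```python
    import numpy as np

    def minmod(a, b):
        """Minmod slope limiter used in MUSCL-type reconstructions."""
        return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

    def muscl_interface_states(u):
        """Limited piecewise-linear reconstruction of a 1D scalar field.

        Returns the states at the right and left faces of each interior cell.
        """
        slope = minmod(u[1:-1] - u[:-2], u[2:] - u[1:-1])
        u_right_face = u[1:-1] + 0.5 * slope
        u_left_face = u[1:-1] - 0.5 * slope
        return u_right_face, u_left_face
    ```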

  4. clustep: Initial conditions for galaxy cluster halo simulations

    NASA Astrophysics Data System (ADS)

    Ruggiero, Rafael

    2017-11-01

    clustep generates a snapshot in GADGET-2 (ascl:0003.001) format containing a galaxy cluster halo in equilibrium; this snapshot can also be read in RAMSES (ascl:1011.007) using the DICE patch. The halo is made of a dark matter component and a gas component, with the latter representing the ICM. Each of these components follows a Dehnen density profile, with gamma=0 or gamma=1. If gamma=1, then the profile corresponds to a Hernquist profile.

  5. Proliferative and transcriptional identity of distinct classes of neural precursors in the mammalian olfactory epithelium.

    PubMed

    Tucker, Eric S; Lehtinen, Maria K; Maynard, Tom; Zirlinger, Mariela; Dulac, Catherine; Rawson, Nancy; Pevny, Larysa; Lamantia, Anthony-Samuel

    2010-08-01

    Neural precursors in the developing olfactory epithelium (OE) give rise to three major neuronal classes - olfactory receptor (ORNs), vomeronasal (VRNs) and gonadotropin releasing hormone (GnRH) neurons. Nevertheless, the molecular and proliferative identities of these precursors are largely unknown. We characterized two precursor classes in the olfactory epithelium (OE) shortly after it becomes a distinct tissue at midgestation in the mouse: slowly dividing self-renewing precursors that express Meis1/2 at high levels, and rapidly dividing neurogenic precursors that express high levels of Sox2 and Ascl1. Precursors expressing high levels of Meis genes primarily reside in the lateral OE, whereas precursors expressing high levels of Sox2 and Ascl1 primarily reside in the medial OE. Fgf8 maintains these expression signatures and proliferative identities. Using electroporation in the wild-type embryonic OE in vitro as well as Fgf8, Sox2 and Ascl1 mutant mice in vivo, we found that Sox2 dose and Meis1 - independent of Pbx co-factors - regulate Ascl1 expression and the transition from lateral to medial precursor state. Thus, we have identified proliferative characteristics and a dose-dependent transcriptional network that define distinct OE precursors: medial precursors that are most probably transit amplifying neurogenic progenitors for ORNs, VRNs and GnRH neurons, and lateral precursors that include multi-potent self-renewing OE neural stem cells.

  6. Proliferative and transcriptional identity of distinct classes of neural precursors in the mammalian olfactory epithelium

    PubMed Central

    Tucker, Eric S.; Lehtinen, Maria K.; Maynard, Tom; Zirlinger, Mariela; Dulac, Catherine; Rawson, Nancy; Pevny, Larysa; LaMantia, Anthony-Samuel

    2010-01-01

    Neural precursors in the developing olfactory epithelium (OE) give rise to three major neuronal classes – olfactory receptor (ORNs), vomeronasal (VRNs) and gonadotropin releasing hormone (GnRH) neurons. Nevertheless, the molecular and proliferative identities of these precursors are largely unknown. We characterized two precursor classes in the olfactory epithelium (OE) shortly after it becomes a distinct tissue at midgestation in the mouse: slowly dividing self-renewing precursors that express Meis1/2 at high levels, and rapidly dividing neurogenic precursors that express high levels of Sox2 and Ascl1. Precursors expressing high levels of Meis genes primarily reside in the lateral OE, whereas precursors expressing high levels of Sox2 and Ascl1 primarily reside in the medial OE. Fgf8 maintains these expression signatures and proliferative identities. Using electroporation in the wild-type embryonic OE in vitro as well as Fgf8, Sox2 and Ascl1 mutant mice in vivo, we found that Sox2 dose and Meis1 – independent of Pbx co-factors – regulate Ascl1 expression and the transition from lateral to medial precursor state. Thus, we have identified proliferative characteristics and a dose-dependent transcriptional network that define distinct OE precursors: medial precursors that are most probably transit amplifying neurogenic progenitors for ORNs, VRNs and GnRH neurons, and lateral precursors that include multi-potent self-renewing OE neural stem cells. PMID:20573694

  7. Epitaxial gallium arsenide wafers

    NASA Technical Reports Server (NTRS)

    Black, J. F.; Robinson, L. B.

    1971-01-01

    The preparation of GaAs epitaxial layers by a vapor transport process using AsCl3, Ga and H2 was pursued to provide epitaxial wafers suitable for the fabrication of transferred electron oscillators and amplifiers operating in the subcritical region. Both n-n(+) structures, and n(++)-n-n(+) sandwich structures were grown using n(+) (Si-doped) GaAs substrates. Process variables such as the input AsCl3 concentration, gallium temperature, and substrate temperature and temperature gradient and their effects on properties are presented and discussed.

  8. EGG: Empirical Galaxy Generator

    NASA Astrophysics Data System (ADS)

    Schreiber, C.; Elbaz, D.; Pannella, M.; Merlin, E.; Castellano, M.; Fontana, A.; Bourne, N.; Boutsia, K.; Cullen, F.; Dunlop, J.; Ferguson, H. C.; Michałowski, M. J.; Okumura, K.; Santini, P.; Shu, X. W.; Wang, T.; White, C.

    2018-04-01

    The Empirical Galaxy Generator (EGG) generates fake galaxy catalogs and images with realistic positions, morphologies and fluxes from the far-ultraviolet to the far-infrared. The catalogs are generated by egg-gencat and stored in binary FITS tables (column oriented). Another program, egg-2skymaker, is used to convert the generated catalog into ASCII tables suitable for ingestion by SkyMaker (ascl:1010.066) to produce realistic high resolution images (e.g., Hubble-like), while egg-gennoise and egg-genmap can be used to generate the low resolution images (e.g., Herschel-like). These tools can be used to test source extraction codes, or to evaluate the reliability of any map-based science (stacking, dropout identification, etc.).

  9. Immunohistochemical study of the neural development transcription factors (TTF1, ASCL1 and BRN2) in neuroendocrine prostate tumours.

    PubMed

    Rodríguez-Zarco, E; Vallejo-Benítez, A; Umbría-Jiménez, S; Pereira-Gallardo, S; Pabón-Carrasco, S; Azueta, A; González-Cámpora, R; Espinal, P S; García-Escudero, A

    2017-10-01

    Prostatic small-cell neuroendocrine carcinoma is an uncommon malignancy that constitutes 0.5-1% of all prostate malignancies. The median cancer-specific survival of patients with prostatic small-cell neuroendocrine carcinoma is 19 months, and 60.5% of the patients have metastatic disease. Neural development transcription factors are molecules involved in the organogenesis of the central nervous system and of neuroendocrine precursors of various tissues, including the suprarenal gland, thyroid glands, lungs and prostate. We present 3 cases of this uncommon condition, applying the new World Health Organisation criteria. We conducted studies through haematoxylin and eosin staining and analysed the expression of the neural development transcription factors achaete-scute homolog like 1, thyroid transcription factor 1 and the class III/IV POU transcription factors, as a new research line in the carcinogenesis of prostatic neuroendocrine tumours. In case 1, there was no TTF1 immunoexpression. Cases 2 and 3 had positive immunostaining for ASCL1, and Case 1 had negative immunostaining. BRN2 immunostaining was negative in case 1 and positive in cases 2 and 3. The World Health Organisation does not recognise any molecular or genetic marker with prognostic value. ASCL-1 is related to the NOTCH and WNT signalling pathways. ASCL-1, TTF1 and BRN2 could be used for early diagnosis and as prognostic factors and therapeutic targets. Copyright © 2017 AEU. All rights reserved.

  10. gPhoton: The GALEX Photon Data Archive

    NASA Astrophysics Data System (ADS)

    Million, Chase; Fleming, Scott W.; Shiao, Bernie; Seibert, Mark; Loyd, Parke; Tucker, Michael; Smith, Myron; Thompson, Randy; White, Richard L.

    2016-12-01

    gPhoton is a new database product and software package that enables analysis of GALEX ultraviolet data at the photon level. The project’s stand-alone, pure-Python calibration pipeline reproduces the functionality of the original mission pipeline to reduce raw spacecraft data to lists of time-tagged, sky-projected photons, which are then hosted in a publicly available database by the Mikulski Archive at Space Telescope. This database contains approximately 130 terabytes of data describing approximately 1.1 trillion sky-projected events with a timestamp resolution of five milliseconds. A handful of Python and command-line modules serve as a front end to interact with the database and to generate calibrated light curves and images from the photon-level data at user-defined temporal and spatial scales. The gPhoton software and source code are in active development and publicly available under a permissive license. We describe the motivation, design, and implementation of the calibration pipeline, database, and tools, with emphasis on divergence from prior work, as well as challenges created by the large data volume. We summarize the astrometric and photometric performance of gPhoton relative to the original mission pipeline. For a brief example of short time-domain science capabilities enabled by gPhoton, we show new flares from the known M-dwarf flare star CR Draconis. The gPhoton software has permanent object identifiers with the ASCL (ascl:1603.004) and DOI (doi:10.17909/T9CC7G). This paper describes the software as of version v1.27.2.

  11. Induction of specific neuron types by overexpression of single transcription factors.

    PubMed

    Teratani-Ota, Yusuke; Yamamizu, Kohei; Piao, Yulan; Sharova, Lioudmila; Amano, Misa; Yu, Hong; Schlessinger, David; Ko, Minoru S H; Sharov, Alexei A

    2016-10-01

    Specific neuronal types derived from embryonic stem cells (ESCs) can facilitate mechanistic studies and potentially aid in regenerative medicine. Existing induction methods, however, mostly rely on the effects of the combined action of multiple added growth factors, which generally tend to result in mixed populations of neurons. Here, we report that overexpression of specific transcription factors (TFs) in ESCs can rather guide the differentiation of ESCs towards specific neuron lineages. Analysis of data on gene expression changes 2 d after induction of each of 185 TFs implicated candidate TFs for further ESC differentiation studies. Induction of 23 TFs (out of 49 TFs tested) for 6 d facilitated neural differentiation of ESCs as inferred from increased proportion of cells with neural progenitor marker PSA-NCAM. We identified early activation of the Notch signaling pathway as a common feature of most potent inducers of neural differentiation. The majority of neuron-like cells generated by induction of Ascl1, Smad7, Nr2f1, Dlx2, Dlx4, Nr2f2, Barhl2, and Lhx1 were GABA-positive and expressed other markers of GABAergic neurons. In the same way, we identified Lmx1a and Nr4a2 as inducers for neurons bearing dopaminergic markers and Isl1, Fezf2, and St18 for cholinergic motor neurons. A time-course experiment with induction of Ascl1 showed early upregulation of most neural-specific messenger RNA (mRNA) and microRNAs (miRNAs). Sets of Ascl1-induced mRNAs and miRNAs were enriched in Ascl1 targets. In further studies, enrichment of cells obtained with the induction of Ascl1, Smad7, and Nr2f1 using microbeads resulted in essentially pure population of neuron-like cells with expression profiles similar to neural tissues and expressed markers of GABAergic neurons. In summary, this study indicates that induction of transcription factors is a promising approach to generate cultures that show the transcription profiles characteristic of specific neural cell types.

  12. PROM7: 1D modeler of solar filaments or prominences

    NASA Astrophysics Data System (ADS)

    Gouttebroze, P.

    2018-05-01

    PROM7 is an update of PROM4 (ascl:1306.004) and computes simple models of solar prominences and filaments using partial frequency redistribution (PRD). The models consist of plane-parallel slabs standing vertically above the solar surface. Each model is defined by five parameters: temperature, density, geometrical thickness, microturbulent velocity and height above the solar surface. The code solves the equations of radiative transfer, statistical equilibrium, ionization and pressure equilibrium, and computes electron and hydrogen level populations and hydrogen line profiles. Moreover, it treats the calcium atom, reduced to three ionization states (Ca I, Ca II, Ca III); the Ca II ion has five levels, which are used to compute the two resonance lines (H and K) and the infrared triplet (near 8500 Å).

  13. Genotoxicity surveillance programme in workers dismantling World War I chemical ammunition.

    PubMed

    Mateuca, R A; Carton, C; Roelants, M; Roesems, S; Lison, D; Kirsch-Volders, M

    2010-06-01

    To evaluate the effectiveness of personal protective measures in a dismantling plant for chemical weapons from World War I of the Belgian Defence. Seventeen NIOSH level B-equipped plant workers exposed to arsenic trichloride (AsCl(3)) in combination with phosgene or hydrogen cyanide (HCN) were compared to 24 NIOSH level C-protected field workers occasionally exposed to genotoxic chemicals (including AsCl(3)-phosgene/HCN) when collecting chemical ammunition, and 19 matched referents. Chromosomal aberrations (CA), micronuclei (MNCB and MNMC), sister chromatid exchanges (SCE) and high frequency cells (HFC) were analysed in peripheral blood lymphocytes. Urinary arsenic levels and genetic polymorphisms in major DNA repair enzymes (hOGG1(326), XRCC1(399), XRCC3(241)) were also assessed. SCE and HFC levels were significantly higher in plant-exposed versus referent subjects, but MNCB and MNMC were not different. MNCB, SCE and HFC levels were significantly higher and MNMC levels significantly lower in field-exposed workers versus referents. AsCl(3) exposure was not correlated with genotoxicity biomarkers. Protective measures for plant-exposed workers appear adequate, but protection for field-exposed individuals could be improved.

  14. MPI_XSTAR: MPI-based parallelization of XSTAR program

    NASA Astrophysics Data System (ADS)

    Danehkar, A.

    2017-12-01

    MPI_XSTAR parallelizes execution of multiple XSTAR runs using the Message Passing Interface (MPI). XSTAR (ascl:9910.008), part of HEASARC's HEASoft (ascl:1408.004) package, calculates the physical conditions and emission spectra of ionized gases. MPI_XSTAR invokes XSTINITABLE from HEASoft to generate a job list of XSTAR commands for the given physical parameters. The job list is used to create directories in ascending order, in which an individual XSTAR run is spawned on each processor and its outputs are saved. HEASoft's XSTAR2TABLE program is then invoked on the contents of each directory to produce table model FITS files for spectroscopic analysis tools.
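
    The parallelization pattern described above (one job list, one XSTAR run per processor) can be sketched in a few lines of mpi4py; this is a generic illustration, not MPI_XSTAR's code, and the job strings are hypothetical placeholders rather than real XSTAR command lines.

    ```python
    from mpi4py import MPI        # assumes mpi4py is available
    import subprocess

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    # Hypothetical job list; MPI_XSTAR builds its own list with XSTINITABLE.
    jobs = ["xstar <parameter set 1>", "xstar <parameter set 2>"]

    # Round-robin assignment: rank r executes jobs r, r+size, r+2*size, ...
    for job in jobs[rank::size]:
        subprocess.run(job, shell=True, check=False)

    comm.Barrier()  # wait for all ranks before post-processing (e.g. XSTAR2TABLE)
    ```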

  15. Exercise in Adulthood after Irradiation of the Juvenile Brain Ameliorates Long-Term Depletion of Oligodendroglial Cells.

    PubMed

    Bull, Cecilia; Cooper, Christiana; Lindahl, Veronica; Fitting, Sylvia; Persson, Anders I; Grandér, Rita; Alborn, Ann-Marie; Björk-Eriksson, Thomas; Kuhn, H Georg; Blomgren, Klas

    2017-10-01

    Cranial radiation severely affects brain health and function, including glial cell production and myelination. Recent studies indicate that voluntary exercise has beneficial effects on oligodendrogenesis and myelination. Here, we hypothesized that voluntary running would increase oligodendrocyte numbers in the corpus callosum after irradiation of the juvenile mouse brain. The brains of C57BL/6J male mice were irradiated with 6 Gy on postnatal day 9, during the main gliogenic developmental phase, resulting in a loss of oligodendrocyte precursor cells. Upon adulthood, the mice were injected with bromodeoxyuridine and allowed to exercise on a running wheel for four weeks. Cell proliferation and survival, Ascl1+ oligodendrocyte precursor and Olig2+ oligodendrocyte cell numbers, as well as CC1+ mature oligodendrocytes, were quantified using immunohistology. Radiation induced a reduction in the number of Olig2+ oligodendrocytes by nearly 50% without affecting production or survival of new Olig2+ cells. Ascl1+ cells earlier in the oligodendroglial cell lineage were also profoundly affected, with numbers reduced by half. By three weeks of age, Olig2+ cell numbers had not recovered, and this was paralleled by a volumetric loss in the corpus callosum. The deficiency of Olig2+ oligodendrocytes persisted into adulthood. Additionally, the depletion of Ascl1+ progenitor cells was irreversible, and was even more pronounced at 12 weeks postirradiation compared to day 2 postirradiation. Furthermore, the overall number of CC1+ mature oligodendrocytes decreased by 28%. The depletion of Olig2+ cells in irradiated animals was reversed by 4 weeks of voluntary exercise. Moreover, voluntary exercise also increased the number of Ascl1+ progenitor cells in irradiated animals. Taken together, these results demonstrate that exercise in adulthood significantly ameliorates the profound and long-lasting effects of moderate irradiation exposure on immature oligodendroglial cells during postnatal development.

  16. gPhoton: THE GALEX PHOTON DATA ARCHIVE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Million, Chase; Fleming, Scott W.; Shiao, Bernie

    gPhoton is a new database product and software package that enables analysis of GALEX ultraviolet data at the photon level. The project’s stand-alone, pure-Python calibration pipeline reproduces the functionality of the original mission pipeline to reduce raw spacecraft data to lists of time-tagged, sky-projected photons, which are then hosted in a publicly available database by the Mikulski Archive at Space Telescope. This database contains approximately 130 terabytes of data describing approximately 1.1 trillion sky-projected events with a timestamp resolution of five milliseconds. A handful of Python and command-line modules serve as a front end to interact with the database and to generate calibrated light curves and images from the photon-level data at user-defined temporal and spatial scales. The gPhoton software and source code are in active development and publicly available under a permissive license. We describe the motivation, design, and implementation of the calibration pipeline, database, and tools, with emphasis on divergence from prior work, as well as challenges created by the large data volume. We summarize the astrometric and photometric performance of gPhoton relative to the original mission pipeline. For a brief example of short time-domain science capabilities enabled by gPhoton, we show new flares from the known M-dwarf flare star CR Draconis. The gPhoton software has permanent object identifiers with the ASCL (ascl:1603.004) and DOI (doi:10.17909/T9CC7G). This paper describes the software as of version v1.27.2.

  17. California Library Laws. 1977.

    ERIC Educational Resources Information Center

    Silver, Cy H.

    This document contains selections from the California Administrative Code, Education Code, Government Code, and others relating to public libraries, county law libraries and the State Library. The first section presents legal developments in California from 1974 to 1976 which are of interest to librarians. Laws and regulations are presented under…

  18. Methodology for fast detection of false sharing in threaded scientific codes

    DOEpatents

    Chung, I-Hsin; Cong, Guojing; Murata, Hiroki; Negishi, Yasushi; Wen, Hui-Fang

    2014-11-25

    A profiling tool identifies a code region with a false sharing potential. A static analysis tool classifies variables and arrays in the identified code region. A mapping detection library correlates memory access instructions in the identified code region with variables and arrays in the identified code region while a processor is running the identified code region. The mapping detection library identifies one or more instructions at risk, in the identified code region, which are subject to an analysis by a false sharing detection library. A false sharing detection library performs a run-time analysis of the one or more instructions at risk while the processor is re-running the identified code region. The false sharing detection library determines, based on the performed run-time analysis, whether two different portions of the cache memory line are accessed by the generated binary code.

  19. IMAGINE: Interstellar MAGnetic field INference Engine

    NASA Astrophysics Data System (ADS)

    Steininger, Theo

    2018-03-01

    IMAGINE (Interstellar MAGnetic field INference Engine) performs inference on generic parametric models of the Galaxy. The modular open source framework uses highly optimized tools and technology such as the MultiNest sampler (ascl:1109.006) and the information field theory framework NIFTy (ascl:1302.013) to create an instance of the Milky Way based on a set of parameters for physical observables, using Bayesian statistics to judge the mismatch between measured data and model prediction. The flexibility of the IMAGINE framework allows for simple refitting for newly available data sets and makes state-of-the-art Bayesian methods easily accessible particularly for random components of the Galactic magnetic field.

  20. fd3: Spectral disentangling of double-lined spectroscopic binary stars

    NASA Astrophysics Data System (ADS)

    Ilijić, Saša

    2017-05-01

    The spectral disentangling technique can be applied to a time series of observed spectra of a double-lined spectroscopic binary star (SB2) to determine the orbital parameters and reconstruct the spectra of the component stars, without the use of template spectra. fd3 disentangles the spectra of SB2 stars and is also capable of resolving a possible third companion. It performs the separation of spectra in Fourier space, which is faster but in several respects less versatile than wavelength-space separation. (Wavelength-space separation is implemented in the twin code CRES.) fd3 is written in C and is designed as a command-line utility for a Unix-like operating system. fd3 is a new version of FDBinary (ascl:1705.011), which is now deprecated.
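
    The reason a Fourier-space formulation is convenient is that, on a logarithmic wavelength grid, a Doppler shift is a uniform translation and therefore becomes a simple phase factor in Fourier space. The sketch below applies such a shift with NumPy; it only illustrates this property and is not part of fd3.

    ```python
    import numpy as np

    def shift_spectrum_fourier(flux, pixel_shift):
        """Shift a spectrum by a (fractional) number of pixels via a Fourier phase.

        On a log-wavelength grid a Doppler shift is a uniform translation, so it
        reduces to this phase multiplication.
        """
        n = len(flux)
        freq = np.fft.rfftfreq(n)
        phase = np.exp(-2j * np.pi * freq * pixel_shift)
        return np.fft.irfft(np.fft.rfft(flux) * phase, n)
    ```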

  1. The Library Systems Act and Rules for Administering the Library Systems Act.

    ERIC Educational Resources Information Center

    Texas State Library, Austin. Library Development Div.

    This document contains the Texas Library Systems Act and rules for administering the Library Systems Act. Specifically, it includes the following documents: Texas Library Systems Act; Summary of Codes;Texas Administrative Code: Service Complaints and Protest Procedure; Criteria For Texas Library System Membership; and Certification Requirements…

  2. Software Library: A Reusable Software Issue.

    DTIC Science & Technology

    1984-06-01

    Keywords: Software Library; Program Library; Reusability; Generator. A particular example of the Software Library, the Program Library, is described as a prototype of a reusable library. Hierarchical programming libraries are described. Finally, non-code products in the Software Library are discussed.

  3. CSlib, a library to couple codes via Client/Server messaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Plimpton, Steve

    The CSlib is a small, portable library which enables two (or more) independent simulation codes to be coupled by exchanging messages with each other. Both codes link to the library when they are built and can then communicate with each other as they run. The messages contain data or instructions that the two codes send back and forth to each other. The messaging can take place via files, sockets, or MPI; the latter is a standard distributed-memory message-passing library.
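
    As a generic illustration of the client/server messaging pattern described above (and not of CSlib's actual API), the sketch below exchanges one JSON message between two Python processes over a socket; the host, port, and message fields are arbitrary choices made for this example.

    ```python
    import json
    import socket

    HOST, PORT = "localhost", 5007   # arbitrary choices for this illustration

    def server_once():
        """Receive one JSON message from the partner code and send a reply."""
        with socket.create_server((HOST, PORT)) as srv:
            conn, _ = srv.accept()
            with conn:
                request = json.loads(conn.recv(65536).decode())
                reply = {"field": [2.0 * x for x in request["field"]]}
                conn.sendall(json.dumps(reply).encode())

    def client_once(field):
        """Send one JSON message and block until the reply arrives."""
        with socket.create_connection((HOST, PORT)) as conn:
            conn.sendall(json.dumps({"field": field}).encode())
            return json.loads(conn.recv(65536).decode())
    ```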

  4. Ada Integrated Environment III Computer Program Development Specification. Volume III. Ada Optimizing Compiler.

    DTIC Science & Technology

    1981-12-01

    Symbol Map: library-file.library-unit{.subunit}.SYMAP; Statement Map: library-file.library-unit{.subunit}.SMAP; Type Map: library-file.library-unit{.subunit}.TMAP. Code generator outputs: SYMAP (Symbol Map), SMAP (updated Statement Map), TMAP (Type Map). A.3.5 The PUNIT Command. Example A-3: Compiler Command Stream for the Code Generator. Texas Instruments, Ada Optimizing Compiler.

  5. Idaho Library Laws, 1996-1997. Full Edition.

    ERIC Educational Resources Information Center

    Idaho State Library, Boise.

    This new edition of the "Idaho Library Laws" contains changes through the 1996 legislative session and includes "Idaho Code" sections that legally affect city, school-community or district libraries, or the Idaho State Library. These sections include the basic library laws in "Idaho Code" Title 33, Chapters 25, 26,…

  6. Generation of induced neurons by direct reprogramming in the mammalian cochlea.

    PubMed

    Nishimura, K; Weichert, R M; Liu, W; Davis, R L; Dabdoub, A

    2014-09-05

    Primary auditory neurons (ANs) in the mammalian cochlea play a critical role in hearing as they transmit auditory information in the form of electrical signals from mechanosensory cochlear hair cells in the inner ear to the brainstem. Their progressive degeneration is associated with disease conditions, excessive noise exposure and aging. Replacement of ANs, which lack the ability to regenerate spontaneously, would have a significant impact on research and advancement in cochlear implants in addition to the amelioration of hearing impairment. The aim of this study was to induce a neuronal phenotype in endogenous non-neural cells in the cochlea, which is the essential organ of hearing. Overexpression of a neurogenic basic helix-loop-helix transcription factor, Ascl1, in the cochlear non-sensory epithelial cells induced neurons at high efficiency at embryonic, postnatal and juvenile stages. Moreover, induced neurons showed typical properties of neuron morphology, gene expression and electrophysiology. Our data indicate that Ascl1 alone or Ascl1 and NeuroD1 is sufficient to reprogram cochlear non-sensory epithelial cells into functional neurons. Generation of neurons from non-neural cells in the cochlea is an important step for the regeneration of ANs in the mature mammalian cochlea. Copyright © 2014 IBRO. Published by Elsevier Ltd. All rights reserved.

  7. Functional characterization and expression of GASCL1 and GASCL2, two anther-specific chalcone synthase like enzymes from Gerbera hybrida.

    PubMed

    Kontturi, Juha; Osama, Raisa; Deng, Xianbao; Bashandy, Hany; Albert, Victor A; Teeri, Teemu H

    2017-02-01

    The chalcone synthase superfamily consists of type III polyketide synthases (PKSs), enzymes responsible for producing plant secondary metabolites with various biological and pharmacological activities. Anther-specific chalcone synthase-like enzymes (ASCLs) represent an ancient group of type III PKSs involved in the biosynthesis of sporopollenin, the main component of the exine layer of moss spores and mature pollen grains of seed plants. In the latter, ASCL proteins are localized in the tapetal cells of the anther where they participate in sporopollenin biosynthesis and exine formation within the locule. It is thought that the enzymes responsible for sporopollenin biosynthesis are highly conserved, and thus far, each angiosperm species with a genome sequenced has possessed two ASCL genes, which in Arabidopsis thaliana are PKSA and PKSB. The Gerbera hybrida (gerbera) PKS protein family consists of three chalcone synthases (GCHS1, GCHS3 and GCHS4) and three 2-pyrone synthases (G2PS1, G2PS2 and G2PS3). In previous studies we have demonstrated the functions of chalcone synthases in flavonoid biosynthesis, and the involvement of 2-pyrone synthases in the biosynthesis of antimicrobial compounds found in gerbera. In this study we expanded the gerbera PKS family by functionally characterizing two gerbera ASCL proteins. In vitro enzymatic studies using purified recombinant proteins showed that both GASCL1 and GASCL2 were able to use medium and long-chain acyl-CoA starters and perform two to three condensation reactions of malonyl-CoA to produce tri- and tetraketide 2-pyrones, usually referred to as alpha-pyrones in the sporopollenin literature. Both GASCL1 and GASCL2 genes were expressed only in floral organs, with most expression observed in anthers. In the anthers, transcripts of both genes showed strict tapetum-specific localization. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. Progenitor potential of nkx6.1-expressing cells throughout zebrafish life and during beta cell regeneration.

    PubMed

    Ghaye, Aurélie P; Bergemann, David; Tarifeño-Saldivia, Estefania; Flasse, Lydie C; Von Berg, Virginie; Peers, Bernard; Voz, Marianne L; Manfroid, Isabelle

    2015-09-02

    In contrast to mammals, the zebrafish has the remarkable capacity to regenerate its pancreatic beta cells very efficiently. Understanding the mechanisms of regeneration in the zebrafish and the differences with mammals will be fundamental to discovering molecules able to stimulate the regeneration process in mammals. To identify the pancreatic cells able to give rise to new beta cells in the zebrafish, we generated new transgenic lines allowing the tracing of multipotent pancreatic progenitors and endocrine precursors. Using novel bacterial artificial chromosome transgenic nkx6.1 and ascl1b reporter lines, we established that nkx6.1-positive cells give rise to all the pancreatic cell types and ascl1b-positive cells give rise to all the endocrine cell types in the zebrafish embryo. These two genes are initially co-expressed in the pancreatic primordium and their domains segregate, not as a result of mutual repression, but through the opposite effects of Notch signaling, maintaining nkx6.1 expression while repressing ascl1b in progenitors. In the adult zebrafish, nkx6.1 expression persists exclusively in the ductal tree at the tip of which its expression coincides with Notch active signaling in centroacinar/terminal end duct cells. Tracing these cells reveals that they are able to differentiate into other ductal cells and into insulin-expressing cells in normal (non-diabetic) animals. This capacity of ductal cells to generate endocrine cells is supported by the detection of ascl1b in the nkx6.1:GFP ductal cell transcriptome. This transcriptome also reveals, besides actors of the Notch and Wnt pathways, several novel markers such as id2a. Finally, we show that beta cell ablation in the adult zebrafish triggers proliferation of ductal cells and their differentiation into insulin-expressing cells. We have shown that, in the zebrafish embryo, nkx6.1+ cells are bona fide multipotent pancreatic progenitors, while ascl1b+ cells represent committed endocrine precursors. In contrast to the mouse, pancreatic progenitor markers nkx6.1 and pdx1 continue to be expressed in adult ductal cells, a subset of which we show are still able to proliferate and undergo ductal and endocrine differentiation, providing robust evidence of the existence of pancreatic progenitor/stem cells in the adult zebrafish. Our findings support the hypothesis that nkx6.1+ pancreatic progenitors contribute to beta cell regeneration. Further characterization of these cells will open up new perspectives for anti-diabetic therapies.

  9. Idaho Library Laws, 1999-2000. Full Edition.

    ERIC Educational Resources Information Center

    Idaho State Library, Boise.

    This new edition of the Idaho Library Laws contains changes through the 1998 legislative session and includes Idaho Code sections that legally affect city, school-community or district libraries, or the Idaho State Library. These sections include the basic library laws in Idaho Code Title 33, Chapters 25, 26, and 27, additional sections of the law…

  10. An introduction to QR Codes: linking libraries and mobile patrons.

    PubMed

    Hoy, Matthew B

    2011-01-01

    QR codes, or "Quick Response" codes, are two-dimensional barcodes that can be scanned by mobile smartphone cameras. These codes can be used to provide fast access to URLs, telephone numbers, and short passages of text. With the rapid adoption of smartphones, librarians are able to use QR codes to promote services and help library users find materials quickly and independently. This article will explain what QR codes are, discuss how they can be used in the library, and describe issues surrounding their use. A list of resources for generating and scanning QR codes is also provided.
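
    Generating such a code takes only a couple of lines; the example below uses the third-party Python package qrcode (assumed installed, along with pillow for image output) to encode a placeholder catalogue URL into an image suitable for printing on a poster, handout, or shelf label.

    ```python
    import qrcode  # third-party package, assumed installed: pip install qrcode pillow

    # Encode a placeholder catalogue URL into a QR image for printing.
    img = qrcode.make("https://library.example.edu/catalog")
    img.save("catalog_qr.png")
    ```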

  11. SMURF: SubMillimeter User Reduction Facility

    NASA Astrophysics Data System (ADS)

    Jenness, Tim; Chapin, Edward L.; Berry, David S.; Gibb, Andy G.; Tilanus, Remo P. J.; Balfour, Jennifer; Tilanus, Vincent; Currie, Malcolm J.

    2013-10-01

    SMURF reduces submillimeter single-dish continuum and heterodyne data. It is mainly targeted at data produced by the James Clerk Maxwell Telescope, but data from other telescopes have been reduced using the package. SMURF is released as part of the bundle that comprises Starlink (ascl:1110.012) and most of the packages that use it. The two key commands are MAKEMAP, for the creation of maps from submillimeter continuum data, and MAKECUBE, for the creation of data cubes from heterodyne array instruments. The software can also convert data from legacy JCMT file formats to the modern form to allow it to be processed by MAKECUBE. SMURF is a core component of the ORAC-DR (ascl:1310.001) data reduction pipeline for JCMT.

  12. Rethinking mobile delivery: using Quick Response codes to access information at the point of need.

    PubMed

    Lombardo, Nancy T; Morrow, Anne; Le Ber, Jeanne

    2012-01-01

    This article covers the use of Quick Response (QR) codes to provide instant mobile access to information, digital collections, educational offerings, library website, subject guides, text messages, videos, and library personnel. The array of uses and the value of using QR codes to push customized information to patrons are explained. A case is developed for using QR codes for mobile delivery of customized information to patrons. Applications in use at the Libraries of the University of Utah will be reviewed to provide readers with ideas for use in their library. Copyright © Taylor & Francis Group, LLC

  13. Conversion of Fibroblasts to Parvalbumin Neurons by One Transcription Factor, Ascl1, and the Chemical Compound Forskolin*

    PubMed Central

    Shi, Zixiao; Zhang, Juan; Chen, Shuangquan; Li, Yanxin; Lei, Xuepei; Qiao, Huimin; Zhu, Qianwen; Hu, Baoyang; Zhou, Qi; Jiao, Jianwei

    2016-01-01

    Abnormalities in parvalbumin (PV)-expressing interneurons cause neurodevelopmental disorders such as epilepsy, autism, and schizophrenia. Unlike other types of neurons that can be efficiently differentiated from pluripotent stem cells, PV neurons were minimally generated using a conventional differentiation strategy. In this study we developed an adenovirus-based transdifferentiation strategy that incorporates an additional chemical compound for the efficient generation of induced PV (iPV) neurons. The chemical compound forskolin combined with Ascl1 induced ∼80% of mouse fibroblasts to iPV neurons. The iPV neurons generated by this procedure matured 5–7 days post infection and were characterized by electrophysiological properties and known neuronal markers, such as PV and GABA. Our studies, therefore, identified an efficient approach for generating PV neurons. PMID:27137935

  14. Evaluation of the DRAGON code for VHTR design analysis.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taiwo, T. A.; Kim, T. K.; Nuclear Engineering Division

    2006-01-12

    This letter report summarizes three activities that were undertaken in FY 2005 to gather information on the DRAGON code and to perform limited evaluations of the code's performance when used in the analysis of Very High Temperature Reactor (VHTR) designs. These activities include: (1) use of the code to model the fuel elements of the helium-cooled and liquid-salt-cooled VHTR designs, with results compared to those from another deterministic lattice code (WIMS8) and a Monte Carlo code (MCNP); (2) a preliminary assessment of the nuclear data library currently used with the code and of libraries that have been provided by the IAEA WIMS-D4 Library Update Project (WLUP); and (3) a DRAGON workshop held to discuss the code's capabilities for modeling the VHTR.

  15. Reliable Genetic Labeling of Adult-Born Dentate Granule Cells Using Ascl1 CreERT2 and Glast CreERT2 Murine Lines.

    PubMed

    Yang, Sung M; Alvarez, Diego D; Schinder, Alejandro F

    2015-11-18

    Newly generated dentate granule cells (GCs) are relevant for input discrimination in the adult hippocampus. Yet, their precise contribution to information processing remains unclear. To address this question, it is essential to develop approaches to precisely label entire cohorts of adult-born GCs. In this work, we used genetically modified mice to allow conditional expression of tdTomato (Tom) in adult-born GCs and characterized their development and functional integration. Ascl1(CreERT2);CAG(floxStopTom) and Glast(CreERT2);CAG(floxStopTom) mice resulted in indelible expression of Tom in adult neural stem cells and their lineage upon tamoxifen induction. Whole-cell recordings were performed to measure intrinsic excitability, firing behavior, and afferent excitatory connectivity. Developing GCs were also staged by the expression of early and late neuronal markers. The slow development of adult-born GCs characterized here is consistent with previous reports using retroviral approaches that have revealed that a mature phenotype is typically achieved after 6-8 weeks. Our findings demonstrate that Ascl1(CreERT2) and Glast(CreERT2) mouse lines enable simple and reliable labeling of adult-born GC lineages within restricted time windows. Therefore, these mice greatly facilitate tagging new neurons and manipulating their activity, required for understanding adult neurogenesis in the context of network remodeling, learning, and behavior. Our study shows that Ascl1(CreERT2) and Glast(CreERT2) mouse lines can be used to label large cohorts of adult-born dentate granule cells with excellent time resolution. Neurons labeled in this manner display developmental and functional profiles that are in full agreement with previous findings using thymidine analogs and retroviral labeling, thus providing an alternative approach to tackle fundamental questions on circuit remodeling. Because of the massive neuronal targeting and the simplicity of this method, genetic labeling will contribute to expanding research on adult neurogenesis. Copyright © 2015 the authors.

  16. A Working Model for the System Alumina-Magnesia.

    DTIC Science & Technology

    1983-05-01

    Several regions in the resulting diagram appear rather uncertain, particularly the liquidus. Thermochemical data are drawn from the JANAF Thermochemical Tables (D. R. Stull, National Bureau of Standards). The remainder of this record is a report distribution list (naval technical libraries and Naval Postgraduate School offices) rather than abstract text.

  17. CHEMICAL EVOLUTION LIBRARY FOR GALAXY FORMATION SIMULATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saitoh, Takayuki R., E-mail: saitoh@elsi.jp

    We have developed a software library for chemical evolution simulations of galaxy formation under the simple stellar population (SSP) approximation. In this library, all of the necessary components concerning chemical evolution, such as initial mass functions, stellar lifetimes, yields from Type II and Type Ia supernovae, asymptotic giant branch stars, and neutron star mergers, are compiled from the literature. Various models are pre-implemented in this library so that users can choose their favorite combination of models. Subroutines of this library return released energy and masses of individual elements depending on a given event type. Since the redistribution manner of these quantities depends on the implementation of users' simulation codes, this library leaves it up to the simulation code. As demonstrations, we carry out both one-zone, closed-box simulations and 3D simulations of a collapsing gas and dark matter system using this library. In these simulations, we can easily compare the impact of individual models on the chemical evolution of galaxies, just by changing the control flags and parameters of the library. Since this library only deals with the part of chemical evolution under the SSP approximation, any simulation codes that use the SSP approximation—namely, particle-based and mesh codes, as well as semianalytical models—can use it. This library is named "CELib" after the term "Chemical Evolution Library" and is made available to the community.
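
    CELib itself is a C library with its own API; purely to illustrate the bookkeeping pattern described above (an event type maps to released energy and per-element ejecta masses, while redistribution is left to the host simulation code), here is a toy Python sketch in which every table value and function name is hypothetical.

      # Toy illustration of SSP-style feedback lookup (hypothetical numbers and names,
      # not CELib's actual API): an event type maps to released energy and element masses,
      # and redistribution onto gas particles/cells is left to the host simulation code.
      FEEDBACK_TABLE = {
          # event type: (energy released [erg], ejecta masses [Msun] per element)
          "SNII": (1.0e51, {"H": 3.0, "He": 1.5, "O": 0.8, "Fe": 0.07}),
          "SNIa": (1.3e51, {"Fe": 0.6, "Si": 0.15}),
          "AGB":  (0.0,    {"H": 0.5, "He": 0.3, "C": 0.05}),
      }

      def feedback_for_event(event_type, ssp_mass_msun):
          """Return (energy, per-element masses) scaled to an SSP of the given mass,
          assuming the table is normalized to a 100 Msun SSP (toy assumption)."""
          energy, yields = FEEDBACK_TABLE[event_type]
          scale = ssp_mass_msun / 100.0
          return energy * scale, {el: m * scale for el, m in yields.items()}

      energy, ejecta = feedback_for_event("SNII", ssp_mass_msun=1.0e4)
      print(f"energy = {energy:.2e} erg, ejecta = {ejecta}")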

  18. Deterministic transfection drives efficient nonviral reprogramming and uncovers reprogramming barriers.

    PubMed

    Gallego-Perez, Daniel; Otero, Jose J; Czeisler, Catherine; Ma, Junyu; Ortiz, Cristina; Gygli, Patrick; Catacutan, Fay Patsy; Gokozan, Hamza Numan; Cowgill, Aaron; Sherwood, Thomas; Ghatak, Subhadip; Malkoc, Veysi; Zhao, Xi; Liao, Wei-Ching; Gnyawali, Surya; Wang, Xinmei; Adler, Andrew F; Leong, Kam; Wulff, Brian; Wilgus, Traci A; Askwith, Candice; Khanna, Savita; Rink, Cameron; Sen, Chandan K; Lee, L James

    2016-02-01

    Safety concerns and/or the stochastic nature of current transduction approaches have hampered nuclear reprogramming's clinical translation. We report a novel non-viral nanotechnology-based platform permitting deterministic large-scale transfection with single-cell resolution. The superior capabilities of our technology are demonstrated by modification of the well-established direct neuronal reprogramming paradigm using overexpression of the transcription factors Brn2, Ascl1, and Myt1l (BAM). Reprogramming efficiencies were comparable to viral methodologies (up to ~9-12%) without the constraints of capsid size and with the ability to control plasmid dosage, in addition to showing superior performance relative to existing non-viral methods. Neuronal complexity could be further increased by varying the BAM ratio and by adding other proneural genes to the BAM cocktail. High-throughput NEP also allowed easy interrogation of the reprogramming process: we discovered that BAM-mediated reprogramming is regulated by Ascl1 dosage and the S-phase cyclin CCNA2, and that some induced neurons passed through a nestin-positive cell stage. In the field of regenerative medicine, the ability to direct cell fate by nuclear reprogramming is an important facet in terms of clinical application. In this article, the authors describe their novel technique of cell reprogramming through overexpression of the transcription factors Brn2, Ascl1, and Myt1l (BAM) by in situ electroporation through nanochannels. This new technique could provide a platform for future designs. Copyright © 2016 Elsevier Inc. All rights reserved.

  19. QR Codes as Finding Aides: Linking Electronic and Print Library Resources

    ERIC Educational Resources Information Center

    Kane, Danielle; Schneidewind, Jeff

    2011-01-01

    As part of a focused, methodical, and evaluative approach to emerging technologies, QR codes are one of many new technologies being used by the UC Irvine Libraries. QR codes provide simple connections between print and virtual resources. In summer 2010, a small task force began to investigate how QR codes could be used to provide information and…

  20. Conversion of Fibroblasts to Parvalbumin Neurons by One Transcription Factor, Ascl1, and the Chemical Compound Forskolin.

    PubMed

    Shi, Zixiao; Zhang, Juan; Chen, Shuangquan; Li, Yanxin; Lei, Xuepei; Qiao, Huimin; Zhu, Qianwen; Hu, Baoyang; Zhou, Qi; Jiao, Jianwei

    2016-06-24

    Abnormalities in parvalbumin (PV)-expressing interneurons cause neurodevelopmental disorders such as epilepsy, autism, and schizophrenia. Unlike other types of neurons that can be efficiently differentiated from pluripotent stem cells, PV neurons were minimally generated using a conventional differentiation strategy. In this study we developed an adenovirus-based transdifferentiation strategy that incorporates an additional chemical compound for the efficient generation of induced PV (iPV) neurons. The chemical compound forskolin combined with Ascl1 induced ∼80% of mouse fibroblasts to iPV neurons. The iPV neurons generated by this procedure matured 5-7 days post infection and were characterized by electrophysiological properties and known neuronal markers, such as PV and GABA. Our studies, therefore, identified an efficient approach for generating PV neurons. © 2016 by The American Society for Biochemistry and Molecular Biology, Inc.

  1. Speciation of volatile arsenic at geothermal features in Yellowstone National Park

    USGS Publications Warehouse

    Planer-Friedrich, B.; Lehr, C.; Matschullat, J.; Merkel, B.J.; Nordstrom, D. Kirk; Sandstrom, M.W.

    2006-01-01

    Geothermal features in the Yellowstone National Park contain up to several milligram per liter of aqueous arsenic. Part of this arsenic is volatilized and released into the atmosphere. Total volatile arsenic concentrations of 0.5-200 mg/m3 at the surface of the hot springs were found to exceed the previously assumed nanogram per cubic meter range of background concentrations by orders of magnitude. Speciation of the volatile arsenic was performed using solid-phase micro-extraction fibers with analysis by GC-MS. The arsenic species most frequently identified in the samples is (CH3)2AsCl, followed by (CH3)3As, (CH3)2AsSCH3, and CH3AsCl2 in decreasing order of frequency. This report contains the first documented occurrence of chloro- and thioarsines in a natural environment. Toxicity, mobility, and degradation products are unknown. ?? 2006 Elsevier Inc. All rights reserved.

  2. Carboxylation of cytosine (5caC) in the CG dinucleotide in the E-box motif (CGCAG|GTG) increases binding of the Tcf3|Ascl1 helix-loop-helix heterodimer 10-fold.

    PubMed

    Golla, Jaya Prakash; Zhao, Jianfei; Mann, Ishminder K; Sayeed, Syed K; Mandal, Ajeet; Rose, Robert B; Vinson, Charles

    2014-06-27

    Three oxidative products of 5-methylcytosine (5mC) occur in mammalian genomes. We evaluated if these cytosine modifications in a CG dinucleotide altered DNA binding of four B-HLH homodimers and three heterodimers to the E-Box motif CGCAG|GTG. We examined 25 DNA probes containing all combinations of cytosine in a CG dinucleotide and none changed binding except for carboxylation of cytosine (5caC) in the strand CGCAG|GTG. 5caC enhanced binding of all examined B-HLH homodimers and heterodimers, particularly the Tcf3|Ascl1 heterodimer which increased binding ~10-fold. These results highlight a potential function of the oxidative products of 5mC, changing the DNA binding of sequence-specific transcription factors. Published by Elsevier Inc.

  3. ExactPack

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singleton, Jr., Robert; Israel, Daniel M.; Doebling, Scott William

    For code verification, one compares the code output against known exact solutions. There are many standard test problems used in this capacity, such as the Noh and Sedov problems. ExactPack is a utility that integrates many of these exact solution codes into a common API (application program interface), and can be used as a stand-alone code or as a python package. ExactPack consists of python driver scripts that access a library of exact solutions written in Fortran or Python. The spatial profiles of the relevant physical quantities, such as the density, fluid velocity, sound speed, or internal energy, are returned at a time specified by the user. The solution profiles can be viewed and examined by a command line interface or a graphical user interface, and a number of analysis tools and unit tests are also provided. We have documented the physics of each problem in the solution library, and provided complete documentation on how to extend the library to include additional exact solutions. ExactPack's code architecture makes it easy to extend the solution-code library to include additional exact solutions in a robust, reliable, and maintainable manner.
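
    The driver pattern described above (ask for a named exact solution, then evaluate spatial profiles at a user-chosen time) can be sketched as follows. The class and its solution are illustrative stand-ins, not ExactPack's actual API; a real verification study would call ExactPack's own solvers.

      # Sketch of the "exact-solution library" pattern (hypothetical class, not ExactPack's API):
      # a solver returns physical profiles on a user-supplied spatial grid at a requested time.
      import numpy as np

      class LinearAdvectionSolution:
          """Exact solution of u_t + a*u_x = 0 for a Gaussian initial profile:
          u(x, t) = exp(-((x - a*t - x0)**2) / (2*sigma**2))."""
          def __init__(self, speed=1.0, x0=0.0, sigma=0.1):
              self.speed, self.x0, self.sigma = speed, x0, sigma

          def __call__(self, x, t):
              return np.exp(-((x - self.speed * t - self.x0) ** 2) / (2.0 * self.sigma ** 2))

      # Evaluate the exact profile at t = 0.5 and compare against stand-in "code output".
      x = np.linspace(-1.0, 2.0, 301)
      exact = LinearAdvectionSolution(speed=1.0)(x, t=0.5)
      code_output = exact + 1e-3 * np.random.default_rng(0).standard_normal(x.size)
      print("max abs error:", np.max(np.abs(code_output - exact)))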

  4. Field Encapsulation Library The FEL 2.2 User Guide

    NASA Technical Reports Server (NTRS)

    Moran, Patrick J.; Henze, Chris; Ellsworth, David

    1999-01-01

    This document describes version 2.2 of the Field Encapsulation Library (FEL), a library of mesh and field classes. FEL is a library for programmers - it is a "building block" enabling the rapid development of applications by a user. Since FEL is a library intended for code development, it is essential that enough technical detail be provided so that one can make full use of the code. Providing such detail requires some assumptions with respect to the reader's familiarity with the library implementation language, C++, particularly C++ with templates. We have done our best to make the explanations accessible to those who may not be completely C++ literate. Nevertheless, familiarity with the language will certainly help one's understanding of how and why things work the way they do. One consolation is that the level of understanding essential for using the library is significantly less than the level that one should have in order to modify or extend the library. One more remark on C++ templates: Templates have been a source of both joy and frustration for us. The frustration stems from the lack of mature or complete implementations that one has to work with. Template problems rear their ugly head particularly when porting. When porting C code, successfully compiling to a set of object files typically means that one is almost done. With templated C++ and the current state of the compilers and linkers, generating the object files is often only the beginning of the fun. On the other hand, templates are quite powerful. Used judiciously, templates enable more succinct designs and more efficient code. Templates also help with code maintenance. Designers can avoid creating objects that are the same in many respects, but not exactly the same. For example, FEL fields are templated by node type, thus the code for scalar fields and vector fields is shared. Furthermore, node type templating allows the library user to instantiate fields with data types not provided by the FEL authors. This type of flexibility would be difficult to offer without the support of the language. For users who may be having template-related problems, we offer the consolation that support for C++ templates is destined to improve with time. Efforts such as the Standard Template Library (STL) will inevitably drive vendors to provide more thorough, optimized tools for template code development. Furthermore, the benefits will become harder to resist for those who currently subscribe to the least-common-denominator "code it all in C" strategy. May FEL bring you both increased productivity and aesthetic satisfaction.

  5. Influence of elevated-CRP level-related polymorphisms in non-rheumatic Caucasians on the risk of subclinical atherosclerosis and cardiovascular disease in rheumatoid arthritis.

    PubMed

    López-Mejías, Raquel; Genre, Fernanda; Remuzgo-Martínez, Sara; González-Juanatey, Carlos; Robustillo-Villarino, Montserrat; Llorca, Javier; Corrales, Alfonso; Vicente, Esther; Miranda-Filloy, José A; Magro, César; Tejera-Segura, Beatriz; Ramírez Huaranga, Marco A; Pina, Trinitario; Blanco, Ricardo; Alegre-Sancho, Juan J; Raya, Enrique; Mijares, Verónica; Ubilla, Begoña; Mínguez Sánchez, María D; Gómez-Vaquero, Carmen; Balsa, Alejandro; Pascual-Salcedo, Dora; López-Longo, Francisco J; Carreira, Patricia; González-Álvaro, Isidoro; Rodríguez-Rodríguez, Luis; Fernández-Gutiérrez, Benjamín; Ferraz-Amaro, Iván; Castañeda, Santos; Martín, Javier; González-Gay, Miguel A

    2016-08-18

    Association between elevated C-reactive protein (CRP) serum levels and subclinical atherosclerosis and cardiovascular (CV) events was described in rheumatoid arthritis (RA). CRP, HNF1A, LEPR, GCKR, NLRP3, IL1F10, PPP1R3B, ASCL1, HNF4A and SALL1 exert an influence on elevated CRP serum levels in non-rheumatic Caucasians. Consequently, we evaluated the potential role of these genes in the development of CV events and subclinical atherosclerosis in RA patients. Three tag CRP polymorphisms and HNF1A, LEPR, GCKR, NLRP3, IL1F10, PPP1R3B, ASCL1, HNF4A and SALL1 were genotyped in 2,313 Spanish patients by TaqMan. Subclinical atherosclerosis was determined in 1,298 of them by carotid ultrasonography (by assessment of carotid intima-media thickness (cIMT) and presence/absence of carotid plaques). CRP serum levels at diagnosis and at the time of carotid ultrasonography were measured in 1,662 and 1,193 patients, respectively, by immunoturbidimetry. Interestingly, a relationship between CRP gene variants and CRP serum levels at diagnosis and at the time of the carotid ultrasonography was disclosed. However, no statistically significant differences were found when CRP, HNF1A, LEPR, GCKR, NLRP3, IL1F10, PPP1R3B, ASCL1, HNF4A and SALL1 were evaluated according to the presence/absence of CV events, carotid plaques and cIMT after adjustment. Our results do not confirm an association between these genes and CV disease in RA.

  6. The National Transport Code Collaboration Module Library

    NASA Astrophysics Data System (ADS)

    Kritz, A. H.; Bateman, G.; Kinsey, J.; Pankin, A.; Onjun, T.; Redd, A.; McCune, D.; Ludescher, C.; Pletzer, A.; Andre, R.; Zakharov, L.; Lodestro, L.; Pearlstein, L. D.; Jong, R.; Houlberg, W.; Strand, P.; Wiley, J.; Valanju, P.; John, H. St.; Waltz, R.; Mandrekas, J.; Mau, T. K.; Carlsson, J.; Braams, B.

    2004-12-01

    This paper reports on the progress in developing a library of code modules under the auspices of the National Transport Code Collaboration (NTCC). Code modules are high quality, fully documented software packages with a clearly defined interface. The modules provide a variety of functions, such as implementing numerical physics models; performing ancillary functions such as I/O or graphics; or providing tools for dealing with common issues in scientific programming such as portability of Fortran codes. Researchers in the plasma community submit code modules, and a review procedure is followed to ensure adherence to programming and documentation standards. The review process is designed to provide added confidence with regard to the use of the modules and to allow users and independent reviewers to validate the claims of the modules' authors. All modules include source code; clear instructions for compilation of binaries on a variety of target architectures; and test cases with well-documented input and output. All the NTCC modules and ancillary information, such as current standards and documentation, are available from the NTCC Module Library Website http://w3.pppl.gov/NTCC. The goal of the project is to develop a resource of value to builders of integrated modeling codes and to plasma physics researchers generally. Currently, there are more than 40 modules in the module library.

  7. Comparing the validity of the self reporting questionnaire and the Afghan symptom checklist: dysphoria, aggression, and gender in transcultural assessment of mental health

    PubMed Central

    2014-01-01

    Background: The relative performance of local and international assessment instruments is subject to ongoing discussion in transcultural research on mental health and psychosocial support. We examined the construct and external validity of two instruments, one developed for use in Afghanistan, the other developed by the World Health Organization for use in resource-poor settings. Methods: We used data collected on 1003 Afghan adults (500 men, 503 women) randomly sampled at three sites in Afghanistan. We compared the 22-item Afghan Symptom Checklist (ASCL), a culturally-grounded assessment of psychosocial wellbeing, with Pashto and Dari versions of the 20-item Self-Reporting Questionnaire (SRQ-20). We derived subscales using exploratory and confirmatory factor analyses (EFA and CFA) and tested total and subscale scores for external validity with respect to lifetime trauma and household wealth using block model regressions. Results: EFA suggested a three-factor structure for SRQ-20 - somatic complaints, negative affect, and emotional numbing - and a two-factor structure for ASCL - jigar khun (dysphoria) and aggression. Both factor models were supported by CFA in separate subsamples. Women had higher scores for each of the five subscales than men (p < 0.001), and larger bivariate associations with trauma (rs .24 to .29, and .10 to .19, women and men respectively) and household wealth (rs -.27 to -.39, and .05 to -.22, respectively). The three SRQ-20 subscales and the ASCL jigar khun subscale were equally associated with variance in trauma exposures. However, interactions between gender and jigar khun suggested that, relative to SRQ-20, the jigar khun subscale was more strongly associated with household wealth for women; similarly, gender interactions with aggression indicated that the aggression subscale was more strongly associated with trauma and wealth. Conclusions: Two central elements of Afghan conceptualizations of mental distress - aggression and the syndrome jigar khun - were captured by the ASCL and not by the SRQ-20. The appropriateness of the culturally-grounded instrument was more salient for women, indicating that the validity of instruments may be gender-differentiated. Transcultural validation processes for tools measuring mental distress need to explicitly take gender into account. Culturally relevant measures are worth developing for long-term psychosocial programming. PMID:25034331

  8. Comparing the validity of the self reporting questionnaire and the Afghan symptom checklist: dysphoria, aggression, and gender in transcultural assessment of mental health.

    PubMed

    Rasmussen, Andrew; Ventevogel, Peter; Sancilio, Amelia; Eggerman, Mark; Panter-Brick, Catherine

    2014-07-18

    The relative performance of local and international assessment instruments is subject to ongoing discussion in transcultural research on mental health and psychosocial support. We examined the construct and external validity of two instruments, one developed for use in Afghanistan, the other developed by the World Health Organization for use in resource-poor settings. We used data collected on 1003 Afghan adults (500 men, 503 women) randomly sampled at three sites in Afghanistan. We compared the 22-item Afghan Symptom Checklist (ASCL), a culturally-grounded assessment of psychosocial wellbeing, with Pashto and Dari versions of the 20-item Self-Reporting Questionnaire (SRQ-20). We derived subscales using exploratory and confirmatory factor analyses (EFA and CFA) and tested total and subscale scores for external validity with respect to lifetime trauma and household wealth using block model regressions. EFA suggested a three-factor structure for SRQ-20--somatic complaints, negative affect, and emotional numbing--and a two-factor structure for ASCL--jigar khun (dysphoria) and aggression. Both factor models were supported by CFA in separate subsamples. Women had higher scores for each of the five subscales than men (p < 0.001), and larger bivariate associations with trauma (rs .24 to .29, and .10 to .19, women and men respectively) and household wealth (rs -.27 to -.39, and .05 to -.22, respectively). The three SRQ-20 subscales and the ASCL jigar khun subscale were equally associated with variance in trauma exposures. However, interactions between gender and jigar khun suggested that, relative to SRQ-20, the jigar khun subscale was more strongly associated with household wealth for women; similarly, gender interactions with aggression indicated that the aggression subscale was more strongly associated with trauma and wealth. Two central elements of Afghan conceptualizations of mental distress--aggression and the syndrome jigar khun--were captured by the ASCL and not by the SRQ-20. The appropriateness of the culturally-grounded instrument was more salient for women, indicating that the validity of instruments may be gender-differentiated. Transcultural validation processes for tools measuring mental distress need to explicitly take gender into account. Culturally relevant measures are worth developing for long-term psychosocial programming.

  9. A user's guide to Sandia's latin hypercube sampling software : LHS UNIX library/standalone version.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Swiler, Laura Painton; Wyss, Gregory Dane

    2004-07-01

    This document is a reference guide for the UNIX Library/Standalone version of the Latin Hypercube Sampling Software. This software has been developed to generate Latin hypercube multivariate samples. This version runs on Linux or UNIX platforms. This manual covers the use of the LHS code in a UNIX environment, run either as a standalone program or as a callable library. The underlying code in the UNIX Library/Standalone version of LHS is almost identical to the updated Windows version of LHS released in 1998 (SAND98-0210). However, some modifications were made to customize it for a UNIX environment and as a library that is called from the DAKOTA environment. This manual covers the use of the LHS code as a library and in the standalone mode under UNIX.
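
    The Sandia LHS code itself is Fortran; as a quick illustration of the same sampling idea in Python, SciPy's scipy.stats.qmc module provides a Latin hypercube engine (the dimension, sample count, and bounds below are arbitrary).

      # Latin hypercube sampling illustration using SciPy (not the Sandia LHS code itself).
      from scipy.stats import qmc

      sampler = qmc.LatinHypercube(d=3, seed=42)      # 3 uncertain input variables
      unit_sample = sampler.random(n=10)              # 10 samples in the unit hypercube

      # Scale to (arbitrary) physical ranges for the three inputs.
      lower = [0.1, 200.0, 1e-4]
      upper = [0.9, 400.0, 1e-2]
      sample = qmc.scale(unit_sample, lower, upper)
      print(sample.shape)   # (10, 3): each row is one multivariate input vector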

  10. ANITA-2000 activation code package - updating of the decay data libraries and validation on the experimental data of the 14 MeV Frascati Neutron Generator

    NASA Astrophysics Data System (ADS)

    Frisoni, Manuela

    2016-03-01

    ANITA-2000 is a code package for the activation characterization of materials exposed to neutron irradiation released by ENEA to OECD-NEADB and ORNL-RSICC. The main component of the package is the activation code ANITA-4M that computes the radioactive inventory of a material exposed to neutron irradiation. The code requires the decay data library (file fl1) containing the quantities describing the decay properties of the unstable nuclides and the library (file fl2) containing the gamma ray spectra emitted by the radioactive nuclei. The fl1 and fl2 files of the ANITA-2000 code package, originally based on the evaluated nuclear data library FENDL/D-2.0, were recently updated on the basis of the JEFF-3.1.1 Radioactive Decay Data Library. This paper presents the results of the validation of the new fl1 decay data library through the comparison of the ANITA-4M calculated values with the measured electron and photon decay heats and activities of fusion material samples irradiated at the 14 MeV Frascati Neutron Generator (FNG) of the NEA-Frascati Research Centre. Twelve material samples were considered, namely: Mo, Cu, Hf, Mg, Ni, Cd, Sn, Re, Ti, W, Ag and Al. The ratios between calculated and experimental values (C/E) are shown and discussed in this paper.
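
    Purely as a toy illustration of the kind of quantity ANITA-4M computes, and of the C/E ratios quoted above, the sketch below sums the photon decay heat of a made-up two-nuclide inventory from half-lives and mean gamma energies; real calculations draw on the full JEFF-3.1.1 decay data.

      # Toy photon decay-heat sum for a made-up two-nuclide inventory (illustrative
      # numbers only; ANITA-4M uses full decay-data libraries such as JEFF-3.1.1).
      import numpy as np

      MEV_TO_J = 1.602e-13

      # nuclide: (atoms at shutdown, half-life [s], mean gamma energy per decay [MeV])
      inventory = {
          "nuclide_A": (1.0e18, 3600.0, 0.8),
          "nuclide_B": (5.0e16, 86400.0, 1.3),
      }

      def photon_decay_heat(t):
          """Total photon decay heat [W] at time t [s] after shutdown."""
          heat = 0.0
          for n0, t_half, e_gamma in inventory.values():
              lam = np.log(2.0) / t_half                  # decay constant [1/s]
              activity = lam * n0 * np.exp(-lam * t)      # decays per second
              heat += activity * e_gamma * MEV_TO_J
          return heat

      calculated = photon_decay_heat(3600.0)
      experimental = 12.0   # placeholder "measured" value [W], not real FNG data
      print(f"C = {calculated:.2f} W, C/E = {calculated / experimental:.2f}")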

  11. VizieR Online Data Catalog: SDSS-DR8 galaxies classified by WND-CHARM (Kuminski+, 2016)

    NASA Astrophysics Data System (ADS)

    Kuminski, E.; Shamir, L.

    2016-06-01

    The image analysis method used to classify the images is WND-CHARM (wndchrm; Shamir et al. 2008, BMC Source Code for Biology and Medicine, 3: 13; 2010PLSCB...6E0974S; 2013ascl.soft12002S), which first computes 2885 numerical descriptors from each SDSS image, such as textures, edges, and shapes, the statistical distribution of the pixel intensities, the polynomial decomposition of the image, and fractal features. These features are extracted from the raw pixels, as well as from the image transforms and multi-order image transforms. See section 2 for further explanations. In a similar way to the main catalog, we also compiled a catalog of all objects with spectra in DR8. For each object, that catalog contains the spec ObjID, the R.A., the decl., the z, the z error, the certainty of classification as elliptical, the certainty of classification as spiral, and the certainty of classification as a star. See section 3.1 for further explanations. (2 data files).
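
    The wndchrm scheme computes 2885 descriptors; as a minimal Python illustration of just one family named above, the statistics of the pixel-intensity distribution, consider the sketch below (the random array stands in for an SDSS cutout).

      # Minimal illustration of pixel-intensity-distribution features (a tiny subset of
      # the ~2885 wndchrm descriptors, which also include textures, edges, transforms, etc.).
      import numpy as np

      def intensity_features(image):
          """Return a few simple statistics of the pixel-intensity distribution."""
          pixels = np.asarray(image, dtype=float).ravel()
          counts, _ = np.histogram(pixels, bins=32)
          p = counts[counts > 0] / counts.sum()
          return {
              "mean": pixels.mean(),
              "std": pixels.std(),
              "skewness": ((pixels - pixels.mean()) ** 3).mean() / pixels.std() ** 3,
              "entropy": float(-np.sum(p * np.log(p))),
          }

      fake_cutout = np.random.default_rng(1).random((64, 64))  # stand-in for an SDSS image
      print(intensity_features(fake_cutout))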

  12. The Astrophysics Source Code Library: Where Do We Go from Here?

    NASA Astrophysics Data System (ADS)

    Allen, A.; Berriman, B.; DuPrie, K.; Hanisch, R. J.; Mink, J.; Nemiroff, R. J.; Shamir, L.; Shortridge, K.; Taylor, M. B.; Teuben, P.; Wallin, J.

    2014-05-01

    The Astrophysics Source Code Library, started in 1999, has in the past three years grown from a repository for 40 codes to a registry of over 700 codes that are now indexed by ADS. What comes next? We examine the future of the ASCL, the challenges facing it, the rationale behind its practices, and the need to balance what we might do with what we have the resources to accomplish.

  13. Mr.CAS-A minimalistic (pure) Ruby CAS for fast prototyping and code generation

    NASA Astrophysics Data System (ADS)

    Ragni, Matteo

    Computer Algebra Systems (CAS) on the market offer complete solutions for manipulating analytical models. However, exporting a model that implements specific algorithms for specific platforms, target languages, or particular numerical libraries is often a rigid procedure that requires manual post-processing. This work presents a Ruby library that exposes core CAS capabilities, i.e. simplification, substitution, evaluation, etc. The library aims at programmers who need to rapidly prototype and generate numerical code for different target languages, while keeping the mathematical expressions separate from the code-generation rules, in which best practices for numerical conditioning are implemented. The library is written in pure Ruby and is compatible with most Ruby interpreters.
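
    Mr.CAS itself is a Ruby library; since the core operations it exposes (simplification, substitution, evaluation, code generation) are generic CAS operations, the snippet below shows the analogous calls in Python's SymPy purely for comparison, not Mr.CAS's own API.

      # Analogous core CAS operations in SymPy (for comparison only; Mr.CAS exposes
      # similar capabilities through a Ruby API).
      import sympy as sp

      x, y = sp.symbols("x y")
      expr = (x**2 - y**2) / (x - y)

      simplified = sp.simplify(expr)          # simplification -> x + y
      substituted = simplified.subs(x, 2)     # substitution   -> y + 2
      value = substituted.evalf(subs={y: 3})  # evaluation     -> 5.0

      # Code generation for a target language, here C:
      print(sp.ccode(simplified))             # -> "x + y"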

  14. An integrated runtime and compile-time approach for parallelizing structured and block structured applications

    NASA Technical Reports Server (NTRS)

    Agrawal, Gagan; Sussman, Alan; Saltz, Joel

    1993-01-01

    Scientific and engineering applications often involve structured meshes. These meshes may be nested (for multigrid codes) and/or irregularly coupled (called multiblock or irregularly coupled regular mesh problems). A combined runtime and compile-time approach for parallelizing these applications on distributed memory parallel machines in an efficient and machine-independent fashion is described. A runtime library that can be used to port these applications to distributed memory machines was designed and implemented; the library is currently implemented on several different systems. To further ease the task of application programmers, methods were developed for integrating this runtime library with compilers for HPF-like parallel programming languages. How this runtime library was integrated with the Fortran 90D compiler being developed at Syracuse University is discussed. Experimental results demonstrating the efficacy of the approach are presented for a multiblock Navier-Stokes solver template and a multigrid code. The results show that the primitives have low runtime communication overheads, and that the compiler-parallelized codes perform within 20 percent of the codes parallelized by manually inserting calls to the runtime library.

  15. Creating Tomorrow's Technologists: Contrasting Information Technology Curriculum in North American Library and Information Science Graduate Programs against Code4lib Job Listings

    ERIC Educational Resources Information Center

    Maceli, Monica

    2015-01-01

    This research study explores technology-related course offerings in ALA-accredited library and information science (LIS) graduate programs in North America. These data are juxtaposed against a text analysis of several thousand LIS-specific technology job listings from the Code4lib jobs website. Starting in 2003, as a popular library technology…

  16. Appraisal of the Effectiveness of CODE; The Coordinated Delivery System for the South Central Research Library Council, January to December 1970.

    ERIC Educational Resources Information Center

    Faibisoff, Sylvia G.

    A major concern of the South Central Research Library Council in establishing an interlibrary loan network was the development of a Coordinated Delivery system (CODE). Several means of delivery were considered--the U.S. mails, commercial trucking (Greyhound, United Parcel Service), and use of the public library system's delivery services. A…

  17. Revised Extended Grid Library

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martz, Roger L.

    The Revised Eolus Grid Library (REGL) is a mesh-tracking library that was developed for use with the MCNP6™ computer code so that (radiation) particles can track on an unstructured mesh. The unstructured mesh is a finite element representation of any geometric solid model created with a state-of-the-art CAE/CAD tool. The mesh-tracking library is written using modern Fortran and programming standards; the library is Fortran 2003 compliant. The library was created with a defined application programmer interface (API) so that it could easily integrate with other particle tracking/transport codes. The library does not handle parallel processing via the message passing interface (mpi), but has been used successfully where the host code handles the mpi calls. The library is thread-safe and supports the OpenMP paradigm. As a library, all features are available through the API and overall a tight coupling between it and the host code is required. Features of the library are summarized with the following list: accommodates first and second order 4, 5, and 6-sided polyhedra; any combination of element types may appear in a single geometry model; parts may not contain tetrahedra mixed with other element types; pentahedra and hexahedra can be together in the same part; robust handling of overlaps and gaps; tracks element-to-element to produce path length results at the element level; finds element numbers for a given mesh location; finds intersection points on element faces for the particle tracks; produces a data file for post-processing results analysis; reads Abaqus .inp input (ASCII) files to obtain information for the global mesh-model; supports parallel input processing via mpi; and supports parallel particle transport by both mpi and OpenMP.

  18. Game-Coding Workshops in New Zealand Public Libraries: Evaluation of a Pilot Project

    ERIC Educational Resources Information Center

    Bolstad, Rachel

    2016-01-01

    This report evaluates a game coding workshop offered to young people and adults in seven public libraries around New Zealand. Participants were taken step by step through the process of creating their own simple 2D videogame, learning the basics of coding, computational thinking, and digital game design. The workshops were free and drew 426 people…

  19. Java Source Code Analysis for API Migration to Embedded Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Winter, Victor; McCoy, James A.; Guerrero, Jonathan

    Embedded systems form an integral part of our technological infrastructure and oftentimes play a complex and critical role within larger systems. From the perspective of reliability, security, and safety, strong arguments can be made favoring the use of Java over C in such systems. In part, this argument is based on the assumption that suitable subsets of Java's APIs and extension libraries are available to embedded software developers. In practice, a number of Java-based embedded processors do not support the full features of the JVM. For such processors, source code migration is a mechanism by which key abstractions offered by APIs and extension libraries can be made available to embedded software developers. The analysis required for Java source code-level library migration is based on the ability to correctly resolve element references to their corresponding element declarations. A key challenge in this setting is how to perform analysis for incomplete source-code bases (e.g., subsets of libraries) from which types and packages have been omitted. This article formalizes an approach that can be used to extend code bases targeted for migration in such a manner that the threats associated with the analysis of incomplete code bases are eliminated.

  20. Expression at the imprinted dlk1-gtl2 locus is regulated by proneural genes in the developing telencephalon.

    PubMed

    Seibt, Julie; Armant, Olivier; Le Digarcher, Anne; Castro, Diogo; Ramesh, Vidya; Journot, Laurent; Guillemot, François; Vanderhaeghen, Pierre; Bouschet, Tristan

    2012-01-01

    Imprinting is an epigenetic mechanism that restrains the expression of about 100 genes to one allele depending on its parental origin. Several imprinted genes are implicated in neurodevelopmental brain disorders, such as autism, Angelman, and Prader-Willi syndromes. However, how expression of these imprinted genes is regulated during neural development is poorly understood. Here, using single and double KO animals for the transcription factors Neurogenin2 (Ngn2) and Achaete-scute homolog 1 (Ascl1), we found that the expression of a specific subset of imprinted genes is controlled by these proneural genes. Using in situ hybridization and quantitative PCR, we determined that five imprinted transcripts situated at the Dlk1-Gtl2 locus (Dlk1, Gtl2, Mirg, Rian, Rtl1) are upregulated in the dorsal telencephalon of Ngn2 KO mice. This suggests that Ngn2 influences the expression of the entire Dlk1-Gtl2 locus, independently of the parental origin of the transcripts. Interestingly 14 other imprinted genes situated at other imprinted loci were not affected by the loss of Ngn2. Finally, using Ngn2/Ascl1 double KO mice, we show that the upregulation of genes at the Dlk1-Gtl2 locus in Ngn2 KO animals requires a functional copy of Ascl1. Our data suggest a complex interplay between proneural genes in the developing forebrain that control the level of expression at the imprinted Dlk1-Gtl2 locus (but not of other imprinted genes). This raises the possibility that the transcripts of this selective locus participate in the biological effects of proneural genes in the developing telencephalon.

  1. Expression at the Imprinted Dlk1-Gtl2 Locus Is Regulated by Proneural Genes in the Developing Telencephalon

    PubMed Central

    Seibt, Julie; Armant, Olivier; Le Digarcher, Anne; Castro, Diogo; Ramesh, Vidya; Journot, Laurent; Guillemot, François; Vanderhaeghen, Pierre; Bouschet, Tristan

    2012-01-01

    Imprinting is an epigenetic mechanism that restrains the expression of about 100 genes to one allele depending on its parental origin. Several imprinted genes are implicated in neurodevelopmental brain disorders, such as autism, Angelman, and Prader-Willi syndromes. However, how expression of these imprinted genes is regulated during neural development is poorly understood. Here, using single and double KO animals for the transcription factors Neurogenin2 (Ngn2) and Achaete-scute homolog 1 (Ascl1), we found that the expression of a specific subset of imprinted genes is controlled by these proneural genes. Using in situ hybridization and quantitative PCR, we determined that five imprinted transcripts situated at the Dlk1-Gtl2 locus (Dlk1, Gtl2, Mirg, Rian, Rtl1) are upregulated in the dorsal telencephalon of Ngn2 KO mice. This suggests that Ngn2 influences the expression of the entire Dlk1-Gtl2 locus, independently of the parental origin of the transcripts. Interestingly 14 other imprinted genes situated at other imprinted loci were not affected by the loss of Ngn2. Finally, using Ngn2/Ascl1 double KO mice, we show that the upregulation of genes at the Dlk1-Gtl2 locus in Ngn2 KO animals requires a functional copy of Ascl1. Our data suggest a complex interplay between proneural genes in the developing forebrain that control the level of expression at the imprinted Dlk1-Gtl2 locus (but not of other imprinted genes). This raises the possibility that the transcripts of this selective locus participate in the biological effects of proneural genes in the developing telencephalon. PMID:23139813

  2. Influence of elevated-CRP level-related polymorphisms in non-rheumatic Caucasians on the risk of subclinical atherosclerosis and cardiovascular disease in rheumatoid arthritis

    PubMed Central

    López-Mejías, Raquel; Genre, Fernanda; Remuzgo-Martínez, Sara; González-Juanatey, Carlos; Robustillo-Villarino, Montserrat; Llorca, Javier; Corrales, Alfonso; Vicente, Esther; Miranda-Filloy, José A.; Magro, César; Tejera-Segura, Beatriz; Ramírez Huaranga, Marco A.; Pina, Trinitario; Blanco, Ricardo; Alegre-Sancho, Juan J.; Raya, Enrique; Mijares, Verónica; Ubilla, Begoña; Mínguez Sánchez, María D.; Gómez-Vaquero, Carmen; Balsa, Alejandro; Pascual-Salcedo, Dora; López-Longo, Francisco J.; Carreira, Patricia; González-Álvaro, Isidoro; Rodríguez-Rodríguez, Luis; Fernández-Gutiérrez, Benjamín; Ferraz-Amaro, Iván; Castañeda, Santos; Martín, Javier; González-Gay, Miguel A.

    2016-01-01

    Association between elevated C-reactive protein (CRP) serum levels and subclinical atherosclerosis and cardiovascular (CV) events was described in rheumatoid arthritis (RA). CRP, HNF1A, LEPR, GCKR, NLRP3, IL1F10, PPP1R3B, ASCL1, HNF4A and SALL1 exert an influence on elevated CRP serum levels in non-rheumatic Caucasians. Consequently, we evaluated the potential role of these genes in the development of CV events and subclinical atherosclerosis in RA patients. Three tag CRP polymorphisms and HNF1A, LEPR, GCKR, NLRP3, IL1F10, PPP1R3B, ASCL1, HNF4A and SALL1 were genotyped in 2,313 Spanish patients by TaqMan. Subclinical atherosclerosis was determined in 1,298 of them by carotid ultrasonography (by assessment of carotid intima-media thickness (cIMT) and presence/absence of carotid plaques). CRP serum levels at diagnosis and at the time of carotid ultrasonography were measured in 1,662 and 1,193 patients, respectively, by immunoturbidimetry. Interestingly, a relationship between CRP gene variants and CRP serum levels at diagnosis and at the time of the carotid ultrasonography was disclosed. However, no statistically significant differences were found when CRP, HNF1A, LEPR, GCKR, NLRP3, IL1F10, PPP1R3B, ASCL1, HNF4A and SALL1 were evaluated according to the presence/absence of CV events, carotid plaques and cIMT after adjustment. Our results do not confirm an association between these genes and CV disease in RA. PMID:27534721

  3. TRIQS: A toolbox for research on interacting quantum systems

    NASA Astrophysics Data System (ADS)

    Parcollet, Olivier; Ferrero, Michel; Ayral, Thomas; Hafermann, Hartmut; Krivenko, Igor; Messio, Laura; Seth, Priyanka

    2015-11-01

    We present the TRIQS library, a Toolbox for Research on Interacting Quantum Systems. It is an open-source, computational physics library providing a framework for the quick development of applications in the field of many-body quantum physics, and in particular, strongly-correlated electronic systems. It supplies components to develop codes in a modern, concise and efficient way: e.g. Green's function containers, a generic Monte Carlo class, and simple interfaces to HDF5. TRIQS is a C++/Python library that can be used from either language. It is distributed under the GNU General Public License (GPLv3). State-of-the-art applications based on the library, such as modern quantum many-body solvers and interfaces between density-functional-theory codes and dynamical mean-field theory (DMFT) codes are distributed along with it.
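
    As a small taste of the Python side of the library, the sketch below fills a Matsubara Green's function container with a single-pole (non-interacting level) Green's function. It assumes a recent TRIQS release that exposes the triqs Python namespace (older releases used pytriqs), so the import path may need adjusting.

      # Fill an imaginary-frequency Green's function container with G(iw) = 1/(iw - eps0).
      # Assumes a recent TRIQS release exposing the "triqs" Python namespace
      # (older versions used "pytriqs"); adjust the import accordingly.
      from triqs.gf import GfImFreq, iOmega_n, inverse

      eps0 = 0.5                                   # single level energy
      g = GfImFreq(indices=[0], beta=50.0)         # one orbital, inverse temperature beta = 50
      g << inverse(iOmega_n - eps0)                # descriptor syntax fills the container

      print(g.data.shape)                          # (n_iw_points, 1, 1) complex array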

  4. NSWC Library of Mathematics Subroutines

    DTIC Science & Technology

    1993-01-01

    standards concerning in-line documentation and the style of code cannot be imposed. In general, all supportive subroutines not intended for direct use are...proprietary or otherwise restricted codes have been permitted in the library. Only general purpose mathematical subroutines for use by the entire NSWCDD...where the source codes are frequently of prime importance), and for general use in applications. Since expertise is so widely scattered, reliable

  5. Transcriptional Mechanisms of Proneural Factors and REST in Regulating Neuronal Reprogramming of Astrocytes

    PubMed Central

    Masserdotti, Giacomo; Gillotin, Sébastien; Sutor, Bernd; Drechsel, Daniela; Irmler, Martin; Jørgensen, Helle F.; Sass, Steffen; Theis, Fabian J.; Beckers, Johannes; Berninger, Benedikt; Guillemot, François; Götz, Magdalena

    2015-01-01

    Direct lineage reprogramming induces dramatic shifts in cellular identity, employing poorly understood mechanisms. Recently, we demonstrated that expression of Neurog2 or Ascl1 in postnatal mouse astrocytes generates glutamatergic or GABAergic neurons. Here, we take advantage of this model to study dynamics of neuronal cell fate acquisition at the transcriptional level. We found that Neurog2 and Ascl1 rapidly elicited distinct neurogenic programs with only a small subset of shared target genes. Within this subset, only NeuroD4 could by itself induce neuronal reprogramming in both mouse and human astrocytes, while co-expression with Insm1 was required for glutamatergic maturation. Cultured astrocytes gradually became refractory to reprogramming, in part by the repressor REST preventing Neurog2 from binding to the NeuroD4 promoter. Notably, in astrocytes refractory to Neurog2 activation, the underlying neurogenic program remained amenable to reprogramming by exogenous NeuroD4. Our findings support a model of temporal hierarchy for cell fate change during neuronal reprogramming. PMID:26119235

  6. The View Behind and Ahead: Implications of Certification *

    PubMed Central

    Darling, Louise

    1973-01-01

    The Medical Library Association's certification plan, never of real significance in employment and promotion practices in health sciences librarianship, does not reflect the many changes which have occurred in swift progression since adoption of the code in 1949. Solutions to the problems which have accumulated since then are sought in a brief examination of trends in credentialing and certification in the health professions and in the library field, both general and special. Emphasis is given to the historical development of provisions in the MLA Code for the Training and Certification of Medical Librarians, the limited opportunity for practical implementation of most of the provisions, the importance of the code in stimulating the Association's educational programs, the impact of the Medical Library Assistance Act, Regional Medical Programs, and increases in demand for health information on manpower requirements for health science libraries, the specific dissatisfactions MLA members have expressed over certification, and the role of the Ad Hoc Committee to Develop a New Certification Code. PMID:4744343

  7. Toward performance portability of the Albany finite element analysis code using the Kokkos library

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Demeshko, Irina; Watkins, Jerry; Tezaur, Irina K.

    Performance portability on heterogeneous high-performance computing (HPC) systems is a major challenge faced today by code developers: parallel code needs to be executed correctly as well as with high performance on machines with different architectures, operating systems, and software libraries. The finite element method (FEM) is a popular and flexible method for discretizing partial differential equations arising in a wide variety of scientific, engineering, and industrial applications that require HPC. This paper presents some preliminary results pertaining to our development of a performance portable implementation of the FEM-based Albany code. Performance portability is achieved using the Kokkos library. We present performance results for the Aeras global atmosphere dynamical core module in Albany. Finally, numerical experiments show that our single code implementation gives reasonable performance across three multicore/many-core architectures: NVIDIA General Processing Units (GPU's), Intel Xeon Phis, and multicore CPUs.

  8. Toward performance portability of the Albany finite element analysis code using the Kokkos library

    DOE PAGES

    Demeshko, Irina; Watkins, Jerry; Tezaur, Irina K.; ...

    2018-02-05

    Performance portability on heterogeneous high-performance computing (HPC) systems is a major challenge faced today by code developers: parallel code needs to be executed correctly as well as with high performance on machines with different architectures, operating systems, and software libraries. The finite element method (FEM) is a popular and flexible method for discretizing partial differential equations arising in a wide variety of scientific, engineering, and industrial applications that require HPC. This paper presents some preliminary results pertaining to our development of a performance portable implementation of the FEM-based Albany code. Performance portability is achieved using the Kokkos library. We present performance results for the Aeras global atmosphere dynamical core module in Albany. Finally, numerical experiments show that our single code implementation gives reasonable performance across three multicore/many-core architectures: NVIDIA General Processing Units (GPU's), Intel Xeon Phis, and multicore CPUs.

  9. Zebra: An advanced PWR lattice code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cao, L.; Wu, H.; Zheng, Y.

    2012-07-01

    This paper presents an overview of ZEBRA, an advanced PWR lattice code developed at the NECP laboratory at Xi'an Jiaotong University. The multi-group cross-section library is generated from the ENDF/B-VII library by NJOY and the 361-group SHEM structure is employed. The resonance calculation module is developed based on the sub-group method. The transport solver is the Auto-MOC code, a self-developed code based on the Method of Characteristics and the customization of AutoCAD software. The whole code is well organized in a modular software structure. Numerical results obtained during the validation of the code demonstrate that it has good precision and high efficiency. (authors)

  10. Open-Source Python Tools for Deploying Interactive GIS Dashboards for a Billion Datapoints on a Laptop

    NASA Astrophysics Data System (ADS)

    Steinberg, P. D.; Bednar, J. A.; Rudiger, P.; Stevens, J. L. R.; Ball, C. E.; Christensen, S. D.; Pothina, D.

    2017-12-01

    The rich variety of software libraries available in the Python scientific ecosystem provides a flexible and powerful alternative to traditional integrated GIS (geographic information system) programs. Each such library focuses on doing a certain set of general-purpose tasks well, and Python makes it relatively simple to glue the libraries together to solve a wide range of complex, open-ended problems in Earth science. However, choosing an appropriate set of libraries can be challenging, and it is difficult to predict how much "glue code" will be needed for any particular combination of libraries and tasks. Here we present a set of libraries that have been designed to work well together to build interactive analyses and visualizations of large geographic datasets, in standard web browsers. The resulting workflows run on ordinary laptops even for billions of data points, and easily scale up to larger compute clusters when available. The declarative top-level interface used in these libraries means that even complex, fully interactive applications can be built and deployed as web services using only a few dozen lines of code, making it simple to create and share custom interactive applications even for datasets too large for most traditional GIS systems. The libraries we will cover include GeoViews (HoloViews extended for geographic applications) for declaring visualizable/plottable objects, Bokeh for building visual web applications from GeoViews objects, Datashader for rendering arbitrarily large datasets faithfully as fixed-size images, Param for specifying user-modifiable parameters that model your domain, Xarray for computing with n-dimensional array data, Dask for flexibly dispatching computational tasks across processors, and Numba for compiling array-based Python code down to fast machine code. We will show how to use the resulting workflow with static datasets and with simulators such as GSSHA or AdH, allowing you to deploy flexible, high-performance web-based dashboards for your GIS data or simulations without needing major investments in code development or maintenance.
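
    As a minimal example of the rasterization step described above, the sketch below uses Datashader to render a large synthetic point cloud into a fixed-size image; the random data stand in for the geographic datasets discussed in the abstract.

      # Rasterize a large synthetic point cloud into a fixed-size image with Datashader.
      # The random data are a stand-in for real geographic points (e.g., lon/lat columns).
      import numpy as np
      import pandas as pd
      import datashader as ds
      import datashader.transfer_functions as tf

      rng = np.random.default_rng(0)
      n = 1_000_000
      df = pd.DataFrame({"x": rng.standard_normal(n), "y": rng.standard_normal(n)})

      canvas = ds.Canvas(plot_width=400, plot_height=400)
      agg = canvas.points(df, "x", "y")        # per-pixel point counts
      img = tf.shade(agg, how="log")           # map counts to colors on a log scale
      img.to_pil().save("points.png")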

  11. LIBVERSIONINGCOMPILER: An easy-to-use library for dynamic generation and invocation of multiple code versions

    NASA Astrophysics Data System (ADS)

    Cherubin, S.; Agosta, G.

    2018-01-01

    We present LIBVERSIONINGCOMPILER, a C++ library designed to support the dynamic generation of multiple versions of the same compute kernel in a HPC scenario. It can be used to provide continuous optimization, code specialization based on the input data or on workload changes, or otherwise to dynamically adjust the application, without the burden of a full dynamic compiler. The library supports multiple underlying compilers but specifically targets the LLVM framework. We also provide examples of use, showing the overhead of the library, and providing guidelines for its efficient use.

  12. The POPOP4 library and codes for preparing secondary gamma-ray production cross sections

    NASA Technical Reports Server (NTRS)

    Ford, W. E., III

    1972-01-01

    The POPOP4 code for converting secondary gamma ray yield data to multigroup secondary gamma ray production cross sections and the POPOP4 library of secondary gamma ray yield data are described. Recent results of the testing of uranium and iron data sets from the POPOP4 library are given. The data sets were tested by comparing calculated secondary gamma ray pulse height spectra with spectra measured at the ORNL TSR-II reactor.

  13. Cataloguing and Classification Section. Bibliographic Control Division. Papers.

    ERIC Educational Resources Information Center

    International Federation of Library Associations, The Hague (Netherlands).

    Papers on cataloging, classification, and coding systems which were presented at the 1982 International Federation of Library Associations (IFLA) conference include: (1) "Numbering and Coding Systems for Bibliographic Control in Use in North America" by Lois Mai Chan (United States); (2) "A Project Undertaken by the Library of…

  14. Learning the Real-World Skills of the 21st Century

    ERIC Educational Resources Information Center

    Joyce, Patricia

    2008-01-01

    This article describes a summer internship program at South Houston High School which utilizes an innovative curriculum to teach students 21st century skills alongside core academics. Using the Transitions career education curriculum--a comprehensive curriculum created by ASCL Educational Services to fulfill Chicago Public Schools' need for soft…

  15. PVM Wrapper

    NASA Technical Reports Server (NTRS)

    Katz, Daniel

    2004-01-01

    PVM Wrapper is a software library that makes it possible for code that utilizes the Parallel Virtual Machine (PVM) software library to run using the message-passing interface (MPI) software library, without needing to rewrite the entire code. PVM and MPI are the two most common software libraries used for applications that involve passing of messages among parallel computers. Since about 1996, MPI has been the de facto standard. Codes written when PVM was popular often feature patterns of {"initsend," "pack," "send"} and {"receive," "unpack"} calls. In many cases, these calls are not contiguous and one set of calls may even exist over multiple subroutines. These characteristics make it difficult to obtain equivalent functionality via a single MPI "send" call. Because PVM Wrapper is written to run with MPI-1.2, some PVM functions are not permitted and must be replaced - a task that requires some programming expertise. The "pvm_spawn" and "pvm_parent" function calls are not replaced, but a programmer can use "mpirun" and knowledge of the ranks of parent and child tasks with supplied macroinstructions to enable execution of codes that use "pvm_spawn" and "pvm_parent."
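
    The PVM-to-MPI translation the wrapper performs is easiest to picture next to a plain MPI exchange; the mpi4py sketch below is an illustrative modern equivalent of an initsend/pack/send plus receive/unpack pair, not code produced by PVM Wrapper itself.

      # Illustrative mpi4py equivalent of a PVM initsend/pack/send + receive/unpack pair
      # (run with: mpirun -np 2 python this_script.py). Not output of PVM Wrapper itself.
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank = comm.Get_rank()

      if rank == 0:
          payload = {"step": 42, "values": [1.0, 2.5, 3.7]}   # what PVM would pack field by field
          comm.send(payload, dest=1, tag=7)                   # one MPI send replaces the PVM sequence
      elif rank == 1:
          payload = comm.recv(source=0, tag=7)                # receive + unpack in one call
          print("rank 1 received:", payload)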

  16. Development of ENDF/B-IV multigroup neutron cross-section libraries for the LEOPARD and LASER codes. Technical report on Phase 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jenquin, U.P.; Stewart, K.B.; Heeb, C.M.

    1975-07-01

    The principal aim of this neutron cross-section research is to provide the utility industry with a 'standard nuclear data base' that will perform satisfactorily when used for analysis of thermal power reactor systems. EPRI is coordinating its activities with those of the Cross Section Evaluation Working Group (CSEWG), responsible for the development of the Evaluated Nuclear Data File-B (ENDF/B) library, in order to improve the performance of the ENDF/B library in thermal reactors and other applications of interest to the utility industry. Battelle-Northwest (BNW) was commissioned to process the ENDF/B Version-4 data files into a group-constant form for use in the LASER and LEOPARD neutronics codes. Performance information on the library should provide the necessary feedback for improving the next version of the library, and a consistent data base is expected to be useful in intercomparing the versions of the LASER and LEOPARD codes presently being used by different utility groups. This report describes the BNW multi-group libraries and the procedures followed in their preparation and testing. (GRA)
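
    The processing step referred to above amounts to flux-weighted collapsing of pointwise cross sections into group constants, i.e. sigma_g = (sum over the group of sigma(E) * phi(E) * dE) / (sum over the group of phi(E) * dE); the toy sketch below illustrates that averaging on a made-up energy grid and has no connection to the actual ENDF/B-IV processing chain.

      # Toy flux-weighted group collapse: sigma_g = sum(sigma*phi*dE) / sum(phi*dE) per group.
      # Made-up cross-section and flux data; real processing (e.g., for ENDF/B-IV) uses
      # dedicated codes and resonance treatments.
      import numpy as np

      energy = np.logspace(-3, 7, 501)                 # eV, fine energy grid (made up)
      e_mid = 0.5 * (energy[1:] + energy[:-1])
      d_e = np.diff(energy)

      sigma = 2.0 + 50.0 / np.sqrt(e_mid)              # toy 1/v-like cross section [barns]
      phi = 1.0 / e_mid                                # toy 1/E flux spectrum

      group_bounds = [1e-3, 0.625, 1e5, 1e7]           # 3 coarse groups (thermal/epithermal/fast)
      for lo, hi in zip(group_bounds[:-1], group_bounds[1:]):
          mask = (e_mid >= lo) & (e_mid < hi)
          sigma_g = np.sum(sigma[mask] * phi[mask] * d_e[mask]) / np.sum(phi[mask] * d_e[mask])
          print(f"group {lo:.3g}-{hi:.3g} eV: sigma_g = {sigma_g:.2f} b")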

  17. Reactivity effects in VVER-1000 of the third unit of the kalinin nuclear power plant at physical start-up. Computations in ShIPR intellectual code system with library of two-group cross sections generated by UNK code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zizin, M. N.; Zimin, V. G.; Zizina, S. N., E-mail: zizin@adis.vver.kiae.ru

    2010-12-15

    The ShIPR intellectual code system for mathematical simulation of nuclear reactors includes a set of computing modules implementing the preparation of macro cross sections on the basis of the two-group library of neutron-physics cross sections obtained for the SKETCH-N nodal code. This library is created by using the UNK code for 3D diffusion computation of first VVER-1000 fuel loadings. Computation of neutron fields in the ShIPR system is performed using the DP3 code in the two-group diffusion approximation in 3D triangular geometry. The efficiency of all groups of control rods for the first fuel loading of the third unit of the Kalinin Nuclear Power Plant is computed. The temperature, barometric, and density effects of reactivity as well as the reactivity coefficient due to the concentration of boric acid in the reactor were computed additionally. Results of computations are compared with the experiment.

  18. Reactivity effects in VVER-1000 of the third unit of the kalinin nuclear power plant at physical start-up. Computations in ShIPR intellectual code system with library of two-group cross sections generated by UNK code

    NASA Astrophysics Data System (ADS)

    Zizin, M. N.; Zimin, V. G.; Zizina, S. N.; Kryakvin, L. V.; Pitilimov, V. A.; Tereshonok, V. A.

    2010-12-01

    The ShIPR intellectual code system for mathematical simulation of nuclear reactors includes a set of computing modules implementing the preparation of macro cross sections on the basis of the two-group library of neutron-physics cross sections obtained for the SKETCH-N nodal code. This library is created by using the UNK code for 3D diffusion computation of first VVER-1000 fuel loadings. Computation of neutron fields in the ShIPR system is performed using the DP3 code in the two-group diffusion approximation in 3D triangular geometry. The efficiency of all groups of control rods for the first fuel loading of the third unit of the Kalinin Nuclear Power Plant is computed. The temperature, barometric, and density effects of reactivity as well as the reactivity coefficient due to the concentration of boric acid in the reactor were computed additionally. Results of computations are compared with the experiment.

  19. Texas Library Systems Act and Rules for Administering the Library Systems Act.

    ERIC Educational Resources Information Center

    Texas State Library, Austin. Dept. of Library Development.

    This guide to the administration of the Library Systems Act for the State of Texas begins by presenting the text of the Library Systems Act. The relevant regulations from the Texas Administrative Code are then provided, covering such topics as standards for accreditation of a major resource system of libraries, minimum standards for accreditation…

  20. Evaluating Library Staff: A Performance Appraisal System.

    ERIC Educational Resources Information Center

    Belcastro, Patricia

    This manual provides librarians and library managers with a performance appraisal system that measures staff fairly and objectively and links performance to the goals of the library. The following topics are addressed: (1) identifying expectations for quality service or standards of performance; (2) the importance of a library's code of service,…

  1. California Library Laws, 2008

    ERIC Educational Resources Information Center

    Smith, Paul G., Ed.

    2008-01-01

    "California Library Laws 2008" is a selective guide to state laws and related materials that most directly affect the everyday operations of public libraries and organizations that work with public libraries. It is intended as a convenient reference, not as a replacement for the annotated codes or for legal advice. The guide is organized…

  2. getimages: Background derivation and image flattening method

    NASA Astrophysics Data System (ADS)

    Men'shchikov, Alexander

    2017-05-01

    getimages performs background derivation and image flattening for high-resolution images obtained with space observatories. It is based on median filtering with sliding windows corresponding to a range of spatial scales from the observational beam size up to a maximum structure width X. The latter is a single free parameter of getimages that can be evaluated manually from the observed image. The median filtering algorithm provides a background image for structures of all widths below X. The same median filtering procedure applied to an image of standard deviations derived from a background-subtracted image results in a flattening image. Finally, a flattened image is computed by dividing the background-subtracted image by the flattening image. Standard deviations in the flattened image are now uniform outside sources and filaments. Detecting structures in such radically simplified images results in much cleaner extractions that are more complete and reliable. getimages also reduces various observational and map-making artifacts and equalizes noise levels between independent tiles of mosaicked images. The code (a Bash script) uses FORTRAN utilities from getsources (ascl:1507.014), which must be installed.
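
    The background/flattening idea described above can be sketched with standard SciPy tools. The following is not the getimages code (a Bash script driving getsources FORTRAN utilities); it uses a single median window in place of the full range of spatial scales and the absolute residual as a stand-in for the image of standard deviations.

```python
# Schematic background subtraction + flattening, assuming a single window size.
import numpy as np
from scipy.ndimage import median_filter

def flatten_image(image, max_width_pix):
    """Subtract a median-filter background, then divide by a local 'std' image."""
    background = median_filter(image, size=max_width_pix)   # removes structures narrower than X
    residual = image - background                           # background-subtracted map
    local_std = median_filter(np.abs(residual), size=max_width_pix)
    local_std[local_std == 0] = np.nanmedian(local_std[local_std > 0])
    return residual / local_std                             # ~uniform noise outside sources

rng = np.random.default_rng(0)
img = rng.normal(0.0, 1.0, (256, 256)) + 50.0               # flat background plus noise
print(flatten_image(img, max_width_pix=25).std())
```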

  3. Codes, Costs, and Critiques: The Organization of Information in "Library Quarterly", 1931-2004

    ERIC Educational Resources Information Center

    Olson, Hope A.

    2006-01-01

    This article reports the results of a quantitative and thematic content analysis of the organization of information literature in the "Library Quarterly" ("LQ") between its inception in 1931 and 2004. The majority of articles in this category were published in the first half of "LQ's" run. Prominent themes have included cataloging codes and the…

  4. DECEL1 Users Manual. A Fortran IV Program for Computing the Static Deflections of Structural Cable Arrays.

    DTIC Science & Technology

    1980-08-01

    [The indexed abstract for this DTIC record is an OCR fragment: figure residue ("Figure 14. Current profile.") and part of the report's Navy distribution list (NAVSEASYSCOM, NAVSEC, NAVSHIPYD library codes); no abstract text is recoverable.]

  5. Design of self-coded combinatorial libraries to facilitate direct analysis of ligands by mass spectrometry.

    PubMed

    Hughes, I

    1998-09-24

    The direct analysis of selected components from combinatorial libraries by sensitive methods such as mass spectrometry is potentially more efficient than deconvolution and tagging strategies since additional steps of resynthesis or introduction of molecular tags are avoided. A substituent selection procedure is described that eliminates the mass degeneracy commonly observed in libraries prepared by "split-and-mix" methods, without recourse to high-resolution mass measurements. A set of simple rules guides the choice of substituents such that all components of the library have unique nominal masses. Additional rules extend the scope by ensuring that characteristic isotopic mass patterns distinguish isobaric components. The method is applicable to libraries having from two to four varying substituent groups and can encode from a few hundred to several thousand components. No restrictions are imposed on the manner in which the "self-coded" library is synthesized or screened.
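
    A toy sketch of the degeneracy check implied by these rules, with hypothetical substituent masses (not taken from the paper): enumerate every combination of one substituent per variable position and count how many nominal masses collide. The selection rules described above choose substituent sets that drive this count to zero.

```python
# Count mass-degenerate members of a hypothetical combinatorial library.
from itertools import product

scaffold_mass = 100            # hypothetical scaffold nominal mass
positions = [                  # hypothetical substituent nominal masses per position
    [15, 29, 43, 57],
    [31, 45, 59],
    [17, 33, 49],
]

masses = [scaffold_mass + sum(combo) for combo in product(*positions)]
degenerate = len(masses) - len(set(masses))
print(f"{len(masses)} library members, {degenerate} share a nominal mass with another member")
```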

  6. Applying Adaptive Swarm Intelligence Technology with Structuration in Web-Based Collaborative Learning

    ERIC Educational Resources Information Center

    Huang, Yueh-Min; Liu, Chien-Hung

    2009-01-01

    One of the key challenges in the promotion of web-based learning is the development of effective collaborative learning environments. We posit that the structuration process strongly influences the effectiveness of technology used in web-based collaborative learning activities. In this paper, we propose an ant swarm collaborative learning (ASCL)…

  7. Photutils: Photometry tools

    NASA Astrophysics Data System (ADS)

    Bradley, Larry; Sipocz, Brigitta; Robitaille, Thomas; Tollerud, Erik; Deil, Christoph; Vinícius, Zè; Barbary, Kyle; Günther, Hans Moritz; Bostroem, Azalee; Droettboom, Michael; Bray, Erik; Bratholm, Lars Andersen; Pickering, T. E.; Craig, Matt; Pascual, Sergio; Greco, Johnny; Donath, Axel; Kerzendorf, Wolfgang; Littlefair, Stuart; Barentsen, Geert; D'Eugenio, Francesco; Weaver, Benjamin Alan

    2016-09-01

    Photutils provides tools for detecting and performing photometry of astronomical sources. It can estimate the background and background rms in astronomical images, detect sources in astronomical images, estimate morphological parameters of those sources (e.g., centroid and shape parameters), and perform aperture and PSF photometry. Written in Python, it is an affiliated package of Astropy (ascl:1304.002).
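
    A short usage sketch following the pattern shown in the Photutils documentation (source detection with DAOStarFinder, then circular-aperture photometry); the image here is a synthetic stand-in with a single artificial source.

```python
import numpy as np
from astropy.stats import sigma_clipped_stats
from photutils.detection import DAOStarFinder
from photutils.aperture import CircularAperture, aperture_photometry

# synthetic image: Gaussian noise plus one fake Gaussian "star" at (64, 64)
rng = np.random.default_rng(1)
image = rng.normal(0.0, 1.0, (128, 128))
yy, xx = np.mgrid[0:128, 0:128]
image += 100.0 * np.exp(-((xx - 64.0) ** 2 + (yy - 64.0) ** 2) / (2 * 2.0 ** 2))

mean, median, std = sigma_clipped_stats(image, sigma=3.0)
sources = DAOStarFinder(fwhm=3.0, threshold=5.0 * std)(image - median)

positions = np.transpose((sources["xcentroid"], sources["ycentroid"]))
apertures = CircularAperture(positions, r=4.0)
print(aperture_photometry(image - median, apertures))
```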

  8. Carpet: Adaptive Mesh Refinement for the Cactus Framework

    NASA Astrophysics Data System (ADS)

    Schnetter, Erik; Hawley, Scott; Hawke, Ian

    2016-11-01

    Carpet is an adaptive mesh refinement and multi-patch driver for the Cactus Framework (ascl:1102.013). Cactus is a software framework for solving time-dependent partial differential equations on block-structured grids, and Carpet acts as the driver layer, providing adaptive mesh refinement and multi-patch capability as well as parallelization and efficient I/O.

  9. A new stellar spectrum interpolation algorithm and its application to Yunnan-III evolutionary population synthesis models

    NASA Astrophysics Data System (ADS)

    Cheng, Liantao; Zhang, Fenghui; Kang, Xiaoyu; Wang, Lang

    2018-05-01

    In evolutionary population synthesis (EPS) models, we need to convert stellar evolutionary parameters into spectra via interpolation in a stellar spectral library. For theoretical stellar spectral libraries, the spectrum grid is homogeneous on the effective-temperature and gravity plane for a given metallicity. It is relatively easy to derive stellar spectra. For empirical stellar spectral libraries, stellar parameters are irregularly distributed and the interpolation algorithm is relatively complicated. In those EPS models that use empirical stellar spectral libraries, different algorithms are used and the codes are often not released. Moreover, these algorithms are often complicated. In this work, based on a radial basis function (RBF) network, we present a new spectrum interpolation algorithm and its code. Compared with the other interpolation algorithms that are used in EPS models, it can be easily understood and is highly efficient in terms of computation. The code is written in MATLAB scripts and can be used on any computer system. Using it, we can obtain the interpolated spectra from a library or a combination of libraries. We apply this algorithm to several stellar spectral libraries (such as MILES, ELODIE-3.1 and STELIB-3.2) and give the integrated spectral energy distributions (ISEDs) of stellar populations (with ages from 1 Myr to 14 Gyr) by combining them with Yunnan-III isochrones. Our results show that the differences caused by the adoption of different EPS model components are less than 0.2 dex. All data about the stellar population ISEDs in this work and the RBF spectrum interpolation code can be obtained by request from the first author or downloaded from http://www1.ynao.ac.cn/˜zhangfh.
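
    The RBF idea can be sketched with SciPy's RBFInterpolator standing in for the authors' MATLAB implementation; the parameter points and "spectra" below are synthetic placeholders on an irregular (Teff, log g) plane.

```python
# RBF interpolation of spectra over irregularly distributed stellar parameters.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(2)
params = rng.uniform([3500.0, 0.5], [7000.0, 5.0], size=(200, 2))  # (Teff, log g) points
wave = np.linspace(4000.0, 7000.0, 500)                            # wavelength grid (Angstrom)

# toy "library spectra": a smooth function of Teff, log g and wavelength
spectra = (params[:, :1] / 5000.0) * np.exp(-((wave - 5500.0) / 800.0) ** 2) \
          + 0.05 * params[:, 1:2]

interp = RBFInterpolator(params, spectra, kernel="thin_plate_spline")
target = np.array([[5777.0, 4.44]])                                # Sun-like parameters
interpolated_spectrum = interp(target)[0]
print(interpolated_spectrum.shape)                                  # (500,)
```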

  10. MT71x: Multi-Temperature Library Based on ENDF/B-VII.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Conlin, Jeremy Lloyd; Parsons, Donald Kent; Gray, Mark Girard

    The Nuclear Data Team has released a multi-temperature transport library, MT71x, based upon ENDF/B-VII.1 with a few modifications as well as additional evaluations for a total of 427 isotope tables. The library was processed using NJOY2012.39 into 23 temperatures. MT71x consists of two sub-libraries: MT71xMG for multigroup energy representation data and MT71xCE for continuous energy representation data. These sub-libraries are suitable for deterministic transport and Monte Carlo transport applications, respectively. The SZAs used are the same for the two sub-libraries; that is, the same SZA can be used for both libraries. This makes comparisons between the two libraries and between deterministic and Monte Carlo codes straightforward. Both the multigroup energy and continuous energy libraries were verified and validated with our checking codes checkmg and checkace (multigroup and continuous energy, respectively). Then an expanded suite of tests was used for additional verification and, finally, the libraries were validated using an extensive suite of critical benchmark models. We feel that this library is suitable for all calculations and is particularly useful for calculations sensitive to temperature effects.

  11. Chat Widgets for Science Libraries

    ERIC Educational Resources Information Center

    Meier, John J.

    2008-01-01

    This paper describes chat widgets, chunks of code that can be embedded on a web site to appear as an instant messaging system, and how they can be used on a science library web site to better serve library users. Interviews were conducted concerning experiences at science and humanities libraries and more similarities than differences were…

  12. California Library Laws, 2009

    ERIC Educational Resources Information Center

    Smith, Paul G., Ed.

    2009-01-01

    California Library Laws 2009 is a selective guide to state laws and related materials that most directly affect the everyday operations of public libraries and organizations that work with public libraries. It is intended as a convenient reference, not as a replacement for the annotated codes or for legal advice. The guide is organized as follows.…

  13. Access to Electronic Information, Services, and Networks: An Interpretation of the Library Bill of Rights.

    ERIC Educational Resources Information Center

    American Library Association, Chicago, IL. Office of Intellectual Freedom.

    The American Library Association (ALA) expresses the basic principles of librarianship in its "Code of Ethics" and in the "Library Bill of Rights" and its interpretations. All library system and network policies, procedures or regulations relating to electronic resources and services should be scrutinized for potential…

  14. ANITA-IEAF activation code package - updating of the decay and cross section data libraries and validation on the experimental data from the Karlsruhe Isochronous Cyclotron

    NASA Astrophysics Data System (ADS)

    Frisoni, Manuela

    2017-09-01

    ANITA-IEAF is an activation package (code and libraries) developed at ENEA Bologna to assess the activation of materials exposed to neutrons with energies greater than 20 MeV. An updated version of the ANITA-IEAF activation code package has been developed. It is suitable for studying irradiation effects on materials in facilities such as the International Fusion Materials Irradiation Facility (IFMIF) and the DEMO Oriented Neutron Source (DONES), in which a considerable amount of neutrons with energies above 20 MeV is produced. The present paper summarizes the main characteristics of the updated version of ANITA-IEAF, which can use decay and cross section data based on more recent evaluated nuclear data libraries, i.e. the JEFF-3.1.1 Radioactive Decay Data Library and the EAF-2010 neutron activation cross section library. The paper also presents the validation effort comparing the code predictions with activity measurements obtained at the Karlsruhe Isochronous Cyclotron. In this integral experiment, samples of two different steels (SS-316 and F82H), pure vanadium, and a vanadium alloy, all structural materials of interest in fusion technology, were activated in a neutron spectrum similar to the IFMIF neutron field.

  15. Status of MAPA (Modular Accelerator Physics Analysis) and the Tech-X Object-Oriented Accelerator Library

    NASA Astrophysics Data System (ADS)

    Cary, J. R.; Shasharina, S.; Bruhwiler, D. L.

    1998-04-01

    The MAPA code is a fully interactive accelerator modeling and design tool consisting of a GUI and two object-oriented C++ libraries: a general library suitable for treatment of any dynamical system, and an accelerator library including many element types plus an accelerator class. The accelerator library inherits directly from the system library, which uses hash tables to store any relevant parameters or strings. The GUI can access these hash tables in a general way, allowing the user to invoke a window displaying all relevant parameters for a particular element type or for the accelerator class, with the option to change those parameters. The system library can advance an arbitrary number of dynamical variables through an arbitrary mapping. The accelerator class inherits this capability and overloads the relevant functions to advance the phase space variables of a charged particle through a string of elements. Among other things, the GUI makes phase space plots and finds fixed points of the map. We discuss the object hierarchy of the two libraries and use of the code.
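
    A schematic Python rendering of the class relationship described above (the actual MAPA libraries are C++); the class names, parameter store, and the toy drift map are illustrative only.

```python
# Sketch of a general dynamical-system base class and an accelerator subclass
# that overloads the advance step to push phase-space variables through elements.
class DynamicalSystem:
    def __init__(self):
        self.params = {}                 # hash-table-style parameter storage

    def set_param(self, name, value):
        self.params[name] = value        # a GUI could list/edit these generically

    def advance(self, state, n_turns=1):
        raise NotImplementedError        # the map is supplied by subclasses

class Accelerator(DynamicalSystem):
    def __init__(self, elements):
        super().__init__()
        self.elements = elements         # e.g. a list of per-element maps

    def advance(self, state, n_turns=1):
        # push the phase-space variables through every element, n_turns times
        for _ in range(n_turns):
            for element_map in self.elements:
                state = element_map(state)
        return state

drift = lambda s: (s[0] + 0.1 * s[1], s[1])   # toy element: drift in (x, x')
ring = Accelerator([drift, drift])
print(ring.advance((1.0, 0.5), n_turns=3))
```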

  16. California State Library: Processing Center Design and Specifications. Volume III, Coding Manual.

    ERIC Educational Resources Information Center

    Sherman, Don; Shoffner, Ralph M.

    As part of the report on the California State Library Processing Center design and specifications, this volume is a coding manual for the conversion of catalog card data to a machine-readable form. The form is compatible with the national MARC system, while at the same time it contains provisions for problems peculiar to the local situation. This…

  17. Particle-in-cell code library for numerical simulation of the ECR source plasma

    NASA Astrophysics Data System (ADS)

    Shirkov, G.; Alexandrov, V.; Preisendorf, V.; Shevtsov, V.; Filippov, A.; Komissarov, R.; Mironov, V.; Shirkova, E.; Strekalovsky, O.; Tokareva, N.; Tuzikov, A.; Vatulin, V.; Vasina, E.; Fomin, V.; Anisimov, A.; Veselov, R.; Golubev, A.; Grushin, S.; Povyshev, V.; Sadovoi, A.; Donskoi, E.; Nakagawa, T.; Yano, Y.

    2003-05-01

    The project "Numerical simulation and optimization of ion accumulation and production in multicharged ion sources" is funded by the International Science and Technology Center (ISTC). A summary of recent project development and the first version of a computer code library for simulation of electron-cyclotron resonance (ECR) source plasmas based on the particle-in-cell method are presented.
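
    For orientation, here is a minimal one-dimensional electrostatic particle-in-cell step (nearest-grid-point deposition, FFT Poisson solve, leapfrog push). It only illustrates the PIC building blocks and is unrelated to the ISTC project's code library; all parameters are arbitrary.

```python
import numpy as np

ng, L = 64, 2.0 * np.pi              # grid cells, periodic domain length
dx = L / ng
n_part = 10_000
rng = np.random.default_rng(4)
x = rng.uniform(0.0, L, n_part)      # particle positions
v = rng.normal(0.0, 1.0, n_part)     # particle velocities
qm = -1.0                            # charge-to-mass ratio (electrons)
q = -L / n_part                      # macro-particle charge (unit background density)
dt = 0.1

def pic_step(x, v):
    # 1) deposit charge onto the grid (nearest-grid-point weighting)
    idx = (x / dx).astype(int) % ng
    rho = np.zeros(ng)
    np.add.at(rho, idx, q / dx)
    rho += 1.0                                   # neutralizing ion background
    # 2) solve Poisson's equation d2(phi)/dx2 = -rho with an FFT
    k = 2.0 * np.pi * np.fft.fftfreq(ng, d=dx)
    k[0] = 1.0                                   # avoid divide-by-zero; mean mode zeroed below
    phi_hat = np.fft.fft(rho) / k**2
    phi_hat[0] = 0.0
    E = -np.real(np.fft.ifft(1j * k * phi_hat))  # E = -d(phi)/dx
    # 3) gather the field at the particles and push them (leapfrog)
    v = v + qm * E[idx] * dt
    x = (x + v * dt) % L
    return x, v

for _ in range(10):
    x, v = pic_step(x, v)
print(f"velocity spread after 10 steps: {v.std():.3f}")
```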

  18. The effects of nuclear data library processing on Geant4 and MCNP simulations of the thermal neutron scattering law

    NASA Astrophysics Data System (ADS)

    Hartling, K.; Ciungu, B.; Li, G.; Bentoumi, G.; Sur, B.

    2018-05-01

    Monte Carlo codes such as MCNP and Geant4 rely on a combination of physics models and evaluated nuclear data files (ENDF) to simulate the transport of neutrons through various materials and geometries. The grid representation used to represent the final-state scattering energies and angles associated with neutron scattering interactions can significantly affect the predictions of these codes. In particular, the default thermal scattering libraries used by MCNP6.1 and Geant4.10.3 do not accurately reproduce the ENDF/B-VII.1 model in simulations of the double-differential cross section for thermal neutrons interacting with hydrogen nuclei in a thin layer of water. However, agreement between model and simulation can be achieved within the statistical error by re-processing ENDF/B-VII.1 thermal scattering libraries with the NJOY code. The structure of the thermal scattering libraries and sampling algorithms in MCNP and Geant4 are also reviewed.

  19. Open source clustering software.

    PubMed

    de Hoon, M J L; Imoto, S; Nolan, J; Miyano, S

    2004-06-12

    We have implemented k-means clustering, hierarchical clustering and self-organizing maps in a single multipurpose open-source library of C routines, callable from other C and C++ programs. Using this library, we have created an improved version of Michael Eisen's well-known Cluster program for Windows, Mac OS X and Linux/Unix. In addition, we generated a Python and a Perl interface to the C Clustering Library, thereby combining the flexibility of a scripting language with the speed of C. The C Clustering Library and the corresponding Python C extension module Pycluster were released under the Python License, while the Perl module Algorithm::Cluster was released under the Artistic License. The GUI code Cluster 3.0 for Windows, Macintosh and Linux/Unix, as well as the corresponding command-line program, were released under the same license as the original Cluster code. The complete source code is available at http://bonsai.ims.u-tokyo.ac.jp/mdehoon/software/cluster. Alternatively, Algorithm::Cluster can be downloaded from CPAN, while Pycluster is also available as part of the Biopython distribution.
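
    A brief usage sketch assuming the documented Pycluster interface (the same kcluster function is also distributed as Bio.Cluster with Biopython); the data matrix is a synthetic stand-in for an expression matrix.

```python
import numpy as np
from Pycluster import kcluster   # pip install Pycluster (or: from Bio.Cluster import kcluster)

# two synthetic groups of rows standing in for gene-expression profiles
rng = np.random.default_rng(3)
data = np.vstack([rng.normal(0.0, 1.0, (20, 5)),
                  rng.normal(4.0, 1.0, (20, 5))])

# k-means with 10 random restarts; returns the cluster assignment of each row,
# the within-cluster error of the best solution, and how often it was found
clusterid, error, nfound = kcluster(data, nclusters=2, npass=10)
print(clusterid, error, nfound)
```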

  20. Analysis of dosimetry from the H.B. Robinson unit 2 pressure vessel benchmark using RAPTOR-M3G and ALPAN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fischer, G.A.

    2011-07-01

    Document available in abstract form only, full text of document follows: The dosimetry from the H. B. Robinson Unit 2 Pressure Vessel Benchmark is analyzed with a suite of Westinghouse-developed codes and data libraries. The radiation transport from the reactor core to the surveillance capsule and ex-vessel locations is performed by RAPTOR-M3G, a parallel deterministic radiation transport code that calculates high-resolution neutron flux information in three dimensions. The cross-section library used in this analysis is the ALPAN library, an Evaluated Nuclear Data File (ENDF)/B-VII.0-based library designed for reactor dosimetry and fluence analysis applications. Dosimetry is evaluated with the industry-standard SNLRML reactor dosimetry cross-section data library. (authors)

  1. FORTRAN multitasking library for use on the ELXSI 6400 and the CRAY XMP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Montry, G.R.

    1985-07-16

    A library of FORTRAN-based multitasking routines has been written for the ELXSI 6400 and the CRAY XMP. This library is designed to make multitasking codes easily transportable between machines with different hardware configurations. The library provides enhanced error checking and diagnostics over vendor-supplied multitasking intrinsics. The library also contains multitasking control structures not normally supplied by the vendor.

  2. FY2012 summary of tasks completed on PROTEUS-thermal work.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, C.H.; Smith, M.A.

    2012-06-06

    PROTEUS is a suite of neutronics codes, both old and new, that can be used within the SHARP codes being developed under the NEAMS program. Discussion here is focused on updates and verification and validation activities of the SHARP neutronics code, DeCART, for application to thermal reactor analysis. As part of the development of SHARP tools, the different versions of the DeCART code created for PWR, BWR, and VHTR analysis were integrated. Verification and validation tests for the integrated version were started, and the generation of cross section libraries based on the subgroup method was revisited for the targeted reactor types. The DeCART code has been reorganized in preparation for an efficient integration of the different versions for PWR, BWR, and VHTR analysis. In DeCART, the old-fashioned common blocks and header files have been replaced by advanced memory structures. However, the changing of variable names was minimized in order to limit problems with the code integration. Since the remaining stability problems of DeCART were mostly caused by the CMFD methodology and modules, significant work was performed to determine whether they could be replaced by more stable methods and routines. The cross section library is a key element to obtain accurate solutions. Thus, the procedure for generating cross section libraries was revisited to provide libraries tailored for the targeted reactor types. To improve accuracy in the cross section library, an attempt was made to replace the CENTRM code by the MCNP Monte Carlo code as a tool for obtaining reference resonance integrals. The use of the Monte Carlo code allows us to minimize problems or approximations that CENTRM introduces since the accuracy of the subgroup data is limited by that of the reference solutions. The use of MCNP requires an additional set of libraries without resonance cross sections so that reference calculations can be performed for a unit cell in which only one isotope of interest includes resonance cross sections, among the isotopes in the composition. The OECD MHTGR-350 benchmark core was simulated using DeCART as the initial focus of the verification/validation efforts. Among the benchmark problems, Exercise 1 of Phase 1 is a steady-state benchmark case for the neutronics calculation for which block-wise cross sections were provided in 26 energy groups. This type of problem was designed for a homogenized geometry solver like DIF3D rather than the high-fidelity code DeCART. Instead of the homogenized block cross sections given in the benchmark, the VHTR-specific 238-group ENDF/B-VII.0 library of DeCART was directly used for preliminary calculations. Initial results showed that the multiplication factors of a fuel pin and a fuel block with or without a control rod hole were off by 6, -362, and -183 pcm Δk from comparable MCNP solutions, respectively. The 2-D and 3-D one-third core calculations were also conducted for the all-rods-out (ARO) and all-rods-in (ARI) configurations, producing reasonable results. Figure 1 illustrates the intermediate (1.5 eV - 17 keV) and thermal (below 1.5 eV) group flux distributions. As seen from VHTR cores with annular fuels, the intermediate group fluxes are relatively high in the fuel region, but the thermal group fluxes are higher in the inner and outer graphite reflector regions than in the fuel region.
    To support the current project, a new three-year I-NERI collaboration involving ANL and KAERI was started in November 2011, focused on performing in-depth verification and validation of high-fidelity multi-physics simulation codes for LWR and VHTR. The work scope includes generating improved cross section libraries for the targeted reactor types, developing benchmark models for verification and validation of the neutronics code with or without thermo-fluid feedback, and performing detailed comparisons of predicted reactor parameters against both Monte Carlo solutions and experimental measurements. The following list summarizes the work conducted so far for PROTEUS-Thermal Tasks: (1) Unification of different versions of DeCART was initiated, and at the same time code modernization was conducted to make code unification efficient; (2) Regeneration of cross section libraries was attempted for the targeted reactor types, and the procedure for generating cross section libraries was updated by replacing CENTRM with MCNP for reference resonance integrals; (3) The MHTGR-350 benchmark core was simulated using DeCART with the VHTR-specific 238-group ENDF/B-VII.0 library, and MCNP calculations were performed for comparison; and (4) Benchmark problems for PWR and BWR analysis were prepared for the DeCART verification/validation effort. In the coming months, the work listed above will be completed. Cross section libraries will be generated with optimized group structures for specific reactor types.

  3. Application Reuse Library for Software, Requirements, and Guidelines

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.; Thronesbery, Carroll

    1994-01-01

    Better designs are needed for expert systems and other operations automation software, for more reliable, usable and effective human support. A prototype computer-aided Application Reuse Library shows feasibility of supporting concurrent development and improvement of advanced software by users, analysts, software developers, and human-computer interaction experts. Such a library expedites development of quality software, by providing working, documented examples, which support understanding, modification and reuse of requirements as well as code. It explicitly documents and implicitly embodies design guidelines, standards and conventions. The Application Reuse Library provides application modules with Demo-and-Tester elements. Developers and users can evaluate applicability of a library module and test modifications, by running it interactively. Sub-modules provide application code and displays and controls. The library supports software modification and reuse, by providing alternative versions of application and display functionality. Information about human support and display requirements is provided, so that modifications will conform to guidelines. The library supports entry of new application modules from developers throughout an organization. Example library modules include a timer, some buttons and special fonts, and a real-time data interface program. The library prototype is implemented in the object-oriented G2 environment for developing real-time expert systems.

  4. Improvement of Modeling HTGR Neutron Physics by Uncertainty Analysis with the Use of Cross-Section Covariance Information

    NASA Astrophysics Data System (ADS)

    Boyarinov, V. F.; Grol, A. V.; Fomichenko, P. A.; Ternovykh, M. Yu

    2017-01-01

    This work is aimed at improvement of HTGR neutron physics design calculations by application of uncertainty analysis with the use of cross-section covariance information. Methodology and codes for preparation of multigroup libraries of covariance information for individual isotopes from the basic 44-group library of SCALE-6 code system were developed. A 69-group library of covariance information in a special format for main isotopes and elements typical for high temperature gas cooled reactors (HTGR) was generated. This library can be used for estimation of uncertainties, associated with nuclear data, in analysis of HTGR neutron physics with design codes. As an example, calculations of one-group cross-section uncertainties for fission and capture reactions for main isotopes of the MHTGR-350 benchmark, as well as uncertainties of the multiplication factor (k∞) for the MHTGR-350 fuel compact cell model and fuel block model were performed. These uncertainties were estimated by the developed technology with the use of WIMS-D code and modules of SCALE-6 code system, namely, by TSUNAMI, KENO-VI and SAMS. Eight most important reactions on isotopes for MHTGR-350 benchmark were identified, namely: 10B(capt), 238U(n,γ), ν5, 235U(n,γ), 238U(el), natC(el), 235U(fiss)-235U(n,γ), 235U(fiss).
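
    The propagation step referred to here is commonly written as the "sandwich rule", var(k) = S^T C S, where S is the vector of sensitivities of k to the multigroup cross sections and C is their relative covariance matrix. The sketch below uses made-up numbers in place of the real SCALE-6 covariance data and TSUNAMI sensitivities.

```python
# Sandwich-rule uncertainty propagation with illustrative sensitivities/covariances.
import numpy as np

n_groups = 5
S = np.array([0.02, 0.05, 0.10, 0.08, 0.01])        # dk/k per unit dsigma/sigma, per group
corr = 0.5 * np.ones((n_groups, n_groups)) + 0.5 * np.eye(n_groups)
rel_std = np.full(n_groups, 0.03)                   # assumed 3% relative cross-section std
C = corr * np.outer(rel_std, rel_std)               # relative covariance matrix

var_k = S @ C @ S
print(f"relative uncertainty in k: {np.sqrt(var_k):.4%}")
```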

  5. MatProps: Material Properties Database and Associated Access Library

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Durrenberger, J K; Becker, R C; Goto, D M

    2007-08-13

    Coefficients for analytic constitutive and equation of state models (EOS), which are used by many hydro codes at LLNL, are currently stored in a legacy material database (Steinberg, UCRL-MA-106349). Parameters for numerous materials are available through this database, and include Steinberg-Guinan and Steinberg-Lund constitutive models for metals, JWL equations of state for high explosives, and Mie-Grüneisen equations of state for metals. These constitutive models are used in most of the simulations done by ASC codes today at Livermore. Analytic EOSs are also still used, but have been superseded in many cases by tabular representations in LEOS (http://leos.llnl.gov). Numerous advanced constitutive models have been developed and implemented into ASC codes over the past 20 years. These newer models have more physics and better representations of material strength properties than their predecessors, and therefore more model coefficients. However, a material database of these coefficients is not readily available. Therefore incorporating these coefficients with those of the legacy models into a portable database that could be shared amongst codes would be most welcome. The goal of this paper is to describe the MatProp effort at LLNL to create such a database and associated access library that could be used by codes throughout the DOE complex and beyond. We have written an initial version of the MatProp database and access library and our DOE/ASC code ALE3D (Nichols et al., UCRL-MA-152204) is able to import information from the database. The database, a link to which exists on the Sourceforge server at LLNL, contains coefficients for many materials and models (see Appendix), and includes material parameters in the following categories--flow stress, shear modulus, strength, damage, and equation of state. Future versions of the Matprop database and access library will include the ability to read and write material descriptions that can be exchanged between codes. It will also include an ability to do unit changes, i.e. have the library return parameters in user-specified unit systems. In addition to these, additional material categories can be added (e.g., phase change kinetics, etc.). The Matprop database and access library is part of a larger set of tools used at LLNL for assessing material model behavior. One of these is MSlib, a shared constitutive material model library. Another is the Material Strength Database (MSD), which allows users to compare parameter fits for specific constitutive models to available experimental data. Together with Matprop, these tools create a suite of capabilities that provide state-of-the-art models and parameters for those models to integrated simulation codes. This document is broken into several appendices. Appendix A contains a code example to retrieve several material coefficients. Appendix B contains the API for the Matprop data access library. Appendix C contains a list of the material names and model types currently available in the Matprop database. Appendix D contains a list of the parameter names for the currently recognized model types. Appendix E contains a full xml description of the material Tantalum.

  6. Combined Edition of Family Planning Library Manual and Family Planning Classification.

    ERIC Educational Resources Information Center

    Planned Parenthood--World Population, New York, NY. Katherine Dexter McCormick Library.

    This edition combines two previous publications of the Katharine Dexter McCormick Library into one volume: the Family Planning Library Manual, a guide for starting a family planning and population library or information center, and the Family Planning Classification, a coding system for organizing book and non-book materials so that they can be…

  7. Panel Discussion on Libraries and Best Practices in Fair Use

    ERIC Educational Resources Information Center

    Rathemacher, Andree J.

    2012-01-01

    This report covers a panel discussion on the Code of Best Practices in Fair Use for Academic and Research Libraries, published in January 2012 by the Association of Research Libraries (ARL). The panel was held at the Massachusetts Institute of Technology (MIT) on March 23, 2012, and was hosted by the MIT Libraries. Panelists were Patricia…

  8. Preparation of next-generation sequencing libraries using Nextera™ technology: simultaneous DNA fragmentation and adaptor tagging by in vitro transposition.

    PubMed

    Caruccio, Nicholas

    2011-01-01

    DNA library preparation is a common entry point and bottleneck for next-generation sequencing. Current methods generally consist of distinct steps that often involve significant sample loss and hands-on time: DNA fragmentation, end-polishing, and adaptor-ligation. In vitro transposition with Nextera™ Transposomes simultaneously fragments and covalently tags the target DNA, thereby combining these three distinct steps into a single reaction. Platform-specific sequencing adaptors can be added, and the sample can be enriched and bar-coded using limited-cycle PCR to prepare di-tagged DNA fragment libraries. Nextera technology offers a streamlined, efficient, and high-throughput method for generating bar-coded libraries compatible with multiple next-generation sequencing platforms.

  9. Caught in the (Education) Act: Tackling Michael Gove's Education Revolution. Report on 19th November 2011 Conference

    ERIC Educational Resources Information Center

    FORUM: for promoting 3-19 comprehensive education, 2012

    2012-01-01

    A number of significant campaigning organisations and education trades unions--the Anti-Academies Alliance, CASE, Comprehensive Future, Forum, ISCG and the Socialist Educational Association, along with ASCL, ATL, NASUWT and NUT--staged a conference in London on 19 November 2011, with the title 'Caught in the (Education) Act: tackling Michael…

  10. Numerical implementation and oceanographic application of the thermodynamic potentials of water, vapour, ice, seawater and air - Part 2: The library routines

    NASA Astrophysics Data System (ADS)

    Wright, D. G.; Feistel, R.; Reissmann, J. H.; Miyagawa, K.; Jackett, D. R.; Wagner, W.; Overhoff, U.; Guder, C.; Feistel, A.; Marion, G. M.

    2010-03-01

    The SCOR/IAPSO1 Working Group 127 on Thermodynamics and Equation of State of Seawater has prepared recommendations for new methods and algorithms for numerical estimation of the thermophysical properties of seawater. As an outcome of this work, a new International Thermodynamic Equation of Seawater (TEOS-10) was endorsed by IOC/UNESCO2 in June 2009 as the official replacement and extension of the 1980 International Equation of State, EOS-80. As part of this new standard a source code package has been prepared that is now made freely available to users via the World Wide Web. This package includes two libraries referred to as the SIA (Sea-Ice-Air) library and the GSW (Gibbs SeaWater) library. Information on the GSW library may be found on the TEOS-10 web site (http://www.TEOS-10.org). This publication provides an introduction to the SIA library which contains routines to calculate various thermodynamic properties as discussed in the companion paper. The SIA library is very comprehensive, including routines to deal with fluid water, ice, seawater and humid air as well as equilibrium states involving various combinations of these, with equivalent code developed in different languages. The code is hierarchically structured in modules that support (i) almost unlimited extension with respect to additional properties or relations, (ii) an extraction of self-contained sub-libraries, (iii) separate updating of the empirical thermodynamic potentials, and (iv) code verification on different platforms and between different languages. Error trapping is implemented to identify when one or more of the primary routines are accessed significantly beyond their established range of validity. The initial version of the SIA library is available in Visual Basic and FORTRAN as a supplement to this publication and updates will be maintained on the TEOS-10 web site. 1 SCOR/IAPSO: Scientific Committee on Oceanic Research/International Association for the Physical Sciences of the Oceans 2 IOC/UNESCO: Intergovernmental Oceanographic Commission/United Nations Educational, Scientific and Cultural Organization

  11. Numerical implementation and oceanographic application of the thermodynamic potentials of liquid water, water vapour, ice, seawater and humid air - Part 2: The library routines

    NASA Astrophysics Data System (ADS)

    Wright, D. G.; Feistel, R.; Reissmann, J. H.; Miyagawa, K.; Jackett, D. R.; Wagner, W.; Overhoff, U.; Guder, C.; Feistel, A.; Marion, G. M.

    2010-07-01

    The SCOR/IAPSO1 Working Group 127 on Thermodynamics and Equation of State of Seawater has prepared recommendations for new methods and algorithms for numerical estimation of the thermophysical properties of seawater. As an outcome of this work, a new International Thermodynamic Equation of Seawater (TEOS-10) was endorsed by IOC/UNESCO2 in June 2009 as the official replacement and extension of the 1980 International Equation of State, EOS-80. As part of this new standard a source code package has been prepared that is now made freely available to users via the World Wide Web. This package includes two libraries referred to as the SIA (Sea-Ice-Air) library and the GSW (Gibbs SeaWater) library. Information on the GSW library may be found on the TEOS-10 web site (http://www.TEOS-10.org). This publication provides an introduction to the SIA library which contains routines to calculate various thermodynamic properties as discussed in the companion paper. The SIA library is very comprehensive, including routines to deal with fluid water, ice, seawater and humid air as well as equilibrium states involving various combinations of these, with equivalent code developed in different languages. The code is hierarchically structured in modules that support (i) almost unlimited extension with respect to additional properties or relations, (ii) an extraction of self-contained sub-libraries, (iii) separate updating of the empirical thermodynamic potentials, and (iv) code verification on different platforms and between different languages. Error trapping is implemented to identify when one or more of the primary routines are accessed significantly beyond their established range of validity. The initial version of the SIA library is available in Visual Basic and FORTRAN as a supplement to this publication and updates will be maintained on the TEOS-10 web site. 1SCOR/IAPSO: Scientific Committee on Oceanic Research/International Association for the Physical Sciences of the Oceans 2IOC/UNESCO: Intergovernmental Oceanographic Commission/United Nations Educational, Scientific and Cultural Organization
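
    The SIA library itself is distributed in Visual Basic and FORTRAN; as a flavour of the TEOS-10 quantities involved, the companion GSW library has a Python implementation (the gsw package) that can be used as below, with illustrative input values.

```python
import gsw   # GSW-Python implementation of the TEOS-10 Gibbs SeaWater library

SP, t, p = 35.0, 10.0, 1000.0          # practical salinity, in-situ temperature (degC), pressure (dbar)
lon, lat = -30.0, 45.0                 # position, needed for the salinity conversion

SA = gsw.SA_from_SP(SP, p, lon, lat)   # Absolute Salinity (g/kg)
CT = gsw.CT_from_t(SA, t, p)           # Conservative Temperature (degC)
rho = gsw.rho(SA, CT, p)               # in-situ density (kg/m^3)
print(f"SA={SA:.3f} g/kg, CT={CT:.3f} degC, rho={rho:.3f} kg/m3")
```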

  12. Neutron Capture Gamma-Ray Libraries for Nuclear Applications

    NASA Astrophysics Data System (ADS)

    Sleaford, B. W.; Firestone, R. B.; Summers, N.; Escher, J.; Hurst, A.; Krticka, M.; Basunia, S.; Molnar, G.; Belgya, T.; Revay, Z.; Choi, H. D.

    2011-06-01

    The neutron capture reaction is useful in identifying and analyzing the gamma-ray spectrum from an unknown assembly as it gives unambiguous information on its composition. This can be done passively or actively where an external neutron source is used to probe an unknown assembly. There are known capture gamma-ray data gaps in the ENDF libraries used by transport codes for various nuclear applications. The Evaluated Gamma-ray Activation file (EGAF) is a new thermal neutron capture database of discrete line spectra and cross sections for over 260 isotopes that was developed as part of an IAEA Coordinated Research Project. EGAF is being used to improve the capture gamma production in ENDF libraries. For medium to heavy nuclei the quasi continuum contribution to the gamma cascades is not experimentally resolved. The continuum contains up to 90% of all the decay energy and is modeled here with the statistical nuclear structure code DICEBOX. This code also provides a consistency check of the level scheme nuclear structure evaluation. The calculated continuum is of sufficient accuracy to include in the ENDF libraries. This analysis also determines new total thermal capture cross sections and provides an improved RIPL database. For higher energy neutron capture there is less experimental data available making benchmarking of the modeling codes more difficult. We are investigating the capture spectra from higher energy neutrons experimentally using surrogate reactions and modeling this with Hauser-Feshbach codes. This can then be used to benchmark CASINO, a version of DICEBOX modified for neutron capture at higher energy. This can be used to simulate spectra from neutron capture at incident neutron energies up to 20 MeV to improve the gamma-ray spectrum in neutron data libraries used for transport modeling of unknown assemblies.

  13. Development of a New 47-Group Library for the CASL Neutronics Simulators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Kang Seog; Williams, Mark L; Wiarda, Dorothea

    The CASL core simulator MPACT is under development for the coupled neutronics and thermal-hydraulics simulation of pressurized light water reactors. The key characteristics of the MPACT code include a subgroup method for resonance self-shielding, and a whole core solver with a 1D/2D synthesis method. The ORNL AMPX/SCALE code packages have been significantly improved to support various intermediate resonance self-shielding approximations such as the subgroup and embedded self-shielding methods. New 47-group AMPX and MPACT libraries based on ENDF/B-VII.0 have been generated for the CASL core simulator MPACT, whose group structure comes from the HELIOS library. The new 47-group MPACT library includes all nuclear data required for static and transient core simulations. This study discusses a detailed procedure to generate the 47-group AMPX and MPACT libraries and benchmark results for the VERA progression problems.

  14. FISPACT-II: An Advanced Simulation System for Activation, Transmutation and Material Modelling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sublet, J.-Ch., E-mail: jean-christophe.sublet@ukaea.uk; Eastwood, J.W.; Morgan, J.G.

    Fispact-II is a code system and library database for modelling activation-transmutation processes, depletion-burn-up, time dependent inventory and radiation damage source terms caused by nuclear reactions and decays. The Fispact-II code, written in object-style Fortran, follows the evolution of material irradiated by neutrons, alphas, gammas, protons, or deuterons, and provides a wide range of derived radiological output quantities to satisfy most needs for nuclear applications. It can be used with any ENDF-compliant group library data for nuclear reactions, particle-induced and spontaneous fission yields, and radioactive decay (including but not limited to TENDL-2015, ENDF/B-VII.1, JEFF-3.2, JENDL-4.0u, CENDL-3.1 processed into fine-group-structure files, GEFY-5.2 and UKDD-16), as well as resolved and unresolved resonance range probability tables for self-shielding corrections and updated radiological hazard indices. The code has many novel features including: extension of the energy range up to 1 GeV; additional neutron physics including self-shielding effects, temperature dependence, thin and thick target yields; pathway analysis; and sensitivity and uncertainty quantification and propagation using full covariance data. The latest ENDF libraries such as TENDL encompass thousands of target isotopes. Nuclear data libraries for Fispact-II are prepared from these using processing codes PREPRO, NJOY and CALENDF. These data include resonance parameters, cross sections with covariances, probability tables in the resonance ranges, PKA spectra, kerma, dpa, gas and radionuclide production and energy-dependent fission yields, supplemented with all 27 decay types. All such data for the five most important incident particles are provided in evaluated data tables. The Fispact-II simulation software is described in detail in this paper, together with the nuclear data libraries. The Fispact-II system also includes several utility programs for code-use optimisation, visualisation and production of secondary radiological quantities. Included in the paper are summaries of results from the suite of verification and validation reports available with the code.

  15. FISPACT-II: An Advanced Simulation System for Activation, Transmutation and Material Modelling

    NASA Astrophysics Data System (ADS)

    Sublet, J.-Ch.; Eastwood, J. W.; Morgan, J. G.; Gilbert, M. R.; Fleming, M.; Arter, W.

    2017-01-01

    Fispact-II is a code system and library database for modelling activation-transmutation processes, depletion-burn-up, time dependent inventory and radiation damage source terms caused by nuclear reactions and decays. The Fispact-II code, written in object-style Fortran, follows the evolution of material irradiated by neutrons, alphas, gammas, protons, or deuterons, and provides a wide range of derived radiological output quantities to satisfy most needs for nuclear applications. It can be used with any ENDF-compliant group library data for nuclear reactions, particle-induced and spontaneous fission yields, and radioactive decay (including but not limited to TENDL-2015, ENDF/B-VII.1, JEFF-3.2, JENDL-4.0u, CENDL-3.1 processed into fine-group-structure files, GEFY-5.2 and UKDD-16), as well as resolved and unresolved resonance range probability tables for self-shielding corrections and updated radiological hazard indices. The code has many novel features including: extension of the energy range up to 1 GeV; additional neutron physics including self-shielding effects, temperature dependence, thin and thick target yields; pathway analysis; and sensitivity and uncertainty quantification and propagation using full covariance data. The latest ENDF libraries such as TENDL encompass thousands of target isotopes. Nuclear data libraries for Fispact-II are prepared from these using processing codes PREPRO, NJOY and CALENDF. These data include resonance parameters, cross sections with covariances, probability tables in the resonance ranges, PKA spectra, kerma, dpa, gas and radionuclide production and energy-dependent fission yields, supplemented with all 27 decay types. All such data for the five most important incident particles are provided in evaluated data tables. The Fispact-II simulation software is described in detail in this paper, together with the nuclear data libraries. The Fispact-II system also includes several utility programs for code-use optimisation, visualisation and production of secondary radiological quantities. Included in the paper are summaries of results from the suite of verification and validation reports available with the code.
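
    The core inventory problem described here is a stiff linear system dN/dt = A N. As a toy illustration (not the FISPACT-II solver or its nuclear data), a two-nuclide capture-plus-decay chain can be advanced with a matrix exponential; all rates below are made up.

```python
# Toy activation chain: parent -(n,gamma capture)-> product -(beta decay)->
import numpy as np
from scipy.linalg import expm

sigma_capture = 10.0e-24                 # assumed 10 barn capture cross section (cm^2)
phi = 1.0e14                             # assumed neutron flux (n/cm^2/s)
lam = np.log(2) / (5.27 * 3.156e7)       # decay constant of a 5.27-year product (s^-1)

A = np.array([[-sigma_capture * phi, 0.0],     # parent loss by capture
              [ sigma_capture * phi, -lam]])   # product gain by capture, loss by decay
N0 = np.array([1.0e24, 0.0])                   # initial atoms: pure parent
t = 3.156e7                                    # one year of irradiation (s)

N = expm(A * t) @ N0
print(f"parent: {N[0]:.3e} atoms, product: {N[1]:.3e} atoms, activity: {lam * N[1]:.3e} Bq")
```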

  16. Emotional Intelligence in Library Disaster Response Assistance Teams: Which Competencies Emerged?

    ERIC Educational Resources Information Center

    Wilkinson, Frances C.

    2015-01-01

    This qualitative study examines the relationship between emotional intelligence competencies and the personal attributes of library disaster response assistance team (DRAT) members. Using appreciative inquiry protocol to conduct interviews at two academic libraries, the study presents findings from emergent thematic coding of interview…

  17. Intellectual Freedom

    ERIC Educational Resources Information Center

    Knox, Emily

    2011-01-01

    Support for intellectual freedom, a concept codified in the American Library Association's Library Bill of Rights and Code of Ethics, is one of the core tenets of modern librarianship. According to the most recent interpretation of the Library Bill of Rights, academic librarians are encouraged to incorporate the principles of intellectual freedom…

  18. The Four-Year Liberal Arts College Library: A Descriptive Profile.

    ERIC Educational Resources Information Center

    Buttlar, Lois; Garcha, Rajinder

    1995-01-01

    Presents a study of staffing, services, budgets, collections, and facilities of small academic libraries and offers a statistical and demographic profile of a four-year liberal arts college library. Results are presented in tables, and an appendix lists coding sheets used for the study. (JMV)

  19. Fingerprinting Communication and Computation on HPC Machines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peisert, Sean

    2010-06-02

    How do we identify what is actually running on high-performance computing systems? Names of binaries, dynamic libraries loaded, or other elements in a submission to a batch queue can give clues, but binary names can be changed, and libraries provide limited insight and resolution on the code being run. In this paper, we present a method for "fingerprinting" code running on HPC machines using elements of communication and computation. We then discuss how that fingerprint can be used to determine if the code is consistent with certain other types of codes, what a user usually runs, or what the user requested an allocation to do. In some cases, our techniques enable us to fingerprint HPC codes using runtime MPI data with a high degree of accuracy.
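
    A toy illustration of the idea (the paper's actual features and matching procedure are richer): summarize a run by the relative frequencies of its MPI call types and compare runs by cosine similarity. The call counts below are invented.

```python
import numpy as np

CALLS = ["MPI_Send", "MPI_Recv", "MPI_Allreduce", "MPI_Bcast", "MPI_Wait"]

def fingerprint(call_counts):
    """Normalized vector of MPI call frequencies for one run."""
    v = np.array([call_counts.get(c, 0) for c in CALLS], dtype=float)
    return v / v.sum()

def similarity(a, b):
    """Cosine similarity between two fingerprints."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

run_a = fingerprint({"MPI_Send": 900, "MPI_Recv": 900, "MPI_Wait": 1800})
run_b = fingerprint({"MPI_Send": 850, "MPI_Recv": 850, "MPI_Wait": 1700})   # same kind of code
run_c = fingerprint({"MPI_Allreduce": 5000, "MPI_Bcast": 200})              # different kind of code
print(similarity(run_a, run_b), similarity(run_a, run_c))
```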

  20. Parallel processing a three-dimensional free-lagrange code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mandell, D.A.; Trease, H.E.

    1989-01-01

    A three-dimensional, time-dependent free-Lagrange hydrodynamics code has been multitasked and autotasked on a CRAY X-MP/416. The multitasking was done by using the Los Alamos Multitasking Control Library, which is a superset of the CRAY multitasking library. Autotasking is done by using constructs which are only comment cards if the source code is not run through a preprocessor. The three-dimensional algorithm has presented a number of problems that simpler algorithms, such as those for one-dimensional hydrodynamics, did not exhibit. Problems in converting the serial code, originally written for a CRAY-1, to a multitasking code are discussed. Autotasking of a rewritten version of the code is discussed. Timing results for subroutines and hot spots in the serial code are presented and suggestions for additional tools and debugging aids are given. Theoretical speedup results obtained from Amdahl's law and actual speedup results obtained on a dedicated machine are presented. Suggestions for designing large parallel codes are given.

  1. Parallel processing a real code: A case history

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mandell, D.A.; Trease, H.E.

    1988-01-01

    A three-dimensional, time-dependent Free-Lagrange hydrodynamics code has been multitasked and autotasked on a Cray X-MP/416. The multitasking was done by using the Los Alamos Multitasking Control Library, which is a superset of the Cray multitasking library. Autotasking is done by using constructs which are only comment cards if the source code is not run through a preprocessor. The 3-D algorithm has presented a number of problems that simpler algorithms, such as 1-D hydrodynamics, did not exhibit. Problems in converting the serial code, originally written for a Cray 1, to a multitasking code are discussed. Autotasking of a rewritten version of the code is discussed. Timing results for subroutines and hot spots in the serial code are presented and suggestions for additional tools and debugging aids are given. Theoretical speedup results obtained from Amdahl's law and actual speedup results obtained on a dedicated machine are presented. Suggestions for designing large parallel codes are given. 8 refs., 13 figs.
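
    The theoretical speedups mentioned in both of these reports follow Amdahl's law, speedup(n) = 1 / ((1 - p) + p/n) for parallel fraction p on n processors; a quick tabulation for illustrative values of p:

```python
def amdahl_speedup(p, n):
    """Amdahl's law: serial fraction (1 - p) limits speedup on n processors."""
    return 1.0 / ((1.0 - p) + p / n)

for p in (0.90, 0.95, 0.99):
    print(p, [round(amdahl_speedup(p, n), 2) for n in (2, 4, 8, 16)])
```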

  2. NASA Electronic Library System (NELS) optimization

    NASA Technical Reports Server (NTRS)

    Pribyl, William L.

    1993-01-01

    This is a compilation of NELS (NASA Electronic Library System) Optimization progress/problem, interim, and final reports for all phases. The NELS database was examined, with particular attention to memory use, disk contention, and CPU load, to identify bottlenecks. Methods to increase the speed of the NELS code were investigated. The tasks also included restructuring the existing code to interact with other components more effectively. Error-reporting code was added to help detect and remove bugs in NELS. Report-writing tools were recommended for integration with the ASV3 system. The Oracle database management system and tools were to be installed on a Sun workstation for demonstration purposes.

  3. Prediction of the Reactor Antineutrino Flux for the Double Chooz Experiment

    NASA Astrophysics Data System (ADS)

    Jones, Christopher LaDon

    This thesis benchmarks the deterministic lattice code, DRAGON, against data, and then applies this code to make a prediction for the antineutrino flux from the Chooz B1 and B2 reactors. Data from the destructive assay of rods from the Takahama-3 reactor and from the SONGS antineutrino detector are used for comparisons. The resulting prediction from the tuned DRAGON code is then compared to the first antineutrino event spectra from Double Chooz. Use of this simulation in nuclear nonproliferation studies is discussed. (Copies available exclusively from MIT Libraries, libraries.mit.edu/docs - docs@mit.edu)

  4. The Design of the CCCII and Its Application Considerations in Library Automation.

    ERIC Educational Resources Information Center

    Huang, Jack Kai-tung; And Others

    This paper presents the major characteristics of the Chinese Character Code for Information Interchange (CCCII) and indicates its intended application for the interchange of Chinese information among computer systems and communication facilities, especially in library networks. It is considered sufficient for present day library applications,…

  5. How to differentiate collective variables in free energy codes: Computer-algebra code generation and automatic differentiation

    NASA Astrophysics Data System (ADS)

    Giorgino, Toni

    2018-07-01

    The proper choice of collective variables (CVs) is central to biased-sampling free energy reconstruction methods in molecular dynamics simulations. The PLUMED 2 library, for instance, provides several sophisticated CV choices, implemented in a C++ framework; however, developing new CVs is still time consuming due to the need to provide code for the analytical derivatives of all functions with respect to atomic coordinates. We present two solutions to this problem, namely (a) symbolic differentiation and code generation, and (b) automatic code differentiation, in both cases leveraging open-source libraries (SymPy and Stan Math, respectively). The two approaches are demonstrated and discussed in detail implementing a realistic example CV, the local radius of curvature of a polymer. Users may use the code as a template to streamline the implementation of their own CVs using high-level constructs and automatic gradient computation.
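
    The first of the two approaches can be sketched with a few lines of SymPy; the toy collective variable below (the distance between two atoms) and the variable names are illustrative assumptions, not the local-curvature CV or the code published with the paper.

      import sympy as sp

      # Toy collective variable: distance between two atoms.
      x1, y1, z1, x2, y2, z2 = sp.symbols("x1 y1 z1 x2 y2 z2", real=True)
      cv = sp.sqrt((x2 - x1)**2 + (y2 - y1)**2 + (z2 - z1)**2)

      # Analytical derivatives with respect to every atomic coordinate.
      coords = (x1, y1, z1, x2, y2, z2)
      grads = [sp.simplify(sp.diff(cv, c)) for c in coords]

      # Emit C expressions that could be pasted into a CV implementation.
      print("value =", sp.ccode(cv), ";")
      for c, g in zip(coords, grads):
          print("d_d%s =" % c, sp.ccode(g), ";")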

  6. Trivalent chromatin marks the way in.

    PubMed

    Hysolli, Eriona; Park, In-Hyun

    2013-11-07

    Recently in Cell, Wapinski et al. (2013) investigated the epigenetic mechanisms underlying the direct conversion of fibroblasts to induced neurons (iNs). They found that Ascl1 acts as a pioneer factor at neurogenic loci marked by a closed "trivalent" chromatin state in cells permissive to direct conversion, but not in restrictive cell types. Copyright © 2013 Elsevier Inc. All rights reserved.

  7. Integrative genomic profiling of large-cell neuroendocrine carcinomas reveals distinct subtypes of high-grade neuroendocrine lung tumors.

    PubMed

    George, Julie; Walter, Vonn; Peifer, Martin; Alexandrov, Ludmil B; Seidel, Danila; Leenders, Frauke; Maas, Lukas; Müller, Christian; Dahmen, Ilona; Delhomme, Tiffany M; Ardin, Maude; Leblay, Noemie; Byrnes, Graham; Sun, Ruping; De Reynies, Aurélien; McLeer-Florin, Anne; Bosco, Graziella; Malchers, Florian; Menon, Roopika; Altmüller, Janine; Becker, Christian; Nürnberg, Peter; Achter, Viktor; Lang, Ulrich; Schneider, Peter M; Bogus, Magdalena; Soloway, Matthew G; Wilkerson, Matthew D; Cun, Yupeng; McKay, James D; Moro-Sibilot, Denis; Brambilla, Christian G; Lantuejoul, Sylvie; Lemaitre, Nicolas; Soltermann, Alex; Weder, Walter; Tischler, Verena; Brustugun, Odd Terje; Lund-Iversen, Marius; Helland, Åslaug; Solberg, Steinar; Ansén, Sascha; Wright, Gavin; Solomon, Benjamin; Roz, Luca; Pastorino, Ugo; Petersen, Iver; Clement, Joachim H; Sänger, Jörg; Wolf, Jürgen; Vingron, Martin; Zander, Thomas; Perner, Sven; Travis, William D; Haas, Stefan A; Olivier, Magali; Foll, Matthieu; Büttner, Reinhard; Hayes, David Neil; Brambilla, Elisabeth; Fernandez-Cuesta, Lynnette; Thomas, Roman K

    2018-03-13

    Pulmonary large-cell neuroendocrine carcinomas (LCNECs) have similarities with other lung cancers, but their precise relationship has remained unclear. Here we perform a comprehensive genomic (n = 60) and transcriptomic (n = 69) analysis of 75 LCNECs and identify two molecular subgroups: "type I LCNECs" with bi-allelic TP53 and STK11/KEAP1 alterations (37%), and "type II LCNECs" enriched for bi-allelic inactivation of TP53 and RB1 (42%). Despite sharing genomic alterations with adenocarcinomas and squamous cell carcinomas, no transcriptional relationship was found; instead LCNECs form distinct transcriptional subgroups with closest similarity to SCLC. While type I LCNECs and SCLCs exhibit a neuroendocrine profile with ASCL1 high /DLL3 high /NOTCH low , type II LCNECs bear TP53 and RB1 alterations and differ from most SCLC tumors with reduced neuroendocrine markers, a pattern of ASCL1 low /DLL3 low /NOTCH high , and an upregulation of immune-related pathways. In conclusion, LCNECs comprise two molecularly defined subgroups, and distinguishing them from SCLC may allow stratified targeted treatment of high-grade neuroendocrine lung tumors.

  8. OpenFOAM: Open source CFD in research and industry

    NASA Astrophysics Data System (ADS)

    Jasak, Hrvoje

    2009-12-01

    The current focus of development in industrial Computational Fluid Dynamics (CFD) is the integration of CFD into computer-aided product development, geometrical optimisation, robust design and similar tasks. CFD research, on the other hand, aims to extend the boundaries of practical engineering use into "non-traditional" areas. The requirements of computational flexibility and code integration are contradictory: a change of coding paradigm, with object orientation, library components and equation mimicking, is proposed as a way forward. This paper describes OpenFOAM, a C++ object-oriented library for Computational Continuum Mechanics (CCM) developed by the author. Efficient and flexible implementation of complex physical models is achieved by mimicking the form of the partial differential equations in software, with code functionality provided in library form. The open-source deployment and development model allows the user to achieve the desired versatility in physical modeling without sacrificing complex geometry support or execution efficiency.

  9. SP_Ace: A new code to estimate Teff, log g, and elemental abundances

    NASA Astrophysics Data System (ADS)

    Boeche, C.

    2016-09-01

    SP_Ace is a FORTRAN95 code that derives stellar parameters and elemental abundances from stellar spectra. To derive these parameters, SP_Ace neither measures equivalent widths of lines nor uses templates of synthetic spectra; instead it employs a new method based on a library of General Curve-Of-Growths. To date SP_Ace works on the wavelength ranges 5212-6860 Å and 8400-8921 Å, and at spectral resolutions R=2000-20000. Extensions of these limits are possible. SP_Ace is a highly automated code suitable for application to large spectroscopic surveys. A web front end to this service is publicly available at http://dc.g-vo.org/SP_ACE together with the library and the binary code.

  10. Albany v. 3.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salinger, Andrew; Phipps, Eric; Ostien, Jakob

    2016-01-13

    The Albany code is a general-purpose finite element code for solving partial differential equations (PDEs). Albany is a research code that demonstrates how a PDE code can be built by interfacing many of the open-source software libraries that are released under Sandia's Trilinos project. Part of the mission of Albany is to be a testbed for new Trilinos libraries, to refine their methods, usability, and interfaces. Albany includes hooks to optimization and uncertainty quantification algorithms, including those in Trilinos as well as those in the Dakota toolkit. Because of this, Albany is a desirable starting point for new code development efforts that wish to make heavy use of Trilinos. Albany is both a framework and the host for specific finite element applications. These applications have project names and can be controlled by configuration options when the code is compiled, but are all developed and released as part of the single Albany code base. These include the LCM, QCAD, FELIX, Aeras, and ATO applications.

  11. Genome-scale deletion screening of human long non-coding RNAs using a paired-guide RNA CRISPR library

    PubMed Central

    Zhu, Shiyou; Li, Wei; Liu, Jingze; Chen, Chen-Hao; Liao, Qi; Xu, Ping; Xu, Han; Xiao, Tengfei; Cao, Zhongzheng; Peng, Jingyu; Yuan, Pengfei; Brown, Myles; Liu, Xiaole Shirley; Wei, Wensheng

    2017-01-01

    CRISPR/Cas9 screens have been widely adopted to analyse coding gene functions, but high throughput screening of non-coding elements using this method is more challenging, because indels caused by a single cut in non-coding regions are unlikely to produce a functional knockout. A high-throughput method to produce deletions of non-coding DNA is needed. Herein, we report a high throughput genomic deletion strategy to screen for functional long non-coding RNAs (lncRNAs) that is based on a lentiviral paired-guide RNA (pgRNA) library. Applying our screening method, we identified 51 lncRNAs that can positively or negatively regulate human cancer cell growth. We individually validated 9 lncRNAs using CRISPR/Cas9-mediated genomic deletion and functional rescue, CRISPR activation or inhibition, and gene expression profiling. Our high-throughput pgRNA genome deletion method should enable rapid identification of functional mammalian non-coding elements. PMID:27798563

  12. 37 CFR 201.24 - Warning of copyright for software lending by nonprofit libraries.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... software lending by nonprofit libraries. 201.24 Section 201.24 Patents, Trademarks, and Copyrights... copyright for software lending by nonprofit libraries. (a) Definition. A Warning of Copyright for Software... States Code, as amended by the Computer Software Rental Amendments Act of 1990, Public Law 101-650. As...

  13. 37 CFR 201.24 - Warning of copyright for software lending by nonprofit libraries.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... software lending by nonprofit libraries. 201.24 Section 201.24 Patents, Trademarks, and Copyrights... copyright for software lending by nonprofit libraries. (a) Definition. A Warning of Copyright for Software... States Code, as amended by the Computer Software Rental Amendments Act of 1990, Public Law 101-650. As...

  14. 37 CFR 201.24 - Warning of copyright for software lending by nonprofit libraries.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... software lending by nonprofit libraries. 201.24 Section 201.24 Patents, Trademarks, and Copyrights... copyright for software lending by nonprofit libraries. (a) Definition. A Warning of Copyright for Software... States Code, as amended by the Computer Software Rental Amendments Act of 1990, Public Law 101-650. As...

  15. 37 CFR 201.24 - Warning of copyright for software lending by nonprofit libraries.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... software lending by nonprofit libraries. 201.24 Section 201.24 Patents, Trademarks, and Copyrights... copyright for software lending by nonprofit libraries. (a) Definition. A Warning of Copyright for Software... States Code, as amended by the Computer Software Rental Amendments Act of 1990, Public Law 101-650. As...

  16. 37 CFR 201.24 - Warning of copyright for software lending by nonprofit libraries.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... software lending by nonprofit libraries. 201.24 Section 201.24 Patents, Trademarks, and Copyrights U.S... copyright for software lending by nonprofit libraries. (a) Definition. A Warning of Copyright for Software... States Code, as amended by the Computer Software Rental Amendments Act of 1990, Public Law 101-650. As...

  17. Schnek: A C++ library for the development of parallel simulation codes on regular grids

    NASA Astrophysics Data System (ADS)

    Schmitz, Holger

    2018-05-01

    A large number of algorithms across the field of computational physics are formulated on grids with a regular topology. We present Schnek, a library that enables fast development of parallel simulations on regular grids. Schnek contains a number of easy-to-use modules that greatly reduce the amount of administrative code for large-scale simulation codes. The library provides an interface for reading simulation setup files with a hierarchical structure. The structure of the setup file is translated into a hierarchy of simulation modules that the developer can specify. The reader parses and evaluates mathematical expressions and initialises variables or grid data. This enables developers to write modular and flexible simulation codes with minimal effort. Regular grids of arbitrary dimension are defined as well as mechanisms for defining physical domain sizes, grid staggering, and ghost cells on these grids. Ghost cells can be exchanged between neighbouring processes using MPI with a simple interface. The grid data can easily be written into HDF5 files using serial or parallel I/O.

  18. The CECAM Electronic Structure Library: community-driven development of software libraries for electronic structure simulations

    NASA Astrophysics Data System (ADS)

    Oliveira, Micael

    The CECAM Electronic Structure Library (ESL) is a community-driven effort to segregate shared pieces of software into libraries that can be contributed to and used by the community. Besides allowing the community to share the burden of developing and maintaining complex pieces of software, these libraries can also become a target for re-coding by software engineers as hardware evolves, ensuring that electronic structure codes remain at the forefront of HPC trends. In a series of workshops hosted at the CECAM HQ in Lausanne, the tools and infrastructure for the project were prepared, and the first contributions were included and made available online (http://esl.cecam.org). In this talk I will present the different aspects and aims of the ESL and how these can be useful for the electronic structure community.

  19. PD5: a general purpose library for primer design software.

    PubMed

    Riley, Michael C; Aubrey, Wayne; Young, Michael; Clare, Amanda

    2013-01-01

    Complex PCR applications for large genome-scale projects require fast, reliable and often highly sophisticated primer design software applications. Presently, such applications use pipelining methods to utilise many third party applications and this involves file parsing, interfacing and data conversion, which is slow and prone to error. A fully integrated suite of software tools for primer design would considerably improve the development time, the processing speed, and the reliability of bespoke primer design software applications. The PD5 software library is an open-source collection of classes and utilities, providing a complete collection of software building blocks for primer design and analysis. It is written in object-oriented C++ with an emphasis on classes suitable for efficient and rapid development of bespoke primer design programs. The modular design of the software library simplifies the development of specific applications and also integration with existing third party software where necessary. We demonstrate several applications created using this software library that have already proved to be effective, but we view the project as a dynamic environment for building primer design software and it is open for future development by the bioinformatics community. Therefore, the PD5 software library is published under the terms of the GNU General Public License, which guarantee access to source code and allow redistribution and modification. The PD5 software library is downloadable from Google Code and the accompanying Wiki includes instructions and examples: http://code.google.com/p/primer-design.

  20. Towards a high performance geometry library for particle-detector simulations

    DOE PAGES

    Apostolakis, J.; Bandieramonte, M.; Bitzes, G.; ...

    2015-05-22

    Thread-parallelization and single-instruction multiple data (SIMD) "vectorisation" of software components in HEP computing has become a necessity to fully benefit from current and future computing hardware. In this context, the Geant-Vector/GPU simulation project aims to re-engineer current software for the simulation of the passage of particles through detectors in order to increase the overall event throughput. As one of the core modules in this area, the geometry library plays a central role and vectorising its algorithms will be one of the cornerstones towards achieving good CPU performance. Here, we report on the progress made in vectorising the shape primitives, as well as in applying new C++ template based optimizations of existing code available in the Geant4, ROOT or USolids geometry libraries. We will focus on a presentation of our software development approach that aims to provide optimized code for all use cases of the library (e.g., single particle and many-particle APIs) and to support different architectures (CPU and GPU) while keeping the code base small, manageable and maintainable. We report on a generic and templated C++ geometry library as a continuation of the AIDA USolids project. As a result, the experience gained with these developments will be beneficial to other parts of the simulation software, such as for the optimization of the physics library, and possibly to other parts of the experiment software stack, such as reconstruction and analysis.

  1. Towards a high performance geometry library for particle-detector simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Apostolakis, J.; Bandieramonte, M.; Bitzes, G.

    Thread-parallelization and single-instruction multiple data (SIMD) "vectorisation" of software components in HEP computing has become a necessity to fully benefit from current and future computing hardware. In this context, the Geant-Vector/GPU simulation project aims to re-engineer current software for the simulation of the passage of particles through detectors in order to increase the overall event throughput. As one of the core modules in this area, the geometry library plays a central role and vectorising its algorithms will be one of the cornerstones towards achieving good CPU performance. Here, we report on the progress made in vectorising the shape primitives, as well as in applying new C++ template based optimizations of existing code available in the Geant4, ROOT or USolids geometry libraries. We will focus on a presentation of our software development approach that aims to provide optimized code for all use cases of the library (e.g., single particle and many-particle APIs) and to support different architectures (CPU and GPU) while keeping the code base small, manageable and maintainable. We report on a generic and templated C++ geometry library as a continuation of the AIDA USolids project. As a result, the experience gained with these developments will be beneficial to other parts of the simulation software, such as for the optimization of the physics library, and possibly to other parts of the experiment software stack, such as reconstruction and analysis.

  2. Neutron Capture gamma ENDF libraries for modeling and identification of neutron sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sleaford, B

    2007-10-29

    There are a number of inaccuracies and data omissions with respect to gammas from neutron capture in the ENDF libraries used as field reference information and by the modeling codes used in JTOT. As the use of active neutron interrogation methods is expanded, these shortfalls become more acute. A new, more accurate and complete evaluated experimental database of gamma rays (over 35,000 lines for 262 isotopes up to U so far) from thermal neutron capture has recently become available from the IAEA. To my knowledge, none of this new data has been installed in ENDF libraries and disseminated. I propose to upgrade the libraries of 184,186W, 56Fe, 204,206,207Pb, 104Pd, and 19F in the first year. This will involve collaboration with Richard Firestone at LBL in evaluating the data and installing it in the libraries. I will test them with the transport code MCNP5.

  3. Binary Code Extraction and Interface Identification for Security Applications

    DTIC Science & Technology

    2009-10-02

    the functions extracted during the end-to-end applications and at the bottom some additional functions extracted from the OpenSSL library. fact that as...mentioned in Section 5.1 through Section 5.3 and some additional functions that we extract from the OpenSSL library for evaluation purposes. The... OpenSSL functions, the false positives and negatives are measured by comparison with the original C source code. For the malware samples, no source is

  4. Development of the V4.2m5 and V5.0m0 Multigroup Cross Section Libraries for MPACT for PWR and BWR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Kang Seog; Clarno, Kevin T.; Gentry, Cole

    2017-03-01

    The MPACT neutronics module of the Consortium for Advanced Simulation of Light Water Reactors (CASL) core simulator is a 3-D whole-core transport code being developed for the CASL toolset, the Virtual Environment for Reactor Analysis (VERA). Key characteristics of the MPACT code include (1) a subgroup method for resonance self-shielding and (2) a whole-core transport solver with a 2-D/1-D synthesis method. The MPACT code requires a cross-section library to support all of its core simulation capabilities, and this library is the component with the greatest influence on simulation accuracy.

  5. GPU Linear Algebra Libraries and GPGPU Programming for Accelerating MOPAC Semiempirical Quantum Chemistry Calculations.

    PubMed

    Maia, Julio Daniel Carvalho; Urquiza Carvalho, Gabriel Aires; Mangueira, Carlos Peixoto; Santana, Sidney Ramos; Cabral, Lucidio Anjos Formiga; Rocha, Gerd B

    2012-09-11

    In this study, we present some modifications in the semiempirical quantum chemistry MOPAC2009 code that accelerate single-point energy calculations (1SCF) of medium-size (up to 2500 atoms) molecular systems using GPU coprocessors and multithreaded shared-memory CPUs. Our modifications consisted of using a combination of highly optimized linear algebra libraries for both CPU (LAPACK and BLAS from Intel MKL) and GPU (MAGMA and CUBLAS) to hasten time-consuming parts of MOPAC such as the pseudodiagonalization, full diagonalization, and density matrix assembling. We have shown that it is possible to obtain large speedups just by using CPU serial linear algebra libraries in the MOPAC code. As a special case, we show a speedup of up to 14 times for a methanol simulation box containing 2400 atoms and 4800 basis functions, with even greater gains in performance when using multithreaded CPUs (2.1 times in relation to the single-threaded CPU code using linear algebra libraries) and GPUs (3.8 times). This degree of acceleration opens new perspectives for modeling larger structures which appear in inorganic chemistry (such as zeolites and MOFs), biochemistry (such as polysaccharides, small proteins, and DNA fragments), and materials science (such as nanotubes and fullerenes). In addition, we believe that this parallel (GPU-GPU) MOPAC code will make it feasible to use semiempirical methods in lengthy molecular simulations using both hybrid QM/MM and QM/QM potentials.

  6. RNAi screening of subtracted transcriptomes reveals tumor suppression by taurine-activated GABAA receptors involved in volume regulation

    PubMed Central

    van Nierop, Pim; Vormer, Tinke L.; Foijer, Floris; Verheij, Joanne; Lodder, Johannes C.; Andersen, Jesper B.; Mansvelder, Huibert D.; te Riele, Hein

    2018-01-01

    To identify coding and non-coding suppressor genes of anchorage-independent proliferation by efficient loss-of-function screening, we have developed a method for enzymatic production of low complexity shRNA libraries from subtracted transcriptomes. We produced and screened two LEGO (Low-complexity by Enrichment for Genes shut Off) shRNA libraries that were enriched for shRNA vectors targeting coding and non-coding polyadenylated transcripts that were reduced in transformed Mouse Embryonic Fibroblasts (MEFs). The LEGO shRNA libraries included ~25 shRNA vectors per transcript which limited off-target artifacts. Our method identified 79 coding and non-coding suppressor transcripts. We found that taurine-responsive GABAA receptor subunits, including GABRA5 and GABRB3, were induced during the arrest of non-transformed anchor-deprived MEFs and prevented anchorless proliferation. We show that taurine activates chloride currents through GABAA receptors on MEFs, causing seclusion of cell volume in large membrane protrusions. Volume seclusion from cells by taurine correlated with reduced proliferation and, conversely, suppression of this pathway allowed anchorage-independent proliferation. In human cholangiocarcinomas, we found that several proteins involved in taurine signaling via GABAA receptors were repressed. Low GABRA5 expression typified hyperproliferative tumors, and loss of taurine signaling correlated with reduced patient survival, suggesting this tumor suppressive mechanism operates in vivo. PMID:29787571

  7. Statistical uncertainty analysis applied to the DRAGONv4 code lattice calculations and based on JENDL-4 covariance data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hernandez-Solis, A.; Demaziere, C.; Ekberg, C.

    2012-07-01

    In this paper, multi-group microscopic cross-section uncertainty is propagated through the DRAGON (Version 4) lattice code in order to perform uncertainty analysis on k∞ and 2-group homogenized macroscopic cross-section predictions. A statistical methodology is employed for this purpose, where cross-sections of certain isotopes of various elements belonging to the 172-group DRAGLIB library format are considered as normal random variables. This library is based on JENDL-4 data, because JENDL-4 contains the largest amount of isotopic covariance matrices among the different major nuclear data libraries. The aim is to propagate multi-group nuclide uncertainty by running the DRAGONv4 code 500 times, and to assess the output uncertainty of a test case corresponding to a 17 x 17 PWR fuel assembly segment without poison. The chosen sampling strategy for the current study is Latin Hypercube Sampling (LHS). The quasi-random LHS allows a much better coverage of the input uncertainties than simple random sampling (SRS) because it densely stratifies across the range of each input probability distribution. Output uncertainty assessment is based on the tolerance limits concept, where the sample formed by the code calculations is inferred to cover 95% of the output population with at least 95% confidence. This analysis is the first attempt to propagate parameter uncertainties of modern multi-group libraries, which are used to feed advanced lattice codes that perform state-of-the-art resonance self-shielding calculations such as DRAGONv4. (authors)
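
    The sampling strategy can be illustrated with a short, self-contained Python sketch; the three input distributions and the toy model below are hypothetical stand-ins for the perturbed multi-group cross-sections and the DRAGONv4 runs.

      import numpy as np
      from scipy.stats import norm

      def latin_hypercube(n_samples, n_inputs, rng):
          """One sample per equal-probability stratum for each input,
          with the strata randomly paired across inputs."""
          perms = np.column_stack([rng.permutation(n_samples)
                                   for _ in range(n_inputs)])
          return (perms + rng.random((n_samples, n_inputs))) / n_samples

      rng = np.random.default_rng(42)
      design = latin_hypercube(500, 3, rng)          # 500 runs, 3 toy inputs

      # Map the uniform design onto normally distributed inputs (toy values).
      mu = np.array([1.0, 2.0, 0.5])
      sigma = np.array([0.05, 0.10, 0.02])
      inputs = norm.ppf(design, loc=mu, scale=sigma)

      # Toy stand-in for the lattice calculation: any deterministic model.
      outputs = inputs[:, 0] * np.exp(-inputs[:, 1]) + inputs[:, 2]

      # Summary statistics of the propagated uncertainty; the study itself
      # quotes 95%/95% tolerance limits based on order statistics.
      print("mean =", outputs.mean(), "std =", outputs.std(ddof=1))
      print("empirical 95th percentile =", np.quantile(outputs, 0.95))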

  8. GPU-accelerated simulations of isolated black holes

    NASA Astrophysics Data System (ADS)

    Lewis, Adam G. M.; Pfeiffer, Harald P.

    2018-05-01

    We present a port of the numerical relativity code SpEC which is capable of running on NVIDIA GPUs. Since this code must be maintained in parallel with SpEC itself, a primary design consideration is to perform as few explicit code changes as possible. We therefore rely on a hierarchy of automated porting strategies. At the highest level we use TLoops, a C++ library of our design, to automatically emit CUDA code equivalent to tensorial expressions written into C++ source using a syntax similar to analytic calculation. Next, we trace out and cache explicit matrix representations of the numerous linear transformations in the SpEC code, which allows these to be performed on the GPU using pre-existing matrix-multiplication libraries. We port the few remaining important modules by hand. In this paper we detail the specifics of our port, and present benchmarks of it simulating isolated black hole spacetimes on several generations of NVIDIA GPU.

  9. GsTL: the geostatistical template library in C++

    NASA Astrophysics Data System (ADS)

    Remy, Nicolas; Shtuka, Arben; Levy, Bruno; Caers, Jef

    2002-10-01

    The development of geostatistics has been mostly accomplished by application-oriented engineers in the past 20 years. The focus on concrete applications gave birth to many algorithms and computer programs designed to address different issues, such as estimating or simulating a variable while possibly accounting for secondary information such as seismic data, or integrating geological and geometrical data. At the core of any geostatistical data integration methodology is a well-designed algorithm. Despite their obvious differences, all these algorithms share many commonalities on which a geostatistics programming library should build, lest the resulting library be poorly reusable and difficult to expand. Building on this observation, we design a comprehensive, yet flexible and easily reusable library of geostatistics algorithms in C++. The recent advent of the generic programming paradigm allows us to express the commonalities of the geostatistical algorithms elegantly in computer code. Generic programming, also referred to as "programming with concepts", provides a high level of abstraction without loss of efficiency. This last point is a major gain over object-oriented programming, which often trades efficiency for abstraction. It is not enough for a numerical library to be reusable; it also has to be fast. Because generic programming is "programming with concepts", the essential step in the library design is the careful identification and thorough definition of the concepts shared by most geostatistical algorithms. Building on these definitions, a generic and expandable code can be developed. To show the advantages of such a generic library, we use GsTL to build two sequential simulation programs working on two different types of grids, a surface with faults and an unstructured grid, without requiring any change to the GsTL code.

  10. SLHAplus: A library for implementing extensions of the standard model

    NASA Astrophysics Data System (ADS)

    Bélanger, G.; Christensen, Neil D.; Pukhov, A.; Semenov, A.

    2011-03-01

    We provide a library to facilitate the implementation of new models in codes such as matrix element and event generators or codes for computing dark matter observables. The library contains an SLHA reader routine as well as diagonalisation routines. This library is available in CalcHEP and micrOMEGAs. The implementation of models based on this library is supported by LanHEP and FeynRules. Program summary: Program title: SLHAplus_1.3. Catalogue identifier: AEHX_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHX_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 6283. No. of bytes in distributed program, including test data, etc.: 52 119. Distribution format: tar.gz. Programming language: C. Computer: IBM PC, MAC. Operating system: UNIX (Linux, Darwin, Cygwin). RAM: 2000 MB. Classification: 11.1. Nature of problem: Implementation of extensions of the standard model in matrix element and event generators and codes for dark matter observables. Solution method: For generic extensions of the standard model we provide routines for reading files that adopt the standard format of the SUSY Les Houches Accord (SLHA) file. The procedure has been generalized to take into account an arbitrary number of blocks so that the reader can be used in generic models including non-supersymmetric ones. The library also contains routines to diagonalize real and complex mass matrices with either unitary or bi-unitary transformations as well as routines for evaluating the running strong coupling constant, running quark masses and effective quark masses. Running time: 0.001 sec.

  11. Automation at the Fairfax County Virginia Library System.

    ERIC Educational Resources Information Center

    Baker, Alfred W.; And Others

    The Fairfax County Library converted from a card catalog to a book catalog format in 1963. The first book catalogs were produced by the Sequential Card (SC) process. The master cards were prepared by the library and sent to Science Press, where copy was prepared on IBM cards, coded for sequential filing, and photographed to prepare page plates,…

  12. Adaptation of Flux-Corrected Transport Algorithms for Modeling Dusty Flows.

    DTIC Science & Technology

    1983-12-20

    Defense Comunications Agency Olcy Attn XLA Washington, DC 20305 01cy Attn nTW-2 (ADR CNW D I: Attn Code 240 for) Olcy Attn NL-STN O Library Olcy Attn...Library Olcy Attn TIC-Library Olcy Attn R Welch Olcy Attn M Johnson Los Alamos National Scientific Lab. Mail Station 5000 Information Science, Inc. P

  13. 78 FR 23629 - Office of Commercial Space Transportation; Notice of Availability and Request for Comment on the...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-19

    ... with the National Environmental Policy Act of 1969, as amended (NEPA; 42 United States Code [U.S.C... Public Library Main Branch, 2600 Central Blvd. Southmost Branch Library, 4320 Southmost Blvd. University of Texas at Brownsville, Oliveira Library, 80 Fort Brown St. The FAA will hold a public hearing to...

  14. Library Homepage Design at Smaller Bachelor of Arts Institutions

    ERIC Educational Resources Information Center

    Jones, Scott L.; Leonard, Kirsten

    2011-01-01

    This study examined the homepages of the libraries of 175 smaller bachelor of arts institutions, coding for the presence of 98 design elements. By reporting and examining the frequency of these features, the authors noted what is and is not common practice at these libraries. They found that only fourteen elements were present on at least half of…

  15. Fac-Back-OPAC: An Open Source Interface to Your Library System

    ERIC Educational Resources Information Center

    Beccaria, Mike; Scott, Dan

    2007-01-01

    The new Fac-Back-OPAC (a faceted backup OPAC) is built on code that was originally developed by Casey Durfee in February 2007. It represents the convergence of two prominent trends in library tools: the decoupling of discovery tools from the traditional integrated library system (ILS) and the use of readily available open source components to…

  16. Linear-Algebra Programs

    NASA Technical Reports Server (NTRS)

    Lawson, C. L.; Krogh, F. T.; Gold, S. S.; Kincaid, D. R.; Sullivan, J.; Williams, E.; Hanson, R. J.; Haskell, K.; Dongarra, J.; Moler, C. B.

    1982-01-01

    The Basic Linear Algebra Subprograms (BLAS) library is a collection of 38 FORTRAN-callable routines for performing basic operations of numerical linear algebra. The BLAS library is a portable and efficient source of basic operations for designers of programs involving linear algebraic computations. The BLAS library is supplied in portable FORTRAN and assembler code versions for IBM 370, UNIVAC 1100, and CDC 6000 series computers.
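
    The same Level-1 and Level-3 operations remain the workhorses of modern numerical software; as a small illustration (not part of the NASA record), the sketch below calls them through SciPy's BLAS wrappers, assuming scipy.linalg.blas is available.

      import numpy as np
      from scipy.linalg import blas

      x = np.array([1.0, 2.0, 3.0])
      y = np.array([4.0, 5.0, 6.0])

      # Level-1 routine DAXPY: returns a*x + y.
      print(blas.daxpy(x, y, a=2.0))      # [ 6.  9. 12.]

      # Level-3 routine DGEMM: alpha * A @ B.
      A = np.arange(6.0).reshape(2, 3)
      B = np.arange(6.0).reshape(3, 2)
      C = blas.dgemm(alpha=1.0, a=A, b=B)
      print(np.allclose(C, A @ B))        # True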

  17. A large shRNA library approach identifies lncRNA Ntep as an essential regulator of cell proliferation

    PubMed Central

    Beermann, Julia; Kirste, Dominique; Iwanov, Katharina; Lu, Dongchao; Kleemiß, Felix; Kumarswamy, Regalla; Schimmel, Katharina; Bär, Christian; Thum, Thomas

    2018-01-01

    The mammalian cell cycle is a complex and tightly controlled event. Myriads of different control mechanisms are involved in its regulation. Long non-coding RNAs (lncRNA) have emerged as important regulators of many cellular processes including cellular proliferation. However, a more global and unbiased approach to identify lncRNAs with importance for cell proliferation is missing. Here, we present a lentiviral shRNA library-based approach for functional lncRNA profiling. We validated our library approach in NIH3T3 (3T3) fibroblasts by identifying lncRNAs critically involved in cell proliferation. Using stringent selection criteria we identified lncRNA NR_015491.1 out of 3842 different RNA targets represented in our library. We termed this transcript Ntep (non-coding transcript essential for proliferation), as a bona fide lncRNA essential for cell cycle progression. Inhibition of Ntep in 3T3 and primary fibroblasts prevented normal cell growth and expression of key fibroblast markers. Mechanistically, we discovered that Ntep is important to activate P53 concomitant with increased apoptosis and cell cycle blockade in late G2/M. Our findings suggest Ntep to serve as an important regulator of fibroblast proliferation and function. In summary, our study demonstrates the applicability of an innovative shRNA library approach to identify long non-coding RNA functions in a massive parallel approach. PMID:29099486

  18. Radio-nuclide mixture identification using medium energy resolution detectors

    DOEpatents

    Nelson, Karl Einar

    2013-09-17

    According to one embodiment, a method for identifying radio-nuclides includes receiving spectral data, extracting a feature set from the spectral data comparable to a plurality of templates in a template library, and using a branch and bound method to determine a probable template match based on the feature set and templates in the template library. In another embodiment, a device for identifying unknown radio-nuclides includes a processor, a multi-channel analyzer, and a memory operatively coupled to the processor, the memory having computer readable code stored thereon. The computer readable code is configured, when executed by the processor, to receive spectral data, to extract a feature set from the spectral data comparable to a plurality of templates in a template library, and to use a branch and bound method to determine a probable template match based on the feature set and templates in the template library.
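
    As a generic illustration of the branch-and-bound idea described in the claim (not the patented algorithm itself), the Python sketch below searches for the subset of library templates that best explains an observed feature vector, pruning partial subsets whose optimistic residual already exceeds the best complete match found so far. The library, penalty, and feature vectors are hypothetical.

      import numpy as np

      def branch_and_bound_match(features, templates, penalty=0.05):
          """Choose the subset of templates minimizing
          ||features - least-squares fit||^2 + penalty * (templates used)."""
          m = templates.shape[0]

          def residual(idx):
              if not idx:
                  return float(features @ features)
              T = templates[list(idx)].T           # columns are templates
              coeffs, *_ = np.linalg.lstsq(T, features, rcond=None)
              r = features - T @ coeffs
              return float(r @ r)

          best = {"cost": np.inf, "subset": ()}

          def recurse(i, chosen):
              if i == m:                           # complete assignment
                  cost = residual(chosen) + penalty * len(chosen)
                  if cost < best["cost"]:
                      best["cost"], best["subset"] = cost, tuple(chosen)
                  return
              # Optimistic bound: the residual can only shrink if every
              # still-undecided template were added for free.
              bound = residual(chosen + list(range(i, m))) + penalty * len(chosen)
              if bound >= best["cost"]:
                  return                           # prune this branch
              recurse(i + 1, chosen + [i])         # include template i
              recurse(i + 1, chosen)               # exclude template i

          recurse(0, [])
          return best["subset"], best["cost"]

      # Hypothetical library of 4 template spectra; the observation mixes 0 and 2.
      rng = np.random.default_rng(0)
      library = rng.random((4, 32))
      observed = 0.7 * library[0] + 0.3 * library[2] + 0.01 * rng.standard_normal(32)
      print(branch_and_bound_match(observed, library))   # subset (0, 2) expected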

  19. Developing Information Power Grid Based Algorithms and Software

    NASA Technical Reports Server (NTRS)

    Dongarra, Jack

    1998-01-01

    This exploratory study initiated our effort to understand performance modeling on parallel systems. The basic goal of performance modeling is to understand and predict the performance of a computer program or set of programs on a computer system. Performance modeling has numerous applications, including evaluation of algorithms, optimization of code implementations, parallel library development, comparison of system architectures, parallel system design, and procurement of new systems. Our work lays the basis for the construction of parallel libraries that allow for the reconstruction of application codes on several distinct architectures so as to assure performance portability. Following our strategy, once the requirements of applications are well understood, one can then construct a library in a layered fashion. The top level of this library will consist of architecture-independent geometric, numerical, and symbolic algorithms that are needed by the sample of applications. These routines should be written in a language that is portable across the targeted architectures.

  20. Charon Message-Passing Toolkit for Scientific Computations

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob F.; Yan, Jerry (Technical Monitor)

    2000-01-01

    Charon is a library, callable from C and Fortran, that aids the conversion of structured-grid legacy codes, such as those used in the numerical computation of fluid flows, into parallel, high-performance codes. Key are functions that define distributed arrays, that map between distributed and non-distributed arrays, and that allow easy specification of common communications on structured grids. The library is based on the widely accepted MPI message passing standard. We present an overview of the functionality of Charon and some representative results.
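
    The structured-grid communication pattern that Charon packages can be sketched generically; the mpi4py example below exchanges one ghost cell between neighbouring ranks on a 1-D decomposed array and illustrates the idea only, not Charon's actual C/Fortran API.

      # Run with, e.g.: mpiexec -n 4 python ghost_exchange.py
      import numpy as np
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      n_local = 8                                # interior cells on this rank
      u = np.full(n_local + 2, float(rank))      # one ghost cell at each end

      left = rank - 1 if rank > 0 else MPI.PROC_NULL
      right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

      # Send the first interior cell to the left neighbour while receiving the
      # right ghost cell from the right neighbour, then the mirror exchange;
      # MPI.PROC_NULL turns the domain ends into no-ops.
      comm.Sendrecv(sendbuf=u[1:2], dest=left, recvbuf=u[-1:], source=right)
      comm.Sendrecv(sendbuf=u[-2:-1], dest=right, recvbuf=u[0:1], source=left)

      print(rank, "left ghost =", u[0], "right ghost =", u[-1])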

  1. YAP Version 4.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelson, Eric M.

    2004-05-20

    The YAP software library computes (1) electromagnetic modes, (2) electrostatic fields, (3) magnetostatic fields and (4) particle trajectories in 2d and 3d models. The code employs finite element methods on unstructured grids of tetrahedral, hexahedral, prism and pyramid elements, with linear through cubic element shapes and basis functions to provide high accuracy. The novel particle tracker is robust, accurate and efficient, even on unstructured grids with discontinuous fields. This software library is a component of the MICHELLE 3d finite element gun code.

  2. acme: The Amendable Coal-Fire Modeling Exercise. A C++ Class Library for the Numerical Simulation of Coal-Fires

    NASA Astrophysics Data System (ADS)

    Wuttke, Manfred W.

    2017-04-01

    At LIAG, we use numerical models to develop and enhance understanding of coupled transport processes and to predict the dynamics of the system under consideration. Topics include geothermal heat utilization, subrosion processes, and spontaneous underground coal fires. Although the details make it inconvenient if not impossible to apply a single code implementation to all systems, their investigations go along similar paths: They all depend on the solution of coupled transport equations. We thus saw a need for a modular code system with open access for the various communities to maximize the shared synergistic effects. To this purpose we develop the oops! ( open object-oriented parallel solutions) - toolkit, a C++ class library for the numerical solution of mathematical models of coupled thermal, hydraulic and chemical processes. This is used to develop problem-specific libraries like acme( amendable coal-fire modeling exercise), a class library for the numerical simulation of coal-fires and applications like kobra (Kohlebrand, german for coal-fire), a numerical simulation code for standard coal-fire models. Basic principle of the oops!-code system is the provision of data types for the description of space and time dependent data fields, description of terms of partial differential equations (pde), their discretisation and solving methods. Coupling of different processes, described by their particular pde is modeled by an automatic timescale-ordered operator-splitting technique. acme is a derived coal-fire specific application library, depending on oops!. If specific functionalities of general interest are implemented and have been tested they will be assimilated into the main oops!-library. Interfaces to external pre- and post-processing tools are easily implemented. Thus a construction kit which can be arbitrarily amended is formed. With the kobra-application constructed with acme we study the processes and propagation of shallow coal seam fires in particular in Xinjiang, China, as well as analyze and interpret results from lab experiments.

  3. Landlab: an Open-Source Python Library for Modeling Earth Surface Dynamics

    NASA Astrophysics Data System (ADS)

    Gasparini, N. M.; Adams, J. M.; Hobley, D. E. J.; Hutton, E.; Nudurupati, S. S.; Istanbulluoglu, E.; Tucker, G. E.

    2016-12-01

    Landlab is an open-source Python modeling library that enables users to easily build unique models to explore earth surface dynamics. The Landlab library provides a number of tools and functionalities that are common to many earth surface models, thus eliminating the need for a user to recode fundamental model elements each time she explores a new problem. For example, Landlab provides a gridding engine so that a user can build a uniform or nonuniform grid in one line of code. The library has tools for setting boundary conditions, adding data to a grid, and performing basic operations on the data, such as calculating gradients and curvature. The library also includes a number of process components, which are numerical implementations of physical processes. To create a model, a user creates a grid and couples together process components that act on grid variables. The current library has components for modeling a diverse range of processes, from overland flow generation to bedrock river incision, and from soil wetting and drying to vegetation growth, succession and death. The code is freely available for download (https://github.com/landlab/landlab) or can be installed as a Python package. Landlab models can also be built and run on HydroShare (www.hydroshare.org), an online collaborative environment for sharing hydrologic data, models, and code. Tutorials illustrating a wide range of Landlab capabilities, such as building a grid, setting boundary conditions, reading in data, plotting, using components and building models, are also available (https://github.com/landlab/tutorials). The code is also comprehensively documented both online and natively in Python. In this presentation, we illustrate the diverse capabilities of Landlab. We highlight existing functionality by illustrating outcomes from a range of models built with Landlab, including applications that explore landscape evolution and ecohydrology. Finally, we describe the range of resources available for new users.
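
    A minimal sketch of the gridding engine in use, assuming the RasterModelGrid, add_zeros and calc_grad_at_link names of recent Landlab releases; the field values are arbitrary and only illustrate the one-line grid creation and gradient calls described above.

      from landlab import RasterModelGrid

      # 4 x 5 node raster, 10 m node spacing (xy_spacing name assumed from Landlab 2.x).
      grid = RasterModelGrid((4, 5), xy_spacing=10.0)
      z = grid.add_zeros("topographic__elevation", at="node")
      z += 0.01 * grid.x_of_node          # gentle eastward slope

      # Basic grid operation: gradient of the node field evaluated at the links.
      slope = grid.calc_grad_at_link("topographic__elevation")
      print(slope.max())                  # ~0.01 on the east-west links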

  4. 2,445 Hours of Code: What I Learned from Facilitating Hour of Code Events in High School Libraries

    ERIC Educational Resources Information Center

    Colby, Jennifer

    2015-01-01

    This article describes a school librarian's experience with initiating an Hour of Code event for her school's student body. Hadi Partovi of Code.org conceived the Hour of Code "to get ten million students to try one hour of computer science" (Partovi, 2013a), which is implemented during Computer Science Education Week with a goal of…

  5. Development of a Simulink Library for the Design, Testing and Simulation of Software Defined GPS Radios. With Application to the Development of Parallel Correlator Structures

    DTIC Science & Technology

    2014-05-01

    function Value = Select_Element(Index,Signal) %#eml
    Value = Signal(Index);
    Code Listing 1: Code for Selector Block ... code for the Simulink function:
    function shiftedSignal = fcn(signal,Shift) %#eml
    shiftedSignal = circshift(signal,Shift);
    Code Listing 2: Code for CircShift

  6. A digital library of radiology images.

    PubMed

    Kahn, Charles E

    2006-01-01

    A web-based virtual library of peer-reviewed radiological images was created for use in education and clinical decision support. Images were obtained from open-access content of five online radiology journals and one e-learning web site. Figure captions were indexed by Medical Subject Heading (MeSH) codes, imaging modality, and patient age and sex. This digital library provides a new, valuable online resource.

  7. Fission yields data generation and benchmarks of decay heat estimation of a nuclear fuel

    NASA Astrophysics Data System (ADS)

    Gil, Choong-Sup; Kim, Do Heon; Yoo, Jae Kwon; Lee, Jounghwa

    2017-09-01

    Fission yields data in the ENDF-6 format for 235U, 239Pu, and several other actinides, dependent on incident neutron energy, have been generated using the GEF code. In addition, fission yields data libraries for the ORIGEN-S and ORIGEN-ARP modules of the SCALE code have been generated from the new data. For validation, decay heats calculated by ORIGEN-S using the new fission yields data have been compared with measured data in this study. ORIGEN-S fission yields data libraries based on ENDF/B-VII.1, JEFF-3.1.1, and JENDL/FPY-2011 have also been generated, and decay heats were calculated using these libraries for analysis and comparison.

  8. Theoretical Thermal Evaluation of Energy Recovery Incinerators

    DTIC Science & Technology

    1985-12-01

    Army Logistics Mgt Center, Fort Lee , VA DTIC Alexandria, VA DTNSRDC Code 4111 (R. Gierich), Bethesda MD; Code 4120, Annapolis, MD; Code 522 (Library...Washington. DC: Code (I6H4. Washington. DC NAVSECGRUACT PWO (Code .’^O.’^). Winter Harbor. IVIE ; PWO (Code 4(1). Edzell. Scotland; PWO. Adak AK...NEW YORK Fort Schuyler. NY (Longobardi) TEXAS A&M UNIVERSITY W.B. Ledbetter College Station. TX UNIVERSITY OF CALIFORNIA Energy Engineer. Davis CA

  9. Computation of a Canadian SCWR unit cell with deterministic and Monte Carlo codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harrisson, G.; Marleau, G.

    2012-07-01

    The Canadian SCWR has the potential to achieve the goals that the Generation IV nuclear reactors must meet. As part of the optimization process for this design concept, lattice cell calculations are routinely performed using deterministic codes. In this study, the first step (self-shielding treatment) of the computation scheme developed with the deterministic code DRAGON for the Canadian SCWR has been validated. Some options available in the module responsible for the resonance self-shielding calculation in DRAGON 3.06 and different microscopic cross-section libraries based on the ENDF/B-VII.0 evaluated nuclear data file have been tested and compared to a reference calculation performed with the Monte Carlo code SERPENT under the same conditions. Compared to SERPENT, DRAGON underestimates the infinite multiplication factor in all cases. In general, the original Stammler model with the Livolant-Jeanpierre approximations is the most appropriate self-shielding option to use in this case study. In addition, the 89-group WIMS-AECL library for slightly enriched uranium and the 172-group WLUP library for a mixture of plutonium and thorium give results most consistent with those of SERPENT. (authors)

  10. UTSW Researchers Identify Potential Therapeutic Targets for High-grade Neuroendocrine Lung Cancers | Office of Cancer Genomics

    Cancer.gov

    Neuroendocrine specific lung cancers comprise about 10% of non-small cell lung cancer (NSCLC) cases and all small cell lung cancer (SCLC) cases. Studies have previously shown that the transcription factor achaete-scute homolog 1 (ASCL1) is a cancer “lineage” factor required for the development and survival of SCLC, and is highly expressed in neuroendocrine-specific NSCLC (NE-NSCLC).

  11. Certification of medical librarians, 1949--1977 statistical analysis.

    PubMed

    Schmidt, D

    1979-01-01

    The Medical Library Association's Code for Training and Certification of Medical Librarians was in effect from 1949 to August 1977, a period during which 3,216 individuals were certified. Statistics on each type of certificate granted each year are provided. Because 54.5% of those granted certification were awarded it in the last three-year, two-month period of the code's existence, these applications are reviewed in greater detail. Statistics on MLA membership, sex, residence, library school, and method of meeting requirements are detailed. Questions relating to certification under the code now in existence are raised.

  12. Certification of medical librarians, 1949--1977 statistical analysis.

    PubMed Central

    Schmidt, D

    1979-01-01

    The Medical Library Association's Code for Training and Certification of Medical Librarians was in effect from 1949 to August 1977, a period during which 3,216 individuals were certified. Statistics on each type of certificate granted each year are provided. Because 54.5% of those granted certification were awarded it in the last three-year, two-month period of the code's existence, these applications are reviewed in greater detail. Statistics on MLA membership, sex, residence, library school, and method of meeting requirements are detailed. Questions relating to certification under the code now in existence are raised. PMID:427287

  13. Libraries as Facilitators of Coding for All

    ERIC Educational Resources Information Center

    Martin, Crystle

    2017-01-01

    Learning to code has been an increasingly frequent topic of conversation both in academic circles and popular media. Learning to code recently received renewed attention with the announcement of the White House's Computer Science for All initiative (Smith 2016). This initiative intends "to empower all American students from kindergarten…

  14. Flight Software Math Library

    NASA Technical Reports Server (NTRS)

    McComas, David

    2013-01-01

    The flight software (FSW) math library is a collection of reusable math components that provides typical math utilities required by spacecraft flight software. These utilities are intended to increase flight software quality, reusability, and maintainability by providing a set of consistent, well-documented, and tested math utilities. This library only has dependencies on ANSI C, so it is easily ported. Prior to this library, each mission typically created its own math utilities using ideas and code from previous missions. Part of the reason for this is that math libraries can be written with different strategies in areas like error handling, parameter ordering, naming conventions, etc. Changing the utilities for each mission introduces risks and costs. The obvious risks and costs are that the utilities must be coded and revalidated. The hidden risks and costs arise in miscommunication between engineers. These utilities must be understood by both the flight software engineers and other subsystem engineers (primarily guidance, navigation and control). The FSW math library is part of a larger goal to produce a library of reusable Guidance, Navigation and Control (GN&C) FSW components. A GN&C FSW library cannot be created unless a standardized math basis is created. This library solves the standardization problem by defining a common feature set and establishing policies for the library's design. This allows the library to be maintained with the same strategy used in its initial development, which supports a library of reusable GN&C FSW components. The FSW math library is written for an embedded software environment in C. This places restrictions on the language features that can be used by the library. Another advantage of the FSW math library is that it can be used in the FSW as well as in other environments, like the GN&C analysts' simulators. This helps communication between the teams because they can use the same utilities with the same feature set and syntax.

  15. 77 FR 35351 - Submission for OMB Review; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-13

    .... Agricultural Research Service Title: Information Collection for Document Delivery Services. OMB Control Number: 0518-0027. Summary of Collection: The National Agricultural Library (NAL) accepts requests from libraries and other organizations in accordance with the national and international interlibrary loan code...

  16. Informatic and genomic analysis of melanocyte cDNA libraries as a resource for the study of melanocyte development and function.

    PubMed

    Baxter, Laura L; Hsu, Benjamin J; Umayam, Lowell; Wolfsberg, Tyra G; Larson, Denise M; Frith, Martin C; Kawai, Jun; Hayashizaki, Yoshihide; Carninci, Piero; Pavan, William J

    2007-06-01

    As part of the RIKEN mouse encyclopedia project, two cDNA libraries were prepared from melanocyte-derived cell lines, using techniques of full-length clone selection and subtraction/normalization to enrich for rare transcripts. End sequencing showed that these libraries display over 83% complete coding sequence at the 5' end and 96-97% complete coding sequence at the 3' end. Evaluation of the libraries, derived from B16F10Y tumor cells and melan-c cells, revealed that they contain clones for a majority of the genes previously demonstrated to function in melanocyte biology. Analysis of genomic locations for transcripts revealed that the distribution of melanocyte genes is non-random throughout the genome. Three genomic regions identified that showed significant clustering of melanocyte-expressed genes contain one or more genes previously shown to regulate melanocyte development or function. A catalog of genes expressed in these libraries is presented, providing a valuable resource of cDNA clones and sequence information that can be used for identification of new genes important for melanocyte development, function, and disease.

  17. Kokkos: Enabling manycore performance portability through polymorphic memory access patterns

    DOE PAGES

    Carter Edwards, H.; Trott, Christian R.; Sunderland, Daniel

    2014-07-22

    The manycore revolution can be characterized by increasing thread counts, decreasing memory per thread, and diversity of continually evolving manycore architectures. High performance computing (HPC) applications and libraries must exploit increasingly finer levels of parallelism within their codes to sustain scalability on these devices. We found that a major obstacle to performance portability is the diverse and conflicting set of constraints on memory access patterns across devices. Contemporary portable programming models address manycore parallelism (e.g., OpenMP, OpenACC, OpenCL) but fail to address memory access patterns. The Kokkos C++ library enables applications and domain libraries to achieve performance portability on diverse manycore architectures by unifying abstractions for both fine-grain data parallelism and memory access patterns. In this paper we describe Kokkos’ abstractions, summarize its application programmer interface (API), present performance results for unit-test kernels and mini-applications, and outline an incremental strategy for migrating legacy C++ codes to Kokkos. Furthermore, the Kokkos library is under active research and development to incorporate capabilities from new generations of manycore architectures, and to address a growing list of applications and domain libraries.
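
    A minimal sketch of the programming model (simplified, not taken from the paper) shows how the memory layout of a Kokkos::View and the parallel dispatch of parallel_for/parallel_reduce are chosen by the library for the target back end rather than hard-coded in the loops; the kernel below is an ordinary DAXPY followed by a dot product.

      // Minimal Kokkos sketch: initialize two vectors, do y += a*x, then reduce
      // a dot product. View hides the device-appropriate memory layout;
      // parallel_for/parallel_reduce hide the back-end-specific dispatch.
      #include <Kokkos_Core.hpp>
      #include <cstdio>

      int main(int argc, char* argv[]) {
        Kokkos::initialize(argc, argv);
        {
          const int n = 1 << 20;
          Kokkos::View<double*> x("x", n), y("y", n);

          Kokkos::parallel_for("init", n, KOKKOS_LAMBDA(const int i) {
            x(i) = 1.0;
            y(i) = 2.0;
          });

          const double a = 3.0;
          Kokkos::parallel_for("daxpy", n, KOKKOS_LAMBDA(const int i) {
            y(i) += a * x(i);
          });

          double dot = 0.0;
          Kokkos::parallel_reduce("dot", n, KOKKOS_LAMBDA(const int i, double& sum) {
            sum += x(i) * y(i);
          }, dot);

          std::printf("dot = %g (expect %g)\n", dot, 5.0 * n);
        }
        Kokkos::finalize();
        return 0;
      }

    The same source is intended to compile for OpenMP, CUDA, or other back ends by changing only the build configuration, which is the sense of performance portability discussed in the abstract.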

  18. CORALINA: a universal method for the generation of gRNA libraries for CRISPR-based screening.

    PubMed

    Köferle, Anna; Worf, Karolina; Breunig, Christopher; Baumann, Valentin; Herrero, Javier; Wiesbeck, Maximilian; Hutter, Lukas H; Götz, Magdalena; Fuchs, Christiane; Beck, Stephan; Stricker, Stefan H

    2016-11-14

    The bacterial CRISPR system is fast becoming the most popular genetic and epigenetic engineering tool due to its universal applicability and adaptability. The desire to deploy CRISPR-based methods in a large variety of species and contexts has created an urgent need for the development of easy, time- and cost-effective methods enabling large-scale screening approaches. Here we describe CORALINA (comprehensive gRNA library generation through controlled nuclease activity), a method for the generation of comprehensive gRNA libraries for CRISPR-based screens. CORALINA gRNA libraries can be derived from any source of DNA without the need of complex oligonucleotide synthesis. We show the utility of CORALINA for human and mouse genomic DNA, its reproducibility in covering the most relevant genomic features including regulatory, coding and non-coding sequences and confirm the functionality of CORALINA generated gRNAs. The simplicity and cost-effectiveness make CORALINA suitable for any experimental system. The unprecedented sequence complexities obtainable with CORALINA libraries are a necessary pre-requisite for less biased large scale genomic and epigenomic screens.

  19. Sequence-independent construction of ordered combinatorial libraries with predefined crossover points.

    PubMed

    Jézéquel, Laetitia; Loeper, Jacqueline; Pompon, Denis

    2008-11-01

    Combinatorial libraries coding for mosaic enzymes with predefined crossover points constitute useful tools to address and model structure-function relationships and for functional optimization of enzymes based on multivariate statistics. The presented method, called sequence-independent generation of a chimera-ordered library (SIGNAL), allows easy shuffling of any predefined amino acid segment between two or more proteins. This method is particularly well adapted to the exchange of protein structural modules. The procedure could also be well suited to generate ordered combinatorial libraries independent of sequence similarities in a robotized manner. Sequence segments to be recombined are first extracted by PCR from a single-stranded template coding for an enzyme of interest using a biotin-avidin-based method. This technique allows the reduction of parental template contamination in the final library. Specific PCR primers allow amplification of two complementary mosaic DNA fragments, overlapping in the region to be exchanged. Fragments are finally reassembled using a fusion PCR. The process is illustrated via the construction of a set of mosaic CYP2B enzymes using this highly modular approach.

  20. Comparison of ENDF/B-VII.1 and JEFF-3.2 in VVER-1000 operational data calculation

    NASA Astrophysics Data System (ADS)

    Frybort, Jan

    2017-09-01

    Safe operation of a nuclear reactor requires extensive calculational support. Operational data are determined by full-core calculations during the design phase of a fuel loading. The loading pattern and the design of the fuel assemblies are adjusted to meet safety requirements and optimize reactor operation. The nodal diffusion code ANDREA is used for this task in the case of the Czech VVER-1000 reactors. Nuclear data for this diffusion code are prepared regularly with the lattice code HELIOS. These calculations are conducted in 2D at the fuel assembly level. These macroscopic data can also be calculated with the Monte Carlo code Serpent, which can make use of alternative evaluated libraries. All calculations are affected by inherent uncertainties in nuclear data. It is therefore useful to compare full-core calculations based on two sets of diffusion data obtained from Serpent calculations with ENDF/B-VII.1 and JEFF-3.2 nuclear data, including the corresponding decay data and fission yield libraries. The comparison is based directly on the assembly-level macroscopic data and the resulting operational data. This study illustrates the effect of the evaluated nuclear data library on full-core calculations of a large PWR core. The level of difference that results exclusively from the choice of nuclear data helps in understanding the inherent uncertainties of such full-core calculations.

  1. Validation of light water reactor calculation methods and JEF-1-based data libraries by TRX and BAPL critical experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paratte, J.M.; Pelloni, S.; Grimm, P.

    1991-04-01

    This paper analyzes the capability of various code systems and JEF-1-based nuclear data libraries to compute light water reactor lattices by comparing calculations with results from thermal reactor benchmark experiments TRX and BAPL and with previously published values. With the JEF-1 evaluation, eigenvalues are generally well predicted within 8 mk (1 mk = 0.001) or less by all code systems, and all methods give reasonable results for the measured reaction rate ratios within, or not too far from, the experimental uncertainty.

  2. Adagio 4.20 User’s Guide

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spencer, Benjamin Whiting; Crane, Nathan K.; Heinstein, Martin W.

    2011-03-01

    Adagio is a Lagrangian, three-dimensional, implicit code for the analysis of solids and structures. It uses a multi-level iterative solver, which enables it to solve problems with large deformations, nonlinear material behavior, and contact. It also has a versatile library of continuum and structural elements, and an extensive library of material models. Adagio is written for parallel computing environments, and its solvers allow for scalable solutions of very large problems. Adagio uses the SIERRA Framework, which allows for coupling with other SIERRA mechanics codes. This document describes the functionality and input structure for Adagio.

  3. 36 CFR § 1200.7 - What are NARA logos and how are they used?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Archival Research Catalog; ER11MY04.004 (6) The Archives Library Information Center; ER11MY04.005 (7) Presidential Libraries; ER11MY04.006 (8) Federal Register publications. (i) Electronic Code of Federal...

  4. GAME: GAlaxy Machine learning for Emission lines

    NASA Astrophysics Data System (ADS)

    Ucci, G.; Ferrara, A.; Pallottini, A.; Gallerani, S.

    2018-06-01

    We present an updated, optimized version of GAME (GAlaxy Machine learning for Emission lines), a code designed to infer key interstellar medium physical properties from emission line intensities of ultraviolet/optical/far-infrared galaxy spectra. The improvements concern (a) an enlarged spectral library including Pop III stars, (b) the inclusion of spectral noise in the training procedure, and (c) an accurate evaluation of uncertainties. We extensively validate the optimized code and compare its performance against empirical methods and other available emission line codes (PYQZ and HII-CHI-MISTRY) on a sample of 62 SDSS stacked galaxy spectra and 75 observed HII regions. Very good agreement is found for metallicity. However, ionization parameters derived by GAME tend to be higher. We show that this is due to the use of too limited libraries in the other codes. The main advantages of GAME are the simultaneous use of all the measured spectral lines and the extremely short computational times. We finally discuss the code's potential and limitations.

  5. Global magnetosphere simulations using constrained-transport Hall-MHD with CWENO reconstruction

    NASA Astrophysics Data System (ADS)

    Lin, L.; Germaschewski, K.; Maynard, K. M.; Abbott, S.; Bhattacharjee, A.; Raeder, J.

    2013-12-01

    We present a new CWENO (Centrally-Weighted Essentially Non-Oscillatory) reconstruction based MHD solver for the OpenGGCM global magnetosphere code. The solver was built using libMRC, a library for creating efficient parallel PDE solvers on structured grids. The use of libMRC gives us access to its core functionality of providing an automated code generation framework which takes a user provided PDE right hand side in symbolic form to generate an efficient, computer architecture specific, parallel code. libMRC also supports block-structured adaptive mesh refinement and implicit-time stepping through integration with the PETSc library. We validate the new CWENO Hall-MHD solver against existing solvers both in standard test problems as well as in global magnetosphere simulations.

  6. High-Energy Activation Simulation Coupling TENDL and SPACS with FISPACT-II

    NASA Astrophysics Data System (ADS)

    Fleming, Michael; Sublet, Jean-Christophe; Gilbert, Mark

    2018-06-01

    To address the needs of activation-transmutation simulation in incident-particle fields with energies above a few hundred MeV, the FISPACT-II code has been extended to splice TENDL standard ENDF-6 nuclear data with extended nuclear data forms. The JENDL-2007/HE and HEAD-2009 libraries were processed for FISPACT-II and used to demonstrate the capabilities of the new code version. Tests of the libraries and comparisons against both experimental yield data and the most recent intra-nuclear cascade model results demonstrate that there is need for improved nuclear data libraries up to and above 1 GeV. Simulations on lead targets show that important radionuclides, such as 148Gd, can vary by more than an order of magnitude where more advanced models find agreement within the experimental uncertainties.

  7. A look at scalable dense linear algebra libraries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dongarra, J.J.; Van de Geijn, R.A.; Walker, D.W.

    1992-01-01

    We discuss the essential design features of a library of scalable software for performing dense linear algebra computations on distributed memory concurrent computers. The square block scattered decomposition is proposed as a flexible and general-purpose way of decomposing most, if not all, dense matrix problems. An object-oriented interface to the library permits more portable applications to be written, and is easy to learn and use, since details of the parallel implementation are hidden from the user. Experiments on the Intel Touchstone Delta system with a prototype code that uses the square block scattered decomposition to perform LU factorization are presented and analyzed. It was found that the code was both scalable and efficient, performing at about 14 GFLOPS (double precision) for the largest problem considered.
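
    Read as the two-dimensional block-cyclic layout it is closely related to (our paraphrase, not wording from the paper), the square block scattered decomposition assigns a matrix partitioned into nb-by-nb blocks with global block indices (I, J) to a P-by-Q logical process grid roughly as

        \mathrm{owner}(I, J) = (I \bmod P,\; J \bmod Q), \qquad \mathrm{local\ block}(I, J) = (\lfloor I/P \rfloor,\; \lfloor J/Q \rfloor),

    so consecutive blocks are scattered across the grid and every process retains a balanced share of the matrix as a factorization proceeds through panels and trailing updates.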

  8. A look at scalable dense linear algebra libraries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dongarra, J.J.; Van de Geijn, R.A.; Walker, D.W.

    1992-08-01

    We discuss the essential design features of a library of scalable software for performing dense linear algebra computations on distributed memory concurrent computers. The square block scattered decomposition is proposed as a flexible and general-purpose way of decomposing most, if not all, dense matrix problems. An object-oriented interface to the library permits more portable applications to be written, and is easy to learn and use, since details of the parallel implementation are hidden from the user. Experiments on the Intel Touchstone Delta system with a prototype code that uses the square block scattered decomposition to perform LU factorization are presented and analyzed. It was found that the code was both scalable and efficient, performing at about 14 GFLOPS (double precision) for the largest problem considered.

  9. Kiwi: An Evaluated Library of Uncertainties in Nuclear Data and Package for Nuclear Sensitivity Studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pruet, J

    2007-06-23

    This report describes Kiwi, a program developed at Livermore to enable mature studies of the relation between imperfectly known nuclear physics and uncertainties in simulations of complicated systems. Kiwi includes a library of evaluated nuclear data uncertainties, tools for modifying data according to these uncertainties, and a simple interface for generating processed data used by transport codes. As well, Kiwi provides access to calculations of k eigenvalues for critical assemblies. This allows the user to check implications of data modifications against integral experiments for multiplying systems. Kiwi is written in Python. The uncertainty library has the same format and directory structure as the native ENDL used at Livermore. Calculations for critical assemblies rely on deterministic and Monte Carlo codes developed by B division.

  10. Faunus: An object oriented framework for molecular simulation

    PubMed Central

    Lund, Mikael; Trulsson, Martin; Persson, Björn

    2008-01-01

    Background We present a C++ class library for Monte Carlo simulation of molecular systems, including proteins in solution. The design is generic and highly modular, enabling multiple developers to easily implement additional features. The statistical mechanical methods are documented by extensive use of code comments that – subsequently – are collected to automatically build a web-based manual. Results We show how an object oriented design can be used to create an intuitively appealing coding framework for molecular simulation. This is exemplified in a minimalistic C++ program that can calculate protein protonation states. We further discuss performance issues related to high level coding abstraction. Conclusion C++ and the Standard Template Library (STL) provide a high-performance platform for generic molecular modeling. Automatic generation of code documentation from inline comments has proven particularly useful in that no separate manual needs to be maintained. PMID:18241331
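
    The "comments become the manual" design choice described above can be illustrated with a generic Doxygen-style fragment; the class and member names below are invented for illustration and are not part of the Faunus API.

      // Illustrative only: a documented Monte Carlo move class whose comment
      // blocks are what a tool such as Doxygen would collect into a web manual.
      #include <cstdio>
      #include <random>
      #include <vector>

      /// \brief Metropolis-style trial displacement move for a point particle.
      ///
      /// The documentation lives next to the code it describes, so no separate
      /// manual has to be maintained (the point made in the Conclusion above).
      class DisplacementMove {
      public:
          /// \param max_step maximum displacement per coordinate
          explicit DisplacementMove(double max_step) : max_step_(max_step) {}

          /// \brief Return a trial position displaced randomly around \p current.
          std::vector<double> propose(const std::vector<double>& current,
                                      std::mt19937& rng) const {
              std::uniform_real_distribution<double> d(-max_step_, max_step_);
              std::vector<double> trial = current;
              for (double& c : trial) c += d(rng);
              return trial;
          }

      private:
          double max_step_;  ///< maximum displacement per coordinate
      };

      int main() {
          std::mt19937 rng(42);
          DisplacementMove move(0.5);
          const std::vector<double> p = move.propose({0.0, 0.0, 0.0}, rng);
          std::printf("trial = (%g, %g, %g)\n", p[0], p[1], p[2]);
          return 0;
      }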

  11. KAOS/LIB-V: A library of nuclear response functions generated by KAOS-V code from ENDF/B-V and other data files

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farawila, Y.; Gohar, Y.; Maynard, C.

    1989-04-01

    KAOS/LIB-V, a library of processed nuclear responses for neutronics analyses of nuclear systems, has been generated. The library was prepared using the KAOS-V code and nuclear data from ENDF/B-V. The library includes kerma (kinetic energy released in materials) factors and other nuclear response functions for all materials presently of interest in fusion and fission applications, covering 43 nonfissionable and 15 fissionable isotopes and elements. The nuclear response functions include gas production and tritium-breeding functions, and all important reaction cross sections. KAOS/LIB-V employs the VITAMIN-E weighting function and energy group structure of 174 neutron groups. Auxiliary nuclear data bases, e.g., the Japanese evaluated nuclear data library JENDL-2, were used as a source of isotopic cross sections when these data are not provided in ENDF/B-V files for a natural element. These are needed mainly to estimate average quantities such as effective Q-values for the natural element. This analysis of local energy deposition was instrumental in detecting and understanding energy balance deficiencies and other problems in the ENDF/B-V data. Pertinent information about the library and a graphical display of the main nuclear response functions for all materials in the library are given. 35 refs.

  12. Automatic Publishing of Library Bulletins.

    ERIC Educational Resources Information Center

    Inbal, Moshe

    1980-01-01

    Describes the use of a computer to publish library bulletins that list recent accessions of technical reports according to the subject classification scheme of NTIS/SRIM (National Technical Information Service's Scientific Reports in Microfiche). The codes file, the four computer program functions, and costs/economy are discussed. (JD)

  13. S2PLOT: Three-dimensional (3D) Plotting Library

    NASA Astrophysics Data System (ADS)

    Barnes, D. G.; Fluke, C. J.; Bourke, P. D.; Parry, O. T.

    2011-03-01

    We present a new, three-dimensional (3D) plotting library with advanced features, and support for standard and enhanced display devices. The library - S2PLOT - is written in C and can be used by C, C++ and FORTRAN programs on GNU/Linux and Apple/OSX systems. S2PLOT draws objects in a 3D (x,y,z) Cartesian space and the user interactively controls how this space is rendered at run time. With a PGPLOT inspired interface, S2PLOT provides astronomers with elegant techniques for displaying and exploring 3D data sets directly from their program code, and the potential to use stereoscopic and dome display devices. The S2PLOT architecture supports dynamic geometry and can be used to plot time-evolving data sets, such as might be produced by simulation codes. In this paper, we introduce S2PLOT to the astronomical community, describe its potential applications, and present some example uses of the library.

  14. An Advanced, Three-Dimensional Plotting Library for Astronomy

    NASA Astrophysics Data System (ADS)

    Barnes, David G.; Fluke, Christopher J.; Bourke, Paul D.; Parry, Owen T.

    2006-07-01

    We present a new, three-dimensional (3D) plotting library with advanced features, and support for standard and enhanced display devices. The library - S2PLOT - is written in C and can be used by C, C++, and Fortran programs on GNU/Linux and Apple/OSX systems. S2PLOT draws objects in a 3D (x,y,z) Cartesian space and the user interactively controls how this space is rendered at run time. With a PGPLOT-inspired interface, S2PLOT provides astronomers with elegant techniques for displaying and exploring 3D data sets directly from their program code, and the potential to use stereoscopic and dome display devices. The S2PLOT architecture supports dynamic geometry and can be used to plot time-evolving data sets, such as might be produced by simulation codes. In this paper, we introduce S2PLOT to the astronomical community, describe its potential applications, and present some example uses of the library.

  15. HDF-EOS 2 and HDF-EOS 5 Compatibility Library

    NASA Technical Reports Server (NTRS)

    Ullman, Richard; Bane, Bob; Yang, Jingli

    2008-01-01

    The HDF-EOS 2 and HDF-EOS 5 Compatibility Library contains C-language functions that provide uniform access to HDF-EOS 2 and HDF-EOS 5 files through one set of application programming interface (API) calls. ("HDF-EOS 2" and "HDF-EOS 5" are defined in the immediately preceding article.) Without this library, differences between the APIs of HDF-EOS 2 and HDF-EOS 5 would necessitate writing different programs to cover HDF-EOS 2 and HDF-EOS 5. The API associated with this library is denoted "he25." For nearly every HDF-EOS 5 API call, there is a corresponding he25 API call. If a file in question is in the HDF-EOS 5 format, the code reverts to the corresponding HDF-EOS 5 call; if the file is in the HDF-EOS 2 format, the code translates the arguments to HDF-EOS 2 equivalents (if necessary), invokes the HDF-EOS 2 call, and retranslates the results back to HDF-EOS 5 (if necessary).
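
    The dispatch pattern described above can be sketched generically. Every name below (he25_open_sketch, detect_version, the *_stub functions) is a hypothetical stand-in rather than the real he25 or HDF-EOS API; the point is only the shape of the wrapper: probe the file once, then forward each call to whichever back end matches.

      // Generic sketch of a version-dispatch wrapper in the spirit of the he25
      // layer: one entry point, two back ends. All names are hypothetical.
      #include <cstdio>
      #include <stdexcept>
      #include <string>

      enum class EosVersion { V2, V5 };

      // Stand-ins for the two underlying libraries' open calls; real code would
      // call the HDF-EOS 2 / HDF-EOS 5 C APIs here.
      int eos2_open_stub(const std::string& path) {
          std::printf("opening %s with the HDF-EOS 2 back end\n", path.c_str());
          return 2;
      }
      int eos5_open_stub(const std::string& path) {
          std::printf("opening %s with the HDF-EOS 5 back end\n", path.c_str());
          return 5;
      }

      // Hypothetical format probe; a real wrapper would inspect the file itself.
      EosVersion detect_version(const std::string& path) {
          return path.rfind(".he5") != std::string::npos ? EosVersion::V5
                                                         : EosVersion::V2;
      }

      struct He25Handle {
          EosVersion version;  // remembered so later calls dispatch the same way
          int raw;             // handle returned by the chosen back end
      };

      // One wrapper entry point with a single signature, whatever the format.
      He25Handle he25_open_sketch(const std::string& path) {
          switch (detect_version(path)) {
              case EosVersion::V2: return {EosVersion::V2, eos2_open_stub(path)};
              case EosVersion::V5: return {EosVersion::V5, eos5_open_stub(path)};
          }
          throw std::runtime_error("unrecognized HDF-EOS file: " + path);
      }

      int main() {
          const He25Handle h = he25_open_sketch("swath.he5");
          return h.version == EosVersion::V5 ? 0 : 1;
      }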

  16. A novel process of viral vector barcoding and library preparation enables high-diversity library generation and recombination-free paired-end sequencing

    PubMed Central

    Davidsson, Marcus; Diaz-Fernandez, Paula; Schwich, Oliver D.; Torroba, Marcos; Wang, Gang; Björklund, Tomas

    2016-01-01

    Detailed characterization and mapping of oligonucleotide function in vivo is generally a very time consuming effort that only allows for hypothesis driven subsampling of the full sequence to be analysed. Recent advances in deep sequencing together with highly efficient parallel oligonucleotide synthesis and cloning techniques have, however, opened up for entirely new ways to map genetic function in vivo. Here we present a novel, optimized protocol for the generation of universally applicable, barcode labelled, plasmid libraries. The libraries are designed to enable the production of viral vector preparations assessing coding or non-coding RNA function in vivo. When generating high diversity libraries, it is a challenge to achieve efficient cloning, unambiguous barcoding and detailed characterization using low-cost sequencing technologies. With the presented protocol, diversity of above 3 million uniquely barcoded adeno-associated viral (AAV) plasmids can be achieved in a single reaction through a process achievable in any molecular biology laboratory. This approach opens up for a multitude of in vivo assessments from the evaluation of enhancer and promoter regions to the optimization of genome editing. The generated plasmid libraries are also useful for validation of sequencing clustering algorithms and we here validate the newly presented message passing clustering process named Starcode. PMID:27874090

  17. Brachytherapy dosimetry of 125I and 103Pd sources using an updated cross section library for the MCNP Monte Carlo transport code.

    PubMed

    Bohm, Tim D; DeLuca, Paul M; DeWerd, Larry A

    2003-04-01

    Permanent implantation of low energy (20-40 keV) photon emitting radioactive seeds to treat prostate cancer is an important treatment option for patients. In order to produce accurate implant brachytherapy treatment plans, the dosimetry of a single source must be well characterized. Monte Carlo based transport calculations can be used for source characterization, but must have up to date cross section libraries to produce accurate dosimetry results. This work benchmarks the MCNP code and its photon cross section library for low energy photon brachytherapy applications. In particular, we calculate the emitted photon spectrum, air kerma, depth dose in water, and radial dose function for both 125I and 103Pd based seeds and compare to other published results. Our results show that MCNP's cross section library differs from recent data primarily in the photoelectric cross section for low energies and low atomic number materials. In water, differences as large as 10% in the photoelectric cross section and 6% in the total cross section occur at 125I and 103Pd photon energies. This leads to differences in the dose rate constant of 3% and 5%, and differences as large as 18% and 20% in the radial dose function for the 125I and 103Pd based seeds, respectively. Using a partially updated photon library, calculations of the dose rate constant and radial dose function agree with other published results. Further, the use of the updated photon library allows us to verify air kerma and depth dose in water calculations performed using MCNP's perturbation feature to simulate updated cross sections. We conclude that in order to most effectively use MCNP for low energy photon brachytherapy applications, we must update its cross section library. Following this update, the MCNP code system will be a very effective tool for low energy photon brachytherapy dosimetry applications.

  18. Health sciences librarians' awareness and assessment of the Medical Library Association Code of Ethics for Health Sciences Librarianship: the results of a membership survey.

    PubMed

    Byrd, Gary D; Devine, Patricia J; Corcoran, Kate E

    2014-10-01

    The Medical Library Association (MLA) Board of Directors and president charged an Ethical Awareness Task Force and recommended a survey to determine MLA members' awareness of and opinions about the current Code of Ethics for Health Sciences Librarianship. The task force and MLA staff crafted a survey to determine: (1) awareness of the MLA code and its provisions, (2) use of the MLA code to resolve professional ethical issues, (3) consultation of other ethical codes or guides, (4) views regarding the relative importance of the eleven MLA code statements, (5) challenges experienced in following any MLA code provisions, and (6) ethical problems not clearly addressed by the code. Over 500 members responded (similar to previous MLA surveys), and while most were aware of the code, over 30% could not remember when they had last read or thought about it, and nearly half had also referred to other codes or guidelines. The large majority thought that: (1) all code statements were equally important, (2) none were particularly difficult or challenging to follow, and (3) the code covered every ethical challenge encountered in their professional work. Comments provided by respondents who disagreed with the majority views suggest that the MLA code could usefully include a supplementary guide with practical advice on how to reason through a number of ethically challenging situations that are typically encountered by health sciences librarians.

  19. Health sciences librarians' awareness and assessment of the Medical Library Association Code of Ethics for Health Sciences Librarianship: the results of a membership survey

    PubMed Central

    Byrd, Gary D.; Devine, Patricia J.; Corcoran, Kate E.

    2014-01-01

    Objective: The Medical Library Association (MLA) Board of Directors and president charged an Ethical Awareness Task Force and recommended a survey to determine MLA members' awareness of and opinions about the current Code of Ethics for Health Sciences Librarianship. Methods: The task force and MLA staff crafted a survey to determine: (1) awareness of the MLA code and its provisions, (2) use of the MLA code to resolve professional ethical issues, (3) consultation of other ethical codes or guides, (4) views regarding the relative importance of the eleven MLA code statements, (5) challenges experienced in following any MLA code provisions, and (6) ethical problems not clearly addressed by the code. Results: Over 500 members responded (similar to previous MLA surveys), and while most were aware of the code, over 30% could not remember when they had last read or thought about it, and nearly half had also referred to other codes or guidelines. The large majority thought that: (1) all code statements were equally important, (2) none were particularly difficult or challenging to follow, and (3) the code covered every ethical challenge encountered in their professional work. Implications: Comments provided by respondents who disagreed with the majority views suggest that the MLA code could usefully include a supplementary guide with practical advice on how to reason through a number of ethically challenging situations that are typically encountered by health sciences librarians. PMID:25349544

  20. Comparative study of Monte Carlo particle transport code PHITS and nuclear data processing code NJOY for recoil cross section spectra under neutron irradiation

    NASA Astrophysics Data System (ADS)

    Iwamoto, Yosuke; Ogawa, Tatsuhiko

    2017-04-01

    Because primary knock-on atoms (PKAs) create point defects and clusters in materials that are irradiated with neutrons, it is important to validate the calculations of recoil cross section spectra that are used to estimate radiation damage in materials. Here, the recoil cross section spectra of fission- and fusion-relevant materials were calculated using the Event Generator Mode (EGM) of the Particle and Heavy Ion Transport code System (PHITS) and also using the data processing code NJOY2012 with the nuclear data libraries TENDL2015, ENDF/BVII.1, and JEFF3.2. The heating number, which is the integral of the recoil cross section spectra, was also calculated using PHITS-EGM and compared with data extracted from the ACE files of TENDL2015, ENDF/BVII.1, and JENDL4.0. In general, only a small difference was found between the PKA spectra of PHITS + TENDL2015 and NJOY + TENDL2015. From analyzing the recoil cross section spectra extracted from the nuclear data libraries using NJOY2012, we found that the recoil cross section spectra were incorrect for 72Ge, 75As, 89Y, and 109Ag in the ENDF/B-VII.1 library, and for 90Zr and 55Mn in the JEFF3.2 library. From analyzing the heating number, we found that the data extracted from the ACE file of TENDL2015 for all nuclides were problematic in the neutron capture region because of incorrect data regarding the emitted gamma energy. However, PHITS + TENDL2015 can calculate PKA spectra and heating numbers correctly.
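
    Written out schematically (notation ours, following the sentence above rather than any particular processing code), the quantity being compared is

        k_i(E_n) \;=\; \sum_j \int_0^{T_{\max}} T\, \frac{d\sigma_{i,j}}{dT}(E_n, T)\, dT,

    where d\sigma_{i,j}/dT is the recoil (PKA) spectrum of nuclide i for reaction channel j at incident neutron energy E_n and T is the recoil kinetic energy; integrating each spectrum over T and summing over channels gives the recoil contribution to the heating number that the PHITS-EGM and ACE-file values are checked against.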

  1. JEFF-3.1, ENDF/B-VII and JENDL-3.3 Critical Assemblies Benchmarking With the Monte Carlo Code TRIPOLI

    NASA Astrophysics Data System (ADS)

    Sublet, Jean-Christophe

    2008-02-01

    ENDF/B-VII.0, the first release of the ENDF/B-VII nuclear data library, was formally released in December 2006. Prior to this, the European JEFF-3.1 nuclear data library was distributed in April 2005, while the Japanese JENDL-3.3 library has been available since 2002. The recent releases of these neutron transport libraries and special purpose files, the updates of the processing tools, and the significant progress in computer power today allow far better and leaner integration of Monte Carlo codes and pointwise libraries, leading to enhanced benchmarking studies. A TRIPOLI-4.4 critical assembly suite has been set up as a collection of 86 benchmarks taken principally from the International Handbook of Evaluated Criticality Safety Benchmark Experiments (2006 Edition). It contains cases for a variety of U and Pu fuels and systems, ranging from fast to deeply thermal solutions and assemblies, and covers a variety of moderators, reflectors, absorbers, spectra and geometries. The results presented show that while the most recent library, ENDF/B-VII.0, which benefited from the timely development of JENDL-3.3 and JEFF-3.1, produces better overall results, they also clearly suggest that improvements are still needed. This is true in particular in Light Water Reactor applications, for thermal and epithermal plutonium data in all libraries and for fast uranium data in JEFF-3.1 and JENDL-3.3. Other domains in which Monte Carlo codes are used, such as astrophysics, fusion, high-energy physics, medicine, and radiation transport in general, also benefit notably from such enhanced libraries. This is particularly noticeable in terms of the number of isotopes and materials available, the overall quality of the data, and the much broader energy range for which evaluated (as opposed to modeled) data are available, spanning from meV to hundreds of MeV. Examining the impact of the different nuclear data at both the library and the isotopic level highlights the importance of, and the differences in, the compensating effects that result from their use. Library differences are still important but tend to diminish thanks to the ever increasing and beneficial worldwide collaboration in the field of nuclear data measurement and evaluation.

  2. CRISPR library designer (CLD): software for multispecies design of single guide RNA libraries.

    PubMed

    Heigwer, Florian; Zhan, Tianzuo; Breinig, Marco; Winter, Jan; Brügemann, Dirk; Leible, Svenja; Boutros, Michael

    2016-03-24

    Genetic screens using CRISPR/Cas9 are a powerful method for the functional analysis of genomes. Here we describe CRISPR library designer (CLD), an integrated bioinformatics application for the design of custom single guide RNA (sgRNA) libraries for all organisms with annotated genomes. CLD is suitable for the design of libraries using modified CRISPR enzymes and targeting non-coding regions. To demonstrate its utility, we perform a pooled screen for modulators of the TNF-related apoptosis inducing ligand (TRAIL) pathway using a custom library of 12,471 sgRNAs. CLD predicts a high fraction of functional sgRNAs and is publicly available at https://github.com/boutroslab/cld.

  3. A comparative analysis of moral principles and behavioral norms in eight ethical codes relevant to health sciences librarianship, medical informatics, and the health professions.

    PubMed

    Byrd, Gary D; Winkelstein, Peter

    2014-10-01

    Based on the authors' shared interest in the interprofessional challenges surrounding health information management, this study explores the degree to which librarians, informatics professionals, and core health professionals in medicine, nursing, and public health share common ethical behavior norms grounded in moral principles. Using the "Principlism" framework from a widely cited textbook of biomedical ethics, the authors analyze the statements in the ethical codes for associations of librarians (Medical Library Association [MLA], American Library Association, and Special Libraries Association), informatics professionals (American Medical Informatics Association [AMIA] and American Health Information Management Association), and core health professionals (American Medical Association, American Nurses Association, and American Public Health Association). This analysis focuses on whether and how the statements in these eight codes specify core moral norms (Autonomy, Beneficence, Non-Maleficence, and Justice), core behavioral norms (Veracity, Privacy, Confidentiality, and Fidelity), and other norms that are empirically derived from the code statements. These eight ethical codes share a large number of common behavioral norms based most frequently on the principle of Beneficence, then on Autonomy and Justice, but rarely on Non-Maleficence. The MLA and AMIA codes share the largest number of common behavioral norms, and these two associations also share many norms with the other six associations. The shared core of behavioral norms among these professions, all grounded in core moral principles, point to many opportunities for building effective interprofessional communication and collaboration regarding the development, management, and use of health information resources and technologies.

  4. A comparative analysis of moral principles and behavioral norms in eight ethical codes relevant to health sciences librarianship, medical informatics, and the health professions

    PubMed Central

    Byrd, Gary D.; Winkelstein, Peter

    2014-01-01

    Objective: Based on the authors' shared interest in the interprofessional challenges surrounding health information management, this study explores the degree to which librarians, informatics professionals, and core health professionals in medicine, nursing, and public health share common ethical behavior norms grounded in moral principles. Methods: Using the “Principlism” framework from a widely cited textbook of biomedical ethics, the authors analyze the statements in the ethical codes for associations of librarians (Medical Library Association [MLA], American Library Association, and Special Libraries Association), informatics professionals (American Medical Informatics Association [AMIA] and American Health Information Management Association), and core health professionals (American Medical Association, American Nurses Association, and American Public Health Association). This analysis focuses on whether and how the statements in these eight codes specify core moral norms (Autonomy, Beneficence, Non-Maleficence, and Justice), core behavioral norms (Veracity, Privacy, Confidentiality, and Fidelity), and other norms that are empirically derived from the code statements. Results: These eight ethical codes share a large number of common behavioral norms based most frequently on the principle of Beneficence, then on Autonomy and Justice, but rarely on Non-Maleficence. The MLA and AMIA codes share the largest number of common behavioral norms, and these two associations also share many norms with the other six associations. Implications: The shared core of behavioral norms among these professions, all grounded in core moral principles, point to many opportunities for building effective interprofessional communication and collaboration regarding the development, management, and use of health information resources and technologies. PMID:25349543

  5. Construction of Hierarchical Models for Fluid Dynamics in Earth and Planetary Sciences : DCMODEL project

    NASA Astrophysics Data System (ADS)

    Takahashi, Y. O.; Takehiro, S.; Sugiyama, K.; Odaka, M.; Ishiwatari, M.; Sasaki, Y.; Nishizawa, S.; Ishioka, K.; Nakajima, K.; Hayashi, Y.

    2012-12-01

    Toward understanding the fluid motions of planetary atmospheres and planetary interiors by performing multiple numerical experiments with multiple models, we are now proceeding with the ``dcmodel project'', in which a series of hierarchical numerical models of varying complexity is developed and maintained. In the ``dcmodel project'', the numerical models are developed with attention to the following points: 1) a common ``style'' of program code that assures readability of the software, 2) release of the models' source code to the public, 3) scalability of the models, assuring execution on various scales of computational resources, and 4) an emphasis on documentation and a method for writing reference manuals. The lineup of models and utility programs of the project is as follows: Gtool5, ISPACK/SPML, SPMODEL, Deepconv, Dcpam, and Rdoc-f95. In the following, the features of each component are briefly described. Gtool5 (Ishiwatari et al., 2012) is a Fortran90 library that provides data input/output interfaces and various utilities commonly used in the models of the dcmodel project. The self-descriptive data format netCDF is adopted as the IO format of Gtool5. The interfaces of the gtool5 library reduce the number of operation steps for data IO in the program code of the models compared with the interfaces of the raw netCDF library. Further, with the gtool5 library, procedures for data IO and the addition of metadata for post-processing can easily be implemented in the program code in a consolidated form, independent of the size and complexity of the models. ``ISPACK'' is the spectral transformation library and ``SPML (SPMODEL library)'' (Takehiro et al., 2006) is its wrapper library. The most prominent feature of SPML is a series of array-handling functions with systematic function naming rules, which enables us to write code in a form that is easily deduced from the mathematical expressions of the governing equations. ``SPMODEL'' (Takehiro et al., 2006) is a collection of various sample programs using ``SPML''. These sample programs provide a basic kit for simple numerical experiments in geophysical fluid dynamics. For example, SPMODEL includes a 1-dimensional KdV equation model; 2-dimensional barotropic, shallow water, and Boussinesq models; and 3-dimensional MHD dynamo models in rotating spherical shells. These models are written in the common style in harmony with the SPML functions. ``Deepconv'' (Sugiyama et al., 2010) and ``Dcpam'' are, respectively, a cloud resolving model and a general circulation model intended for applications to planetary atmospheres. ``Deepconv'' includes several physical processes appropriate for simulations of the Jupiter and Mars atmospheres, while ``Dcpam'' does so for simulations of Earth, Mars, and Venus-like atmospheres. ``Rdoc-f95'' is an automatic generator of reference manuals for Fortran90/95 programs, implemented as an extension of the Ruby documentation toolkit ``rdoc''. It analyzes the dependencies of modules, functions, and subroutines across multiple program source files and, at the same time, can list the namelist variables in the programs.

  6. Incoherent digital holograms acquired by interferenceless coded aperture correlation holography system without refractive lenses.

    PubMed

    Kumar, Manoj; Vijayakumar, A; Rosen, Joseph

    2017-09-14

    We present a lensless, interferenceless incoherent digital holography technique based on the principle of coded aperture correlation holography. The acquired digital hologram by this technique contains a three-dimensional image of some observed scene. Light diffracted by a point object (pinhole) is modulated using a random-like coded phase mask (CPM) and the intensity pattern is recorded and composed as a point spread hologram (PSH). A library of PSHs is created using the same CPM by moving the pinhole to all possible axial locations. Intensity diffracted through the same CPM from an object placed within the axial limits of the PSH library is recorded by a digital camera. The recorded intensity this time is composed as the object hologram. The image of the object at any axial plane is reconstructed by cross-correlating the object hologram with the corresponding component of the PSH library. The reconstruction noise attached to the image is suppressed by various methods. The reconstruction results of multiplane and thick objects by this technique are compared with regular lens-based imaging.
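
    In schematic form (notation ours), the reconstruction step described above is a plane-by-plane cross-correlation of the object hologram with the point spread hologram (PSH) library,

        P(x, y; z_k) \;=\; H_{\mathrm{obj}}(x, y) \star H_{\mathrm{PSH}}(x, y; z_k),

    where \star denotes a two-dimensional cross-correlation and z_k indexes an axial plane of the PSH library; object content located near z_k correlates sharply and appears in focus in P, while out-of-plane content contributes the background that the noise-suppression methods mentioned above are meant to reduce.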

  7. Cost savings through multimission code reuse for Mars image products

    NASA Technical Reports Server (NTRS)

    Deen, R. G.

    2003-01-01

    An overview of the library's design will be presented, along with mission adaptation experiences and lessons learned, and the kinds of additional functionality that have been added while still retaining its multimission character. The application programs using the library will also be briefly described.

  8. Beam Instrument Development System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DOOLITTLE, LAWRENCE; HUANG, GANG; DU, QIANG

    Beam Instrumentation Development System (BIDS) is a collection of common support libraries and modules developed during a series of Low-Level Radio Frequency (LLRF) control and timing/synchronization projects. BIDS includes a collection of Hardware Description Language (HDL) libraries and software libraries. The BIDS can be used for the development of any FPGA-based system, such as LLRF controllers. HDL code in this library is generic and supports common Digital Signal Processing (DSP) functions, FPGA-specific drivers (high-speed serial link wrappers, clock generation, etc.), ADC/DAC drivers, Ethernet MAC implementation, etc.

  9. Proposal for a CLIPS software library

    NASA Technical Reports Server (NTRS)

    Porter, Ken

    1991-01-01

    This paper is a proposal to create a software library for the C Language Integrated Production System (CLIPS) expert system shell developed by NASA. Many innovative ideas for extending CLIPS were presented at the First CLIPS Users Conference, including useful user and database interfaces. CLIPS developers would benefit from a software library of reusable code. The CLIPS Users Group should establish a software library-- a course of action to make that happen is proposed. Open discussion to revise this library concept is essential, since only a group effort is likely to succeed. A response form intended to solicit opinions and support from the CLIPS community is included.

  10. Intel NX to PVM 3.2 message passing conversion library

    NASA Technical Reports Server (NTRS)

    Arthur, Trey; Nelson, Michael L.

    1993-01-01

    NASA Langley Research Center has developed a library that allows Intel NX message passing codes to be executed under the more popular and widely supported Parallel Virtual Machine (PVM) message passing library. PVM was developed at Oak Ridge National Labs and has become the de facto standard for message passing. This library will allow the many programs that were developed on the Intel iPSC/860 or Intel Paragon in a Single Program Multiple Data (SPMD) design to be ported to the numerous architectures that PVM (version 3.2) supports. Also, the library adds global operations capability to PVM. A familiarity with Intel NX and PVM message passing is assumed.
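
    A hedged sketch of how such a conversion layer can be structured is shown below. The NX-style entry points are simplified stand-ins, not the exact Intel NX signatures, while pvm_initsend, pvm_pkbyte, pvm_send, pvm_recv and pvm_upkbyte are standard PVM 3 C calls; the idea is simply to map an NX message "type" onto a PVM message tag.

      // Fragment of an NX-on-PVM shim: NX-style typed send/receive implemented
      // on top of PVM 3. The NX-style signatures are simplified illustrations.
      #include <pvm3.h>

      // Simplified stand-in for an NX-style typed send to one destination task.
      int nx_style_csend(int type, void* buf, long len, int dest_tid) {
          pvm_initsend(PvmDataDefault);                  // start a new send buffer
          pvm_pkbyte(static_cast<char*>(buf),            // pack the raw bytes
                     static_cast<int>(len), 1);
          return pvm_send(dest_tid, type);               // tag carries the NX "type"
      }

      // Simplified stand-in for an NX-style typed receive from any sender.
      int nx_style_crecv(int type, void* buf, long len) {
          const int bufid = pvm_recv(-1, type);          // -1 matches any source
          if (bufid < 0) return bufid;
          return pvm_upkbyte(static_cast<char*>(buf),
                             static_cast<int>(len), 1);
      }

    Global operations (sums, broadcasts) can be layered on top of the same primitives, which is presumably where the added global-operations capability mentioned above fits in.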

  11. Peptide library synthesis on spectrally encoded beads for multiplexed protein/peptide bioassays

    NASA Astrophysics Data System (ADS)

    Nguyen, Huy Q.; Brower, Kara; Harink, Björn; Baxter, Brian; Thorn, Kurt S.; Fordyce, Polly M.

    2017-02-01

    Protein-peptide interactions are essential for cellular responses. Despite their importance, these interactions remain largely uncharacterized due to experimental challenges associated with their measurement. Current techniques (e.g. surface plasmon resonance, fluorescence polarization, and isothermal calorimetry) either require large amounts of purified material or direct fluorescent labeling, making high-throughput measurements laborious and expensive. In this report, we present a new technology for measuring antibody-peptide interactions in vitro that leverages spectrally encoded beads for biological multiplexing. Specific peptide sequences are synthesized directly on encoded beads with a 1:1 relationship between peptide sequence and embedded code, thereby making it possible to track many peptide sequences throughout the course of an experiment within a single small volume. We demonstrate the potential of these bead-bound peptide libraries by: (1) creating a set of 46 peptides composed of 3 commonly used epitope tags (myc, FLAG, and HA) and single amino-acid scanning mutants; (2) incubating with a mixture of fluorescently-labeled antimyc, anti-FLAG, and anti-HA antibodies; and (3) imaging these bead-bound libraries to simultaneously identify the embedded spectral code (and thus the sequence of the associated peptide) and quantify the amount of each antibody bound. To our knowledge, these data demonstrate the first customized peptide library synthesized directly on spectrally encoded beads. While the implementation of the technology provided here is a high-affinity antibody/protein interaction with a small code space, we believe this platform can be broadly applicable to any range of peptide screening applications, with the capability to multiplex into libraries of hundreds to thousands of peptides in a single assay.

  12. BOXER: Fine-flux Cross Section Condensation, 2D Few Group Diffusion and Transport Burnup Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2010-02-01

    Neutron transport, calculation of multiplication factor and neutron fluxes in 2-D configurations: cell calculations, 2-D diffusion and transport, and burnup. Preparation of a cross section library for the code BOXER from a basic library in ENDF/B format (ETOBOX).

  13. Caring for Your Tribe

    ERIC Educational Resources Information Center

    Munde, Gail

    2012-01-01

    Librarianship places an ethical demand on practitioners to put patrons or library users' interests before self-interest, and indeed, this is the hallmark of any service profession. But what obligation do school librarians have to their peer librarians and educators? The Code of Ethics of the American Library Association offers this principle, "We…

  14. Apocalypse Soon? The Bug.

    ERIC Educational Resources Information Center

    Clyde, Anne

    1999-01-01

    Discussion of the Year 2000 (Y2K) problem, the computer-code problem that affects computer programs or computer chips, focuses on the impact on teacher-librarians. Topics include automated library systems, access to online information services, library computers and software, and other electronic equipment such as photocopiers and fax machines.…

  15. 75 FR 32231 - Small Business Size Standards: Waiver of the Nonmanufacturer Rule

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-07

    ... for configured tape library storage equipment. SUMMARY: The U.S. Small Business Administration (SBA... Storage Equipment. SBA is initiating a request that a class waiver be granted for Configured Tape Library Storage Equipment, Product Service Code (PSC) 7025 Automated Data Processing (ADP) Input/Output and...

  16. Browsing Your Virtual Library: The Case of Expanding Universe.

    ERIC Educational Resources Information Center

    Daniels, Wayne; Enright, Jeanne; Mackenzie, Scott

    1997-01-01

    Describes "Expanding Universe: a classified search tool for amateur astronomy," a Web site maintained by the Metropolitan Toronto Reference Library which uses a modified form of the Dewey Decimal Classification to organize a large file of astronomy hotlinks. Highlights include structure, HTML coding, design requirements, and future…

  17. CESAR5.3: Isotopic depletion for Research and Testing Reactor decommissioning

    NASA Astrophysics Data System (ADS)

    Ritter, Guillaume; Eschbach, Romain; Girieud, Richard; Soulard, Maxime

    2018-05-01

    CESAR stands in French for "simplified depletion applied to reprocessing". The current version is 5.3; the code started 30 years ago from a long-lasting cooperation with ORANO, co-owner of the code with CEA. This computer code can characterize several types of nuclear fuel assemblies, from the most regular PWR power plants to the most unexpected gas-cooled, graphite-moderated old-timer research facilities. Each type of fuel can also include numerous ranges of compositions like UOX, MOX, LEU or HEU. Such versatility comes from a broad catalog of cross section libraries, each corresponding to a specific reactor and fuel matrix design. CESAR goes beyond fuel characterization and can also provide an evaluation of structural material activation. The cross section libraries are generated using the most refined assembly- or core-level transport code calculation schemes (CEA APOLLO2 or ERANOS), based on the European JEFF3.1.1 nuclear data base. Each new CESAR self-shielded cross section library benefits from all the most recent CEA recommendations for deterministic physics options. The resulting cross sections are organized as a function of burn-up and initial fuel enrichment, which makes it possible to condense this costly process into a series of Legendre polynomials. The final outcome is a fast, accurate and compact CESAR cross section library. Each library is fully validated against a stochastic transport code (CEA TRIPOLI 4), if needed, and against a reference depletion code (CEA DARWIN). Using CESAR does not require any of the neutron physics expertise that goes into cross section library generation. It is based on top quality nuclear data (JEFF3.1.1 for ˜400 isotopes) and includes up-to-date Bateman equation solving algorithms. Nevertheless, defining a CESAR computation case can be very straightforward. Most results are only three steps away from any beginner's ambition: initial composition, in-core depletion, and pool decay scenario. On top of a simple utilization architecture, CESAR includes a portable Graphical User Interface which can be broadly deployed in R&D or industrial facilities. Aging facilities currently face decommissioning and dismantling issues. This way to the end of the nuclear fuel cycle requires a careful assessment of source terms in the fuel, core structures and all parts of a facility that must be disposed of under "industrial nuclear" constraints. In that perspective, several CESAR cross section libraries were constructed for early CEA Research and Testing Reactors (RTRs). The aim of this paper is to describe how CESAR operates and how it can be used to help these facilities address waste disposal, nuclear materials transport or basic safety cases. The test case is based on the PHEBUS facility located at CEA Cadarache.
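
    One plausible reading of the condensation step just described (notation ours, not taken from CEA documentation) is a low-order Legendre expansion of each collapsed cross section over normalized burn-up and initial enrichment,

        \sigma_g(b, e) \;\approx\; \sum_{n=0}^{N} \sum_{m=0}^{M} c^{\,g}_{nm}\, P_n(\tilde{b})\, P_m(\tilde{e}), \qquad \tilde{b} = \frac{2b - (b_{\min} + b_{\max})}{b_{\max} - b_{\min}},

    with \tilde{e} normalized analogously, so that each library stores only the coefficients c^{g}_{nm} per energy group and nuclide rather than a dense tabulation over burn-up and enrichment.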

  18. An object-oriented approach for parallel self adaptive mesh refinement on block structured grids

    NASA Technical Reports Server (NTRS)

    Lemke, Max; Witsch, Kristian; Quinlan, Daniel

    1993-01-01

    Self-adaptive mesh refinement dynamically matches the computational demands of a solver for partial differential equations to the activity in the application's domain. In this paper we present two C++ class libraries, P++ and AMR++, which significantly simplify the development of sophisticated adaptive mesh refinement codes on (massively) parallel distributed memory architectures. The development is based on our previous research in this area. The C++ class libraries provide abstractions to separate the issues of developing parallel adaptive mesh refinement applications into those of parallelism, abstracted by P++, and adaptive mesh refinement, abstracted by AMR++. P++ is a parallel array class library to permit efficient development of architecture independent codes for structured grid applications, and AMR++ provides support for self-adaptive mesh refinement on block-structured grids of rectangular non-overlapping blocks. Using these libraries, the application programmers' work is greatly simplified to primarily specifying the serial single grid application and obtaining the parallel and self-adaptive mesh refinement code with minimal effort. Initial results for simple singular perturbation problems solved by self-adaptive multilevel techniques (FAC, AFAC), being implemented on the basis of prototypes of the P++/AMR++ environment, are presented. Singular perturbation problems frequently arise in large applications, e.g. in the area of computational fluid dynamics. They usually have solutions with layers which require adaptive mesh refinement and fast basic solvers in order to be resolved efficiently.

  19. BUGJEFF311.BOLIB (JEFF-3.1.1) and BUGENDF70.BOLIB (ENDF/B-VII.0) - Generation Methodology and Preliminary Testing of two ENEA-Bologna Group Cross Section Libraries for LWR Shielding and Pressure Vessel Dosimetry

    NASA Astrophysics Data System (ADS)

    Pescarini, Massimo; Sinitsa, Valentin; Orsi, Roberto; Frisoni, Manuela

    2016-02-01

    Two broad-group coupled neutron/photon working cross section libraries in FIDO-ANISN format, dedicated to LWR shielding and pressure vessel dosimetry applications, were generated following the methodology recommended by the US ANSI/ANS-6.1.2-1999 (R2009) standard. These libraries, named BUGJEFF311.BOLIB and BUGENDF70.BOLIB, are respectively based on JEFF-3.1.1 and ENDF/B-VII.0 nuclear data and adopt the same broad-group energy structure (47 n + 20 γ) of the ORNL BUGLE-96 similar library. They were respectively obtained from the ENEA-Bologna VITJEFF311.BOLIB and VITENDF70.BOLIB libraries in AMPX format for nuclear fission applications through problem-dependent cross section collapsing with the ENEA-Bologna 2007 revision of the ORNL SCAMPI nuclear data processing system. Both previous libraries are based on the Bondarenko self-shielding factor method and have the same AMPX format and fine-group energy structure (199 n + 42 γ) as the ORNL VITAMIN-B6 similar library from which BUGLE-96 was obtained at ORNL. A synthesis of a preliminary validation of the cited BUGLE-type libraries, performed through 3D fixed source transport calculations with the ORNL TORT-3.2 SN code, is included. The calculations were dedicated to the PCA-Replica 12/13 and VENUS-3 engineering neutron shielding benchmark experiments, specifically conceived to test the accuracy of nuclear data and transport codes in LWR shielding and radiation damage analyses.

  20. NAS Experiences of Porting CM Fortran Codes to HPF on IBM SP2 and SGI Power Challenge

    NASA Technical Reports Server (NTRS)

    Saini, Subhash

    1995-01-01

    Current Connection Machine (CM) Fortran codes developed for the CM-2 and the CM-5 represent an important class of parallel applications. Several users have employed CM Fortran codes in production mode on the CM-2 and the CM-5 for the last five to six years, constituting a heavy investment in terms of cost and time. With Thinking Machines Corporation's decision to withdraw from the hardware business and with the decommissioning of many CM-2 and CM-5 machines, the best way to protect the substantial investment in CM Fortran codes is to port the codes to High Performance Fortran (HPF) on highly parallel systems. HPF is very similar to CM Fortran and thus represents a natural transition. Conversion issues involved in porting CM Fortran codes on the CM-5 to HPF are presented. In particular, the differences between data distribution directives and the CM Fortran Utility Routines Library, as well as the equivalent functionality in the HPF Library, are discussed. Several CM Fortran codes (Cannon algorithm for matrix-matrix multiplication, linear solver Ax=b, 1-D convolution for 2-D datasets, Laplace's equation solver, and Direct Simulation Monte Carlo (DSMC)) have been ported to Subset HPF on the IBM SP2 and the SGI Power Challenge. Speedup ratios versus number of processors for the linear solver and the DSMC code are presented.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van Der Marck, S. C.

    Three nuclear data libraries have been tested extensively using criticality safety benchmark calculations. The three libraries are the new release of the US library ENDF/B-VII.1 (2011), the new release of the Japanese library JENDL-4.0 (2011), and the OECD/NEA library JEFF-3.1 (2006). All calculations were performed with the continuous-energy Monte Carlo code MCNP (version 4C3, as well as version 6-beta1). Around 2000 benchmark cases from the International Handbook of Criticality Safety Benchmark Experiments (ICSBEP) were used. The results were analyzed per ICSBEP category, and per element. Overall, the three libraries show similar performance on most criticality safety benchmarks. The largest differences are probably caused by elements such as Be, C, Fe, Zr, W. (authors)

  2. SU-E-T-212: Comparison of TG-43 Dosimetric Parameters of Low and High Energy Brachytherapy Sources Obtained by MCNP Code Versions of 4C, X and 5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zehtabian, M; Zaker, N; Sina, S

    2015-06-15

    Purpose: Different versions of the MCNP code are widely used for dosimetry purposes. The purpose of this study is to compare different versions of the MCNP codes in dosimetric evaluation of different brachytherapy sources. Methods: The TG-43 parameters such as dose rate constant, radial dose function, and anisotropy function of different brachytherapy sources, i.e. Pd-103, I-125, Ir-192, and Cs-137, were calculated in a water phantom. The results obtained by three versions of Monte Carlo codes (MCNP4C, MCNPX, MCNP5) were compared for low and high energy brachytherapy sources. Then the cross section library of the MCNP4C code was changed to ENDF/B-VI release 8, which is used in the MCNP5 and MCNPX codes. Finally, the TG-43 parameters obtained using the MCNP4C-revised code were compared with the other codes. Results: The results of these investigations indicate that for high energy sources, the differences in TG-43 parameters between the codes are less than 1% for Ir-192 and less than 0.5% for Cs-137. However for low energy sources like I-125 and Pd-103, large discrepancies are observed in the g(r) values obtained by MCNP4C and the two other codes. The differences between g(r) values calculated using MCNP4C and MCNP5 at the distance of 6cm were found to be about 17% and 28% for I-125 and Pd-103 respectively. The results obtained with MCNP4C-revised and MCNPX were similar. However, the maximum difference between the results obtained with the MCNP5 and MCNP4C-revised codes was 2% at 6cm. Conclusion: The results indicate that using the MCNP4C code for dosimetry of low energy brachytherapy sources can cause large errors in the results. Therefore it is recommended not to use this code for low energy sources, unless its cross section library is changed. Since the results obtained with MCNP4C-revised and MCNPX were similar, it is concluded that the difference between MCNP4C and MCNPX is their cross section libraries.

  3. Dominant genetics using a yeast genomic library under the control of a strong inducible promoter.

    PubMed

    Ramer, S W; Elledge, S J; Davis, R W

    1992-12-01

    In Saccharomyces cerevisiae, numerous genes have been identified by selection from high-copy-number libraries based on "multicopy suppression" or other phenotypic consequences of overexpression. Although fruitful, this approach suffers from two major drawbacks. First, high copy number alone may not permit high-level expression of tightly regulated genes. Conversely, other genes expressed in proportion to dosage cannot be identified if their products are toxic at elevated levels. This work reports construction of a genomic DNA expression library for S. cerevisiae that circumvents both limitations by fusing randomly sheared genomic DNA to the strong, inducible yeast GAL1 promoter, which can be regulated by carbon source. The library obtained contains 5 x 10(7) independent recombinants, representing a breakpoint at every base in the yeast genome. This library was used to examine aberrant gene expression in S. cerevisiae. A screen for dominant activators of yeast mating response identified eight genes that activate the pathway in the absence of exogenous mating pheromone, including one previously unidentified gene. One activator was a truncated STE11 gene lacking approximately 1000 base pairs of amino-terminal coding sequence. In two different clones, the same GAL1 promoter-proximal ATG is in-frame with the coding sequence of STE11, suggesting that internal initiation of translation there results in production of a biologically active, truncated STE11 protein. Thus this library allows isolation based on dominant phenotypes of genes that might have been difficult or impossible to isolate from high-copy-number libraries.

  4. Encoded physics knowledge in checking codes for nuclear cross section libraries at Los Alamos

    NASA Astrophysics Data System (ADS)

    Parsons, D. Kent

    2017-09-01

    Checking procedures for processed nuclear data at Los Alamos are described. Both continuous energy and multi-group nuclear data are verified by locally developed checking codes which use basic physics knowledge and common-sense rules. A list of nuclear data problems which have been identified with help of these checking codes is also given.

  5. Scalability study of parallel spatial direct numerical simulation code on IBM SP1 parallel supercomputer

    NASA Technical Reports Server (NTRS)

    Hanebutte, Ulf R.; Joslin, Ronald D.; Zubair, Mohammad

    1994-01-01

    The implementation and the performance of a parallel spatial direct numerical simulation (PSDNS) code are reported for the IBM SP1 supercomputer. The spatially evolving disturbances that are associated with laminar-to-turbulent transition in three-dimensional boundary-layer flows are computed with the PSDNS code. By remapping the distributed data structure during the course of the calculation, optimized serial library routines can be utilized that substantially increase the computational performance. Although the remapping incurs a high communication penalty, the parallel efficiency of the code remains above 40% for all performed calculations. By using appropriate compile options and optimized library routines, the serial code achieves 52-56 Mflops on a single node of the SP1 (45% of theoretical peak performance). The actual performance of the PSDNS code on the SP1 is evaluated with a 'real world' simulation that consists of 1.7 million grid points. One time step of this simulation is calculated on eight nodes of the SP1 in the same time as required by a Cray Y-MP for the same simulation. The scalability information provides estimated computational costs that match the actual costs relative to changes in the number of grid points.

  6. Copper benchmark experiment for the testing of JEFF-3.2 nuclear data for fusion applications

    NASA Astrophysics Data System (ADS)

    Angelone, M.; Flammini, D.; Loreti, S.; Moro, F.; Pillon, M.; Villar, R.; Klix, A.; Fischer, U.; Kodeli, I.; Perel, R. L.; Pohorecky, W.

    2017-09-01

    A neutronics benchmark experiment on a pure copper block (dimensions 60 × 70 × 70 cm3), aimed at testing and validating recent nuclear data libraries for fusion applications, was performed in the frame of the European Fusion Program at the 14 MeV ENEA Frascati Neutron Generator (FNG). Reaction rates, neutron flux spectra and doses were measured using different experimental techniques (e.g. activation foil techniques, an NE213 scintillator and thermoluminescent detectors). This paper first summarizes the analyses of the experiment carried out using the MCNP5 Monte Carlo code and the European JEFF-3.2 library. Large discrepancies between calculation (C) and experiment (E) were found for the reaction rates in both the high and low neutron energy ranges. The analysis was complemented by sensitivity/uncertainty (S/U) analyses using the deterministic SUSD3D and the Monte Carlo MCSEN codes, respectively. The S/U analyses made it possible to identify the cross sections and energy ranges that most affect the calculated responses. The largest discrepancy among the C/E values was observed for the thermal (capture) reactions, indicating severe deficiencies in the 63,65Cu capture and elastic cross sections at low rather than at high energy. Deterministic and MC codes produced similar results. The 14 MeV copper experiment and its analysis thus call for a revision of the JEFF-3.2 copper cross section and covariance data evaluation. A new analysis of the experiment was performed with the MCNP5 code using the revised JEFF-3.3-T2 library released by NEA and a new, not yet distributed, revised JEFF-3.2 Cu evaluation produced by KIT. A noticeable improvement of the C/E results was obtained with both new libraries.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hart, David; Klise, Katherine A.

    The PyEPANET package is a set of commands for the Python programming language that are built to wrap the EPANET toolkit library commands, without requiring the end user to program using the ctypes package. This package does not contain the EPANET code, nor does it implement the functions within the EPANET software; it requires the separately downloaded or compiled EPANET2 toolkit dynamic library (epanet.dll, libepanet.so, or epanet.dylib) and/or the EPANET-MSX dynamic library in order to function.
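
    To make concrete the kind of low-level toolkit access that a wrapper such as PyEPANET hides, here is a minimal ctypes sketch that calls the EPANET 2 toolkit directly. It assumes a separately obtained toolkit shared library and input file (the names "libepanet.so" and "net1.inp" are placeholders), and the EN_NODECOUNT value is taken from the toolkit header; this is an illustration, not PyEPANET's actual interface.

    ```python
    # Minimal ctypes sketch of direct calls into the EPANET 2 toolkit, the kind
    # of boilerplate a wrapper such as PyEPANET is meant to hide. Library and
    # file names below are placeholders.
    import ctypes

    lib = ctypes.CDLL("libepanet.so")   # epanet.dll / epanet.dylib on other platforms

    EN_NODECOUNT = 0                    # count code from the toolkit header (assumed value)

    err = lib.ENopen(b"net1.inp", b"net1.rpt", b"")   # open a network model
    if err != 0:
        raise RuntimeError(f"ENopen failed with error code {err}")

    n_nodes = ctypes.c_int()
    lib.ENgetcount(EN_NODECOUNT, ctypes.byref(n_nodes))  # query number of nodes
    print("nodes in network:", n_nodes.value)

    lib.ENclose()                       # release toolkit resources
    ```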

  8. Content Analysis of Virtual Reference Data: Reshaping Library Website Design.

    PubMed

    Fan, Suhua Caroline; Welch, Jennifer M

    2016-01-01

    An academic health sciences library wanted to redesign its website to provide better access to health information in the community. Virtual reference data were used to provide information about user searching behavior. This study analyzed three years (2012-2014) of virtual reference data, including e-mail questions, text messaging, and live chat transcripts, to inform the website redesign, especially in areas such as the home page, patrons' terminology, and the issues prompting patrons to ask for help. A coding system based on the information links in the current library website was created to analyze the data.

  9. Copyright Policy and Practice in Electronic Reserves among ARL Libraries

    ERIC Educational Resources Information Center

    Hansen, David R.; Cross, William M.; Edwards, Phillip M.

    2013-01-01

    This paper presents the results of a survey of 110 ARL institutions regarding their copyright policies for providing electronic reserves. It compiles descriptive statistics on library practice as well as coding responses to reveal trends and shared practices. Finally, it presents conclusions about policy making, decision making and risk aversion…

  10. Documenting the Conversation: A Systematic Review of Library Discovery Layers

    ERIC Educational Resources Information Center

    Bossaller, Jenny S.; Sandy, Heather Moulaison

    2017-01-01

    This article describes the results of a systematic review of peer-reviewed, published research articles about "discovery layers," user-friendly interfaces or systems that provide single-search box access to library content. Focusing on articles in LISTA published 2009-2013, a set of 80 articles was coded for community of users, journal…

  11. QR Codes in the Library: "It's Not Your Mother's Barcode!"

    ERIC Educational Resources Information Center

    Dobbs, Cheri

    2011-01-01

    Barcode scanning has become more than just fun. Now libraries and businesses are leveraging barcode technology as an innovative tool to market their products and ideas. Developed and popularized in Japan, these Quick Response (QR) or two-dimensional barcodes allow marketers to provide interactive content in an otherwise static environment. In this…

  12. SP_Ace: a new code to derive stellar parameters and elemental abundances

    NASA Astrophysics Data System (ADS)

    Boeche, C.; Grebel, E. K.

    2016-03-01

    Context. Ongoing and future massive spectroscopic surveys will collect large numbers (10^6-10^7) of stellar spectra that need to be analyzed. Highly automated software is needed to derive stellar parameters and chemical abundances from these spectra. Aims: We developed a new method of estimating the stellar parameters Teff, log g, [M/H], and elemental abundances. This method was implemented in a new code, SP_Ace (Stellar Parameters And Chemical abundances Estimator). This is a highly automated code suitable for analyzing the spectra of large spectroscopic surveys with low or medium spectral resolution (R = 2000-20 000). Methods: After the astrophysical calibration of the oscillator strengths of 4643 absorption lines covering the wavelength ranges 5212-6860 Å and 8400-8924 Å, we constructed a library that contains the equivalent widths (EWs) of these lines for a grid of stellar parameters. The EWs of each line are fit by a polynomial function that describes the EW of the line as a function of the stellar parameters. The coefficients of these polynomial functions are stored in a library called the "GCOG library". SP_Ace, a code written in FORTRAN95, uses the GCOG library to compute the EWs of the lines, constructs models of spectra as a function of the stellar parameters and abundances, and searches for the model that minimizes the χ2 deviation when compared to the observed spectrum. The code has been tested on synthetic and real spectra for a wide range of signal-to-noise ratios and spectral resolutions. Results: SP_Ace derives stellar parameters such as Teff, log g, [M/H], and chemical abundances of up to ten elements for low- to medium-resolution spectra of FGK-type stars with a precision comparable to that usually obtained with spectra of higher resolution. Systematic errors in stellar parameters and chemical abundances are presented and identified with tests on synthetic and real spectra. Stochastic errors are automatically estimated by the code for all the parameters. A simple Web front end for SP_Ace can be found at http://dc.g-vo.org/SP_ACE while the source code will be published soon. Full Tables D.1-D.3 are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/587/A2
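
    The GCOG idea described above (tabulated equivalent widths fit by a polynomial in the stellar parameters, followed by a χ2 search against the observation) can be illustrated with a small self-contained sketch. This is not SP_Ace's FORTRAN95 implementation; the data, the first-order polynomial, and the coarse grid search are all synthetic stand-ins.

    ```python
    # Conceptual sketch (not SP_Ace itself): fit a line's equivalent width (EW)
    # as a low-order polynomial in the stellar parameters, then pick the
    # parameters that minimize chi^2 against an "observed" EW. Synthetic data.
    import numpy as np

    rng = np.random.default_rng(0)
    teff = rng.uniform(4500, 6500, 200)        # grid of stellar parameters
    logg = rng.uniform(1.0, 5.0, 200)
    meh  = rng.uniform(-2.0, 0.5, 200)
    ew   = 50 + 0.01 * (teff - 5500) - 3 * logg + 20 * meh + rng.normal(0, 0.5, 200)  # fake EWs (mA)

    # Design matrix for a first-order polynomial in (Teff, logg, [M/H]).
    X = np.column_stack([np.ones_like(teff), teff, logg, meh])
    coeffs, *_ = np.linalg.lstsq(X, ew, rcond=None)   # "GCOG-like" coefficients

    def model_ew(t, g, m):
        return coeffs @ np.array([1.0, t, g, m])

    # Chi^2 search over a coarse parameter grid for one observed EW.
    ew_obs, sigma = 42.0, 1.0
    grid = [(t, g, m) for t in np.arange(4500, 6501, 100)
                      for g in np.arange(1.0, 5.01, 0.25)
                      for m in np.arange(-2.0, 0.51, 0.1)]
    best = min(grid, key=lambda p: ((ew_obs - model_ew(*p)) / sigma) ** 2)
    print("best-fit (Teff, logg, [M/H]):", best)
    ```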

  13. INTRIGOSS: A new Library of High Resolution Synthetic Spectra

    NASA Astrophysics Data System (ADS)

    Franchini, Mariagrazia; Morossi, Carlo; Di Marcancantonio, Paolo; Chavez, Miguel; GES-Builders

    2018-01-01

    INTRIGOSS (INaf Trieste Grid Of Synthetic Spectra) is a new High Resolution (HiRes) synthetic spectral library designed for studying F, G, and K stars. The library is based on atmosphere models computed with specified individual element abundances via the ATLAS12 code. Normalized SPectra (NSP) and surface Flux SPectra (FSP), in the 4800-5400 Å wavelength range, were computed by means of the SPECTRUM code. The synthetic spectra are computed with an atomic and diatomic molecular line list including "bona fide" Predicted Lines (PLs), built by tuning log gf values to reproduce a very high S/N solar spectrum and the UVES-U580 spectra of five cool giants extracted from the Gaia-ESO Survey (GES). The astrophysical gf-values were then assessed by using more than 2000 stars with homogeneous and accurate atmosphere parameters and detailed chemical compositions from GES. The validity and greater accuracy of the INTRIGOSS NSPs and FSPs with respect to other available spectral libraries are discussed. INTRIGOSS will be available on the web and will be a valuable tool for both stellar atmospheric parameter and stellar population studies.

  14. EnviroDIY ModularSensors: A Library to give Environmental Sensors a Common Interface of Functions for use with Arduino-Compatible Dataloggers

    NASA Astrophysics Data System (ADS)

    Aufdenkampe, A. K.; Damiano, S. G.; Hicks, S.; Horsburgh, J. S.

    2017-12-01

    EnviroDIY is a community for do-it-yourself environmental science and monitoring (https://envirodiy.org), largely focused on sharing ideas for developing Arduino-compatible open-source sensor stations, similar to the EnviroDIY Mayfly datalogger (http://envirodiy.org/mayfly/). Here we present the ModularSensors Arduino code library (https://github.com/EnviroDIY/ModularSensors), designed to give all sensors and variables a common interface of functions and returns and to make it easy to iterate through and log data from many sensors and variables. This library was written primarily for the EnviroDIY Mayfly, but we have begun to test it on other Arduino-based boards. We will show the large number of developed sensor interfaces, and examples of using this library code to stream near-real-time data to the new EnviroDIY Water Quality Data Portal (http://data.envirodiy.org/), a data and software system based on the Observations Data Model v2 (http://www.odm2.org).

  15. Building Codes and Regulations.

    ERIC Educational Resources Information Center

    Fisher, John L.

    The hazard of fire is of great concern to libraries due to combustible books and new plastics used in construction and interiors. Building codes and standards can offer architects and planners guidelines to follow but these standards should be closely monitored, updated, and researched for fire prevention. (DS)

  16. Programming with BIG data in R: Scaling analytics from one to thousands of nodes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schmidt, Drew; Chen, Wei -Chen; Matheson, Michael A.

    Here, we present a tutorial overview showing how one can achieve scalable performance with R. We do so by utilizing several package extensions, including those from the pbdR project. These packages consist of high performance, high-level interfaces to and extensions of MPI, PBLAS, ScaLAPACK, I/O libraries, profiling libraries, and more. While these libraries shine brightest on large distributed platforms, they also work rather well on small clusters and often, surprisingly, even on a laptop with only two cores. Our tutorial begins with recommendations on how to get more performance out of your R code before considering parallel implementations. Because R is a high-level language, a function can have a deep hierarchy of operations. For big data, this can easily lead to inefficiency. Profiling is an important tool to understand the performance of an R code for both serial and parallel improvements.
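
    The profile-before-parallelizing advice in this abstract applies beyond R. The sketch below shows the analogous workflow with Python's standard-library profiler; the paper itself works in R with the pbdR packages, so this is only an illustration of the same idea, not the authors' code.

    ```python
    # Illustration of the "profile first, then parallelize" workflow using
    # Python's standard-library profiler (the paper's own examples use R).
    import cProfile
    import pstats
    import io

    def slow_sum(n):
        # Deliberately naive loop; a vectorized or parallel rewrite should
        # target whatever the profile shows to be the hot spot.
        total = 0.0
        for i in range(n):
            total += i ** 0.5
        return total

    profiler = cProfile.Profile()
    profiler.enable()
    slow_sum(2_000_000)
    profiler.disable()

    stream = io.StringIO()
    pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
    print(stream.getvalue())  # top five functions by cumulative time
    ```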

  17. Robot Task Commander with Extensible Programming Environment

    NASA Technical Reports Server (NTRS)

    Hart, Stephen W (Inventor); Wightman, Brian J (Inventor); Dinh, Duy Paul (Inventor); Yamokoski, John D. (Inventor); Gooding, Dustin R (Inventor)

    2014-01-01

    A system for developing distributed robot application-level software includes a robot having an associated control module which controls motion of the robot in response to a commanded task, and a robot task commander (RTC) in networked communication with the control module over a network transport layer (NTL). The RTC includes a script engine(s) and a GUI, with a processor and a centralized library of library blocks constructed from an interpretive computer programming code and having input and output connections. The GUI provides access to a Visual Programming Language (VPL) environment and a text editor. In executing a method, the VPL is opened, a task for the robot is built from the code library blocks, and data is assigned to input and output connections identifying input and output data for each block. A task sequence(s) is sent to the control module(s) over the NTL to command execution of the task.

  18. Programming with BIG data in R: Scaling analytics from one to thousands of nodes

    DOE PAGES

    Schmidt, Drew; Chen, Wei -Chen; Matheson, Michael A.; ...

    2016-11-09

    Here, we present a tutorial overview showing how one can achieve scalable performance with R. We do so by utilizing several package extensions, including those from the pbdR project. These packages consist of high performance, high-level interfaces to and extensions of MPI, PBLAS, ScaLAPACK, I/O libraries, profiling libraries, and more. While these libraries shine brightest on large distributed platforms, they also work rather well on small clusters and often, surprisingly, even on a laptop with only two cores. Our tutorial begins with recommendations on how to get more performance out of your R code before considering parallel implementations. Because R is a high-level language, a function can have a deep hierarchy of operations. For big data, this can easily lead to inefficiency. Profiling is an important tool to understand the performance of an R code for both serial and parallel improvements.

  19. Sensitivity analysis of Monju using ERANOS with JENDL-4.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tamagno, P.; Van Rooijen, W. F. G.; Takeda, T.

    2012-07-01

    This paper deals with sensitivity analysis using JENDL-4.0 nuclear data applied to the Monju reactor. In 2010 the Japan Atomic Energy Agency - JAEA - released a new set of nuclear data: JENDL-4.0. This new evaluation is expected to contain improved data on actinides and covariance matrices. Covariance matrices are a key point in quantification of uncertainties due to basic nuclear data. For sensitivity analysis, the well-established ERANOS [1] code was chosen because of its integrated modules that allow users to perform a sensitivity analysis of complex reactor geometries. A JENDL-4.0 cross-section library is not available for ERANOS. Therefore a cross-section library had to be made from the original nuclear data set, available as ENDF formatted files. This is achieved by using the following codes: NJOY, CALENDF, MERGE and GECCO in order to create a library for the ECCO cell code (part of ERANOS). In order to make sure of the accuracy of the new ECCO library, two benchmark experiments have been analyzed: the MZA and MZB cores of the MOZART program measured at the ZEBRA facility in the UK. These were chosen due to their similarity to the Monju core. Using the JENDL-4.0 ECCO library we have analyzed the criticality of Monju during the restart in 2010. We have obtained good agreement with the measured criticality. Perturbation calculations have been performed between JENDL-3.3 and JENDL-4.0 based models. The isotopes 239Pu, 238U, 241Am and 241Pu account for a major part of observed differences. (authors)

  20. pySecDec: A toolbox for the numerical evaluation of multi-scale integrals

    NASA Astrophysics Data System (ADS)

    Borowka, S.; Heinrich, G.; Jahn, S.; Jones, S. P.; Kerner, M.; Schlenk, J.; Zirke, T.

    2018-01-01

    We present pySECDEC, a new version of the program SECDEC, which performs the factorization of dimensionally regulated poles in parametric integrals, and the subsequent numerical evaluation of the finite coefficients. The algebraic part of the program is now written in the form of python modules, which allow a very flexible usage. The optimization of the C++ code, generated using FORM, is improved, leading to a faster numerical convergence. The new version also creates a library of the integrand functions, such that it can be linked to user-specific codes for the evaluation of matrix elements in a way similar to analytic integral libraries.

  1. Anisn-Dort Neutron-Gamma Flux Intercomparison Exercise for a Simple Testing Model

    NASA Astrophysics Data System (ADS)

    Boehmer, B.; Konheiser, J.; Borodkin, G.; Brodkin, E.; Egorov, A.; Kozhevnikov, A.; Zaritsky, S.; Manturov, G.; Voloschenko, A.

    2003-06-01

    The ability of transport codes ANISN, DORT, ROZ-6, MCNP and TRAMO, as well as nuclear data libraries BUGLE-96, ABBN-93, VITAMIN-B6 and ENDF/B-6 to deliver consistent gamma and neutron flux results was tested in the calculation of a one-dimensional cylindrical model consisting of a homogeneous core and an outer zone with a single material. Model variants with H2O, Fe, Cr and Ni in the outer zones were investigated. The results are compared with MCNP-ENDF/B-6 results. Discrepancies are discussed. The specified test model is proposed as a computational benchmark for testing calculation codes and data libraries.

  2. ProteoWizard: open source software for rapid proteomics tools development.

    PubMed

    Kessner, Darren; Chambers, Matt; Burke, Robert; Agus, David; Mallick, Parag

    2008-11-01

    The ProteoWizard software project provides a modular and extensible set of open-source, cross-platform tools and libraries. The tools perform proteomics data analyses; the libraries enable rapid tool creation by providing a robust, pluggable development framework that simplifies and unifies data file access, and performs standard proteomics and LCMS dataset computations. The library contains readers and writers of the mzML data format, which has been written using modern C++ techniques and design principles and supports a variety of platforms with native compilers. The software has been specifically released under the Apache v2 license to ensure it can be used in both academic and commercial projects. In addition to the library, we also introduce a rapidly growing set of companion tools whose implementation helps to illustrate the simplicity of developing applications on top of the ProteoWizard library. Cross-platform software that compiles using native compilers (i.e. GCC on Linux, MSVC on Windows and XCode on OSX) is available for download free of charge, at http://proteowizard.sourceforge.net. This website also provides code examples, and documentation. It is our hope the ProteoWizard project will become a standard platform for proteomics development; consequently, code use, contribution and further development are strongly encouraged.

  3. Trilinos I/O Support (Trios)

    DOE PAGES

    Oldfield, Ron A.; Sjaardema, Gregory D.; Lofstead II, Gerald F.; ...

    2012-01-01

    Trilinos I/O Support (Trios) is a new capability area in Trilinos that serves two important roles: (1) it provides and supports I/O libraries used by in-production scientific codes; (2) it provides a research vehicle for the evaluation and distribution of new techniques to improve I/O on advanced platforms. This paper provides a brief overview of the production-grade I/O libraries in Trios as well as some of the ongoing research efforts that contribute to the experimental libraries in Trios.

  4. Rambrain - a library for virtually extending physical memory

    NASA Astrophysics Data System (ADS)

    Imgrund, Maximilian; Arth, Alexander

    2017-08-01

    We introduce Rambrain, a user space library that manages memory consumption of your code. Using Rambrain you can overcommit memory over the size of physical memory present in the system. Rambrain takes care of temporarily swapping out data to disk and can handle multiples of the physical memory size present. Rambrain is thread-safe, OpenMP and MPI compatible and supports Asynchronous IO. The library was designed to require minimal changes to existing programs and to be easy to use.

  5. Status of the Monte Carlo library least-squares (MCLLS) approach for non-linear radiation analyzer problems

    NASA Astrophysics Data System (ADS)

    Gardner, Robin P.; Xu, Libai

    2009-10-01

    The Center for Engineering Applications of Radioisotopes (CEAR) has been working for over a decade on the Monte Carlo library least-squares (MCLLS) approach for treating non-linear radiation analyzer problems including: (1) prompt gamma-ray neutron activation analysis (PGNAA) for bulk analysis, (2) energy-dispersive X-ray fluorescence (EDXRF) analyzers, and (3) carbon/oxygen tool analysis in oil well logging. This approach essentially consists of using Monte Carlo simulation to generate the libraries of all the elements to be analyzed plus any other required background libraries. These libraries are then used in the linear library least-squares (LLS) approach with unknown sample spectra to analyze for all elements in the sample. Iterations of this are used until the LLS values agree with the composition used to generate the libraries. The current status of the methods (and topics) necessary to implement the MCLLS approach is reported. This includes: (1) the Monte Carlo codes such as CEARXRF, CEARCPG, and CEARCO for forward generation of the necessary elemental library spectra for the LLS calculation for X-ray fluorescence, neutron capture prompt gamma-ray analyzers, and carbon/oxygen tools; (2) the correction of spectral pulse pile-up (PPU) distortion by Monte Carlo simulation with the code CEARIPPU; (3) generation of detector response functions (DRF) for detectors with linear and non-linear responses for Monte Carlo simulation of pulse-height spectra; and (4) the use of the differential operator (DO) technique to make the necessary iterations for non-linear responses practical. In addition to commonly analyzed single spectra, coincidence spectra or even two-dimensional (2-D) coincidence spectra can also be used in the MCLLS approach and may provide more accurate results.
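
    The core linear library least-squares (LLS) step described above can be sketched in a few lines: the measured pulse-height spectrum is modeled as a linear combination of elemental library spectra and the element amounts are obtained by least squares. The Monte Carlo generation of the libraries and the outer iteration to convergence are not shown, and all spectra below are synthetic.

    ```python
    # Sketch of the linear library least-squares (LLS) fit at the heart of the
    # MCLLS approach: express an unknown spectrum as a linear combination of
    # elemental library spectra and solve for the amounts. Synthetic data only.
    import numpy as np

    rng = np.random.default_rng(1)
    channels, n_elements = 512, 4

    # Fake elemental library spectra (one column per element) plus a background column.
    libraries = np.abs(rng.normal(size=(channels, n_elements)))
    background = np.abs(rng.normal(size=(channels, 1)))
    A = np.hstack([libraries, background])

    true_amounts = np.array([0.5, 1.2, 0.0, 0.8, 0.3])
    sample = A @ true_amounts + rng.normal(scale=0.05, size=channels)  # "measured" spectrum

    # Non-negative amounts would be physical; plain lstsq is shown for brevity.
    amounts, *_ = np.linalg.lstsq(A, sample, rcond=None)
    print("fitted element amounts:", np.round(amounts, 3))
    # In MCLLS these amounts would be fed back to regenerate the libraries by
    # Monte Carlo simulation and the fit repeated until the composition converges.
    ```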

  6. ATLAS offline software performance monitoring and optimization

    NASA Astrophysics Data System (ADS)

    Chauhan, N.; Kabra, G.; Kittelmann, T.; Langenberg, R.; Mandrysch, R.; Salzburger, A.; Seuster, R.; Ritsch, E.; Stewart, G.; van Eldik, N.; Vitillo, R.; Atlas Collaboration

    2014-06-01

    In a complex multi-developer, multi-package software environment, such as the ATLAS offline framework Athena, tracking the performance of the code can be a non-trivial task in itself. In this paper we describe improvements in the instrumentation of ATLAS offline software that have given considerable insight into the performance of the code and helped to guide the optimization work. The first tool we used to instrument the code is PAPI, which is a programming interface for accessing hardware performance counters. PAPI events can count floating point operations, cycles, instructions and cache accesses. Triggering PAPI to start/stop counting for each algorithm and processed event results in a good understanding of the algorithm level performance of ATLAS code. Further data can be obtained using Pin, a dynamic binary instrumentation tool. Pin tools can be used to obtain similar statistics as PAPI, but advantageously without requiring recompilation of the code. Fine grained routine and instruction level instrumentation is also possible. Pin tools can additionally interrogate the arguments to functions, like those in linear algebra libraries, so that a detailed usage profile can be obtained. These tools have characterized the extensive use of vector and matrix operations in ATLAS tracking. Currently, CLHEP is used here, which is not an optimal choice. To help evaluate replacement libraries a testbed has been set up allowing comparison of the performance of different linear algebra libraries (including CLHEP, Eigen and SMatrix/SVector). Results are then presented via the ATLAS Performance Management Board framework, which runs daily with the current development branch of the code and monitors reconstruction and Monte-Carlo jobs. This framework analyses the CPU and memory performance of algorithms and an overview of the results is presented on a web page. These tools have provided the insight necessary to plan and implement performance enhancements in ATLAS code by identifying the most common operations, with the call parameters well understood, and allowing improvements to be quantified in detail.

  7. NA-42 TI Shared Software Component Library FY2011 Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Knudson, Christa K.; Rutz, Frederick C.; Dorow, Kevin E.

    The NA-42 TI program initiated an effort in FY2010 to standardize its software development efforts with the long term goal of migrating toward a software management approach that will allow for the sharing and reuse of code developed within the TI program, improve integration, ensure a level of software documentation, and reduce development costs. The Pacific Northwest National Laboratory (PNNL) has been tasked with two activities that support this mission. PNNL has been tasked with the identification, selection, and implementation of a Shared Software Component Library. The intent of the library is to provide a common repository that is accessible by all authorized NA-42 software development teams. The repository facilitates software reuse through a searchable and easy to use web based interface. As software is submitted to the repository, the component registration process captures meta-data and provides version control for compiled libraries, documentation, and source code. This meta-data is then available for retrieval and review as part of library search results. In FY2010, PNNL and staff from the Remote Sensing Laboratory (RSL) teamed up to develop a software application with the goal of replacing the aging Aerial Measuring System (AMS). The application under development includes an Advanced Visualization and Integration of Data (AVID) framework and associated AMS modules. Throughout development, PNNL and RSL have utilized a common AMS code repository for collaborative code development. The AMS repository is hosted by PNNL, is restricted to the project development team, is accessed via two different geographic locations and continues to be used. The knowledge gained from the collaboration and hosting of this repository in conjunction with PNNL software development and systems engineering capabilities were used in the selection of a package to be used in the implementation of the software component library on behalf of NA-42 TI. The second task managed by PNNL is the development and continued maintenance of the NA-42 TI Software Development Questionnaire. This questionnaire is intended to help software development teams working under NA-42 TI in documenting their development activities. When sufficiently completed, the questionnaire illustrates that the software development activities recorded incorporate significant aspects of the software engineering lifecycle. The questionnaire template is updated as comments are received from NA-42 and/or its development teams and revised versions distributed to those using the questionnaire. PNNL also maintains a list of questionnaire recipients. The blank questionnaire template, the AVID and AMS software being developed, and the completed AVID AMS specific questionnaire are being used as the initial content to be established in the TI Component Library. This report summarizes the approach taken to identify requirements, search for and evaluate technologies, and the approach taken for installation of the software needed to host the component library. Additionally, it defines the process by which users request access for the contribution and retrieval of library content.

  8. Community-led cancer action councils in Queens, New York: process evaluation of an innovative partnership with the Queens library system.

    PubMed

    Basu Roy, Upal; Michel, Tamara; Carpenter, Alison; Lounsbury, David W; Sabino, Eilleen; Stevenson, Alexis Jurow; Combs, Sarah; Jacobs, Jasmine; Padgett, Deborah; Rapkin, Bruce D

    2014-02-06

    Community-based participatory research (CBPR) has great potential to address cancer disparities, particularly in racially and ethnically diverse and underserved neighborhoods. The objective of this study was to conduct a process evaluation of an innovative academic-community partnership, Queens Library HealthLink, which aimed to reduce cancer disparities through neighborhood groups (Cancer Action Councils) that convened in public libraries in Queens, New York. We used a mixed-methods approach to conduct 69 telephone survey interviews and 4 focus groups (15 participants) with Cancer Action Council members. We used 4 performance criteria to inform data collection: action or attention to sustainability, library support for the council, social cohesion and group leadership, and activity level. Focus group transcripts were independently coded and cross-checked for consensus until saturation was achieved. Members reported benefits and barriers to participation. Thirty-three original focus group transcript codes were organized into 8 main themes related to member experiences: 1) library as a needed resource, 2) library as a reputable and nondenominational institution, 3) value of library staff, 4) need for a HealthLink specialist, 5) generation of ideas and coordination of tasks, 6) participation challenges, 7) use of community connections, and 8) collaboration for sustainability. In response to the process evaluation, Cancer Action Council members and HealthLink staff incorporated member suggestions to improve council sustainability. The councils merged to increase intercouncil collaboration, and institutional changes were made in funding to sustain a HealthLink specialist beyond the grant period.

  9. The Federal Depository Library Program in Transition: A Perspective at the Turn of a Century.

    ERIC Educational Resources Information Center

    O'Mahony, Daniel P.

    1998-01-01

    The legal framework covering government information procurement, production, and dissemination has been in place for over 100 years. Congress is currently developing revisions to the U.S. Code to reform this system. Fundamental principles of public access, embodied in the Federal Depository Library Program, must guide these revisions, and…

  10. Preservation and Access to Manuscript Collections of the Czech National Library.

    ERIC Educational Resources Information Center

    Karen, Vladimir; Psohlavec, Stanislav

    In 1996, the Czech National Library started a large-scale digitization of its extensive and invaluable collection of historical manuscripts and printed books. Each page of the selected documents is scanned using a high-resolution, full-color digital camera, processed, and archived on a CD-ROM disk. HTML coded description is added to the entire…

  11. The Whys and Hows of Certification. Public Librarian Certification Law.

    ERIC Educational Resources Information Center

    Wisconsin State Dept. of Public Instruction, Madison. Div. of Library Services.

    Under Wisconsin state law (Administrative Code P1-6.03) any librarian employed in a public library system or any municipal public library, except in a city of the first class, supported in whole or in part by public funds, must hold state certification. Qualifications are delineated for three grades of certification: grade 1, for public libraries…

  12. Efficiency Study of Implicit and Explicit Time Integration Operators for Finite Element Applications

    DTIC Science & Technology

    1977-07-01

    ...efficiency, wherein Beta = 0 provides an explicit algorithm, while Beta ≠ 0 provides an implicit algorithm. Both algorithms are used in the same...

  13. A Robust and Scalable Software Library for Parallel Adaptive Refinement on Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Lou, John Z.; Norton, Charles D.; Cwik, Thomas A.

    1999-01-01

    The design and implementation of Pyramid, a software library for performing parallel adaptive mesh refinement (PAMR) on unstructured meshes, is described. This software library can be easily used in a variety of unstructured parallel computational applications, including parallel finite element, parallel finite volume, and parallel visualization applications using triangular or tetrahedral meshes. The library contains a suite of well-designed and efficiently implemented modules that perform operations in a typical PAMR process. Among these are mesh quality control during successive parallel adaptive refinement (typically guided by a local-error estimator), parallel load-balancing, and parallel mesh partitioning using the ParMeTiS partitioner. The Pyramid library is implemented in Fortran 90 with an interface to the Message-Passing Interface (MPI) library, supporting code efficiency, modularity, and portability. An EM waveguide filter application, adaptively refined using the Pyramid library, is illustrated.

  14. Three-Dimensional Audio Client Library

    NASA Technical Reports Server (NTRS)

    Rizzi, Stephen A.

    2005-01-01

    The Three-Dimensional Audio Client Library (3DAudio library) is a group of software routines written to facilitate development of both stand-alone (audio only) and immersive virtual-reality application programs that utilize three-dimensional audio displays. The library is intended to enable the development of three-dimensional audio client application programs by use of a code base common to multiple audio server computers. The 3DAudio library calls vendor-specific audio client libraries and currently supports the AuSIM Gold-Server and Lake Huron audio servers. 3DAudio library routines contain common functions for (1) initiation and termination of a client/audio server session, (2) configuration-file input, (3) positioning functions, (4) coordinate transformations, (5) audio transport functions, (6) rendering functions, (7) debugging functions, and (8) event-list-sequencing functions. The 3DAudio software is written in the C++ programming language and currently operates under the Linux, IRIX, and Windows operating systems.

  15. Reflections on Ethics in Practice

    ERIC Educational Resources Information Center

    Adams, Helen R.

    2009-01-01

    Each profession has its own code of ethics. The Merriam-Webster Online Dictionary (2008) defines professional ethics as "the principles of conduct governing an individual or a group." The Code of Ethics of the American Library Association (ALA Council 2008) has served librarians for seventy years and reflects the ideals toward which all librarians…

  16. Naval Law Review. Volume 48

    DTIC Science & Technology

    2001-01-01

    from airports and hotels, Internet cafes, libraries, and even cellular phones. This unmatched versatility has made e-mail the preferred method of... ' contention, however, was not that the Franchise Tax Board applied the wrong section of the code; it was that the code “unfairly taxed the wife’s...

  17. Selected DOE headquarters publications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    1979-04-01

    This publication provides listings of (mainly policy and programmatic) publications which have been issued by headquarters organizations of the Department of Energy; assigned a DOE/XXX- type report number code, where XXX is the 1- to 4-letter code for the issuing headquarters organization; received by the Energy Library; and made available to the public.

  18. Integrating digital topology in image-processing libraries.

    PubMed

    Lamy, Julien

    2007-01-01

    This paper describes a method to integrate digital topology information in image-processing libraries. This additional information allows a library user to write algorithms respecting topological constraints, for example, a seed fill or a skeletonization algorithm. As digital topology is absent from most image-processing libraries, such constraints cannot otherwise be fulfilled. We describe and give code samples for all the structures necessary for this integration, and show a use case in the form of a homotopic thinning filter inside ITK. The obtained filter can be up to a hundred times as fast as ITK's thinning filter and works for any image dimension. This paper deals mainly with integration within ITK, but the method can be adapted with only minor modifications to other image-processing libraries.
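
    For reference, the baseline mentioned above (ITK's stock thinning filter) can be run from Python roughly as follows. This assumes ITK's Python wrapping exposes BinaryThinningImageFilter and that "mask.png" holds a binary 2-D image; both the wrapping details and the file name are assumptions here, and this is not the paper's homotopic thinning filter.

    ```python
    # Hedged example: run ITK's built-in 2-D binary thinning filter, the
    # baseline the paper compares its homotopic thinning filter against.
    import itk

    image = itk.imread("mask.png", itk.UC)            # load a binary 2-D mask (placeholder file)
    thinner = itk.BinaryThinningImageFilter.New(image)
    thinner.Update()
    itk.imwrite(thinner.GetOutput(), "skeleton.png")  # write the 1-pixel-wide skeleton
    ```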

  19. Multigroup cross section library for GFR2400

    NASA Astrophysics Data System (ADS)

    Čerba, Štefan; Vrban, Branislav; Lüley, Jakub; Haščík, Ján; Nečas, Vladimír

    2017-09-01

    In this paper the development and optimization of the SBJ_E71 multigroup cross section library for GFR2400 applications is discussed. A cross section processing scheme merging Monte Carlo and deterministic codes was developed. Several fine and coarse group structures and two weighting-flux options were analysed through 18 benchmark experiments selected from the ICSBEP handbook on the basis of similarity assessments. The performance of the collapsed version of the SBJ_E71 library was compared with continuous-energy MCNP5 ENDF/B-VII.1 calculations and with the Korean KAFAX-E70 library. The comparison was based on integral parameters of calculations performed on full-core homogeneous models.
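
    The basic operation behind producing a broad-group working library from fine-group data is a flux-weighted collapse, sigma_G = sum_g(phi_g * sigma_g) / sum_g(phi_g), where the sums run over the fine groups g belonging to broad group G. The sketch below illustrates that formula with made-up numbers; it is not the SBJ_E71 processing scheme itself.

    ```python
    # Sketch of a flux-weighted group collapse: fine-group cross sections are
    # averaged with a weighting flux to produce broad-group values. Numbers are
    # purely illustrative.
    import numpy as np

    sigma_fine = np.array([2.1, 1.8, 1.5, 1.2, 0.9, 0.7])   # fine-group cross sections (barns)
    flux_fine  = np.array([0.5, 1.0, 2.0, 2.5, 1.5, 0.5])   # weighting flux spectrum
    # Map of fine groups into broad groups: first three -> broad group 0, rest -> 1.
    broad_of_fine = np.array([0, 0, 0, 1, 1, 1])

    n_broad = broad_of_fine.max() + 1
    sigma_broad = np.array([
        np.sum(flux_fine[broad_of_fine == G] * sigma_fine[broad_of_fine == G])
        / np.sum(flux_fine[broad_of_fine == G])
        for G in range(n_broad)
    ])
    print("collapsed broad-group cross sections:", np.round(sigma_broad, 4))
    ```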

  20. VarPy: A python library for volcanology and rock physics data analysis

    NASA Astrophysics Data System (ADS)

    Filgueira, Rosa; Atkinson, Malcom; Bell, Andrew; Snelling, Brawen; Main, Ian

    2014-05-01

    The increasing prevalence of digital instrumentation in volcanology and rock physics is leading to a wealth of data, which in turn is increasing the need for computational analyses and models. Today, these are largely developed by each individual researcher. The introduction of a shared library that can be used for this purpose has several benefits: 1. when an existing function in the library meets a need recognised by a researcher, using it is usually much less effort than developing one's own code; 2. once functions are established and used many times they become better tested, more reliable and eventually trusted by the community; 3. use of the same functions by different researchers makes it easier to compare results and to compare the skill of rival analysis and modelling methods; and 4. in the longer term the cost of maintaining these functions is shared over a wide community and they therefore have greater longevity. Python is a high-level interpreted programming language with capabilities for object-oriented programming. Scientists often choose this language because of the increased productivity it provides. Although many software tools are available for interactive data analysis and development, there are no libraries designed specifically for volcanology and rock physics data. Therefore, we propose a new open-source Python toolbox called "VarPy" to facilitate rapid application development for rock physicists and volcanologists, allowing users to define their own workflows to develop models, analyses and visualisations. This proposal is triggered by our work on data assimilation in the NERC EFFORT (Earthquake and Failure Forecasting in Real Time) project, using data provided by the NERC CREEP 2 experimental project and volcanic experiments from the INGV observatory at Etna and the IGN observatory at Hierro as test cases. In the EFFORT project we are developing a science gateway which offers services for collecting and sharing volcanology and rock physics data with the intent of stimulating sharing, collaboration and comparison of methods among practitioners in the two fields. As such, it offers facilities for running analyses and models either under a researcher's control or periodically as part of an experiment, and for comparing the skill of predictive methods. The gateway therefore runs code on behalf of volcanology and rock physics researchers. The VarPy library is intended to make it much easier for those researchers to set up the code they need to run. The library also makes it easier to arrange that code is in a form suitable for running in the EFFORT computational services. Care has been taken to ensure that the library can also be used outside of EFFORT systems, e.g., on a researcher's own laptop, by providing two variants of the library: the gateway version and the developer's version, with many of the functions completely identical. The library must fulfill two purposes simultaneously: • by providing a full repertoire of commonly required actions it must make it easy for volcanologists and rock physicists to write the Python scripts they need to accomplish their work, and • by wrapping operations it must enable the EFFORT gateway to maintain the integrity of its data. Note that VarPy does not attempt to replace the functions provided by other libraries, such as NumPy and SciPy; VarPy is complementary to them.

  1. Parallel Adaptive Mesh Refinement Library

    NASA Technical Reports Server (NTRS)

    Mac-Neice, Peter; Olson, Kevin

    2005-01-01

    Parallel Adaptive Mesh Refinement Library (PARAMESH) is a package of Fortran 90 subroutines designed to provide a computer programmer with an easy route to extension of (1) a previously written serial code that uses a logically Cartesian structured mesh into (2) a parallel code with adaptive mesh refinement (AMR). Alternatively, in its simplest use, and with minimal effort, PARAMESH can operate as a domain-decomposition tool for users who want to parallelize their serial codes but who do not wish to utilize adaptivity. The package builds a hierarchy of sub-grids to cover the computational domain of a given application program, with spatial resolution varying to satisfy the demands of the application. The sub-grid blocks form the nodes of a tree data structure (a quad-tree in two or an oct-tree in three dimensions). Each grid block has a logically Cartesian mesh. The package supports one-, two- and three-dimensional models.
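
    The block-tree structure described above can be illustrated with a small conceptual sketch: a quad-tree of logically Cartesian blocks in which any leaf block can be split into four half-size children wherever more resolution is needed. This is written in Python purely for illustration; it is not PARAMESH's Fortran 90 interface, and the refinement criterion is invented.

    ```python
    # Conceptual quad-tree of grid blocks, the data structure an AMR package
    # such as PARAMESH maintains (here in Python for illustration only).
    from dataclasses import dataclass, field

    @dataclass
    class Block:
        x0: float
        y0: float
        size: float
        level: int = 0
        children: list = field(default_factory=list)

        def refine(self):
            """Split this block into four half-size child blocks."""
            h = self.size / 2
            self.children = [Block(self.x0 + i * h, self.y0 + j * h, h, self.level + 1)
                             for i in (0, 1) for j in (0, 1)]

        def leaves(self):
            """Yield the leaf blocks, which carry the actual solution data."""
            if not self.children:
                yield self
            else:
                for child in self.children:
                    yield from child.leaves()

    def needs_refinement(block, px=0.3, py=0.3, min_size=0.2):
        """Invented criterion: refine coarse blocks containing a feature point."""
        inside = (block.x0 <= px < block.x0 + block.size
                  and block.y0 <= py < block.y0 + block.size)
        return inside and block.size > min_size

    root = Block(0.0, 0.0, 1.0)
    changed = True
    while changed:
        changed = False
        for leaf in list(root.leaves()):
            if needs_refinement(leaf):
                leaf.refine()
                changed = True

    print("leaf blocks after refinement:", sum(1 for _ in root.leaves()))
    ```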

  2. Sierra/Solid Mechanics 4.48 User's Guide.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Merewether, Mark Thomas; Crane, Nathan K; de Frias, Gabriel Jose

    Sierra/SolidMechanics (Sierra/SM) is a Lagrangian, three-dimensional code for finite element analysis of solids and structures. It provides capabilities for explicit dynamic, implicit quasistatic and dynamic analyses. The explicit dynamics capabilities allow for the efficient and robust solution of models with extensive contact subjected to large, suddenly applied loads. For implicit problems, Sierra/SM uses a multi-level iterative solver, which enables it to effectively solve problems with large deformations, nonlinear material behavior, and contact. Sierra/SM has a versatile library of continuum and structural elements, and a large library of material models. The code is written for parallel computing environments enabling scalable solutions of extremely large problems for both implicit and explicit analyses. It is built on the SIERRA Framework, which facilitates coupling with other SIERRA mechanics codes. This document describes the functionality and input syntax for Sierra/SM.

  3. EMPIRE: A code for nuclear astrophysics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palumbo, A.

    The nuclear reaction code EMPIRE is presented as a useful tool for nuclear astrophysics. EMPIRE combines a variety of reaction models with a comprehensive library of input parameters, providing a diversity of options for the user. With the exception of direct-semidirect capture, all reaction mechanisms relevant to the nuclear astrophysics energy range of interest are implemented in the code. Comparisons to experimental data show consistent agreement for all relevant channels.

  4. Monitor Network Traffic with Packet Capture (pcap) on an Android Device

    DTIC Science & Technology

    2015-09-01

    ...administrative privileges. Under the current Android development requirements, an Android Graphical User Interface (GUI) application cannot directly... build an Android application to monitor network traffic using open-source packet capture (pcap) libraries. Subject terms: ELIDe, Android, pcap. Report sections include building applications with native code, calling native code using JNI, calling native code from an Android application, and retrieving live...

  5. Subgroup A : nuclear model codes report to the Sixteenth Meeting of the WPEC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Talou, P.; Chadwick, M. B.; Dietrich, F. S.

    2004-01-01

    The Subgroup A activities focus on the development of nuclear reaction models and codes used in evaluation work for nuclear reactions from the unresolved energy region up to the pion production threshold, and for target nuclides from the low teens and heavier. Much of the effort is devoted by each participant to the continuing development of their own institution's codes. Progress in this arena is reported in detail for each code in the present document. EMPIRE-II is publicly accessible. The release of the TALYS code has been announced for the ND2004 Conference in Santa Fe, NM, October 2004. McGNASH is still under development and is not expected to be released in the very near future. In addition, Subgroup A members have demonstrated a growing interest in working on common modeling and code capabilities, which would significantly reduce the amount of duplicated work, help manage efficiently the growing lines of existing codes, and render code intercomparison much easier. A recent and important activity of Subgroup A has therefore been to develop the framework and the first bricks of the ModLib library, which is constituted of mostly independent pieces of code written in Fortran 90 (and above) to be used in existing and future nuclear reaction codes. Significant progress in the development of ModLib has been made during the past year. Several physics modules have been added to the library, and a few more have been planned in detail for the coming year.

  6. Direct Reprogramming of Spiral Ganglion Non-neuronal Cells into Neurons: Toward Ameliorating Sensorineural Hearing Loss by Gene Therapy

    PubMed Central

    Noda, Teppei; Meas, Steven J.; Nogami, Jumpei; Amemiya, Yutaka; Uchi, Ryutaro; Ohkawa, Yasuyuki; Nishimura, Koji; Dabdoub, Alain

    2018-01-01

    Primary auditory neurons (PANs) play a critical role in hearing by transmitting sound information from the inner ear to the brain. Their progressive degeneration is associated with excessive noise, disease and aging. The loss of PANs leads to permanent hearing impairment since they are incapable of regenerating. Spiral ganglion non-neuronal cells (SGNNCs), comprised mainly of glia, are resident within the modiolus and continue to survive after PAN loss. These attributes make SGNNCs an excellent target for replacing damaged PANs through cellular reprogramming. We used the neurogenic pioneer transcription factor Ascl1 and the auditory neuron differentiation factor NeuroD1 to reprogram SGNNCs into induced neurons (iNs). The overexpression of both Ascl1 and NeuroD1 in vitro generated iNs at high efficiency. Transcriptome analyses revealed that iNs displayed a transcriptome profile resembling that of endogenous PANs, including expression of several key markers of neuronal identity: Tubb3, Map2, Prph, Snap25, and Prox1. Pathway analyses indicated that essential pathways in neuronal growth and maturation were activated in cells upon neuronal induction. Furthermore, iNs extended projections toward cochlear hair cells and cochlear nucleus neurons when cultured with each respective tissue. Taken together, our study demonstrates that PAN-like neurons can be generated from endogenous SGNNCs. This work suggests that gene therapy can be a viable strategy to treat sensorineural hearing loss caused by degeneration of PANs. PMID:29492404

  7. Autosophy information theory provides lossless data and video compression based on the data content

    NASA Astrophysics Data System (ADS)

    Holtz, Klaus E.; Holtz, Eric S.; Holtz, Diana

    1996-09-01

    A new autosophy information theory provides an alternative to the classical Shannon information theory. Using the new theory in communication networks provides both a high degree of lossless compression and virtually unbreakable encryption codes for network security. The bandwidth in a conventional Shannon communication is determined only by the data volume and the hardware parameters, such as image size, resolution, or frame rates in television. The data content, or what is shown on the screen, is irrelevant. In contrast, the bandwidth in autosophy communication is determined only by the data content, such as novelty and movement in television images. It is the data volume and hardware parameters that become irrelevant. Basically, the new communication methods use prior 'knowledge' of the data, stored in a library, to encode subsequent transmissions. The more 'knowledge' stored in the libraries, the higher the potential compression ratio. 'Information' is redefined as that which is not already known by the receiver. Everything already known is redundant and need not be re-transmitted. In a perfect communication, each transmission code, called a 'tip,' creates a new 'engram' of knowledge in the library, and each tip transmission can represent any amount of data. Autosophy theories provide six separate learning modes, or omni-dimensional networks, all of which can be used for data compression. The new information theory reveals the theoretical flaws of other data compression methods, including the Huffman, Ziv-Lempel, and LZW codes and commercial compression codes such as V.42bis and MPEG-2.
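
    Of the conventional methods named at the end of the abstract, LZW is the most compact to demonstrate, and it already embodies the dictionary idea the text builds on: strings the coder has already "learned" are replaced by short codes, so only novel material costs new output symbols. The sketch below is standard LZW, not the autosophy method itself.

    ```python
    # A compact LZW encoder, one of the conventional dictionary-based schemes
    # the abstract names. Known strings are replaced by short codes; new strings
    # are added to the dictionary as they are seen.
    def lzw_encode(data: bytes) -> list[int]:
        dictionary = {bytes([i]): i for i in range(256)}  # start with all single bytes
        next_code = 256
        current = b""
        output = []
        for byte in data:
            candidate = current + bytes([byte])
            if candidate in dictionary:
                current = candidate                  # keep extending a known string
            else:
                output.append(dictionary[current])   # emit code for the known prefix
                dictionary[candidate] = next_code    # learn the new string
                next_code += 1
                current = bytes([byte])
        if current:
            output.append(dictionary[current])
        return output

    data = b"TOBEORNOTTOBEORTOBEORNOT"
    codes = lzw_encode(data)
    print(f"{len(codes)} codes for {len(data)} input bytes:", codes)
    ```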

  8. libvdwxc: a library for exchange-correlation functionals in the vdW-DF family

    NASA Astrophysics Data System (ADS)

    Hjorth Larsen, Ask; Kuisma, Mikael; Löfgren, Joakim; Pouillon, Yann; Erhart, Paul; Hyldgaard, Per

    2017-09-01

    We present libvdwxc, a general library for evaluating the energy and potential for the family of vdW-DF exchange-correlation functionals. libvdwxc is written in C and provides an efficient implementation of the vdW-DF method and can be interfaced with various general-purpose DFT codes. Currently, the GPAW and Octopus codes implement interfaces to libvdwxc. The present implementation emphasizes scalability and parallel performance, and thereby enables ab initio calculations of nanometer-scale complexes. The numerical accuracy is benchmarked on the S22 test set whereas parallel performance is benchmarked on ligand-protected gold nanoparticles (Au144(SC11NH25)60) up to 9696 atoms.

  9. Exact diagonalization library for quantum electron models

    NASA Astrophysics Data System (ADS)

    Iskakov, Sergei; Danilov, Michael

    2018-04-01

    We present an exact diagonalization C++ template library (EDLib) for solving quantum electron models, including the single-band finite Hubbard cluster and the multi-orbital impurity Anderson model. The observables that can be computed using EDLib are single particle Green's functions and spin-spin correlation functions. This code provides three different types of Hamiltonian matrix storage that can be chosen based on the model.
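
    A generic illustration of the exact diagonalization technique (not the EDLib API): build the Hamiltonian of a half-filled two-site Hubbard model in the Sz = 0 sector and diagonalize it with numpy. The basis is |up1 dn2>, |dn1 up2>, |doubly occupied site 1>, |doubly occupied site 2>.

import numpy as np

t, U = 1.0, 4.0
H = np.array([[0.0, 0.0,  -t,  -t],
              [0.0, 0.0,   t,   t],
              [ -t,   t,   U, 0.0],
              [ -t,   t, 0.0,   U]])

energies = np.linalg.eigvalsh(H)
exact_ground_state = 0.5 * (U - np.sqrt(U**2 + 16 * t**2))   # known analytic result
assert np.isclose(energies[0], exact_ground_state)
print("ground-state energy:", energies[0])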

  10. A University Libraries Faculty Perspective on the Role of the Department Head in Faculty Performance: A Grounded Theory Approach. Revised.

    ERIC Educational Resources Information Center

    Boden, Dana W. R.

    This qualitative study examined the perceptions that university library faculty members hold regarding the role of the department head in promoting faculty growth and development. Four faculty members at the University of Nebraska-Lincoln were interviewed. Axial coding of the individuals' perceptions revealed six categories of perceived roles for…

  11. Pilot Study on the Prevalence of Imposed Queries in a School Library Media Center.

    ERIC Educational Resources Information Center

    Gross, Melissa

    1997-01-01

    Discussion of information-seeking behavior focuses on a study of the imposed query, as opposed to self-generated queries, in an elementary school library media center in order to quantify its presence, to record characteristics of the users that carry them, and to identify the persons imposing them. The coding sheet is appended. Contains one table…

  12. pyOpenMS: a Python-based interface to the OpenMS mass-spectrometry algorithm library.

    PubMed

    Röst, Hannes L; Schmitt, Uwe; Aebersold, Ruedi; Malmström, Lars

    2014-01-01

    pyOpenMS is an open-source, Python-based interface to the C++ OpenMS library, providing facile access to a feature-rich, open-source algorithm library for MS-based proteomics analysis. It contains Python bindings that allow raw access to the data structures and algorithms implemented in OpenMS, specifically those for file access (mzXML, mzML, TraML, mzIdentML among others), basic signal processing (smoothing, filtering, de-isotoping, and peak-picking) and complex data analysis (including label-free, SILAC, iTRAQ, and SWATH analysis tools). pyOpenMS thus allows fast prototyping and efficient workflow development in a fully interactive manner (using the interactive Python interpreter) and is also ideally suited for researchers not proficient in C++. In addition, our code to wrap a complex C++ library is completely open-source, allowing other projects to create similar bindings with ease. The pyOpenMS framework is freely available at https://pypi.python.org/pypi/pyopenms while the autowrap tool to create Cython code automatically is available at https://pypi.python.org/pypi/autowrap (both released under the 3-clause BSD licence). © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
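
    A minimal usage sketch of the raw file access described above (the file path is a placeholder and exact method names may vary slightly between pyOpenMS releases):

import pyopenms

exp = pyopenms.MSExperiment()
pyopenms.MzMLFile().load("sample.mzML", exp)   # load an mzML file into memory

for spectrum in exp:
    mz, intensity = spectrum.get_peaks()       # numpy arrays of peak data
    print(spectrum.getMSLevel(), spectrum.getRT(), len(mz))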

  13. Validation of the WIMSD4M cross-section generation code with benchmark results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leal, L.C.; Deen, J.R.; Woodruff, W.L.

    1995-02-01

    The WIMSD4 code has been adopted for cross-section generation in support of the Reduced Enrichment for Research and Test (RERTR) program at Argonne National Laboratory (ANL). Subsequently, the code has undergone several updates, and significant improvements have been achieved. The capability of generating group-collapsed micro- or macroscopic cross sections from the ENDF/B-V library and the more recent evaluation, ENDF/B-VI, in the ISOTXS format makes the modified version of the WIMSD4 code, WIMSD4M, very attractive, not only for the RERTR program, but also for the reactor physics community. The intent of the present paper is to validate the procedure to generate cross-section libraries for reactor analyses and calculations utilizing the WIMSD4M code. To do so, the results of calculations performed with group cross-section data generated with the WIMSD4M code will be compared against experimental results. These results correspond to calculations carried out with thermal reactor benchmarks of the Oak Ridge National Laboratory (ORNL) unreflected critical spheres, the TRX critical experiments, and calculations of a modified Los Alamos highly-enriched heavy-water moderated benchmark critical system. The benchmark calculations were performed with the discrete-ordinates transport code, TWODANT, using WIMSD4M cross-section data. Transport calculations using the XSDRNPM module of the SCALE code system are also included. In addition to transport calculations, diffusion calculations with the DIF3D code were also carried out, since the DIF3D code is used in the RERTR program for reactor analysis and design. For completeness, Monte Carlo results of calculations performed with the VIM and MCNP codes are also presented.

  14. A New Python Library for Spectroscopic Analysis with MIDAS Style

    NASA Astrophysics Data System (ADS)

    Song, Y.; Luo, A.; Zhao, Y.

    2013-10-01

    ESO MIDAS is a data analysis system used by many astronomers. Python is a high-level scripting language with many applications in astronomical data processing. We are releasing a new Python library that implements some MIDAS commands in Python, so that users can write MIDAS-style Python code. We call it PydasLib. It is a Python library based on ESO MIDAS functions and is easy to use for astronomers who are familiar with MIDAS.

  15. Implementation of radiation shielding calculation methods. Volume 1: Synopsis of methods and summary of results

    NASA Technical Reports Server (NTRS)

    Capo, M. A.; Disney, R. K.

    1971-01-01

    The work performed in the following areas is summarized: (1) A realistic nuclear-propelled vehicle was analyzed using the Marshall Space Flight Center computer code package. This code package includes one- and two-dimensional discrete ordinate transport, point kernel, and single scatter techniques, as well as cross section preparation and data processing codes. (2) Techniques were developed to improve the automated data transfer in the coupled computation method of the computer code package and to improve the utilization of this code package on the Univac-1108 computer system. (3) The MSFC master data libraries were updated.

  16. Code-Switching and Vernacular Support: An Early Middle English Case Study

    ERIC Educational Resources Information Center

    Skaffari, Janne

    2016-01-01

    In the multilingual history of England, the period following the Norman Conquest in 1066 is a particularly intriguing phase, but its code-switching patterns have so far received little attention. The present article describes and analyses the multilingual practices evinced in London, British Library, MS Stowe 34, containing one instructional prose…

  17. A Public-Use, Full-Screen Interface for SPIRES Databases.

    ERIC Educational Resources Information Center

    Kriz, Harry M.

    This paper describes the techniques for implementing a full-screen, custom SPIRES interface for a public-use library database. The database-independent protocol that controls the system is described in detail. Source code for an entire working application using this interface is included. The protocol, with less than 170 lines of procedural code,…

  18. ACDOS2: an improved neutron-induced dose rate code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lagache, J.C.

    1981-06-01

    To calculate the expected dose rate from fusion reactors as a function of geometry, composition, and time after shutdown a computer code, ACDOS2, was written, which utilizes up-to-date libraries of cross-sections and radioisotope decay data. ACDOS2 is in ANSI FORTRAN IV, in order to make it readily adaptable elsewhere.

  19. Ethics Today: Are Our Principles Still Relevant?

    ERIC Educational Resources Information Center

    Garnar, Martin

    2015-01-01

    In 1939 technological advances included the first handheld electric slicing knife, the first mass-produced helicopter, and the first transmission of a picture via a cable system (Science and Technology 2001). That year also saw the first Code of Ethics adopted by the American Library Association (ALA OIF 2010, 311). Can an ethical code first…

  20. Command History for 1991 (Naval Personnel Research and Development Center)

    DTIC Science & Technology

    1992-08-01

    years of age. Chronology of 1991 Events: New Employee: Lieutenant Rolando Lim, Code 151; Regina G. Bragg, Paul J. Carney, Library Technician, Supply Clerk, Code... Awards: 35 Years: Ben Garcia, Gene Stout; 30 Years: Jim Julius, Ramona Mouzon, Hal Rosen; 25 Years: Jim Chadbourne, Bob Harris, Dorothy Martin, Jan Reynolds; 20...

  1. RNA-seq reveals distinctive RNA profiles of small extracellular vesicles from different human liver cancer cell lines.

    PubMed

    Berardocco, Martina; Radeghieri, Annalisa; Busatto, Sara; Gallorini, Marialucia; Raggi, Chiara; Gissi, Clarissa; D'Agnano, Igea; Bergese, Paolo; Felsani, Armando; Berardi, Anna C

    2017-10-10

    Liver cancer (LC) is one of the most common cancers and represents the third highest cause of cancer-related deaths worldwide. Extracellular vesicle (EV) cargoes, which are selectively enriched in RNA, offer great promise for the diagnosis, prognosis and treatment of LC. Our study analyzed the RNA cargoes of EVs derived from 4 liver-cancer cell lines: HuH7, Hep3B, HepG2 (hepatocellular carcinoma) and HuH6 (hepatoblastoma), generating two different sets of sequencing libraries for each. One library was size-selected for small RNAs and the other targeted the whole transcriptome. Here we report genome-wide data on the expression levels of coding and non-coding transcripts, microRNAs, isomiRs and snoRNAs, providing the first comprehensive overview of the extracellular-vesicle RNA cargo released from LC cell lines. The EV-RNA expression profiles of the four liver cancer cell lines share a similar background, but cell-specific features clearly emerge, showing the marked heterogeneity of the EV cargo among the individual cell lines, evident both for the coding and non-coding RNA species.

  2. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation. Functional modules F1--F8 -- Volume 2, Part 1, Revision 4

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greene, N.M.; Petrie, L.M.; Westfall, R.M.

    SCALE--a modular code system for Standardized Computer Analyses Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice, (2) automate the data processing and coupling between modules, and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.2 of the system. The manual is divided into three volumes: Volume 1--for the control module documentation; Volume 2--for functional module documentation; and Volume 3--for documentation of the data libraries and subroutine libraries.

  3. Coupled Physics Environment (CouPE) library - Design, Implementation, and Release

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mahadevan, Vijay S.

    Over several years, high fidelity, validated mono-physics solvers with proven scalability on peta-scale architectures have been developed independently. Based on a unified component-based architecture, these existing codes can be coupled with a unified mesh-data backplane and a flexible coupling-strategy-based driver suite to produce a viable tool for analysts. In this report, we present details on the design decisions and developments of CouPE, an acronym that stands for Coupled Physics Environment, which orchestrates a coupled physics solver through the interfaces exposed by the MOAB array-based unstructured mesh; both are part of the SIGMA (Scalable Interfaces for Geometry and Mesh-Based Applications) toolkit. The SIGMA toolkit contains libraries that enable scalable geometry and unstructured mesh creation and handling in a memory and computationally efficient implementation. The CouPE version being prepared for a full open-source release, along with updated documentation, will contain several useful examples that will enable users to start developing their applications natively using the MOAB mesh and couple their models to existing physics applications to analyze and solve real world problems of interest. An integrated multi-physics simulation capability for the design and analysis of current and future nuclear reactor models is also being investigated as part of the NEAMS RPL, to tightly couple neutron transport, thermal-hydraulics and structural mechanics physics under the SHARP framework. This report summarizes the efforts that have been invested in CouPE to bring together several existing physics applications, namely PROTEUS (neutron transport code), Nek5000 (computational fluid-dynamics code) and Diablo (structural mechanics code). The goal of the SHARP framework is to perform fully resolved coupled physics analysis of a reactor on heterogeneous geometry, in order to reduce the overall numerical uncertainty while leveraging available computational resources. The design of CouPE, along with the motivations that led to implementation choices, is also discussed. The first release of the library will be different from the current version of the code that integrates the components in SHARP, and an explanation of the need for forking the source base will also be provided. Enhancements in the functionality and improved user guides will be available as part of the release. CouPE v0.1 is scheduled for an open-source release in December 2014 along with SIGMA v1.1 components that provide support for language-agnostic mesh loading, traversal and query interfaces along with scalable solution transfer of fields between different physics codes. The coupling methodology and software interfaces of the library are presented, along with verification studies on two representative fast sodium-cooled reactor demonstration problems to prove the usability of the CouPE library.

  4. A Combinatorial H4 Tail Library to Explore the Histone Code

    PubMed Central

    Garske, Adam L.; Craciun, Gheorghe; Denu, John M.

    2008-01-01

    Histone modifications modulate chromatin structure and function. A posttranslational modification-randomized, combinatorial library based on the first twenty-one residues of histone H4 was designed for systematic examination of proteins that interpret a histone code. The 800-member library represented all permutations of most known modifications within the N-terminal tail of histone H4. To determine its utility in a protein-binding assay, the on-bead library was screened with an antibody directed against phosphoserine 1 of H4. Among the hits, 59/60 sequences were phosphorylated at S1, while 30/30 of those selected from the non-hits were unphosphorylated. A 512-member version of the library was then used to determine the binding specificity of the double tudor domain of hJMJD2A, a histone demethylase involved in transcriptional repression. Global linear least squares fitting of modifications from the identified peptides (40 hits and 34 non-hits) indicated that methylation of K20 was the primary determinant for binding, but that phosphorylation/acetylation on neighboring sites attenuated the interaction. To validate the on-bead screen, isothermal titration calorimetry was performed with thirteen H4 peptides. Dissociation constants ranged from 1 mM - 1μM and corroborated the screening results. The general approach should be useful for probing the specificity of any histone-binding protein. PMID:18616348

  5. A Fixed Point VHDL Component Library for a High Efficiency Reconfigurable Radio Design Methodology

    NASA Technical Reports Server (NTRS)

    Hoy, Scott D.; Figueiredo, Marco A.

    2006-01-01

    Advances in Field Programmable Gate Array (FPGA) technologies enable the implementation of reconfigurable radio systems for both ground and space applications. The development of such systems challenges the current design paradigms and requires more robust design techniques to meet the increased system complexity. Among these techniques is the development of component libraries to reduce design cycle time and to improve design verification, consequently increasing the overall efficiency of the project development process while increasing design success rates and reducing engineering costs. This paper describes the reconfigurable radio component library developed at the Software Defined Radio Applications Research Center (SARC) at Goddard Space Flight Center (GSFC) Microwave and Communications Branch (Code 567). The library is a set of fixed-point VHDL components that link the Digital Signal Processing (DSP) simulation environment with the FPGA design tools. This provides a direct synthesis path based on the latest developments of the VHDL tools as proposed by the BEE VBDL 2004, which allows for the simulation and synthesis of fixed-point math operations while maintaining bit and cycle accuracy. The VHDL Fixed Point Reconfigurable Radio Component library does not require the use of FPGA vendor-specific automatic component generators and provides a generic path from high-level DSP simulations implemented in MathWorks Simulink to any FPGA device. Access to the components' synthesizable source code provides full design verification capability.

  6. OPAL: An Open-Source MPI-IO Library over Cray XT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Weikuan; Vetter, Jeffrey S; Canon, Richard Shane

    Parallel IO over Cray XT is supported by a vendor-supplied MPI-IO package. This package contains a proprietary ADIO implementation built on top of the sysio library. While it is reasonable to maintain a stable code base for application scientists' convenience, it is also very important for system developers and researchers to analyze and assess the effectiveness of parallel IO software, and accordingly, tune and optimize the MPI-IO implementation. A proprietary parallel IO code base relinquishes such flexibilities. On the other hand, a generic UFS-based MPI-IO implementation is typically used on many Linux-based platforms. We have developed an open-source MPI-IO package over Lustre, referred to as OPAL (OPportunistic and Adaptive MPI-IO Library over Lustre). OPAL provides a single source-code base for MPI-IO over Lustre on Cray XT and Linux platforms. Compared to the Cray implementation, OPAL provides a number of good features, including arbitrary specification of striping patterns and Lustre-stripe-aligned file domain partitioning. This paper presents the performance comparisons between OPAL and Cray's proprietary implementation. Our evaluation demonstrates that OPAL achieves performance comparable to the Cray implementation. We also exemplify the benefits of an open source package in revealing the underpinnings of parallel IO performance.
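
    OPAL sits below the MPI-IO interface, so application code is unchanged. As a rough illustration of the kind of collective write such a layer serves, here is a minimal mpi4py sketch (the file name and block size are placeholders, and this is not OPAL-specific API):

import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

local = np.full(1024, rank, dtype=np.float64)             # each rank's data block
fh = MPI.File.Open(comm, "out.dat",
                   MPI.MODE_CREATE | MPI.MODE_WRONLY)
offset = rank * local.nbytes                              # contiguous file domains
fh.Write_at_all(offset, local)                            # collective MPI-IO write
fh.Close()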

  7. Uncertainty analysis on reactivity and discharged inventory for a pressurized water reactor fuel assembly due to 235,238U nuclear data uncertainties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Da Cruz, D. F.; Rochman, D.; Koning, A. J.

    2012-07-01

    This paper discusses the uncertainty analysis on reactivity and inventory for a typical PWR fuel element as a result of uncertainties in 235,238U nuclear data. A typical Westinghouse 3-loop fuel assembly fuelled with UO2 fuel with 4.8% enrichment has been selected. The Total Monte-Carlo method has been applied using the deterministic transport code DRAGON. This code allows the generation of the few-groups nuclear data libraries by directly using data contained in the nuclear data evaluation files. The nuclear data used in this study is from the JEFF3.1 evaluation, and the nuclear data files for 238U and 235U (randomized for the generation of the various DRAGON libraries) are taken from the nuclear data library TENDL. The total uncertainty (obtained by randomizing all 238U and 235U nuclear data in the ENDF files) on the reactor parameters has been split into different components (different nuclear reaction channels). Results show that the TMC method in combination with a deterministic transport code constitutes a powerful tool for performing uncertainty and sensitivity analysis of reactor physics parameters. (authors)
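
    A conceptual sketch of the Total Monte Carlo loop described above: each randomized 235/238U data file yields one lattice calculation, and the spread of the results gives the nuclear-data uncertainty. The solver call below is a placeholder stub, not an interface to DRAGON.

import random
import statistics

def run_lattice_calculation(random_library_id):
    # Placeholder: in reality this would build a few-group library from the
    # randomized TENDL file and run the transport code, returning k-infinity.
    return 1.30 + random.gauss(0.0, 0.003)

k_values = [run_lattice_calculation(i) for i in range(300)]
print("mean k-inf :", statistics.mean(k_values))
print("uncertainty:", statistics.stdev(k_values))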

  8. GALARIO: a GPU accelerated library for analysing radio interferometer observations

    NASA Astrophysics Data System (ADS)

    Tazzari, Marco; Beaujean, Frederik; Testi, Leonardo

    2018-06-01

    We present GALARIO, a computational library that exploits the power of modern graphical processing units (GPUs) to accelerate the analysis of observations from radio interferometers like the Atacama Large Millimeter/submillimeter Array or the Karl G. Jansky Very Large Array. GALARIO speeds up the computation of synthetic visibilities from a generic 2D model image or a radial brightness profile (for axisymmetric sources). On a GPU, GALARIO is 150 times faster than standard PYTHON and 10 times faster than serial C++ code on a CPU. Highly modular, easy to use, and to adopt in existing code, GALARIO comes as two compiled libraries, one for Nvidia GPUs and one for multicore CPUs, where both have the same functions with identical interfaces. GALARIO comes with PYTHON bindings but can also be directly used in C or C++. The versatility and the speed of GALARIO open new analysis pathways that otherwise would be prohibitively time consuming, e.g. fitting high-resolution observations of a large number of objects, or entire spectral cubes of molecular gas emission. It is a general tool that can be applied to any field that uses radio interferometer observations. The source code is available online at http://github.com/mtazzari/galario under the open source GNU Lesser General Public License v3.
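
    A hedged usage sketch of GALARIO's Python bindings (function name as documented by the project; the model image, pixel size and u-v points below are made-up placeholders):

import numpy as np
from galario.double import sampleImage

nxy = 256
dxy = 1e-7                                    # pixel size [rad], placeholder value
image = np.zeros((nxy, nxy))                  # model brightness image
image[nxy // 2, nxy // 2] = 1.0               # a point source at the centre

u = np.linspace(1e4, 1e6, 500)                # u-v sample points [wavelengths]
v = np.linspace(1e4, 1e6, 500)

vis = sampleImage(image, dxy, u, v)           # complex synthetic visibilities
print(vis.shape, vis.dtype)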

  9. Genome re-annotation of the wild strawberry Fragaria vesca using extensive Illumina- and SMRT-based RNA-seq datasets

    PubMed Central

    Li, Yongping; Wei, Wei; Feng, Jia; Luo, Huifeng; Pi, Mengting; Liu, Zhongchi; Kang, Chunying

    2018-01-01

    Abstract The genome of the wild diploid strawberry species Fragaria vesca, an ideal model system of cultivated strawberry (Fragaria × ananassa, octoploid) and other Rosaceae family crops, was first published in 2011 and followed by a new assembly (Fvb). However, the annotation for Fvb mainly relied on ab initio predictions and included only predicted coding sequences, therefore an improved annotation is highly desirable. Here, a new annotation version named v2.0.a2 was created for the Fvb genome by a pipeline utilizing one PacBio library, 90 Illumina RNA-seq libraries, and 9 small RNA-seq libraries. Altogether, 18,641 genes (55.6% out of 33,538 genes) were augmented with information on the 5′ and/or 3′ UTRs, 13,168 (39.3%) protein-coding genes were modified or newly identified, and 7,370 genes were found to possess alternative isoforms. In addition, 1,938 long non-coding RNAs, 171 miRNAs, and 51,714 small RNA clusters were integrated into the annotation. This new annotation of F. vesca is substantially improved in both accuracy and integrity of gene predictions, beneficial to the gene functional studies in strawberry and to the comparative genomic analysis of other horticultural crops in Rosaceae family. PMID:29036429

  10. Climate tools in mainstream Linux distributions

    NASA Astrophysics Data System (ADS)

    McKinstry, Alastair

    2015-04-01

    Debian/meteorology is a project to integrate climate tools and analysis software into the mainstream Debian/Ubuntu Linux distributions. This work describes lessons learnt and recommends practices for scientific software to be adopted and maintained in OS distributions. In addition to standard analysis tools (cdo, grads, ferret, metview, ncl, etc.), software used by the Earth System Grid Federation was chosen for integration, to enable ESGF portals to be built on this base; however, exposing scientific codes via web APIs exposes security weaknesses that could normally be ignored. How tools are hardened, and what changes are required to handle security upgrades, are described. Secondly, integrating libraries and components (e.g. Python modules) requires planning by their writers: it is not sufficient to assume users can upgrade their code when you make incompatible changes. Here, practices are recommended to enable upgrades and co-installability of C, C++, Fortran and Python codes. Finally, software packages such as NetCDF and HDF5 can be built in multiple configurations. Tools may then expect incompatible versions of these libraries (e.g. serial and parallel) to be simultaneously available; how this was solved in Debian using "pkg-config" and shared library interfaces is described, and best practices for software writers to enable this are summarised.

  11. Incremental Parallelization of Non-Data-Parallel Programs Using the Charon Message-Passing Library

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob F.

    2000-01-01

    Message passing is among the most popular techniques for parallelizing scientific programs on distributed-memory architectures. The reasons for its success are wide availability (MPI), efficiency, and full tuning control provided to the programmer. A major drawback, however, is that incremental parallelization, as offered by compiler directives, is not generally possible, because all data structures have to be changed throughout the program simultaneously. Charon remedies this situation through mappings between distributed and non-distributed data. It allows breaking up the parallelization into small steps, guaranteeing correctness at every stage. Several tools are available to help convert legacy codes into high-performance message-passing programs. They usually target data-parallel applications, whose loops carrying most of the work can be distributed among all processors without much dependency analysis. Others do a full dependency analysis and then convert the code virtually automatically. Even more toolkits are available that aid construction from scratch of message passing programs. None, however, allows piecemeal translation of codes with complex data dependencies (i.e. non-data-parallel programs) into message passing codes. The Charon library (available in both C and Fortran) provides incremental parallelization capabilities by linking legacy code arrays with distributed arrays. During the conversion process, non-distributed and distributed arrays exist side by side, and simple mapping functions allow the programmer to switch between the two in any location in the program. Charon also provides wrapper functions that leave the structure of the legacy code intact, but that allow execution on truly distributed data. Finally, the library provides a rich set of communication functions that support virtually all patterns of remote data demands in realistic structured grid scientific programs, including transposition, nearest-neighbor communication, pipelining, gather/scatter, and redistribution. At the end of the conversion process most intermediate Charon function calls will have been removed, the non-distributed arrays will have been deleted, and virtually the only remaining Charon functions calls are the high-level, highly optimized communications. Distribution of the data is under complete control of the programmer, although a wide range of useful distributions is easily available through predefined functions. A crucial aspect of the library is that it does not allocate space for distributed arrays, but accepts programmer-specified memory. This has two major consequences. First, codes parallelized using Charon do not suffer from encapsulation; user data is always directly accessible. This provides high efficiency, and also retains the possibility of using message passing directly for highly irregular communications. Second, non-distributed arrays can be interpreted as (trivial) distributions in the Charon sense, which allows them to be mapped to truly distributed arrays, and vice versa. This is the mechanism that enables incremental parallelization. In this paper we provide a brief introduction of the library and then focus on the actual steps in the parallelization process, using some representative examples from, among others, the NAS Parallel Benchmarks. We show how a complicated two-dimensional pipeline-the prototypical non-data-parallel algorithm- can be constructed with ease. 
To demonstrate the flexibility of the library, we give examples of the stepwise, efficient parallel implementation of nonlocal boundary conditions common in aircraft simulations, as well as the construction of the sequence of grids required for multigrid.
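
    A conceptual mpi4py illustration (not the Charon API, which is C/Fortran) of the key idea above: a legacy, non-distributed array and a distributed array exist side by side, with simple mapping functions to switch between them during the incremental conversion.

import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, nprocs = comm.Get_rank(), comm.Get_size()
n = 16 * nprocs

legacy = np.arange(n, dtype=np.float64) if rank == 0 else np.empty(n)

def to_distributed(full):
    """Map the legacy array onto programmer-supplied local storage."""
    local = np.empty(n // nprocs)
    comm.Scatter(full if rank == 0 else None, local, root=0)
    return local

def to_legacy(local, full):
    """Map the distributed pieces back onto the legacy array."""
    comm.Gather(local, full if rank == 0 else None, root=0)

part = to_distributed(legacy)
part *= 2.0                       # parallel work on truly distributed data
to_legacy(part, legacy)           # legacy code downstream still sees one array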

  12. Direct Identification of On-Bead Peptides Using Surface-Enhanced Raman Spectroscopic Barcoding System for High-Throughput Bioanalysis

    PubMed Central

    Kang, Homan; Jeong, Sinyoung; Koh, Yul; Geun Cha, Myeong; Yang, Jin-Kyoung; Kyeong, San; Kim, Jaehi; Kwak, Seon-Yeong; Chang, Hye-Jin; Lee, Hyunmi; Jeong, Cheolhwan; Kim, Jong-Ho; Jun, Bong-Hyun; Kim, Yong-Kweon; Hong Jeong, Dae; Lee, Yoon-Sik

    2015-01-01

    Recently, preparation and screening of compound libraries remain one of the most challenging tasks in drug discovery, biomarker detection, and biomolecular profiling processes. So far, several distinct encoding/decoding methods such as chemical encoding, graphical encoding, and optical encoding have been reported to identify those libraries. In this paper, a simple and efficient surface-enhanced Raman spectroscopic (SERS) barcoding method using highly sensitive SERS nanoparticles (SERS ID) is presented. The 44 kinds of SERS IDs were able to generate simple codes and could possibly generate more than one million kinds of codes by incorporating combinations of different SERS IDs. The barcoding method exhibited high stability and reliability under bioassay conditions. The SERS ID encoding based screening platform can identify the peptide ligand on the bead and also quantify its binding affinity for specific protein. We believe that our SERS barcoding technology is a promising method in the screening of one-bead-one-compound (OBOC) libraries for drug discovery. PMID:26017924
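
    A quick check of the combinatorics implied above: 44 distinct SERS IDs, mixed a few at a time, already give more than a million distinct codes (assuming here, for illustration, that up to five IDs are combined per bead; the actual encoding scheme may differ).

from math import comb

n_ids = 44
codes = sum(comb(n_ids, k) for k in range(1, 6))   # choose 1 to 5 IDs out of 44
print(codes)                                       # 1,235,993 possible combinations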

  13. Direct identification of on-bead peptides using surface-enhanced Raman spectroscopic barcoding system for high-throughput bioanalysis.

    PubMed

    Kang, Homan; Jeong, Sinyoung; Koh, Yul; Geun Cha, Myeong; Yang, Jin-Kyoung; Kyeong, San; Kim, Jaehi; Kwak, Seon-Yeong; Chang, Hye-Jin; Lee, Hyunmi; Jeong, Cheolhwan; Kim, Jong-Ho; Jun, Bong-Hyun; Kim, Yong-Kweon; Hong Jeong, Dae; Lee, Yoon-Sik

    2015-05-28

    Recently, preparation and screening of compound libraries remain one of the most challenging tasks in drug discovery, biomarker detection, and biomolecular profiling processes. So far, several distinct encoding/decoding methods such as chemical encoding, graphical encoding, and optical encoding have been reported to identify those libraries. In this paper, a simple and efficient surface-enhanced Raman spectroscopic (SERS) barcoding method using highly sensitive SERS nanoparticles (SERS ID) is presented. The 44 kinds of SERS IDs were able to generate simple codes and could possibly generate more than one million kinds of codes by incorporating combinations of different SERS IDs. The barcoding method exhibited high stability and reliability under bioassay conditions. The SERS ID encoding based screening platform can identify the peptide ligand on the bead and also quantify its binding affinity for specific protein. We believe that our SERS barcoding technology is a promising method in the screening of one-bead-one-compound (OBOC) libraries for drug discovery.

  14. Assessment of the Draft AIAA S-119 Flight Dynamic Model Exchange Standard

    NASA Technical Reports Server (NTRS)

    Jackson, E. Bruce; Murri, Daniel G.; Hill, Melissa A.; Jessick, Matthew V.; Penn, John M.; Hasan, David A.; Crues, Edwin Z.; Falck, Robert D.; McCarthy, Thomas G.; Vuong, Nghia

    2011-01-01

    An assessment of a draft AIAA standard for flight dynamics model exchange, ANSI/AIAA S-119-2011, was conducted on behalf of NASA by a team from the NASA Engineering and Safety Center. The assessment included adding the capability of importing standard models into real-time simulation facilities at several NASA Centers as well as into analysis simulation tools. All participants were successful at importing two example models into their respective simulation frameworks by using existing software libraries or by writing new import tools. Deficiencies in the libraries and format documentation were identified and fixed; suggestions for improvements to the standard were provided to the AIAA. An innovative tool to generate C code directly from such a model was developed. Performance of the software libraries compared favorably with compiled code. As a result of this assessment, several NASA Centers can now import standard models directly into their simulations. NASA is considering adopting the now-published S-119 standard as an internal recommended practice.

  15. DIAPHANE: A portable radiation transport library for astrophysical applications

    NASA Astrophysics Data System (ADS)

    Reed, Darren S.; Dykes, Tim; Cabezón, Rubén; Gheller, Claudio; Mayer, Lucio

    2018-05-01

    One of the most computationally demanding aspects of the hydrodynamical modeling of astrophysical phenomena is the transport of energy by radiation or relativistic particles. Physical processes involving energy transport are ubiquitous and of capital importance in many scenarios ranging from planet formation to cosmic structure evolution, including explosive events like core-collapse supernovae or gamma-ray bursts. Moreover, the ability to model and hence understand these processes has often been limited by the approximations and incompleteness in the treatment of radiation and relativistic particles. The DIAPHANE project has focused on developing a portable and scalable library that handles the transport of radiation and particles (in particular neutrinos) independently of the underlying hydrodynamic code. In this work, we present the computational framework and the functionalities of the first version of the DIAPHANE library, which has been successfully ported to three different smoothed-particle hydrodynamics codes, GADGET2, GASOLINE and SPHYNX. We also present validation of different modules solving the equations of radiation and neutrino transport using different numerical schemes.

  16. Multiscale Universal Interface: A concurrent framework for coupling heterogeneous solvers

    NASA Astrophysics Data System (ADS)

    Tang, Yu-Hang; Kudo, Shuhei; Bian, Xin; Li, Zhen; Karniadakis, George Em

    2015-09-01

    Concurrently coupled numerical simulations using heterogeneous solvers are powerful tools for modeling multiscale phenomena. However, major modifications to existing codes are often required to enable such simulations, posing significant difficulties in practice. In this paper we present a C++ library, i.e. the Multiscale Universal Interface (MUI), which is capable of facilitating the coupling effort for a wide range of multiscale simulations. The library adopts a header-only form with minimal external dependency and hence can be easily dropped into existing codes. A data sampler concept is introduced, combined with a hybrid dynamic/static typing mechanism, to create an easily customizable framework for solver-independent data interpretation. The library integrates MPI MPMD support and an asynchronous communication protocol to handle inter-solver information exchange irrespective of the solvers' own MPI awareness. Template metaprogramming is heavily employed to simultaneously improve runtime performance and code flexibility. We validated the library by solving three different multiscale problems, which also serve to demonstrate the flexibility of the framework in handling heterogeneous models and solvers. In the first example, a Couette flow was simulated using two concurrently coupled Smoothed Particle Hydrodynamics (SPH) simulations of different spatial resolutions. In the second example, we coupled the deterministic SPH method with the stochastic Dissipative Particle Dynamics (DPD) method to study the effect of surface grafting on the hydrodynamics properties on the surface. In the third example, we consider conjugate heat transfer between a solid domain and a fluid domain by coupling the particle-based energy-conserving DPD (eDPD) method with the Finite Element Method (FEM).
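
    A conceptual Python sketch (not MUI's C++ API) of the data-sampler idea described above: one solver pushes point samples to the interface, and another fetches values at its own locations through a distance-weighted sampler.

import numpy as np

class Interface:
    def __init__(self):
        self.points, self.values = [], []

    def push(self, point, value):
        self.points.append(np.asarray(point, dtype=float))
        self.values.append(float(value))

    def fetch(self, point, radius=0.1):
        """Gaussian-weighted average of pushed samples near 'point'."""
        pts = np.array(self.points)
        vals = np.array(self.values)
        w = np.exp(-np.sum((pts - point) ** 2, axis=1) / (2 * radius ** 2))
        return float(np.sum(w * vals) / np.sum(w))

iface = Interface()
for x in np.linspace(0.0, 1.0, 11):          # solver A pushes its field u(x) = x**2
    iface.push((x, 0.0), x ** 2)
print(iface.fetch(np.array([0.5, 0.0])))     # solver B samples at its own point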

  17. Multiscale Universal Interface: A concurrent framework for coupling heterogeneous solvers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, Yu-Hang, E-mail: yuhang_tang@brown.edu; Kudo, Shuhei, E-mail: shuhei-kudo@outlook.jp; Bian, Xin, E-mail: xin_bian@brown.edu

    2015-09-15

    Concurrently coupled numerical simulations using heterogeneous solvers are powerful tools for modeling multiscale phenomena. However, major modifications to existing codes are often required to enable such simulations, posing significant difficulties in practice. In this paper we present a C++ library, i.e. the Multiscale Universal Interface (MUI), which is capable of facilitating the coupling effort for a wide range of multiscale simulations. The library adopts a header-only form with minimal external dependency and hence can be easily dropped into existing codes. A data sampler concept is introduced, combined with a hybrid dynamic/static typing mechanism, to create an easily customizable framework for solver-independent data interpretation. The library integrates MPI MPMD support and an asynchronous communication protocol to handle inter-solver information exchange irrespective of the solvers' own MPI awareness. Template metaprogramming is heavily employed to simultaneously improve runtime performance and code flexibility. We validated the library by solving three different multiscale problems, which also serve to demonstrate the flexibility of the framework in handling heterogeneous models and solvers. In the first example, a Couette flow was simulated using two concurrently coupled Smoothed Particle Hydrodynamics (SPH) simulations of different spatial resolutions. In the second example, we coupled the deterministic SPH method with the stochastic Dissipative Particle Dynamics (DPD) method to study the effect of surface grafting on the hydrodynamics properties on the surface. In the third example, we consider conjugate heat transfer between a solid domain and a fluid domain by coupling the particle-based energy-conserving DPD (eDPD) method with the Finite Element Method (FEM).

  18. AQUATOX Frequently Asked Questions

    EPA Pesticide Factsheets

    Capabilities, Installation, Source Code, Example Study Files, Biotic State Variables, Initial Conditions, Loadings, Volume, Sediments, Parameters, Libraries, Ecotoxicology, Waterbodies, Link to Watershed Models, Output, Metals, Troubleshooting

  19. LMC: Logarithmantic Monte Carlo

    NASA Astrophysics Data System (ADS)

    Mantz, Adam B.

    2017-06-01

    LMC is a Markov Chain Monte Carlo engine in Python that implements adaptive Metropolis-Hastings and slice sampling, as well as the affine-invariant method of Goodman & Weare, in a flexible framework. It can be used for simple problems, but the main use case is problems where expensive likelihood evaluations are provided by less flexible third-party software, which benefit from parallelization across many nodes at the sampling level. The parallel/adaptive methods use communication through MPI, or alternatively by writing/reading files, and mostly follow the approaches pioneered by CosmoMC (ascl:1106.025).
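
    A minimal Metropolis-Hastings sketch (generic, not LMC's own interface) for the situation described above, where an expensive log-likelihood is supplied by external software; a cheap Gaussian stands in for that call here.

import numpy as np

def log_post(theta):
    return -0.5 * np.sum(theta ** 2)          # placeholder for the external likelihood

rng = np.random.default_rng(0)
theta = np.zeros(2)
chain, step = [], 0.5
for _ in range(5000):
    proposal = theta + step * rng.normal(size=theta.size)
    if np.log(rng.uniform()) < log_post(proposal) - log_post(theta):
        theta = proposal                      # accept the move
    chain.append(theta)
print(np.mean(chain, axis=0), np.std(chain, axis=0))    # roughly 0 and 1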

  20. In search of an ethic of medical librarianship.

    PubMed Central

    Crawford, H

    1978-01-01

    Why is the literature on the ethics of librarianship so sparse? Some of the codes of ethics proposed or officially adopted during this century are examined, with an informal commentary on the reasons why they seem to have aroused so little sustained interest and discussion. Attention is directed particularly to library-user relationships and to some of the unique ethical situations in medical libraries. PMID:678701

  1. The Plotting Library http://astroplotlib.stsci.edu

    NASA Astrophysics Data System (ADS)

    Úbeda, L.

    2014-05-01

    astroplotlib is a multi-language astronomical library of plots. It is a collection of software templates that are useful for creating paper-quality figures. All current templates are coded in IDL; some are also available in Python and Mathematica. This free resource, supported at the Space Telescope Science Institute, allows users to download any plot and customize it to their own needs. It is also intended as an educational tool.

  2. Real-Time Ada Problem Study

    DTIC Science & Technology

    1989-03-24

    Specified Test Verification Matrix ... Test Generation Assistance ... Maintenance ... lack of intimate knowledge of how the runtime links to the compiler-generated code. Furthermore, the runtime must meet a rigorous set of tests to ensure... projects, and is not provided. Along with the library, a set of tests should be provided to verify the accuracy of the library after changes have been

  3. Automated and Assistive Tools for Accelerated Code migration of Scientific Computing on to Heterogeneous MultiCore Systems

    DTIC Science & Technology

    2017-04-13

    ... OmpSs: a basic algorithm on image processing applications, a mini application representative of an ocean modelling code, a parallel benchmark, and a communication-avoiding version of the QR algorithm. Further, several improvements to the OmpSs model were... movement; and a port of the dynamic load balancing library to OmpSs. Finally, several updates to the tools infrastructure were accomplished, including: an...

  4. Towards efficient data exchange and sharing for big-data driven materials science: metadata and data formats

    NASA Astrophysics Data System (ADS)

    Ghiringhelli, Luca M.; Carbogno, Christian; Levchenko, Sergey; Mohamed, Fawzi; Huhs, Georg; Lüders, Martin; Oliveira, Micael; Scheffler, Matthias

    2017-11-01

    With big-data driven materials research, the new paradigm of materials science, sharing and wide accessibility of data are becoming crucial aspects. Obviously, a prerequisite for data exchange and big-data analytics is standardization, which means using consistent and unique conventions for, e.g., units, zero base lines, and file formats. There are two main strategies to achieve this goal. One accepts the heterogeneous nature of the community, which comprises scientists from physics, chemistry, bio-physics, and materials science, by complying with the diverse ecosystem of computer codes, and thus develops "converters" for the input and output files of all important codes. These converters then translate the data of each code into a standardized, code-independent format. The other strategy is to provide standardized open libraries that code developers can adopt for shaping their inputs, outputs, and restart files directly into the same code-independent format. In this perspective paper, we present both strategies and argue that they can and should be regarded as complementary, if not even synergetic. The format and conventions presented here were agreed upon by two teams, the Electronic Structure Library (ESL) of the European Center for Atomic and Molecular Computations (CECAM) and the NOvel MAterials Discovery (NOMAD) Laboratory, a European Centre of Excellence (CoE). A key element of this work is the definition of hierarchical metadata describing state-of-the-art electronic-structure calculations.
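
    A hedged sketch of the "converter" strategy described above: code-specific output is mapped onto a common, code-independent set of metadata keys with agreed units. The keys and unit choices below are illustrative, not the actual NOMAD metadata schema.

EV_PER_HARTREE = 27.211386245988

def convert_code_a(raw):
    # hypothetical code that reports total energy in Hartree
    return {"total_energy_eV": raw["etot_ha"] * EV_PER_HARTREE,
            "xc_functional": raw["xc"]}

def convert_code_b(raw):
    # hypothetical code that already reports eV but uses different key names
    return {"total_energy_eV": raw["TotalEnergy"],
            "xc_functional": raw["exchange_correlation"]}

a = convert_code_a({"etot_ha": -1.1336, "xc": "PBE"})
b = convert_code_b({"TotalEnergy": -30.85, "exchange_correlation": "PBE"})
print(a, b)        # both records now share one code-independent format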

  5. QR Codes: Taking Collections Further

    ERIC Educational Resources Information Center

    Ahearn, Caitlin

    2014-01-01

    With some thought and direction, QR (quick response) codes are a great tool to use in school libraries to enhance access to information. From March through April 2013, Caitlin Ahearn interned at Sanborn Regional High School (SRHS) under the supervision of Pam Harland. As a result of Harland's un-Deweying of the nonfiction collection at SRHS,…

  6. ENDF-6 Formats Manual Data Formats and Procedures for the Evaluated Nuclear Data File ENDF/B-VI and ENDF/B-VII

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Herman, M.; Members of the Cross Sections Evaluation Working Group

    2009-06-01

    In December 2006, the Cross Section Evaluation Working Group (CSEWG) of the United States released the new ENDF/B-VII.0 library. This represented a considerable achievement, as it was the first major release since 1990, when ENDF/B-VI was made publicly available. The two libraries have been released in the same format, ENDF-6, which was originally developed for the ENDF/B-VI library. In the early stage of work on the VII-th generation of the library, CSEWG made the important decision to use the same format. This decision was adopted even though it was argued that it would be timely to modernize the formats, and several interesting ideas were proposed. After careful deliberation CSEWG concluded that actual implementation would require considerable resources to modify processing codes and to guarantee high quality of the files processed by these codes. In view of this, the idea of format modernization was postponed and the ENDF-6 format was adopted for the new ENDF/B-VII library. In several other areas related to ENDF we did our best to move beyond established tradition and achieve maximum modernization. Thus, the 'Big Paper' on ENDF/B-VII.0 was published, also in December 2006, as a Special Issue of Nuclear Data Sheets 107 (2006) 2931-3060. The new web retrieval and plotting system for ENDF-6 formatted data, Sigma, was developed by the NNDC and released in 2007. An extensive paper was published on the advanced tool for nuclear reaction data evaluation, EMPIRE, in 2007. This effort was complemented with the release of an updated set of ENDF checking codes in 2009. As the final item on this list, a major revision of the ENDF-6 Formats Manual was made. This work started in 2006 and came to fruition in 2009, as documented in the present report.

  7. TNSPackage: A Fortran2003 library designed for tensor network state methods

    NASA Astrophysics Data System (ADS)

    Dong, Shao-Jun; Liu, Wen-Yuan; Wang, Chao; Han, Yongjian; Guo, G.-C.; He, Lixin

    2018-07-01

    Recently, the tensor network states (TNS) methods have proven to be very powerful tools to investigate the strongly correlated many-particle physics in one and two dimensions. The implementation of TNS methods depends heavily on operations on tensors, including contraction, permutation, reshaping, SVD and so on. Unfortunately, the most popular computer languages for scientific computation, such as Fortran and C/C++, do not have a standard library for such operations, which makes the coding of TNS very tedious. We have developed a Fortran2003 package that includes all kinds of basic tensor operations designed for TNS. It is user-friendly and flexible for different forms of TNS, and therefore greatly simplifies the coding work for the TNS methods.
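
    A generic numpy sketch (not the TNSPackage API) of the basic tensor operations mentioned above: permutation, reshaping, an SVD split of a two-site tensor, and a contraction, which together form the workhorse of most TNS algorithms.

import numpy as np

d, chi = 2, 8                                     # physical / bond dimensions
theta = np.random.rand(chi, d, d, chi)            # two-site tensor

theta = theta.transpose(0, 1, 2, 3)               # permutation (trivial here)
mat = theta.reshape(chi * d, d * chi)             # reshape to a matrix
u, s, vh = np.linalg.svd(mat, full_matrices=False)

keep = min(chi, s.size)                           # keep at most chi singular values
A = u[:, :keep].reshape(chi, d, keep)             # new left site tensor
B = (np.diag(s[:keep]) @ vh[:keep]).reshape(keep, d, chi)

approx = np.tensordot(A, B, axes=([2], [0]))      # contract back over the bond
print(np.max(np.abs(approx - theta)))             # truncation error of the split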

  8. Gene discovery in Eimeria tenella by immunoscreening cDNA expression libraries of sporozoites and schizonts with chicken intestinal antibodies.

    PubMed

    Réfega, Susana; Girard-Misguich, Fabienne; Bourdieu, Christiane; Péry, Pierre; Labbé, Marie

    2003-04-02

    Specific antibodies were produced ex vivo from intestinal culture of Eimeria tenella infected chickens. The specificity of these intestinal antibodies was tested against different parasite stages. These antibodies were used to immunoscreen first generation schizont and sporozoite cDNA libraries permitting the identification of new E. tenella antigens. We obtained a total of 119 cDNA clones which were subjected to sequence analysis. The sequences coding for the proteins inducing local immune responses were compared with nucleotide or protein databases and with expressed sequence tags (ESTs) databases. We identified new Eimeria genes coding for heat shock proteins, a ribosomal protein, a pyruvate kinase and a pyridoxine kinase. Specific features of other sequences are discussed.

  9. High-Performance Design Patterns for Modern Fortran

    DOE PAGES

    Haveraaen, Magne; Morris, Karla; Rouson, Damian; ...

    2015-01-01

    This paper presents ideas for using coordinate-free numerics in modern Fortran to achieve code flexibility in the partial differential equation (PDE) domain. We also show how Fortran, over the last few decades, has changed to become a language well-suited for state-of-the-art software development. Fortran’s new coarray distributed data structure, the language’s class mechanism, and its side-effect-free, pure procedure capability provide the scaffolding on which we implement HPC software. These features empower compilers to organize parallel computations with efficient communication. We present some programming patterns that support asynchronous evaluation of expressions comprised of parallel operations on distributed data. We implemented these patterns using coarrays and the message passing interface (MPI). We compared the codes’ complexity and performance. The MPI code is much more complex and depends on external libraries. The MPI code on Cray hardware using the Cray compiler is 1.5–2 times faster than the coarray code on the same hardware. The Intel compiler implements coarrays atop Intel’s MPI library, with the result apparently being 2–2.5 times slower than manually coded MPI despite exhibiting nearly linear scaling efficiency. As compilers mature and further improvements to coarrays come in Fortran 2015, we expect this performance gap to narrow.

  10. PuLSE: Quality control and quantification of peptide sequences explored by phage display libraries.

    PubMed

    Shave, Steven; Mann, Stefan; Koszela, Joanna; Kerr, Alastair; Auer, Manfred

    2018-01-01

    The design of highly diverse phage display libraries is based on the assumption that DNA bases are incorporated at similar rates within the randomized sequence. As library complexity increases and the expected copy numbers of unique sequences decrease, the exploration of library space becomes sparser and the presence of truly random sequences becomes critical. We present the program PuLSE (Phage Library Sequence Evaluation) as a tool for assessing the randomness, and therefore the diversity, of phage display libraries. PuLSE runs on a collection of sequence reads in the fastq file format and generates tables profiling the library in terms of unique DNA sequence counts and positions, translated peptide sequences, and normalized 'expected' occurrences from base-to-residue codon frequencies. The output allows at-a-glance quantitative quality control of a phage library in terms of sequence coverage, both at the DNA base and the translated protein residue level, which has been missing from toolsets and the literature. The open source program PuLSE is available in two formats: a C++ source code package for compilation and integration into existing bioinformatics pipelines, and precompiled binaries for ease of use.
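
    A sketch of the 'expected occurrence' idea used for this kind of library QC, assuming an NNK randomization scheme for illustration (PuLSE's own normalization may differ): the probability of a given randomized DNA sequence is the product of the per-position base probabilities, which gives an expected read count to compare against the observed count.

# Per-position base probabilities for one NNK codon (N = any base, K = G or T)
NNK_PROBS = [
    {"A": 0.25, "C": 0.25, "G": 0.25, "T": 0.25},   # N
    {"A": 0.25, "C": 0.25, "G": 0.25, "T": 0.25},   # N
    {"G": 0.5, "T": 0.5},                           # K
]

def expected_count(dna, total_reads):
    p = 1.0
    for i, base in enumerate(dna):
        p *= NNK_PROBS[i % 3].get(base, 0.0)
    return total_reads * p

# A 7-mer randomized region = 21 bases; compare expected vs. observed reads.
print(expected_count("GCTAAGGCGTTGCCGATGGTG", total_reads=2_000_000))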

  11. Program/project management resource lists

    NASA Technical Reports Server (NTRS)

    1993-01-01

    The Program/Project Management Collection at NASA Headquarters Library is part of a larger initiative by the Training and Development Division, Code FT, NASA Headquarters. The collection is being developed to support the Program/Project Management Initiative which includes the training of NASA managers. These PPM Resource Lists have proven to be a useful method of informing NASA employees nationwide about the subject coverage of the library collection. All resources included on the lists are available at or through NASA Headquarters Library. NASA employees at other Centers may request listed books through interlibrary loan, and listed articles by contacting me by phone, mail, or e-mail.

  12. Generation of human Fab antibody libraries: PCR amplification and assembly of light- and heavy-chain coding sequences.

    PubMed

    Andris-Widhopf, Jennifer; Steinberger, Peter; Fuller, Roberta; Rader, Christoph; Barbas, Carlos F

    2011-09-01

    The development of therapeutic antibodies for use in the treatment of human diseases has long been a goal for many researchers in the antibody field. One way to obtain these antibodies is through phage-display libraries constructed from human lymphocytes. This protocol describes the construction of human Fab (fragment antigen binding) antibody libraries. In this method, the individual rearranged heavy- and light-chain variable regions are amplified separately and are linked through a series of overlap polymerase chain reaction (PCR) steps to give the final Fab products that are used for cloning.

  13. Library workers' personal beliefs about childhood vaccination and vaccination information provision.

    PubMed

    Keselman, Alla; Smith, Catherine Arnott; Hundal, Savreen

    2014-07-01

    This is a report on the impact of library workers' personal beliefs on provision of vaccination information. Nine public librarians were interviewed about a hypothetical scenario involving a patron who is concerned about possible vaccination-autism connections. The analysis employed thematic coding. Results suggested that while most participants supported childhood vaccination, tension between their personal views and neutrality impacted their ability to conduct the interaction. The neutrality stance, though consonant with professional guidelines, curtails librarians' ability to provide accurate health information. Outreach and communication between public and health sciences libraries can help librarians provide resources to address health controversies.

  14. Fission yield covariances for JEFF: A Bayesian Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Leray, Olivier; Rochman, Dimitri; Fleming, Michael; Sublet, Jean-Christophe; Koning, Arjan; Vasiliev, Alexander; Ferroukhi, Hakim

    2017-09-01

    The JEFF library does not contain fission yield covariances, but simply best estimates and uncertainties. This situation is not unique as all libraries are facing this deficiency, firstly due to the lack of a defined format. An alternative approach is to provide a set of random fission yields, themselves reflecting covariance information. In this work, these random files are obtained combining the information from the JEFF library (fission yields and uncertainties) and the theoretical knowledge from the GEF code. Examples of this method are presented for the main actinides together with their impacts on simple burn-up and decay heat calculations.

  15. MATH77 - A LIBRARY OF MATHEMATICAL SUBPROGRAMS FOR FORTRAN 77, RELEASE 4.0

    NASA Technical Reports Server (NTRS)

    Lawson, C. L.

    1994-01-01

    MATH77 is a high quality library of ANSI FORTRAN 77 subprograms implementing contemporary algorithms for the basic computational processes of science and engineering. The portability of MATH77 meets the needs of present-day scientists and engineers who typically use a variety of computing environments. Release 4.0 of MATH77 contains 454 user-callable and 136 lower-level subprograms. Usage of the user-callable subprograms is described in 69 sections of the 416-page users' manual. The topics covered by MATH77 are indicated by the following list of chapter titles in the users' manual: Mathematical Functions, Pseudo-random Number Generation, Linear Systems of Equations and Linear Least Squares, Matrix Eigenvalues and Eigenvectors, Matrix Vector Utilities, Nonlinear Equation Solving, Curve Fitting, Table Look-Up and Interpolation, Definite Integrals (Quadrature), Ordinary Differential Equations, Minimization, Polynomial Rootfinding, Finite Fourier Transforms, Special Arithmetic, Sorting, Library Utilities, Character-based Graphics, and Statistics. Besides subprograms that are adaptations of public domain software, MATH77 contains a number of unique packages developed by the authors of MATH77. Instances of the latter type include (1) adaptive quadrature, allowing for exceptional generality in multidimensional cases, (2) the ordinary differential equations solver used in spacecraft trajectory computation for JPL missions, (3) univariate and multivariate table look-up and interpolation, allowing for "ragged" tables, and providing error estimates, and (4) univariate and multivariate derivative-propagation arithmetic. MATH77 release 4.0 is a subroutine library which has been carefully designed to be usable on any computer system that supports the full ANSI standard FORTRAN 77 language. It has been successfully implemented on a CRAY Y/MP computer running UNICOS, a UNISYS 1100 computer running EXEC 8, a DEC VAX series computer running VMS, a Sun4 series computer running SunOS, a Hewlett-Packard 720 computer running HP-UX, a Macintosh computer running MacOS, and an IBM PC compatible computer running MS-DOS. Accompanying the library is a set of 196 "demo" drivers that exercise all of the user-callable subprograms. The FORTRAN source code for MATH77 comprises 109K lines of code in 375 files with a total size of 4.5Mb. The demo drivers comprise 11K lines of code and 418K. Forty-four percent of the lines of the library code and 29% of those in the demo code are comment lines. The standard distribution medium for MATH77 is a .25 inch streaming magnetic tape cartridge in UNIX tar format. It is also available on a 9-track 1600 BPI magnetic tape in VAX BACKUP format and a TK50 tape cartridge in VAX BACKUP format. An electronic copy of the documentation is included on the distribution media. Previous releases of MATH77 have been used over a number of years in a variety of JPL applications. MATH77 Release 4.0 was completed in 1992. MATH77 is a copyrighted work with all copyright vested in NASA.

  16. A computer program for processing impedance cardiographic data: Improving accuracy through user-interactive software

    NASA Technical Reports Server (NTRS)

    Cowings, Patricia S.; Naifeh, Karen; Thrasher, Chet

    1988-01-01

    This report contains the source code and documentation for a computer program used to process impedance cardiography data. The cardiodynamic measures derived from impedance cardiography are ventricular stroke volume, cardiac output, cardiac index and Heather index. The program digitizes data collected from the Minnesota Impedance Cardiograph, electrocardiography (ECG), and respiratory cycles and then stores these data on hard disk. It computes the cardiodynamic functions using interactive graphics and stores the means and standard deviations of each 15-sec data epoch on floppy disk. This software was designed on a Digital PRO380 microcomputer and used version 2.0 of P/OS, with (minimally) a 4-channel 16-bit analog/digital (A/D) converter. Applications software is written in FORTRAN 77, and uses Digital's Pro-Tool Kit Real Time Interface Library, CORE Graphic Library, and laboratory routines. Source code can be readily modified to accommodate alternative detection, A/D conversion and interactive graphics. The object code utilizing overlays and multitasking has a maximum of 50 Kbytes.

  17. A morphological comparison of the extraforaminal ligament between the cervical and thoracic regions.

    PubMed

    Nonthasaen, Pawaree; Nasu, Hisayo; Kagawa, Eiichiro; Akita, Keiichi

    2018-05-01

    The current study was conducted to clarify the morphology of the extraforaminal ligament (EFL) at the cervicothoracic junction and to compare the attachment of the EFL and the positional relation between the EFL and the spinal nerves, additionally to clarify the details within the connecting bundles at the cervicothoracic junction. The EFLs from the 4th cervical to the 4th thoracic vertebrae were dissected in 56 sides of 28 Japanese cadavers (11 males, 17 females). The range of age was 62.0-99.0 years. In addition, connecting bundles were analyzed by histological examination. Ventral to the spinal nerve, the capsulotransverse ligament (CTL), transforaminal ligament (TFL) and the ligament between the 7th cervical vertebra and the 1st rib were attached to the transverse process and rib. The EFL ventral to the 1st thoracic nerve was not observed in all sides. Dorsal to the spinal nerve, the anterior part of the superior costotransverse ligament (ASCL) and the ligament homologous to the ASCL were attached to the transverse process and rib. The superior radiating ligament (SRL) and the ligament homologous to the SRL were identified. The connecting bundles identified between the 7th cervical and the 1st thoracic nerve were histologically confirmed to consist of nerves and vessels. The EFLs at the cervicothoracic junction were found to be homologous. The connecting bundles were observed between the 7th cervical and the 1st thoracic nerve. Interestingly, the 1st thoracic level alone might be a unique level at the cervicothoracic junction.

  18. Nestin is essential for zebrafish brain and eye development through control of progenitor cell apoptosis.

    PubMed

    Chen, Hua-Ling; Yuh, Chiou-Hwa; Wu, Kenneth K

    2010-02-19

    Nestin is expressed in neural progenitor cells (NPC) of developing brain. Despite its wide use as an NPC marker, the function of nestin in embryo development is unclear. As nestin is conserved in zebrafish and its predicted sequence is clustered with the mammalian nestin orthologue, we used zebrafish as a model to investigate its role in embryogenesis. Injection of nestin morpholino (MO) into fertilized eggs induced time- and dose-dependent brain and eye developmental defects. Nestin morphants exhibited characteristic morphological changes including small head, small eyes and hydrocephalus. Histological examinations show reduced hind- and mid-brain size, dilated ventricle, poorly organized retina and underdeveloped lens. Injection of control nestin MO did not induce brain or eye changes. Nestin MO injection reduced expression of ascl1b (achaete-scute complex-like 1b), a marker of NPCs, without affecting its distribution. Nestin MO did not influence Elavl3/4 (Embryonic lethal, abnormal vision, Drosophila-like 3/4) (a neuronal marker), or otx2 (a midbrain neuronal marker), but severely perturbed cranial motor nerve development and axon distribution. To determine whether the developmental defects are due to excessive NPC apoptosis and/or reduced NPC proliferation, we analyzed apoptosis by TUNEL assay and acridine orange staining and proliferation by BrdU incorporation, pcna and mcm5 expressions. Excessive apoptosis was noted in hindbrain and midbrain cells. Apoptotic signals were colocalized with ascl1b. Proliferation markers were not significantly altered by nestin MO. These results suggest that nestin is essential for zebrafish brain and eye development probably through control of progenitor cell apoptosis.

  19. A resistive magnetohydrodynamics solver using modern C++ and the Boost library

    NASA Astrophysics Data System (ADS)

    Einkemmer, Lukas

    2016-09-01

    In this paper we describe the implementation of our C++ resistive magnetohydrodynamics solver. The framework developed facilitates the separation of the code implementing the specific numerical method and the physical model from the handling of boundary conditions and the management of the computational domain. In particular, this will allow us to use finite difference stencils which are only defined in the interior of the domain (the boundary conditions are handled automatically). We will discuss this and other design considerations and their impact on performance in some detail. In addition, we provide documentation of the code developed and demonstrate that a performance comparable to Fortran can be achieved, while still maintaining a maximum of code readability and extensibility. Catalogue identifier: AFAH_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AFAH_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 592774 No. of bytes in distributed program, including test data, etc.: 43771395 Distribution format: tar.gz Programming language: C++03. Computer: PC, HPC systems. Operating system: POSIX compatible (extensively tested on various Linux systems). In fact only the timing class requires POSIX routines; all other parts of the program can be run on any system where a C++ compiler, Boost, CVODE, and an implementation of BLAS are available. RAM: Hundreds of kilobytes to gigabytes (depending on the problem size) Classification: 19.10, 4.3. External routines: Boost, CVODE, either a BLAS library or Intel MKL Nature of problem: An approximate solution to the equations of resistive magnetohydrodynamics for a given initial value and given boundary conditions is computed. Solution method: The discretization is performed using a finite difference approximation in space and the CVODE library in time (which employs a scheme based on the backward differentiation formulas). Restrictions: We consider the 2.5 dimensional case; that is, the magnetic field and the velocity field are three dimensional but all quantities depend only on x and y (but not z). Unusual features: We provide an implementation in C++ using the Boost library that combines high level techniques (which greatly increases code maintainability and extensibility) with performance that is comparable to Fortran implementations. Running time: From seconds to weeks (depending on the problem size).
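
    The design point about stencils defined only in the interior, with boundary conditions handled automatically by the domain management layer, can be illustrated with a toy one-dimensional diffusion step in Python. This is a sketch of the idea only, not the C++/Boost implementation; the ghost-cell helper and parameter values are invented for illustration.

      import numpy as np

      def laplacian_interior(u, dx):
          """Second-difference stencil defined only on interior points."""
          return (u[:-2] - 2.0 * u[1:-1] + u[2:]) / dx**2

      def apply_boundary(u, kind="dirichlet", value=0.0):
          """Fill the ghost cells so the interior stencil never needs special cases."""
          if kind == "dirichlet":
              u[0], u[-1] = value, value
          elif kind == "neumann":          # zero-gradient boundary
              u[0], u[-1] = u[1], u[-2]
          return u

      n, dx, dt = 64, 1.0 / 63, 1e-4
      u = np.sin(np.linspace(0.0, np.pi, n))   # initial condition, end points act as ghost cells

      for _ in range(1000):                    # explicit diffusion step as a stand-in for the MHD update
          u = apply_boundary(u, "dirichlet", 0.0)
          u[1:-1] += dt * laplacian_interior(u, dx)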

  20. Accountable Information Flow for Java-Based Web Applications

    DTIC Science & Technology

    2010-01-01

    Only fragments of the report text are recoverable: Figure 2 ("The Swift architecture") shows a stack of web browser, HTTP web server, Java servlet framework, Swift server runtime, and runtime library; on the server, the Java application code links against Swift's server-side run-time library, which in turn sits on top of the standard Java servlet framework. (AFRL-RI-RS-TR-2010-9, Final Technical Report, January 2010.)

  1. Ultrasonic Imaging and Automated Flaw Detection System

    DTIC Science & Technology

    1986-03-01


  2. JSC document index

    NASA Technical Reports Server (NTRS)

    1988-01-01

    The Johnson Space Center (JSC) document index is intended to provide a single source listing of all published JSC-numbered documents, their authors, and the designated offices of prime responsibility (OPRs) by mail code at the time of publication. The index contains documents which have been received and processed by the JSC Technical Library as of January 13, 1988. Other JSC-numbered documents which are controlled but not available through the JSC Library are also listed.

  3. Optimizing exosomal RNA isolation for RNA-Seq analyses of archival sera specimens.

    PubMed

    Prendergast, Emily N; de Souza Fonseca, Marcos Abraão; Dezem, Felipe Segato; Lester, Jenny; Karlan, Beth Y; Noushmehr, Houtan; Lin, Xianzhi; Lawrenson, Kate

    2018-01-01

    Exosomes are endosome-derived membrane vesicles that contain proteins, lipids, and nucleic acids. The exosomal transcriptome mediates intercellular communication, and represents an understudied reservoir of novel biomarkers for human diseases. Next-generation sequencing enables complex quantitative characterization of exosomal RNAs from diverse sources. However, detailed protocols describing exosome purification for preparation of exosomal RNA-sequence (RNA-Seq) libraries are lacking. Here we compared methods for isolation of exosomes and extraction of exosomal RNA from human cell-free serum, as well as strategies for attaining equal representation of samples within pooled RNA-Seq libraries. We compared commercial precipitation with ultracentrifugation for exosome purification and confirmed the presence of exosomes via both transmission electron microscopy and immunoblotting. Exosomal RNA extraction was compared using four different RNA purification methods. We determined the minimal starting volume of serum required for exosome preparation and showed that high quality exosomal RNA can be isolated from sera stored for over a decade. Finally, RNA-Seq libraries were successfully prepared with exosomal RNAs extracted from human cell-free serum, cataloguing both coding and non-coding exosomal transcripts. This method provides researchers with strategic options to prepare RNA-Seq libraries and compare RNA-Seq data quantitatively from minimal volumes of fresh and archival human cell-free serum for disease biomarker discovery.

  4. DNA-Encoded Chemical Libraries: A Selection System Based on Endowing Organic Compounds with Amplifiable Information.

    PubMed

    Neri, Dario; Lerner, Richard A

    2018-06-20

    The discovery of organic ligands that bind specifically to proteins is a central problem in chemistry, biology, and the biomedical sciences. The encoding of individual organic molecules with distinctive DNA tags, serving as amplifiable identification bar codes, allows the construction and screening of combinatorial libraries of unprecedented size, thus facilitating the discovery of ligands to many different protein targets. Fundamentally, the approach links the power of genetics with that of chemical synthesis. After the initial description of DNA-encoded chemical libraries in 1992, several experimental embodiments of the technology have been reduced to practice. This review provides a historical account of important milestones in the development of DNA-encoded chemical libraries, a survey of relevant ongoing research activities, and a glimpse into the future.

  5. Fully-Implicit Navier-Stokes (FIN-S)

    NASA Technical Reports Server (NTRS)

    Kirk, Benjamin S.

    2010-01-01

    FIN-S is a SUPG finite element code for flow problems under active development at NASA Lyndon B. Johnson Space Center and within PECOS: a) The code is built on top of the libMesh parallel, adaptive finite element library. b) The initial implementation of the code targeted supersonic/hypersonic laminar calorically perfect gas flows & conjugate heat transfer. c) Initial extension to thermochemical nonequilibrium about 9 months ago. d) The technologies in FIN-S have been enhanced through a strongly collaborative research effort with Sandia National Labs.

  6. BADGER v1.0: A Fortran equation of state library

    NASA Astrophysics Data System (ADS)

    Heltemes, T. A.; Moses, G. A.

    2012-12-01

    The BADGER equation of state library was developed to enable inertial confinement fusion plasma codes to more accurately model plasmas in the high-density, low-temperature regime. The code has the capability to calculate 1- and 2-T plasmas using the Thomas-Fermi model and an individual electron accounting model. Ion equation of state data can be calculated using an ideal gas model or via a quotidian equation of state with scaled binding energies. Electron equation of state data can be calculated via the ideal gas model or with an adaptation of the screened hydrogenic model with ℓ-splitting. The ionization and equation of state calculations can be done in local thermodynamic equilibrium or in a non-LTE mode using a variant of the Busquet equivalent temperature method. The code was written as a stand-alone Fortran library for ease of implementation by external codes. EOS results for aluminum are presented that show good agreement with the SESAME library and ionization calculations show good agreement with the FLYCHK code. Program summary Program title: BADGERLIB v1.0 Catalogue identifier: AEND_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEND_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 41 480 No. of bytes in distributed program, including test data, etc.: 2 904 451 Distribution format: tar.gz Programming language: Fortran 90. Computer: 32- or 64-bit PC, or Mac. Operating system: Windows, Linux, MacOS X. RAM: 249.496 kB plus 195.630 kB per isotope record in memory Classification: 19.1, 19.7. Nature of problem: Equation of State (EOS) calculations are necessary for the accurate simulation of high energy density plasmas. Historically, most EOS codes used in these simulations have relied on an ideal gas model. This model is inadequate for low-temperature, high-density plasma conditions; the gaseous and liquid phases; and the solid phase. The BADGER code was developed to give more realistic EOS data in these regimes. Solution method: BADGER has multiple, user-selectable models to treat the ions, average-atom ionization state and electrons. Ion models are ideal gas and quotidian equation of state (QEOS), ionization models are Thomas-Fermi and individual accounting method (IEM) formulation of the screened hydrogenic model (SHM) with ℓ-splitting, electron ionization models are ideal gas and a Helmholtz free energy minimization method derived from the SHM. The default equation of state and ionization models are appropriate for plasmas in local thermodynamic equilibrium (LTE). The code can calculate non-LTE equation of state (EOS) and ionization data using a simplified form of the Busquet equivalent-temperature method. Restrictions: Physical data are only provided for elements Z=1 to Z=86. Multiple solid phases are not currently supported. Liquid, gas and plasma phases are combined into a generalized "fluid" phase. Unusual features: BADGER divorces the calculation of average-atom ionization from the electron equation of state model, allowing the user to select ionization and electron EOS models that are most appropriate to the simulation. The included ion ideal gas model uses ground-state nuclear spin data to differentiate between isotopes of a given element. Running time: Example provided only takes a few seconds to run.

  7. The Primary Care Electronic Library: RSS feeds using SNOMED-CT indexing for dynamic content delivery.

    PubMed

    Robinson, Judas; de Lusignan, Simon; Kostkova, Patty; Madge, Bruce; Marsh, A; Biniaris, C

    2006-01-01

    Rich Site Summary (RSS) feeds are a method for disseminating and syndicating the contents of a website using extensible mark-up language (XML). The Primary Care Electronic Library (PCEL) distributes recent additions to the site in the form of an RSS feed. When new resources are added to PCEL, they are manually assigned medical subject headings (MeSH terms), which are then automatically mapped to SNOMED-CT terms using the Unified Medical Language System (UMLS) Metathesaurus. The library is thus searchable using MeSH or SNOMED-CT. Our syndicate partner wished to have remote access to PCEL coronary heart disease (CHD) information resources based on SNOMED-CT search terms. To pilot the supply of relevant information resources in response to clinically coded requests, using RSS syndication for transmission between web servers. Our syndicate partner provided a list of CHD SNOMED-CT terms to its end-users, a list which was coded according to UMLS specifications. When the end-user requested relevant information resources, this request was relayed from our syndicate partner's web server to the PCEL web server. The relevant resources were retrieved from the PCEL MySQL database. This database is accessed using a server side scripting language (PHP), which enables the production of dynamic RSS feeds on the basis of Source Asserted Identifiers (CODEs) contained in UMLS. Retrieving resources using SNOMED-CT terms using syndication can be used to build a functioning application. The process from request to display of syndicated resources took less than one second. The results of the pilot illustrate that it is possible to exchange data between servers using RSS syndication. This method could be utilised dynamically to supply digital library resources to a clinical system with SNOMED-CT data used as the standard of reference.
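
    A minimal sketch of the request-to-feed idea, written in Python rather than the PHP used by PCEL, with hypothetical resource records and codes:

      from xml.sax.saxutils import escape

      # Hypothetical library records already mapped from MeSH to SNOMED-CT via UMLS.
      resources = [
          {"title": "CHD risk assessment guideline", "url": "http://example.org/chd-risk", "snomed": "53741008"},
          {"title": "Statin therapy review", "url": "http://example.org/statins", "snomed": "53741008"},
          {"title": "Asthma action plans", "url": "http://example.org/asthma", "snomed": "195967001"},
      ]

      def rss_for_code(code):
          """Build an RSS 2.0 feed of resources indexed with the requested SNOMED-CT code."""
          items = "".join(
              f"<item><title>{escape(r['title'])}</title><link>{escape(r['url'])}</link></item>"
              for r in resources if r["snomed"] == code
          )
          return ('<?xml version="1.0"?><rss version="2.0"><channel>'
                  f"<title>Resources for SNOMED-CT {code}</title>{items}</channel></rss>")

      print(rss_for_code("53741008"))  # a coronary-heart-disease style request from the syndicate partner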

  8. Initial Ada components evaluation

    NASA Technical Reports Server (NTRS)

    Moebes, Travis

    1989-01-01

    The SAIC has the responsibility for independent test and validation of the SSE. They have been using a mathematical functions library package implemented in Ada to test the SSE IV and V process. The library package consists of elementary mathematical functions and is both machine and accuracy independent. The SSE Ada components evaluation includes code complexity metrics based on Halstead's software science metrics and McCabe's measure of cyclomatic complexity. Halstead's metrics are based on the number of operators and operands on a logical unit of code and are compiled from the number of distinct operators, distinct operands, and total number of occurrences of operators and operands. These metrics give an indication of the physical size of a program in terms of operators and operands and are used diagnostically to point to potential problems. McCabe's Cyclomatic Complexity Metrics (CCM) are compiled from flow charts transformed to equivalent directed graphs. The CCM is a measure of the total number of linearly independent paths through the code's control structure. These metrics were computed for the Ada mathematical functions library using Software Automated Verification and Validation (SAVVAS), the SSE IV and V tool. A table with selected results was shown, indicating that most of these routines are of good quality. Thresholds for the Halstead measures indicate poor quality if the length metric exceeds 260 or difficulty is greater than 190. The McCabe CCM indicated a high quality of software products.
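
    Both metric families reduce to simple counts, as the sketch below shows; the operator/operand and flow-graph numbers are hypothetical inputs, and the Halstead thresholds are the ones quoted above (the cyclomatic threshold of 10 is a common convention, not taken from this report).

      import math

      def halstead(n1, n2, N1, N2):
          """n1/n2: distinct operators/operands; N1/N2: total occurrences."""
          length = N1 + N2
          vocabulary = n1 + n2
          volume = length * math.log2(vocabulary)
          difficulty = (n1 / 2.0) * (N2 / n2)
          return {"length": length, "volume": volume, "difficulty": difficulty}

      def cyclomatic(edges, nodes, connected_components=1):
          """McCabe CCM from the control-flow graph: E - N + 2P."""
          return edges - nodes + 2 * connected_components

      m = halstead(n1=18, n2=35, N1=120, N2=140)      # counts for one hypothetical routine
      ccm = cyclomatic(edges=22, nodes=18)
      flags = []
      if m["length"] > 260 or m["difficulty"] > 190:
          flags.append("Halstead threshold exceeded")
      if ccm > 10:                                     # commonly used CCM threshold (assumption)
          flags.append("high cyclomatic complexity")
      print(m, ccm, flags or "looks OK")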

  9. Interferenceless coded aperture correlation holography-a new technique for recording incoherent digital holograms without two-wave interference.

    PubMed

    Vijayakumar, A; Rosen, Joseph

    2017-06-12

    Recording digital holograms without wave interference simplifies the optical systems, increases their power efficiency and avoids complicated aligning procedures. We propose and demonstrate a new technique of digital hologram acquisition without two-wave interference. Incoherent light emitted from an object propagates through a random-like coded phase mask and recorded directly without interference by a digital camera. In the training stage of the system, a point spread hologram (PSH) is first recorded by modulating the light diffracted from a point object by the coded phase masks. At least two different masks should be used to record two different intensity distributions at all possible axial locations. The various recorded patterns at every axial location are superposed in the computer to obtain a complex valued PSH library cataloged to its axial location. Following the training stage, an object is placed within the axial boundaries of the PSH library and the light diffracted from the object is once again modulated by the same phase masks. The intensity patterns are recorded and superposed exactly as the PSH to yield a complex hologram of the object. The object information at any particular plane is reconstructed by a cross-correlation between the complex valued hologram and the appropriate element of the PSH library. The characteristics and the performance of the proposed system were compared with an equivalent regular imaging system.
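
    The reconstruction step, cross-correlating the object hologram with the PSH element recorded for the desired axial plane, can be sketched with a simple FFT-based correlation on synthetic complex arrays (an illustration of the principle, not the authors' processing code):

      import numpy as np

      def cross_correlate(hologram, psh):
          """Correlation via FFT: IFFT( FFT(hologram) * conj(FFT(psh)) )."""
          H = np.fft.fft2(hologram)
          P = np.fft.fft2(psh)
          return np.fft.fftshift(np.fft.ifft2(H * np.conj(P)))

      rng = np.random.default_rng(0)
      shape = (256, 256)

      # Synthetic complex-valued PSH for one axial plane and a complex object hologram,
      # standing in for recordings made through the same coded phase masks.
      psh_z = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
      object_hologram = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)

      reconstruction = np.abs(cross_correlate(object_hologram, psh_z))
      print(reconstruction.shape, reconstruction.max())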

  10. Development of a Run Time Math Library for the 1750A Airborne Microcomputer.

    DTIC Science & Technology

    1985-12-01


  11. VAC: Versatile Advection Code

    NASA Astrophysics Data System (ADS)

    Tóth, Gábor; Keppens, Rony

    2012-07-01

    The Versatile Advection Code (VAC) is a freely available general hydrodynamic and magnetohydrodynamic simulation software that works in 1, 2 or 3 dimensions on Cartesian and logically Cartesian grids. VAC runs on any Unix/Linux system with a Fortran 90 (or 77) compiler and Perl interpreter. VAC can run on parallel machines using either the Message Passing Interface (MPI) library or a High Performance Fortran (HPF) compiler.

  12. 25 CFR 36.104 - What are the requirements for heating, ventilation, cooling and lighting at dormitories?

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... building codes in the Bureau of Indian Affairs “School Facilities Design Handbook,” dated March 30, 2007... any proposal to change which building codes are included in the Bureau of Indian Affairs “School... inspect the Handbook at the Department of the Interior Library, Main Interior Building, 1849 C Street NW...

  13. 25 CFR 36.104 - What are the requirements for heating, ventilation, cooling and lighting at dormitories?

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... building codes in the Bureau of Indian Affairs “School Facilities Design Handbook,” dated March 30, 2007... any proposal to change which building codes are included in the Bureau of Indian Affairs “School... inspect the Handbook at the Department of the Interior Library, Main Interior Building, 1849 C Street NW...

  14. Three-Dimensional, Primitive-Variable Model for Solid-Fuel Ramjet Combustion.

    DTIC Science & Technology

    1984-02-01


  15. Phonics, Spelling, and Word Study: A Sensible Approach. The Bill Harp Professional Teachers Library Series.

    ERIC Educational Resources Information Center

    Glazer, Susan Mandel

    This concise book shares several sensible, logical, and meaningful approaches that guide young children to use the written coding system to read, spell, and make meaning of the English language. The book demonstrates that phonics, spelling, and word study are essential parts of literacy learning. After an introduction, chapters are:…

  16. Guide to the TANDEM System for the Modern Languages Department Tape Library: A Non-Technical Guide for Teachers.

    ERIC Educational Resources Information Center

    Hounsell, D.; And Others

    This guide for teachers to the tape indexing system (TANDEM) in use at the Modern Languages Department at Portsmouth Polytechnic focuses on tape classification, numbering, labeling, and shelving system procedures. The appendixes contain information on: (1) the classification system and related codes, (2) color and letter codes, (3) marking of tape…

  17. High-performance computational fluid dynamics: a custom-code approach

    NASA Astrophysics Data System (ADS)

    Fannon, James; Loiseau, Jean-Christophe; Valluri, Prashant; Bethune, Iain; Náraigh, Lennon Ó.

    2016-07-01

    We introduce a modified and simplified version of the pre-existing fully parallelized three-dimensional Navier-Stokes flow solver known as TPLS. We demonstrate how the simplified version can be used as a pedagogical tool for the study of computational fluid dynamics (CFD) and parallel computing. TPLS is at its heart a two-phase flow solver, and uses calls to a range of external libraries to accelerate its performance. However, in the present context we narrow the focus of the study to basic hydrodynamics and parallel computing techniques, and the code is therefore simplified and modified to simulate pressure-driven single-phase flow in a channel, using only relatively simple Fortran 90 code with MPI parallelization, but no calls to any other external libraries. The modified code is analysed in order to both validate its accuracy and investigate its scalability up to 1000 CPU cores. Simulations are performed for several benchmark cases in pressure-driven channel flow, including a turbulent simulation, wherein the turbulence is incorporated via the large-eddy simulation technique. The work may be of use to advanced undergraduate and graduate students as an introductory study in CFD, while also providing insight for those interested in more general aspects of high-performance computing.
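
    As a pedagogical companion to the description above, the pressure-driven single-phase channel flow (plane Poiseuille) target problem can be solved with a very small explicit finite-difference script. The serial Python sketch below is an analogue of the simplified solver, not TPLS itself, and the parameter values are arbitrary.

      import numpy as np

      # Laminar, pressure-driven flow between two plates: du/dt = -dp/dx / rho + nu * d2u/dy2
      ny, height = 65, 1.0
      nu, dpdx, rho = 1e-2, -1.0, 1.0
      dy = height / (ny - 1)
      dt = 0.4 * dy**2 / nu                      # explicit diffusion stability limit, with margin

      u = np.zeros(ny)                           # no-slip walls at y = 0 and y = height
      for _ in range(20000):
          u[1:-1] += dt * (-dpdx / rho + nu * (u[2:] - 2 * u[1:-1] + u[:-2]) / dy**2)

      y = np.linspace(0.0, height, ny)
      u_exact = -dpdx / (2 * rho * nu) * y * (height - y)   # analytical Poiseuille profile
      print("max error:", np.abs(u - u_exact).max())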

  18. RNA-seq reveals distinctive RNA profiles of small extracellular vesicles from different human liver cancer cell lines

    PubMed Central

    Berardocco, Martina; Radeghieri, Annalisa; Busatto, Sara; Gallorini, Marialucia; Raggi, Chiara; Gissi, Clarissa; D’Agnano, Igea; Bergese, Paolo; Felsani, Armando; Berardi, Anna C.

    2017-01-01

    Liver cancer (LC) is one of the most common cancers and represents the third highest cause of cancer-related deaths worldwide. Extracellular vesicle (EV) cargoes, which are selectively enriched in RNA, offer great promise for the diagnosis, prognosis and treatment of LC. Our study analyzed the RNA cargoes of EVs derived from 4 liver-cancer cell lines: HuH7, Hep3B, HepG2 (hepato-cellular carcinoma) and HuH6 (hepatoblastoma), generating two different sets of sequencing libraries for each. One library was size-selected for small RNAs and the other targeted the whole transcriptome. Here are reported genome-wide data on the expression levels of coding and non-coding transcripts, microRNAs, isomiRs and snoRNAs, providing the first comprehensive overview of the extracellular-vesicle RNA cargo released from LC cell lines. The EV-RNA expression profiles of the four liver cancer cell lines share a similar background, but cell-specific features clearly emerge showing the marked heterogeneity of the EV-cargo among the individual cell lines, evident both for the coding and non-coding RNA species. PMID:29137313

  19. Stochastic hyperfine interactions modeling library

    NASA Astrophysics Data System (ADS)

    Zacate, Matthew O.; Evenson, William E.

    2011-04-01

    The stochastic hyperfine interactions modeling library (SHIML) provides a set of routines to assist in the development and application of stochastic models of hyperfine interactions. The library provides routines written in the C programming language that (1) read a text description of a model for fluctuating hyperfine fields, (2) set up the Blume matrix, upon which the evolution operator of the system depends, and (3) find the eigenvalues and eigenvectors of the Blume matrix so that theoretical spectra of experimental techniques that measure hyperfine interactions can be calculated. The optimized vector and matrix operations of the BLAS and LAPACK libraries are utilized; however, there was a need to develop supplementary code to find an orthonormal set of (left and right) eigenvectors of complex, non-Hermitian matrices. In addition, example code is provided to illustrate the use of SHIML to generate perturbed angular correlation spectra for the special case of polycrystalline samples when anisotropy terms of higher order than A can be neglected. Program summary Program title: SHIML Catalogue identifier: AEIF_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEIF_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU GPL 3 No. of lines in distributed program, including test data, etc.: 8224 No. of bytes in distributed program, including test data, etc.: 312 348 Distribution format: tar.gz Programming language: C. Computer: Any. Operating system: LINUX, OS X. RAM: Varies. Classification: 7.4. External routines: TAPP [1], BLAS [2], a C-interface to BLAS [3], and LAPACK [4] Nature of problem: In condensed matter systems, hyperfine methods such as nuclear magnetic resonance (NMR), Mössbauer effect (ME), muon spin rotation (μSR), and perturbed angular correlation spectroscopy (PAC) measure electronic and magnetic structure within Angstroms of nuclear probes through the hyperfine interaction. When interactions fluctuate at rates comparable to the time scale of a hyperfine method, there is a loss in signal coherence, and spectra are damped. The degree of damping can be used to determine fluctuation rates, provided that theoretical expressions for spectra can be derived for relevant physical models of the fluctuations. SHIML provides routines to help researchers quickly develop code to incorporate stochastic models of fluctuating hyperfine interactions in calculations of hyperfine spectra. Solution method: Calculations are based on the method for modeling stochastic hyperfine interactions for PAC by Winkler and Gerdau [5]. The method is extended to include other hyperfine methods following the work of Dattagupta [6]. The code provides routines for reading model information from text files, allowing researchers to develop new models quickly without the need to modify computer code for each new model to be considered. Restrictions: In the present version of the code, only methods that measure the hyperfine interaction on one probe spin state, such as PAC, μSR, and NMR, are supported. Running time: Varies
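
    The supplementary step mentioned above, obtaining a biorthonormal set of left and right eigenvectors of a complex non-Hermitian matrix, can be illustrated generically with SciPy (a numerical sketch using a random stand-in for the Blume matrix, not the C routines in SHIML):

      import numpy as np
      from scipy.linalg import eig

      rng = np.random.default_rng(1)
      n = 4
      B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))  # stand-in "Blume" matrix

      # scipy.linalg.eig returns left (vl) and right (vr) eigenvectors in matching order.
      w, vl, vr = eig(B, left=True, right=True)

      # Biorthonormalize: rescale the right eigenvectors so that vl^H . vr = identity.
      scale = np.diag(vl.conj().T @ vr)
      vr = vr / scale

      check = vl.conj().T @ vr
      print(np.allclose(check, np.eye(n)))   # True: the left/right sets are biorthonormal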

  20. Testing of ENDF71x: A new ACE-formatted neutron data library based on ENDF/B-VII.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gardiner, S. J.; Conlin, J. L.; Kiedrowski, B. C.

    The ENDF71x library [1] is the most thoroughly tested set of ACE-format data tables ever released by the Nuclear Data Team at Los Alamos National Laboratory (LANL). It is based on ENDF/B-VII.1, the most recently released set of evaluated nuclear data files produced by the US Cross Section Evaluation Working Group (CSEWG). A variety of techniques were used to test and verify the ENDF71x library before its public release. These include the use of automated checking codes written by members of the Nuclear Data Team, visual inspections of key neutron data, MCNP6 calculations designed to test data for every included combination of isotope and temperature as comprehensively as possible, and direct comparisons between ENDF71x and previous ACE library releases. Visual inspection of some of the most important neutron data revealed energy balance problems and unphysical discontinuities in the cross sections for some nuclides. Doppler broadening of the total cross sections with increasing temperature was found to be qualitatively correct. Test calculations performed using MCNP prompted two modifications to the MCNP6 source code and also exposed bad secondary neutron yields for 231,233Pa that are present in both ENDF/B-VII.1 and ENDF/B-VII.0. A comparison of ENDF71x with its predecessor ACE library, ENDF70, showed that dramatic changes have been made in the neutron cross section data for a number of isotopes between ENDF/B-VII.0 and ENDF/B-VII.1. Based on the results of these verification tests and the validation tests performed by Kahler et al. [2], the ENDF71x library is recommended for use in all Monte Carlo applications. (authors)

  1. VizieR Online Data Catalog: A library of high-S/N optical spectra of FGKM stars (Yee+, 2017)

    NASA Astrophysics Data System (ADS)

    Yee, S. W.; Petigura, E. A.; von Braun, K.

    2017-09-01

    Classification of stars, by comparing their optical spectra to a few dozen spectral standards, has been a workhorse of observational astronomy for more than a century. Here, we extend this technique by compiling a library of optical spectra of 404 touchstone stars observed with Keck/HIRES by the California Planet Search. The spectra have high resolution (R~60000), high signal-to-noise ratio (S/N~150/pixel), and are registered onto a common wavelength scale. The library stars have properties derived from interferometry, asteroseismology, LTE spectral synthesis, and spectrophotometry. To address a lack of well-characterized late-K dwarfs in the literature, we measure stellar radii and temperatures for 23 nearby K dwarfs, using modeling of the spectral energy distribution and Gaia parallaxes. This library represents a uniform data set spanning the spectral types ~M5-F1 (Teff~3000-7000K, R*~0.1-16R{Sun}). We also present "Empirical SpecMatch" (SpecMatch-Emp), a tool for parameterizing unknown spectra by comparing them against our spectral library. For FGKM stars, SpecMatch-Emp achieves accuracies of 100K in effective temperature (Teff), 15% in stellar radius (R*), and 0.09dex in metallicity ([Fe/H]). Because the code relies on empirical spectra it performs particularly well for stars ~K4 and later, which are challenging to model with existing spectral synthesizers, reaching accuracies of 70K in Teff, 10% in R*, and 0.12dex in [Fe/H]. We also validate the performance of SpecMatch-Emp, finding it to be robust at lower spectral resolution and S/N, enabling the characterization of faint late-type stars. Both the library and stellar characterization code are publicly available. (2 data files).

  2. Precision Stellar Characterization of FGKM Stars using an Empirical Spectral Library

    NASA Astrophysics Data System (ADS)

    Yee, Samuel W.; Petigura, Erik A.; von Braun, Kaspar

    2017-02-01

    Classification of stars, by comparing their optical spectra to a few dozen spectral standards, has been a workhorse of observational astronomy for more than a century. Here, we extend this technique by compiling a library of optical spectra of 404 touchstone stars observed with Keck/HIRES by the California Planet Search. The spectra have high resolution (R ≈ 60,000), high signal-to-noise ratio (S/N ≈ 150/pixel), and are registered onto a common wavelength scale. The library stars have properties derived from interferometry, asteroseismology, LTE spectral synthesis, and spectrophotometry. To address a lack of well-characterized late-K dwarfs in the literature, we measure stellar radii and temperatures for 23 nearby K dwarfs, using modeling of the spectral energy distribution and Gaia parallaxes. This library represents a uniform data set spanning the spectral types ˜M5-F1 (T eff ≈ 3000-7000 K, R ⋆ ≈ 0.1-16 R ⊙). We also present “Empirical SpecMatch” (SpecMatch-Emp), a tool for parameterizing unknown spectra by comparing them against our spectral library. For FGKM stars, SpecMatch-Emp achieves accuracies of 100 K in effective temperature (T eff), 15% in stellar radius (R ⋆), and 0.09 dex in metallicity ([Fe/H]). Because the code relies on empirical spectra it performs particularly well for stars ˜K4 and later, which are challenging to model with existing spectral synthesizers, reaching accuracies of 70 K in T eff, 10% in R ⋆, and 0.12 dex in [Fe/H]. We also validate the performance of SpecMatch-Emp, finding it to be robust at lower spectral resolution and S/N, enabling the characterization of faint late-type stars. Both the library and stellar characterization code are publicly available.
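
    The matching step in SpecMatch-Emp, comparing an unknown spectrum against a library of spectra with known parameters, can be sketched as a chi-squared nearest-neighbour search. The spectra and parameter values below are synthetic, and the real code additionally shifts, broadens, and linearly combines the best-matching library spectra.

      import numpy as np

      rng = np.random.default_rng(3)
      n_pix, n_lib = 500, 50

      # Hypothetical library: spectra on a common wavelength scale plus known (Teff, R*, [Fe/H]).
      library_spectra = 1.0 + 0.01 * rng.standard_normal((n_lib, n_pix))
      library_params = np.column_stack([
          rng.uniform(3000, 7000, n_lib),    # Teff [K]
          rng.uniform(0.1, 16.0, n_lib),     # R* [Rsun]
          rng.uniform(-0.5, 0.5, n_lib),     # [Fe/H] [dex]
      ])

      target = library_spectra[17] + 0.005 * rng.standard_normal(n_pix)  # "unknown" spectrum

      chi2 = np.sum((library_spectra - target) ** 2, axis=1)             # per-library-star chi-squared
      best = np.argsort(chi2)[:5]                                        # five closest matches
      teff, radius, feh = library_params[best].mean(axis=0)              # average their parameters
      print(f"Teff ~ {teff:.0f} K, R* ~ {radius:.2f} Rsun, [Fe/H] ~ {feh:+.2f}")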

  3. A Review of Lightweight Thread Approaches for High Performance Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Castello, Adrian; Pena, Antonio J.; Seo, Sangmin

    High-level, directive-based solutions are becoming the programming models (PMs) of the multi/many-core architectures. Several solutions relying on operating system (OS) threads work well with a moderate number of cores. However, exascale systems will spawn hundreds of thousands of threads in order to exploit their massively parallel architectures, and thus conventional OS threads are too heavy for that purpose. Several lightweight thread (LWT) libraries have recently appeared offering lighter mechanisms to tackle massive concurrency. In order to examine the suitability of LWTs in high-level runtimes, we develop a set of microbenchmarks consisting of commonly found patterns in current parallel codes. Moreover, we study the semantics offered by some LWT libraries in order to expose the similarities between different LWT application programming interfaces. This study reveals that a reduced set of LWT functions can be sufficient to cover the common parallel code patterns and that those LWT libraries perform better than OS-thread-based solutions for the task and nested parallelism patterns that are becoming more popular with new architectures.

  4. Jannovar: a java library for exome annotation.

    PubMed

    Jäger, Marten; Wang, Kai; Bauer, Sebastian; Smedley, Damian; Krawitz, Peter; Robinson, Peter N

    2014-05-01

    Transcript-based annotation and pedigree analysis are two basic steps in the computational analysis of whole-exome sequencing experiments in genetic diagnostics and disease-gene discovery projects. Here, we present Jannovar, a stand-alone Java application as well as a Java library designed to be used in larger software frameworks for exome and genome analysis. Jannovar uses an interval tree to identify all transcripts affected by a given variant, and provides Human Genome Variation Society-compliant annotations both for variants affecting coding sequences and splice junctions as well as untranslated regions and noncoding RNA transcripts. Jannovar can also perform family-based pedigree analysis with Variant Call Format (VCF) files with data from members of a family segregating a Mendelian disorder. Using a desktop computer, Jannovar requires a few seconds to annotate a typical VCF file with exome data. Jannovar is freely available under the BSD2 license. Source code as well as the Java application and library file can be downloaded from http://compbio.charite.de (with tutorial) and https://github.com/charite/jannovar.
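
    The core lookup, finding all transcripts whose interval contains a variant position, can be sketched as follows. This is a Python stand-in using a sorted list and a linear scan over candidates; Jannovar's interval tree answers the same query more efficiently, and the transcript coordinates below are invented.

      from bisect import bisect_right

      # Hypothetical transcript intervals on one chromosome: (start, end, transcript_id).
      transcripts = sorted([(100, 500, "TX1"), (450, 900, "TX2"), (1200, 1800, "TX3")])
      starts = [t[0] for t in transcripts]

      def overlapping_transcripts(pos):
          """Return transcripts whose interval contains the variant position."""
          idx = bisect_right(starts, pos)                 # transcripts starting at or before pos
          return [tid for start, end, tid in transcripts[:idx] if start <= pos <= end]

      print(overlapping_transcripts(470))   # ['TX1', 'TX2']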

  5. Fuel Performance Calculations for FeCrAl Cladding in BWRs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    George, Nathan; Sweet, Ryan; Maldonado, G. Ivan

    2015-01-01

    This study expands upon previous neutronics analyses of the reactivity impact of alternate cladding concepts in boiling water reactor (BWR) cores and directs focus toward contrasting fuel performance characteristics of FeCrAl cladding against those of traditional Zircaloy. Using neutronics results from a modern version of the 3D nodal simulator NESTLE, linear power histories were generated and supplied to the BISON-CASL code for fuel performance evaluations. BISON-CASL (formerly Peregrine) expands on material libraries implemented in the BISON fuel performance code and the MOOSE framework by providing proprietary material data. By creating material libraries for Zircaloy and FeCrAl cladding, the thermomechanical behavior of the fuel rod (e.g., strains, centerline fuel temperature, and time to gap closure) was investigated and contrasted.

  6. One-Bead-Two-Compound Thioether Bridged Macrocyclic γ-AApeptide Screening Library against EphA2.

    PubMed

    Shi, Yan; Challa, Sridevi; Sang, Peng; She, Fengyu; Li, Chunpu; Gray, Geoffrey M; Nimmagadda, Alekhya; Teng, Peng; Odom, Timothy; Wang, Yan; van der Vaart, Arjan; Li, Qi; Cai, Jianfeng

    2017-11-22

    Identification of molecular ligands that recognize peptides or proteins is significant but poses a fundamental challenge in chemical biology and biomedical sciences. Development of cyclic peptidomimetic libraries is scarce, and thus discovery of cyclic peptidomimetic ligands for protein targets is rare. Herein we report the unprecedented one-bead-two-compound (OBTC) combinatorial library based on a novel class of macrocyclic peptidomimetics, γ-AApeptides. In the library, we utilized the coding peptide tags synthesized with Dde-protected α-amino acids, which were orthogonal to solid phase synthesis of γ-AApeptides. Employing the thioether linkage, the desired macrocyclic γ-AApeptides were found to be effective for ligand identification. Screening the library against the receptor tyrosine kinase EphA2 led to the discovery of one lead compound that tightly bound to EphA2 (Kd = 81 nM) and potently antagonized EphA2-mediated signaling. This new approach to macrocyclic peptidomimetic libraries may lead to a novel platform for biomacromolecular surface recognition and function modulation.

  7. Reducing codon redundancy and screening effort of combinatorial protein libraries created by saturation mutagenesis.

    PubMed

    Kille, Sabrina; Acevedo-Rocha, Carlos G; Parra, Loreto P; Zhang, Zhi-Gang; Opperman, Diederik J; Reetz, Manfred T; Acevedo, Juan Pablo

    2013-02-15

    Saturation mutagenesis probes define sections of the vast protein sequence space. However, even if randomization is limited this way, the combinatorial numbers problem is severe. Because diversity is created at the codon level, codon redundancy is a crucial factor determining the necessary effort for library screening. Additionally, due to the probabilistic nature of the sampling process, oversampling is required to ensure library completeness as well as a high probability to encounter all unique variants. Our trick employs a special mixture of three primers, creating a degeneracy of 22 unique codons coding for the 20 canonical amino acids. Therefore, codon redundancy and subsequent screening effort is significantly reduced, and a balanced distribution of codon per amino acid is achieved, as demonstrated exemplarily for a library of cyclohexanone monooxygenase. We show that this strategy is suitable for any saturation mutagenesis methodology to generate less-redundant libraries.
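
    The effect on screening effort can be made concrete with the oversampling relation commonly used in the saturation-mutagenesis literature, T = -V ln(1 - P) for V equiprobable codon combinations and confidence P. This relation is an assumption made for illustration; the paper's own statistical treatment may differ.

      import math

      def clones_to_screen(codons, positions, confidence=0.95):
          """Transformants needed so any given variant is sampled with the stated
          confidence, assuming equiprobable codons: T = -V * ln(1 - P)."""
          v = codons ** positions                 # codon-level library size
          return v, math.ceil(-v * math.log(1.0 - confidence))

      for name, codons in [("NNN (64 codons)", 64), ("NNK (32 codons)", 32), ("22-codon mixture", 22)]:
          v, t = clones_to_screen(codons, positions=2)
          print(f"{name}: {v} codon combinations -> screen about {t} clones for 95% coverage")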

  8. GASPRNG: GPU accelerated scalable parallel random number generator library

    NASA Astrophysics Data System (ADS)

    Gao, Shuang; Peterson, Gregory D.

    2013-04-01

    Graphics processors represent a promising technology for accelerating computational science applications. Many computational science applications require fast and scalable random number generation with good statistical properties, so they use the Scalable Parallel Random Number Generators library (SPRNG). We present the GPU Accelerated SPRNG library (GASPRNG) to accelerate SPRNG in GPU-based high performance computing systems. GASPRNG includes code for a host CPU and CUDA code for execution on NVIDIA graphics processing units (GPUs) along with a programming interface to support various usage models for pseudorandom numbers and computational science applications executing on the CPU, GPU, or both. This paper describes the implementation approach used to produce high performance and also describes how to use the programming interface. The programming interface allows a user to be able to use GASPRNG the same way as SPRNG on traditional serial or parallel computers as well as to develop tightly coupled programs executing primarily on the GPU. We also describe how to install GASPRNG and use it. To help illustrate linking with GASPRNG, various demonstration codes are included for the different usage models. GASPRNG on a single GPU shows up to 280x speedup over SPRNG on a single CPU core and is able to scale for larger systems in the same manner as SPRNG. Because GASPRNG generates identical streams of pseudorandom numbers as SPRNG, users can be confident about the quality of GASPRNG for scalable computational science applications. Catalogue identifier: AEOI_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEOI_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: UTK license. No. of lines in distributed program, including test data, etc.: 167900 No. of bytes in distributed program, including test data, etc.: 1422058 Distribution format: tar.gz Programming language: C and CUDA. Computer: Any PC or workstation with NVIDIA GPU (Tested on Fermi GTX480, Tesla C1060, Tesla M2070). Operating system: Linux with CUDA version 4.0 or later. Should also run on MacOS, Windows, or UNIX. Has the code been vectorized or parallelized?: Yes. Parallelized using MPI directives. RAM: 512 MB˜ 732 MB (main memory on host CPU, depending on the data type of random numbers.) / 512 MB (GPU global memory) Classification: 4.13, 6.5. Nature of problem: Many computational science applications are able to consume large numbers of random numbers. For example, Monte Carlo simulations are able to consume limitless random numbers for the computation as long as resources for the computing are supported. Moreover, parallel computational science applications require independent streams of random numbers to attain statistically significant results. The SPRNG library provides this capability, but at a significant computational cost. The GASPRNG library presented here accelerates the generators of independent streams of random numbers using graphical processing units (GPUs). Solution method: Multiple copies of random number generators in GPUs allow a computational science application to consume large numbers of random numbers from independent, parallel streams. GASPRNG is a random number generators library to allow a computational science application to employ multiple copies of random number generators to boost performance. Users can interface GASPRNG with software code executing on microprocessors and/or GPUs. 
Running time: The tests provided take a few minutes to run.
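
    The independent-streams usage model, one statistically independent generator per process or GPU worker, can be illustrated with NumPy's stream-spawning facility (a generic sketch of the concept, not the GASPRNG or SPRNG API):

      import numpy as np

      # One master seed spawns statistically independent child streams,
      # e.g. one per MPI rank or per GPU block.
      n_streams = 4
      children = np.random.SeedSequence(2024).spawn(n_streams)
      generators = [np.random.default_rng(s) for s in children]

      # Each stream runs its own Monte Carlo estimate of pi independently.
      estimates = []
      for rng in generators:
          xy = rng.random((100_000, 2))
          estimates.append(4.0 * np.mean(np.sum(xy**2, axis=1) <= 1.0))
      print(estimates)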

  9. Framework GRASP: a routine library for optimized processing of aerosol remote sensing observations

    NASA Astrophysics Data System (ADS)

    Fuertes, David; Torres, Benjamin; Dubovik, Oleg; Litvinov, Pavel; Lapyonok, Tatyana; Ducos, Fabrice; Aspetsberger, Michael; Federspiel, Christian

    We present the development of a framework for the Generalized Retrieval of Aerosol and Surface Properties (GRASP) developed by Dubovik et al. (2011). The framework is a source code project that attempts to strengthen the value of the GRASP inversion algorithm by transforming it into a library that can then be used by a group of customized application modules. The functions of the independent modules include managing the configuration of the code execution as well as preparation of the input and output. The framework provides a number of advantages in utilization of the code. First, it implements loading data into the core of the scientific code directly from memory without passing through intermediary files on disk. Second, the framework allows consecutive use of the inversion code without re-initiation of the core routine when new input is received. These features are essential for optimizing performance of data production in the processing of large observation sets, such as satellite images, by GRASP. Furthermore, the framework is a very convenient tool for further development, because this open-source platform is easily extended for implementing new features. For example, it could accommodate loading of raw data directly into the inversion code from a specific instrument not included in the default settings of the software. Finally, it will be demonstrated that, from the user's point of view, the framework provides a flexible, powerful and informative configuration system.

  10. MEMOPS: data modelling and automatic code generation.

    PubMed

    Fogh, Rasmus H; Boucher, Wayne; Ionides, John M C; Vranken, Wim F; Stevens, Tim J; Laue, Ernest D

    2010-03-25

    In recent years the amount of biological data has exploded to the point where much useful information can only be extracted by complex computational analyses. Such analyses are greatly facilitated by metadata standards, both in terms of the ability to compare data originating from different sources, and in terms of exchanging data in standard forms, e.g. when running processes on a distributed computing infrastructure. However, standards thrive on stability whereas science tends to constantly move, with new methods being developed and old ones modified. Therefore maintaining both metadata standards, and all the code that is required to make them useful, is a non-trivial problem. Memops is a framework that uses an abstract definition of the metadata (described in UML) to generate internal data structures and subroutine libraries for data access (application programming interfaces--APIs--currently in Python, C and Java) and data storage (in XML files or databases). For the individual project these libraries obviate the need for writing code for input parsing, validity checking or output. Memops also ensures that the code is always internally consistent, massively reducing the need for code reorganisation. Across a scientific domain a Memops-supported data model makes it easier to support complex standards that can capture all the data produced in a scientific area, share them among all programs in a complex software pipeline, and carry them forward to deposition in an archive. The principles behind the Memops generation code will be presented, along with example applications in Nuclear Magnetic Resonance (NMR) spectroscopy and structural biology.
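
    The generation idea, turning an abstract model description into data-access code with built-in validity checking, can be sketched in a few lines of Python. The toy model and generator below are invented for illustration; MEMOPS itself reads a UML description and emits Python, C and Java APIs.

      # A toy model description standing in for the abstract (UML) model that MEMOPS reads.
      model = {
          "Molecule": {"name": str, "molecularMass": float},
          "Residue": {"seqCode": int, "residueType": str},
      }

      def generate_class(name, attrs):
          """Emit source for a class whose constructor type-checks its values,
          mimicking the generated validity checking described above."""
          lines = [f"class {name}:", "    def __init__(self, **kw):"]
          for attr, typ in attrs.items():
              lines += [
                  f"        if not isinstance(kw['{attr}'], {typ.__name__}):",
                  f"            raise TypeError('{attr} must be {typ.__name__}')",
                  f"        self.{attr} = kw['{attr}']",
              ]
          return "\n".join(lines)

      source = "\n\n".join(generate_class(n, a) for n, a in model.items())
      exec(source)                                   # "compile" the generated data-access classes
      m = Molecule(name="lysozyme", molecularMass=14300.0)
      print(m.name, m.molecularMass)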

  11. Toward a New Evaluation of Neutron Standards

    DOE PAGES

    Carlson, Allan D.; Pronyaev, Vladimir G.; Capote, Roberto; ...

    2016-02-03

    Measurements related to neutron cross section standards and certain prompt neutron fission spectra are being evaluated. In addition to the standard cross sections, investigations of reference data that are not as well known as the standards are being considered. We discuss procedures and codes for performing this work. A number of libraries will use the results of this standards evaluation for new versions of their libraries. Most of these data have applications in neutron dosimetry.

  12. Covariance Applications in Criticality Safety, Light Water Reactor Analysis, and Spent Fuel Characterization

    DOE PAGES

    Williams, M. L.; Wiarda, D.; Ilas, G.; ...

    2014-06-15

    Recently, we processed a new covariance data library based on ENDF/B-VII.1 for the SCALE nuclear analysis code system. The multigroup covariance data are discussed here, along with testing and application results for critical benchmark experiments. Moreover, the cross section covariance library, along with covariances for fission product yields and decay data, is used to compute uncertainties in the decay heat produced by a burned reactor fuel assembly.
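
    Propagating such a covariance matrix to an integral response follows the usual sandwich rule, var(R) = S^T C S, for a sensitivity vector S; the sketch below uses hypothetical three-group numbers purely for illustration.

      import numpy as np

      # Hypothetical 3-group relative sensitivities of a response (e.g. k-eff) to one
      # cross section, and a 3x3 relative covariance matrix for that cross section.
      S = np.array([0.10, 0.25, 0.05])
      C = np.array([[4.0e-4, 1.0e-4, 0.0],
                    [1.0e-4, 9.0e-4, 2.0e-4],
                    [0.0,    2.0e-4, 2.5e-4]])

      variance = S @ C @ S              # sandwich rule: var(R) = S^T C S
      print(f"relative uncertainty in the response: {np.sqrt(variance):.2%}")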

  13. Common Pitfalls in F77 Code Conversion

    DTIC Science & Technology

    2003-02-01

    Recoverable fragments of the report: differences in one implementation versus another are the source of these errors rather than typography; it is good practice to comment out original source file lines; every I in the format field must be replaced with f followed by an appropriate floating-point format designator; and libraries are a major source of non-portability, with graphics libraries one of the chief culprits.

  14. MpTheory Java library: a multi-platform Java library for systems biology based on the Metabolic P theory.

    PubMed

    Marchetti, Luca; Manca, Vincenzo

    2015-04-15

    MpTheory Java library is an open-source project collecting a set of objects and algorithms for modeling observed dynamics by means of the Metabolic P (MP) theory, that is, a mathematical theory introduced in 2004 for modeling biological dynamics. By means of the library, it is possible to model biological systems both at continuous and at discrete time. Moreover, the library comprises a set of regression algorithms for inferring MP models starting from time series of observations. To enhance the modeling experience, besides a pure Java usage, the library can be directly used within the most popular computing environments, such as MATLAB, GNU Octave, Mathematica and R. The library is open-source and licensed under the GNU Lesser General Public License (LGPL) Version 3.0. Source code, binaries and complete documentation are available at http://mptheory.scienze.univr.it. Supplementary data are available at Bioinformatics online.

  15. Massively parallel data processing for quantitative total flow imaging with optical coherence microscopy and tomography

    NASA Astrophysics Data System (ADS)

    Sylwestrzak, Marcin; Szlag, Daniel; Marchand, Paul J.; Kumar, Ashwin S.; Lasser, Theo

    2017-08-01

    We present an application of massively parallel processing of quantitative flow measurements data acquired using spectral optical coherence microscopy (SOCM). The need for massive signal processing of these particular datasets has been a major hurdle for many applications based on SOCM. In view of this difficulty, we implemented and adapted quantitative total flow estimation algorithms on graphics processing units (GPU) and achieved a 150-fold reduction in processing time when compared to a former CPU implementation. As SOCM constitutes the microscopy counterpart to spectral optical coherence tomography (SOCT), the developed processing procedure can be applied to both imaging modalities. We present the developed DLL library integrated in MATLAB (with an example) and have included the source code for adaptations and future improvements. Catalogue identifier: AFBT_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AFBT_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU GPLv3 No. of lines in distributed program, including test data, etc.: 913552 No. of bytes in distributed program, including test data, etc.: 270876249 Distribution format: tar.gz Programming language: CUDA/C, MATLAB. Computer: Intel x64 CPU, GPU supporting CUDA technology. Operating system: 64-bit Windows 7 Professional. Has the code been vectorized or parallelized?: Yes, CPU code has been vectorized in MATLAB, CUDA code has been parallelized. RAM: Dependent on user parameters, typically between several gigabytes and several tens of gigabytes Classification: 6.5, 18. Nature of problem: Speed up of data processing in optical coherence microscopy Solution method: Utilization of GPU for massively parallel data processing Additional comments: Compiled DLL library with source code and documentation, example of utilization (MATLAB script with raw data) Running time: 1.8 s for one B-scan (150 × faster in comparison to the CPU data processing time)

  16. PCM Thermal Control of Nickel-Hydrogen Batteries

    DTIC Science & Technology

    1993-06-01

    Iridium, Global Star, etc. - The new satellite mobile telephone systems under development call for constellations of LEO satellites. A thermal problem unique... [table fragment: candidate phase-change materials with melting points, including 4,6-dimethylindan (C11H14), 2,2-dimethylpropane (C5H12), arsenic trichloride (AsCl3) and quinoline (C9H7N)] ...discharge are: SPACE-BASED RADAR - SBR is expected to have a surge power lasting about 9 minutes. IRIDIUM - The high traffic associated with

  17. astroplan: Observation planning package for astronomers

    NASA Astrophysics Data System (ADS)

    Morris, Brett M.; Tollerud, Erik; Sipocz, Brigitta; Deil, Christoph; Douglas, Stephanie T.; Berlanga Medina, Jazmin; Vyhmeister, Karl; Price-Whelan, Adrian M.; Jeschke, Eric

    2018-02-01

    astroplan is a flexible toolbox for observation planning and scheduling. It is powered by Astropy (ascl:1304.002); it works for Python beginners and new observers, and is powerful enough for observatories preparing nightly and long-term schedules as well. It calculates rise/set/meridian transit times, alt/az positions for targets at observatories anywhere on Earth, and offers built-in plotting convenience functions for standard observation planning plots (airmass, parallactic angle, sky maps). It can also determine the observability of sets of targets given an arbitrary set of constraints (e.g., altitude, airmass, moon separation/illumination, etc.).
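
    A minimal usage sketch along the lines of the astroplan documentation; the site, target and date are arbitrary examples, name resolution needs network access, and keyword defaults should be checked against the installed version.

      # Hedged astroplan usage sketch (site/target/date are arbitrary examples).
      from astropy.time import Time
      from astroplan import FixedTarget, Observer

      observer = Observer.at_site("subaru")            # named site from the Astropy registry
      target = FixedTarget.from_name("Vega")           # name resolution needs network access
      time = Time("2018-02-01 06:00:00")

      rise = observer.target_rise_time(time, target, which="next")
      altaz = observer.altaz(time, target)
      print(rise.iso, altaz.alt.deg, altaz.az.deg)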

  18. NDL-v2.0: A new version of the numerical differentiation library for parallel architectures

    NASA Astrophysics Data System (ADS)

    Hadjidoukas, P. E.; Angelikopoulos, P.; Voglis, C.; Papageorgiou, D. G.; Lagaris, I. E.

    2014-07-01

    We present a new version of the numerical differentiation library (NDL) used for the numerical estimation of first and second order partial derivatives of a function by finite differencing. In this version we have restructured the serial implementation of the code so as to achieve optimal task-based parallelization. The pure shared-memory parallelization of the library has been based on the lightweight OpenMP tasking model allowing for the full extraction of the available parallelism and efficient scheduling of multiple concurrent library calls. On multicore clusters, parallelism is exploited by means of TORC, an MPI-based multi-threaded tasking library. The new MPI implementation of NDL provides optimal performance in terms of function calls and, furthermore, supports asynchronous execution of multiple library calls within legacy MPI programs. In addition, a Python interface has been implemented for all cases, exporting the functionality of our library to sequential Python codes. Catalog identifier: AEDG_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEDG_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 63036 No. of bytes in distributed program, including test data, etc.: 801872 Distribution format: tar.gz Programming language: ANSI Fortran-77, ANSI C, Python. Computer: Distributed systems (clusters), shared memory systems. Operating system: Linux, Unix. Has the code been vectorized or parallelized?: Yes. RAM: The library uses O(N) internal storage, N being the dimension of the problem. It can use up to O(N2) internal storage for Hessian calculations, if a task throttling factor has not been set by the user. Classification: 4.9, 4.14, 6.5. Catalog identifier of previous version: AEDG_v1_0 Journal reference of previous version: Comput. Phys. Comm. 180(2009)1404 Does the new version supersede the previous version?: Yes Nature of problem: The numerical estimation of derivatives at several accuracy levels is a common requirement in many computational tasks, such as optimization, solution of nonlinear systems, and sensitivity analysis. For a large number of scientific and engineering applications, the underlying functions correspond to simulation codes for which analytical estimation of derivatives is difficult or almost impossible. A parallel implementation that exploits systems with multiple CPUs is very important for large scale and computationally expensive problems. Solution method: Finite differencing is used with a carefully chosen step that minimizes the sum of the truncation and round-off errors. The parallel versions employ both OpenMP and MPI libraries. Reasons for new version: The updated version was motivated by our endeavors to extend a parallel Bayesian uncertainty quantification framework [1], by incorporating higher order derivative information as in most state-of-the-art stochastic simulation methods such as Stochastic Newton MCMC [2] and Riemannian Manifold Hamiltonian MC [3]. The function evaluations are simulations with significant time-to-solution, which also varies with the input parameters such as in [1, 4]. The runtime of the N-body-type of problem changes considerably with the introduction of a longer cut-off between the bodies. 
In the first version of the library, the OpenMP-parallel subroutines spawn a new team of threads and distribute the function evaluations with a PARALLEL DO directive. This limits the functionality of the library as multiple concurrent calls require nested parallelism support from the OpenMP environment. Therefore, either their function evaluations will be serialized or processor oversubscription is likely to occur due to the increased number of OpenMP threads. In addition, the Hessian calculations include two explicit parallel regions that compute first the diagonal and then the off-diagonal elements of the array. Due to the barrier between the two regions, the parallelism of the calculations is not fully exploited. These issues have been addressed in the new version by first restructuring the serial code and then running the function evaluations in parallel using OpenMP tasks. Although the MPI-parallel implementation of the first version is capable of fully exploiting the task parallelism of the PNDL routines, it does not utilize the caching mechanism of the serial code and, therefore, performs some redundant function evaluations in the Hessian and Jacobian calculations. This can lead to: (a) higher execution times if the number of available processors is lower than the total number of tasks, and (b) significant energy consumption due to wasted processor cycles. Overcoming these drawbacks, which become critical as the time of a single function evaluation increases, was the primary goal of this new version. Due to the code restructure, the MPI-parallel implementation (and the OpenMP-parallel in accordance) avoids redundant calls, providing optimal performance in terms of the number of function evaluations. Another limitation of the library was that the library subroutines were collective and synchronous calls. In the new version, each MPI process can issue any number of subroutines for asynchronous execution. We introduce two library calls that provide global and local task synchronizations, similarly to the BARRIER and TASKWAIT directives of OpenMP. The new MPI-implementation is based on TORC, a new tasking library for multicore clusters [5-7]. TORC improves the portability of the software, as it relies exclusively on the POSIX-Threads and MPI programming interfaces. It allows MPI processes to utilize multiple worker threads, offering a hybrid programming and execution environment similar to MPI+OpenMP, in a completely transparent way. Finally, to further improve the usability of our software, a Python interface has been implemented on top of both the OpenMP and MPI versions of the library. This allows sequential Python codes to exploit shared and distributed memory systems. Summary of revisions: The revised code improves the performance of both parallel (OpenMP and MPI) implementations. The functionality and the user-interface of the MPI-parallel version have been extended to support the asynchronous execution of multiple PNDL calls, issued by one or multiple MPI processes. A new underlying tasking library increases portability and allows MPI processes to have multiple worker threads. For both implementations, an interface to the Python programming language has been added. Restrictions: The library uses only double precision arithmetic. The MPI implementation assumes the homogeneity of the execution environment provided by the operating system. 
    Specifically, the processes of a single MPI application must have identical address spaces so that a user function resides at the same virtual address in each process. In addition, address space layout randomization should not be used for the application. Unusual features: The software takes into account bound constraints, in the sense that only feasible points are used to evaluate the derivatives, and given the level of the desired accuracy, the proper formula is automatically employed. Running time: Running time depends on the function's complexity. The test run took 23 ms for the serial distribution, 25 ms for the OpenMP with 2 threads, 53 ms and 1.01 s for the MPI parallel distribution using 2 threads and 2 processes, respectively, with the yield-time for idle workers equal to 10 ms. References: [1] P. Angelikopoulos, C. Papadimitriou, P. Koumoutsakos, Bayesian uncertainty quantification and propagation in molecular dynamics simulations: a high performance computing framework, J. Chem. Phys. 137 (14). [2] H.P. Flath, L.C. Wilcox, V. Akcelik, J. Hill, B. van Bloemen Waanders, O. Ghattas, Fast algorithms for Bayesian uncertainty quantification in large-scale linear inverse problems based on low-rank partial Hessian approximations, SIAM J. Sci. Comput. 33 (1) (2011) 407-432. [3] M. Girolami, B. Calderhead, Riemann manifold Langevin and Hamiltonian Monte Carlo methods, J. R. Stat. Soc. Ser. B (Stat. Methodol.) 73 (2) (2011) 123-214. [4] P. Angelikopoulos, C. Papadimitriou, P. Koumoutsakos, Data driven, predictive molecular dynamics for nanoscale flow simulations under uncertainty, J. Phys. Chem. B 117 (47) (2013) 14808-14816. [5] P.E. Hadjidoukas, E. Lappas, V.V. Dimakopoulos, A runtime library for platform-independent task parallelism, in: PDP, IEEE, 2012, pp. 229-236. [6] C. Voglis, P.E. Hadjidoukas, D.G. Papageorgiou, I. Lagaris, A parallel hybrid optimization algorithm for fitting interatomic potentials, Appl. Soft Comput. 13 (12) (2013) 4481-4492. [7] P.E. Hadjidoukas, C. Voglis, V.V. Dimakopoulos, I. Lagaris, D.G. Papageorgiou, Supporting adaptive and irregular parallelism for non-linear numerical optimization, Appl. Math. Comput. 231 (2014) 544-559.
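
    The core numerical idea, central differencing with a step chosen to balance truncation against round-off error, can be sketched in plain Python; this illustrates the method only, not the PNDL interface, and the step-size rule shown is the textbook choice rather than the library's exact heuristic. Note that the per-component function evaluations are independent, which is what the OpenMP/MPI task parallelism described above exploits.

      # Central-difference gradient with a balanced step (illustrative, not PNDL).
      import numpy as np

      def gradient_central(f, x, rel_step=None):
          """Estimate the gradient of f at x by central differences.

          The step h ~ eps**(1/3) * max(|x_i|, 1) roughly balances the O(h^2)
          truncation error against the O(eps/h) round-off error.
          """
          x = np.asarray(x, dtype=float)
          if rel_step is None:
              rel_step = np.finfo(float).eps ** (1.0 / 3.0)
          g = np.empty_like(x)
          for i in range(x.size):
              h = rel_step * max(abs(x[i]), 1.0)
              e = np.zeros_like(x)
              e[i] = h
              # The two calls per component are independent and could run as tasks.
              g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
          return g

      rosen = lambda x: (1.0 - x[0])**2 + 100.0 * (x[1] - x[0]**2)**2
      print(gradient_central(rosen, [1.2, 1.0]))     # example: Rosenbrock gradient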

  19. American Academy of Home Care Medicine

    MedlinePlus

    ... Providers) House Call Coding 101 (Coders) Moving to Value and CPC+ Resource Library ... IAHnow.org to contact your senators. Visit IAHnow.org Education AAHCM offers educational resources such as webinars and ...

  20. A MATLAB based 3D modeling and inversion code for MT data

    NASA Astrophysics Data System (ADS)

    Singh, Arun; Dehiya, Rahul; Gupta, Pravin K.; Israil, M.

    2017-07-01

    The development of a MATLAB based computer code, AP3DMT, for modeling and inversion of 3D Magnetotelluric (MT) data is presented. The code comprises two independent components: grid generator code and modeling/inversion code. The grid generator code performs model discretization and acts as an interface by generating various I/O files. The inversion code performs core computations in modular form - forward modeling, data functionals, sensitivity computations and regularization. These modules can be readily extended to other similar inverse problems like Controlled-Source EM (CSEM). The modular structure of the code provides a framework useful for implementation of new applications and inversion algorithms. The use of MATLAB and its libraries makes it more compact and user friendly. The code has been validated on several published models. To demonstrate its versatility and capabilities the results of inversion for two complex models are presented.

  1. Construction of a Full-Length Enriched cDNA Library and Preliminary Analysis of Expressed Sequence Tags from Bengal Tiger Panthera tigris tigris

    PubMed Central

    Liu, Changqing; Liu, Dan; Guo, Yu; Lu, Taofeng; Li, Xiangchen; Zhang, Minghai; Ma, Jianzhang; Ma, Yuehui; Guan, Weijun

    2013-01-01

    In this study, a full-length enriched cDNA library was successfully constructed from Bengal tiger, Panthera tigris tigris, the most well-known wild animal. Total RNA was extracted from cultured Bengal tiger fibroblasts in vitro. The titers of primary and amplified libraries were 1.28 × 10^6 pfu/mL and 1.56 × 10^9 pfu/mL, respectively. The percentage of recombinants from the unamplified library was 90.2% and the average length of exogenous inserts was 0.98 kb. A total of 212 individual ESTs with sizes ranging from 356 to 1108 bps were then analyzed. The BLASTX score revealed that 48.1% of the sequences were classified as a strong match, 45.3% as nominal and 6.6% as a weak match. Among the ESTs with known putative function, 26.4% ESTs were found to be related to all kinds of metabolisms, 19.3% ESTs to information storage and processing, 11.3% ESTs to posttranslational modification, protein turnover, chaperones, 11.3% ESTs to transport, 9.9% ESTs to signal transducer/cell communication, 9.0% ESTs to structure protein, 3.8% ESTs to cell cycle, and only 6.6% ESTs classified as novel genes. By EST sequencing, a full-length gene coding for ferritin was identified and characterized. The recombinant plasmid pET32a-TAT-Ferritin was constructed, coding for the TAT-Ferritin fusion protein with two 6× His-tags at the N- and C-termini. After BCA assay, the concentration of soluble Trx-TAT-Ferritin recombinant protein was 2.32 ± 0.12 mg/mL. These results demonstrated that the reliability and representativeness of the cDNA library met the requirements of a standard cDNA library. This library provided a useful platform for the functional genome and transcriptome research of Bengal tigers. PMID:23708105

  2. Construction of a full-length enriched cDNA library and preliminary analysis of expressed sequence tags from Bengal Tiger Panthera tigris tigris.

    PubMed

    Liu, Changqing; Liu, Dan; Guo, Yu; Lu, Taofeng; Li, Xiangchen; Zhang, Minghai; Ma, Jianzhang; Ma, Yuehui; Guan, Weijun

    2013-05-24

    In this study, a full-length enriched cDNA library was successfully constructed from Bengal tiger, Panthera tigris tigris, the most well-known wild animal. Total RNA was extracted from cultured Bengal tiger fibroblasts in vitro. The titers of primary and amplified libraries were 1.28 × 10^6 pfu/mL and 1.56 × 10^9 pfu/mL, respectively. The percentage of recombinants from the unamplified library was 90.2% and the average length of exogenous inserts was 0.98 kb. A total of 212 individual ESTs with sizes ranging from 356 to 1108 bps were then analyzed. The BLASTX score revealed that 48.1% of the sequences were classified as a strong match, 45.3% as nominal and 6.6% as a weak match. Among the ESTs with known putative function, 26.4% ESTs were found to be related to all kinds of metabolisms, 19.3% ESTs to information storage and processing, 11.3% ESTs to posttranslational modification, protein turnover, chaperones, 11.3% ESTs to transport, 9.9% ESTs to signal transducer/cell communication, 9.0% ESTs to structure protein, 3.8% ESTs to cell cycle, and only 6.6% ESTs classified as novel genes. By EST sequencing, a full-length gene coding for ferritin was identified and characterized. The recombinant plasmid pET32a-TAT-Ferritin was constructed, coding for the TAT-Ferritin fusion protein with two 6× His-tags at the N- and C-termini. After BCA assay, the concentration of soluble Trx-TAT-Ferritin recombinant protein was 2.32 ± 0.12 mg/mL. These results demonstrated that the reliability and representativeness of the cDNA library met the requirements of a standard cDNA library. This library provided a useful platform for the functional genome and transcriptome research of Bengal tigers.

  3. The tensor network theory library

    NASA Astrophysics Data System (ADS)

    Al-Assam, S.; Clark, S. R.; Jaksch, D.

    2017-09-01

    In this technical paper we introduce the tensor network theory (TNT) library—an open-source software project aimed at providing a platform for rapidly developing robust, easy to use and highly optimised code for TNT calculations. The objectives of this paper are (i) to give an overview of the structure of TNT library, and (ii) to help scientists decide whether to use the TNT library in their research. We show how to employ the TNT routines by giving examples of ground-state and dynamical calculations of one-dimensional bosonic lattice system. We also discuss different options for gaining access to the software available at www.tensornetworktheory.org.

  4. Cloning and characterization of a novel α-amylase from a fecal microbial metagenome.

    PubMed

    Xu, Bo; Yang, Fuya; Xiong, Caiyun; Li, Junjun; Tang, Xianghua; Zhou, Junpei; Xie, Zhenrong; Ding, Junmei; Yang, Yunjuan; Huang, Zunxi

    2014-04-01

    To isolate novel and useful microbial enzymes from uncultured gastrointestinal microorganisms, a fecal microbial metagenomic library of the pygmy loris was constructed. The library was screened for amylolytic activity, and 8 of 50,000 recombinant clones showed amylolytic activity. Subcloning and sequence analysis of a positive clone led to the identification of a novel gene (amyPL) coding for α-amylase. AmyPL was expressed in Escherichia coli BL21 (DE3) and the purified AmyPL was enzymatically characterized. This study is the first to report the molecular and biochemical characterization of a novel α-amylase from a gastrointestinal metagenomic library.

  5. Library workers' personal beliefs about childhood vaccination and vaccination information provision*

    PubMed Central

    Keselman, Alla; Smith, Catherine Arnott; Hundal, Savreen

    2014-01-01

    This is a report on the impact of library workers' personal beliefs on provision of vaccination information. Nine public librarians were interviewed about a hypothetical scenario involving a patron who is concerned about possible vaccination-autism connections. The analysis employed thematic coding. Results suggested that while most participants supported childhood vaccination, tension between their personal views and neutrality impacted their ability to conduct the interaction. The neutrality stance, though consonant with professional guidelines, curtails librarians' ability to provide accurate health information. Outreach and communication between public and health sciences libraries can help librarians provide resources to address health controversies. PMID:25031563

  6. Mining Software Usage with the Automatic Library Tracking Database (ALTD)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hadri, Bilel; Fahey, Mark R

    2013-01-01

    Tracking software usage is important for HPC centers, computer vendors, code developers and funding agencies to provide more efficient and targeted software support, and to forecast needs and guide HPC software effort towards the Exascale era. However, accurately tracking software usage on HPC systems has been a challenging task. In this paper, we present a tool called Automatic Library Tracking Database (ALTD) that has been developed and put in production on several Cray systems. The ALTD infrastructure prototype automatically and transparently stores information about libraries linked into an application at compilation time and also the executables launched in a batch job. We will illustrate the usage of libraries, compilers and third party software applications on a system managed by the National Institute for Computational Sciences.

  7. The NJOY Nuclear Data Processing System, Version 2016

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Macfarlane, Robert; Muir, Douglas W.; Boicourt, R. M.

    The NJOY Nuclear Data Processing System, version 2016, is a comprehensive computer code package for producing pointwise and multigroup cross sections and related quantities from evaluated nuclear data in the ENDF-4 through ENDF-6 legacy card-image formats. NJOY works with evaluated files for incident neutrons, photons, and charged particles, producing libraries for a wide variety of particle transport and reactor analysis codes.

  8. EXODUS II: A finite element data model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schoof, L.A.; Yarberry, V.R.

    1994-09-01

    EXODUS II is a model developed to store and retrieve data for finite element analyses. It is used for preprocessing (problem definition), postprocessing (results visualization), as well as code to code data transfer. An EXODUS II data file is a random access, machine independent, binary file that is written and read via C, C++, or Fortran library routines which comprise the Application Programming Interface (API).

  9. Domain Wall Fermion Inverter on Pentium 4

    NASA Astrophysics Data System (ADS)

    Pochinsky, Andrew

    2005-03-01

    A highly optimized domain wall fermion inverter has been developed as part of the SciDAC lattice initiative. By designing the code to minimize memory bus traffic, it achieves high cache reuse and performance in excess of 2 GFlops for out of L2 cache problem sizes on a GigE cluster with 2.66 GHz Xeon processors. The code uses the SciDAC QMP communication library.

  10. PROTEUS-SN User Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shemon, Emily R.; Smith, Micheal A.; Lee, Changho

    2016-02-16

    PROTEUS-SN is a three-dimensional, highly scalable, high-fidelity neutron transport code developed at Argonne National Laboratory. The code is applicable to all spectrum reactor transport calculations, particularly those in which a high degree of fidelity is needed either to represent spatial detail or to resolve solution gradients. PROTEUS-SN solves the second order formulation of the transport equation using the continuous Galerkin finite element method in space, the discrete ordinates approximation in angle, and the multigroup approximation in energy. PROTEUS-SN’s parallel methodology permits the efficient decomposition of the problem by both space and angle, permitting large problems to run efficiently on hundreds of thousands of cores. PROTEUS-SN can also be used in serial or on smaller compute clusters (10’s to 100’s of cores) for smaller homogenized problems, although it is generally more computationally expensive than traditional homogenized methodology codes. PROTEUS-SN has been used to model partially homogenized systems, where regions of interest are represented explicitly and other regions are homogenized to reduce the problem size and required computational resources. PROTEUS-SN solves forward and adjoint eigenvalue problems and permits both neutron upscattering and downscattering. An adiabatic kinetics option has recently been included for performing simple time-dependent calculations in addition to standard steady state calculations. PROTEUS-SN handles void and reflective boundary conditions. Multigroup cross sections can be generated externally using the MC2-3 fast reactor multigroup cross section generation code or internally using the cross section application programming interface (API) which can treat the subgroup or resonance table libraries. PROTEUS-SN is written in Fortran 90 and also includes C preprocessor definitions. The code links against the PETSc, METIS, HDF5, and MPICH libraries. It optionally links against the MOAB library and is a part of the SHARP multi-physics suite for coupled multi-physics analysis of nuclear reactors. This user manual describes how to set up a neutron transport simulation with the PROTEUS-SN code. A companion methodology manual describes the theory and algorithms within PROTEUS-SN.

  11. LIBMAKER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2015-08-01

    Version 00 COG LibMaker contains various utilities to convert common data formats into a format usable by the COG - Multi-particle Monte Carlo Code System package (C00777MNYCP01). Utilities included: ACEtoCOG - ACE formatted neutron data: Currently ENDFB7R0.BNL, ENDFB7R1.BNL, JEFF3.1, JEFF3.1.1, JEFF3.1.2, MCNP.50c, MCNP.51c, MCNP.55c, MCNP.66c, and MCNP.70c. ACEUtoCOG - ACEU formatted photonuclear data: Currently PN.MCNP.30c and PN.MCNP.70u. ACTLtoCOG - Creates a COG library from ENDL formatted activation data. EDDLtoCOG - Creates a COG library from ENDL formatted LLNL deuteron data. ENDLtoCOG - Creates a COG library from ENDL formatted LLNL neutron data. EPDLtoCOG - Creates a COG library from ENDL formatted LLNL photon data. LEX - Creates a COG dictionary file. SAB.ACEtoCOG - Creates a COG library from ACE formatted S(a,b) data. SABtoCOG - Creates a COG library from ENDF6 formatted S(a,b) data. URRtoCOG - Creates a COG library from ACE formatted probability table data. This package also includes library checking and bit swapping capability.

  12. PV_LIB Toolbox v. 1.3

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2015-12-09

    PV_LIB comprises a library of MATLAB code for modeling photovoltaic (PV) systems. Included are functions to compute solar position and to estimate irradiance in the PV system's plane of array, cell temperature, PV module electrical output, and conversion from DC to AC power. Also included are functions that aid in determining parameters for module performance models from module characterization testing. PV_LIB is open source code primarily intended for research and academic purposes. All algorithms are documented in openly available literature with the appropriate references included in comments within the code.

  13. An Architecture for Coexistence with Multiple Users in Frequency Hopping Cognitive Radio Networks

    DTIC Science & Technology

    2013-03-01

    the base WARP system, a custom IP core written in VHDL, and the Virtex IV’s embedded PowerPC core with C code to implement the radio and hopset...shown in Appendix C as Figure C.2. All VHDL code necessary to implement this IP core is included in Appendix G. Figure 3.19: FPGA bus structure...subsystem functionality. A total of 1,430 lines of VHDL code were implemented for this research. library ieee; use ieee.std_logic_1164.all; use

  14. PELEC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2017-05-17

    PeleC is an adaptive-mesh compressible hydrodynamics code for reacting flows. It solves the compressible Navier-Stokes equations with multispecies transport in a block-structured framework. The resulting algorithm is well suited for flows with localized resolution requirements and robust to discontinuities. User-controllable refinement criteria have the potential to result in extremely small numerical dissipation and dispersion, making this code appropriate for both research and applied usage. The code is built on the AMReX library, which facilitates hierarchical parallelism and manages distributed-memory parallelism. PeleC algorithms are implemented to express shared-memory parallelism.

  15. Arctic Ice Dynamics Joint Experiment 1975-1976. Physical Oceanography Data Report, Salinity, Temperature and Depth Data, Camp Blue Fox. Volume II.

    DTIC Science & Technology

    1980-02-01

    [garbled OCR of oceanographic data tables] ...Attn: Code 428AR ... Attn: Code 420 ... Director, Naval Research Laboratory, Washington, D.C. 20375, Attn: Library, Code 2620 ... U.S. Naval Research

  16. PARAVT: Parallel Voronoi tessellation code

    NASA Astrophysics Data System (ADS)

    González, R. E.

    2016-10-01

    In this study, we present a new open-source code for massively parallel computation of Voronoi tessellations (VT hereafter) in large data sets. The code is aimed at astrophysical applications, where VT densities and neighbors are widely used. There are several serial Voronoi tessellation codes; however, no open-source parallel implementation is available to handle the large number of particles/galaxies in current N-body simulations and sky surveys. Parallelization is implemented with MPI, and the VT is computed using the Qhull library. Domain decomposition takes into account consistent boundary computation between tasks, and includes periodic conditions. In addition, the code computes the neighbor list, Voronoi density, Voronoi cell volume, density gradient for each particle, and densities on a regular grid. Code implementation and user guide are publicly available at https://github.com/regonzar/paravt.
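
    For orientation, the serial building blocks (cell volumes, hence densities, and neighbour lists) can be reproduced with SciPy's Qhull bindings; the sketch below is not the PARAVT code and ignores the MPI domain decomposition and periodic boundaries described above.

      # Serial Voronoi densities and neighbour lists with SciPy (not PARAVT itself).
      import numpy as np
      from scipy.spatial import ConvexHull, Voronoi

      points = np.random.rand(500, 3)                  # toy particle positions
      vor = Voronoi(points)

      # Cell volume per particle; the local density can be taken as 1/volume.
      volumes = np.full(len(points), np.nan)
      for i, region_id in enumerate(vor.point_region):
          region = vor.regions[region_id]
          if region and -1 not in region:              # skip unbounded boundary cells
              volumes[i] = ConvexHull(vor.vertices[region]).volume

      # Natural-neighbour lists from the ridges shared between adjacent cells.
      neighbours = [[] for _ in range(len(points))]
      for a, b in vor.ridge_points:
          neighbours[a].append(b)
          neighbours[b].append(a)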

  17. Los Alamos radiation transport code system on desktop computing platforms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Briesmeister, J.F.; Brinkley, F.W.; Clark, B.A.

    The Los Alamos Radiation Transport Code System (LARTCS) consists of state-of-the-art Monte Carlo and discrete ordinates transport codes and data libraries. These codes were originally developed many years ago and have undergone continual improvement. With a large initial effort and continued vigilance, the codes are easily portable from one type of hardware to another. The performance of scientific workstations (SWS) has evolved to the point that such platforms can be used routinely to perform sophisticated radiation transport calculations. As the personal computer (PC) performance approaches that of the SWS, the hardware options for desktop radiation transport calculations expand considerably. The current status of the radiation transport codes within the LARTCS is described: MCNP, SABRINA, LAHET, ONEDANT, TWODANT, TWOHEX, and ONELD. Specifically, the authors discuss hardware systems on which the codes run and present code performance comparisons for various machines.

  18. JSPAM: A restricted three-body code for simulating interacting galaxies

    NASA Astrophysics Data System (ADS)

    Wallin, J. F.; Holincheck, A. J.; Harvey, A.

    2016-07-01

    Restricted three-body codes have a proven ability to recreate much of the disturbed morphology of actual interacting galaxies. As more sophisticated n-body models were developed and computer speed increased, restricted three-body codes fell out of favor. However, their supporting role for performing wide searches of parameter space when fitting orbits to real systems demonstrates a continuing need for their use. Here we present the model and algorithm used in the JSPAM code. A precursor of this code was originally described in 1990, and was called SPAM. We have recently updated the software with an alternate potential and a treatment of dynamical friction to more closely mimic the results from n-body tree codes. The code is released publicly for use under the terms of the Academic Free License ("AFL") v. 3.0 and has been added to the Astrophysics Source Code Library.
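
    To make the restricted three-body idea concrete, the toy sketch below integrates massless test particles in the time-varying field of two point-mass "galaxies" with a leapfrog scheme; it is an illustration of the technique only, not the JSPAM algorithm, which uses an alternate potential and adds dynamical friction.

      # Toy restricted three-body integrator (illustration only, not JSPAM).
      import numpy as np

      G = 1.0
      m1, m2 = 1.0, 0.3                              # masses of the two "galaxies"

      def accel(pos, centres, masses, eps=0.05):
          """Softened acceleration at positions pos from the two point masses."""
          a = np.zeros_like(pos)
          for cen, m in zip(centres, masses):
              d = cen - pos
              r2 = np.sum(d * d, axis=1) + eps**2    # softening avoids divergences
              a += G * m * d / r2[:, None]**1.5
          return a

      # The two galaxies orbit each other (their self-terms vanish because d = 0).
      c = np.array([[-1.0, 0.0], [1.0, 0.0]])
      v = np.array([[0.0, -0.3], [0.0, 0.3 * m1 / m2]])   # zero total momentum

      # A ring of massless test particles on circular orbits around galaxy 1.
      phi = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
      p = c[0] + 0.3 * np.column_stack([np.cos(phi), np.sin(phi)])
      pv = v[0] + np.sqrt(G * m1 / 0.3) * np.column_stack([-np.sin(phi), np.cos(phi)])

      dt = 0.01
      for _ in range(2000):                          # kick-drift-kick leapfrog
          pv += 0.5 * dt * accel(p, c, (m1, m2))
          v += 0.5 * dt * accel(c, c, (m1, m2))
          p += dt * pv
          c += dt * v
          pv += 0.5 * dt * accel(p, c, (m1, m2))
          v += 0.5 * dt * accel(c, c, (m1, m2))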

  19. Library

    Science.gov Websites


  20. Designing to Meet New Requirements of Differing Services.

    ERIC Educational Resources Information Center

    Mathers, Andrew S.

    1982-01-01

    Characterizing "older library buildings" as those built prior to 1960, this article discusses special problems and challenges for the librarian and architect renovator, including building codes and new requirements of differing services. (EJS)

  1. COSMOS: Python library for massively parallel workflows

    PubMed Central

    Gafni, Erik; Luquette, Lovelace J.; Lancaster, Alex K.; Hawkins, Jared B.; Jung, Jae-Yoon; Souilmi, Yassine; Wall, Dennis P.; Tonellato, Peter J.

    2014-01-01

    Summary: Efficient workflows to shepherd clinically generated genomic data through the multiple stages of a next-generation sequencing pipeline are of critical importance in translational biomedical science. Here we present COSMOS, a Python library for workflow management that allows formal description of pipelines and partitioning of jobs. In addition, it includes a user interface for tracking the progress of jobs, abstraction of the queuing system and fine-grained control over the workflow. Workflows can be created on traditional computing clusters as well as cloud-based services. Availability and implementation: Source code is available for academic non-commercial research purposes. Links to code and documentation are provided at http://lpm.hms.harvard.edu and http://wall-lab.stanford.edu. Contact: dpwall@stanford.edu or peter_tonellato@hms.harvard.edu. Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24982428
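
    As a generic illustration of what such a workflow library manages, and explicitly not the COSMOS API (see the links above for that), the sketch below runs a tiny pipeline of dependent jobs in topological order; the job names and functions are hypothetical placeholders.

      # Dependency-ordered pipeline runner (generic illustration, not the COSMOS API).
      from graphlib import TopologicalSorter        # Python 3.9+

      def align(sample): print(f"aligning {sample}")
      def dedup(sample): print(f"removing duplicates in {sample}")
      def call_variants(sample): print(f"calling variants on {sample}")

      # job name -> (callable, names of jobs it depends on); all names are made up
      jobs = {
          "align": (align, []),
          "dedup": (dedup, ["align"]),
          "call":  (call_variants, ["dedup"]),
      }

      order = TopologicalSorter({name: deps for name, (_, deps) in jobs.items()})
      for name in order.static_order():             # predecessors always run first
          func, _ = jobs[name]
          func("sampleA")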

  2. Calculations of the skyshine gamma-ray dose rates from independent spent fuel storage installations (ISFSI) under worst case accident conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pace, J.V. III; Cramer, S.N.; Knight, J.R.

    1980-09-01

    Calculations of the skyshine gamma-ray dose rates from three spent fuel storage pools under worst case accident conditions have been made using the discrete ordinates code DOT-IV and the Monte Carlo code MORSE and have been compared to those of two previous methods. The DNA 37N-21G group cross-section library was utilized in the calculations, together with the Claiborne-Trubey gamma-ray dose factors taken from the same library. Plots of all results are presented. It was found that the dose was a strong function of the iron thickness over the fuel assemblies, the initial angular distribution of the emitted radiation, and the photon source near the top of the assemblies. 16 refs., 11 figs., 7 tabs.

  3. COSMOS: Python library for massively parallel workflows.

    PubMed

    Gafni, Erik; Luquette, Lovelace J; Lancaster, Alex K; Hawkins, Jared B; Jung, Jae-Yoon; Souilmi, Yassine; Wall, Dennis P; Tonellato, Peter J

    2014-10-15

    Efficient workflows to shepherd clinically generated genomic data through the multiple stages of a next-generation sequencing pipeline are of critical importance in translational biomedical science. Here we present COSMOS, a Python library for workflow management that allows formal description of pipelines and partitioning of jobs. In addition, it includes a user interface for tracking the progress of jobs, abstraction of the queuing system and fine-grained control over the workflow. Workflows can be created on traditional computing clusters as well as cloud-based services. Source code is available for academic non-commercial research purposes. Links to code and documentation are provided at http://lpm.hms.harvard.edu and http://wall-lab.stanford.edu. dpwall@stanford.edu or peter_tonellato@hms.harvard.edu. Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press.

  4. EOSlib, Version 3

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Woods, Nathan; Menikoff, Ralph

    2017-02-03

    Equilibrium thermodynamics underpins many of the technologies used throughout theoretical physics, yet verification of the various theoretical models in the open literature remains challenging. EOSlib provides a single, consistent, verifiable implementation of these models, in a single, easy-to-use software package. It consists of three parts: a software library implementing various published equation-of-state (EOS) models; a database of fitting parameters for various materials for these models; and a number of useful utility functions for simplifying thermodynamic calculations such as computing Hugoniot curves or Riemann problem solutions. Ready availability of this library will enable reliable code-to-code testing of equation-of-state implementations, as well as a starting point for more rigorous verification work. EOSlib also provides a single, consistent API for its analytic and tabular EOS models, which simplifies the process of comparing models for a particular application.

  5. A General Sparse Tensor Framework for Electronic Structure Theory

    DOE PAGES

    Manzer, Samuel; Epifanovsky, Evgeny; Krylov, Anna I.; ...

    2017-01-24

    Linear-scaling algorithms must be developed in order to extend the domain of applicability of electronic structure theory to molecules of any desired size. However, the increasing complexity of modern linear-scaling methods makes code development and maintenance a significant challenge. A major contributor to this difficulty is the lack of robust software abstractions for handling block-sparse tensor operations. We therefore report the development of a highly efficient symbolic block-sparse tensor library in order to provide access to high-level software constructs to treat such problems. Our implementation supports arbitrary multi-dimensional sparsity in all input and output tensors. We then avoid cumbersome machine-generated code by implementing all functionality as a high-level symbolic C++ language library and demonstrate that our implementation attains very high performance for linear-scaling sparse tensor contractions.

  6. Beyond terrestrial biology: charting the chemical universe of α-amino acid structures.

    PubMed

    Meringer, Markus; Cleaves, H James; Freeland, Stephen J

    2013-11-25

    α-Amino acids are fundamental to biochemistry as the monomeric building blocks with which cells construct proteins according to genetic instructions. However, the 20 amino acids of the standard genetic code represent a tiny fraction of the number of α-amino acid chemical structures that could plausibly play such a role, both from the perspective of natural processes by which life emerged and evolved, and from the perspective of human-engineered genetically coded proteins. Until now, efforts to describe the structures comprising this broader set, or even estimate their number, have been hampered by the complex combinatorial properties of organic molecules. Here, we use computer software based on graph theory and constructive combinatorics in order to conduct an efficient and exhaustive search of the chemical structures implied by two careful and precise definitions of the α-amino acids relevant to coded biological proteins. Our results include two virtual libraries of α-amino acid structures corresponding to these different approaches, comprising 121 044 and 3 846 structures, respectively, and suggest a simple approach to exploring much larger, as yet uncomputed, libraries of interest.

  7. Functional Programming with C++ Template Metaprograms

    NASA Astrophysics Data System (ADS)

    Porkoláb, Zoltán

    Template metaprogramming is an emerging new direction of generative programming. With the clever definitions of templates we can force the C++ compiler to execute algorithms at compilation time. Among the application areas of template metaprograms are the expression templates, static interface checking, code optimization with adaption, language embedding and active libraries. However, as template metaprogramming was not an original design goal, the C++ language is not capable of elegant expression of metaprograms. The complicated syntax leads to the creation of code that is hard to write, understand and maintain. Although template metaprogramming has a strong relationship with functional programming, this is not reflected in the language syntax and existing libraries. In this paper we give a short and incomplete introduction to C++ templates and the basics of template metaprogramming. We will highlight the role of template metaprograms, and some important and widely used idioms. We give an overview of the possible application areas as well as debugging and profiling techniques. We suggest a pure functional style programming interface for C++ template metaprograms in the form of embedded Haskell code which is transformed to standard-compliant C++ source.

  8. Random sampling and validation of covariance matrices of resonance parameters

    NASA Astrophysics Data System (ADS)

    Plevnik, Lucijan; Zerovnik, Gašper

    2017-09-01

    Analytically exact methods for random sampling of arbitrary correlated parameters are presented. Emphasis is given on one hand on the possible inconsistencies in the covariance data, concentrating on the positive semi-definiteness and consistent sampling of correlated inherently positive parameters, and on the other hand on optimization of the implementation of the methods itself. The methods have been applied in the program ENDSAM, written in the Fortran language, which from a file from a nuclear data library of a chosen isotope in ENDF-6 format produces an arbitrary number of new files in ENDF-6 format which contain values of random samples of resonance parameters (in accordance with corresponding covariance matrices) in places of original values. The source code for the program ENDSAM is available from the OECD/NEA Data Bank. The program works in the following steps: reads resonance parameters and their covariance data from nuclear data library, checks whether the covariance data is consistent, and produces random samples of resonance parameters. The code has been validated with both realistic and artificial data to show that the produced samples are statistically consistent. Additionally, the code was used to validate covariance data in existing nuclear data libraries. A list of inconsistencies, observed in covariance data of resonance parameters in ENDF-VII.1, JEFF-3.2 and JENDL-4.0 is presented. For now, the work has been limited to resonance parameters, however the methods presented are general and can in principle be extended to sampling and validation of any nuclear data.
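
    A hedged Python sketch of the core sampling step described above (ENDSAM itself is written in Fortran): verify that the covariance matrix is numerically positive semi-definite, then draw correlated samples; the log-space branch for inherently positive parameters is one common choice, not necessarily the exact scheme used in ENDSAM.

      # Correlated random sampling against a covariance matrix (illustrative only).
      import numpy as np

      def sample_correlated(mean, cov, n, positive=False, tol=1e-10):
          """Draw n parameter sets consistent with mean values and a covariance matrix."""
          mean = np.asarray(mean, dtype=float)
          cov = np.asarray(cov, dtype=float)
          if not np.allclose(cov, cov.T):
              raise ValueError("covariance matrix is not symmetric")
          eigval, eigvec = np.linalg.eigh(cov)
          if eigval.min() < -tol * eigval.max():
              raise ValueError("covariance matrix is not positive semi-definite")
          L = eigvec * np.sqrt(np.clip(eigval, 0.0, None))   # cov = L @ L.T
          z = np.random.standard_normal((n, mean.size))
          if positive:
              # One common choice for inherently positive parameters: sample in log
              # space, treating cov as a relative/log-space covariance.
              return mean * np.exp(z @ L.T)
          return mean + z @ L.T

      samples = sample_correlated([1.0, 2.0], [[0.04, 0.01], [0.01, 0.09]], 1000)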

  9. Seismo-Live: Training in Seismology with Jupyter Notebooks

    NASA Astrophysics Data System (ADS)

    Krischer, Lion; Tape, Carl; Igel, Heiner

    2016-04-01

    Seismological training tends to occur within the isolation of a particular institution with a limited set of tools (codes, libraries) that are often not transferrable outside. Here, we propose to overcome these limitations with a community-driven library of Jupyter notebooks dedicated to training on any aspect of seismology for purposes of education and outreach, on-site or archived tutorials for codes, classroom instruction, and research. A Jupyter notebook (jupyter.org) is an open-source interactive computational environment that allows combining code execution, rich text, mathematics, and plotting. It can be considered a platform that supports reproducible research, as all inputs and outputs may be stored. Text, external graphics, and equations can be handled using Markdown (incl. LaTeX) format. Jupyter notebooks are driven by standard web browsers, can be easily exchanged in text format, or converted to other documents (e.g. PDF, slide shows). They provide an ideal format for practical training in seismology. A pilot platform was set up with a dedicated server such that the Jupyter notebooks can be run in any browser (PC, notepad, smartphone). We show the functionalities of the Seismo-Live platform with examples from computational seismology, seismic data access and processing using the ObsPy library, seismic inverse problems, and others. The current examples all use the Python programming language, but any free language can be used. Potentially, such community platforms could be integrated with the EPOS-IT infrastructure and extended to other fields of Earth sciences.
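
    As a hedged example of the kind of ObsPy-based notebook cell hosted on such a platform (the example stream and filter band are arbitrary, and plotting behaviour depends on the installed ObsPy version):

      # A typical ObsPy-flavoured notebook cell (check against your ObsPy version).
      from obspy import read

      st = read()                          # with no argument, ObsPy loads an example stream
      st.detrend("demean")
      st.filter("bandpass", freqmin=1.0, freqmax=10.0)
      print(st)
      st.plot()                            # renders inline in a Jupyter notebook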

  10. Neutron radiation damage studies in the structural materials of a 500 MWe fast breeder reactor using DPA cross-sections from ENDF/B-VII.1

    NASA Astrophysics Data System (ADS)

    Saha, Uttiyoarnab; Devan, K.; Bachchan, Abhitab; Pandikumar, G.; Ganesan, S.

    2018-04-01

    The radiation damage in the structural materials of a 500 MWe Indian prototype fast breeder reactor (PFBR) is re-assessed by computing the neutron displacement per atom (dpa) cross-sections from the recent nuclear data library evaluated by the USA, ENDF/B-VII.1, wherein revisions have been made to the evaluations of basic nuclear data based on state-of-the-art neutron cross-section experiments, nuclear model-based predictions and modern data evaluation techniques. An indigenous computer code, computation of radiation damage (CRaD), is developed at our centre to compute primary-knock-on atom (PKA) spectra and displacement cross-sections of materials both in point-wise form and in any chosen group structure from the evaluated nuclear data libraries. The new radiation damage model, athermal recombination-corrected displacement per atom (arc-dpa), developed from molecular dynamics simulations, is also incorporated in our study. This work is the result of our earlier initiatives to overcome some of the limitations experienced while using codes like RECOIL, SPECTER and NJOY 2016 to estimate radiation damage. Agreement of CRaD results with other codes and the ASTM standard for the Fe dpa cross-section is found to be good. The present estimate of total dpa in D-9 steel of PFBR necessitates renormalisation of experimental correlations of dpa and radiation damage to ensure consistency of damage prediction with the ENDF/B-VII.1 library.

  11. Specific glycogen synthase kinase-3 inhibition reduces neuroendocrine markers and suppresses neuroblastoma cell growth.

    PubMed

    Carter, Yvette M; Kunnimalaiyaan, Selvi; Chen, Herbert; Gamblin, T Clark; Kunnimalaiyaan, Muthusamy

    2014-05-01

    Neuroblastoma is a common neuroendocrine (NE) tumor that presents in early childhood, with a high incidence of malignancy and recurrence. The glycogen synthase kinase-3 (GSK-3) pathway is a potential therapeutic target, as this pathway has been shown to be crucial in the management of other NE tumors. However, it is not known which isoform is necessary for growth inhibition. In this study, we investigated the effect of the GSK-3 inhibitor AR-A014418 on the different GSK-3 isoforms in neuroblastoma. NGP and SH-5Y-SY cells were treated with 0-20 μM of AR-A014418 and cell viability was measured by MTT assay. Expression levels of NE markers CgA and ASCL1, GSK-3 isoforms, and apoptotic markers were analyzed by western blot. Neuroblastoma cells treated with AR-A014418 had a significant reduction in growth at all doses and time points (P<0.001). A reduction in growth was noted in cell lines on day 6, with 10 μM (NGP-53% vs. 0% and SH-5Y-SY-38% vs. 0%, P<0.001) treatment compared to control, corresponding with a noticeable reduction in tumor marker ASCL1 and CgA expression. Treatment of neuroblastoma cell lines with AR-A014418 reduced the level of GSK-3α phosphorylation at Tyr279 compared to GSK-3β phosphorylation at Tyr216, and attenuated growth via the maintenance of apoptosis. This study supports further investigation to elucidate the mechanism(s) by which GSK-3α inhibition downregulates the expression of NE tumor markers and growth of neuroblastoma.

  12. Csf2 null mutation alters placental gene expression and trophoblast glycogen cell and giant cell abundance in mice.

    PubMed

    Sferruzzi-Perri, Amanda N; Macpherson, Anne M; Roberts, Claire T; Robertson, Sarah A

    2009-07-01

    Genetic deficiency in granulocyte-macrophage colony-stimulating factor (CSF2, GM-CSF) results in altered placental structure in mice. To investigate the mechanism of action of CSF2 in placental morphogenesis, the placental gene expression and cell composition were examined in Csf2 null mutant and wild-type mice. Microarray and quantitative RT-PCR analyses on Embryonic Day (E) 13 placentae revealed that the Csf2 null mutation caused altered expression of 17 genes not previously known to be associated with placental development, including Mid1, Cd24a, Tnfrsf11b, and Wdfy1. Genes controlling trophoblast differentiation (Ascl2, Tcfeb, Itgav, and Socs3) were also differentially expressed. The CSF2 ligand and the CSF2 receptor alpha subunit were predominantly synthesized in the placental junctional zone. Altered placental structure in Csf2 null mice at E15 was characterized by an expanded junctional zone and by increased Cx31(+) glycogen cells and cyclin-dependent kinase inhibitor 1C (CDKN1C(+), P57(Kip2+)) giant cells, accompanied by elevated junctional zone transcription of genes controlling spongiotrophoblast and giant cell differentiation and secretory function (Ascl2, Hand1, Prl3d1, and Prl2c2). Granzyme genes implicated in tissue remodeling and potentially in trophoblast invasion (Gzmc, Gzme, and Gzmf) were downregulated in the junctional zone of Csf2 null mutant placentae. These data demonstrate aberrant placental gene expression in Csf2 null mutant mice that is associated with altered differentiation and/or functional maturation of junctional zone trophoblast lineages, glycogen cells, and giant cells. We conclude that CSF2 is a regulator of trophoblast differentiation and placental development, which potentially influences the functional capacity of the placenta to support optimal fetal growth in pregnancy.

  13. Discovery and validation of methylation markers for endometrial cancer

    PubMed Central

    Wentzensen, Nicolas; Bakkum-Gamez, Jamie N.; Killian, J. Keith; Sampson, Joshua; Guido, Richard; Glass, Andrew; Adams, Lisa; Luhn, Patricia; Brinton, Louise A.; Rush, Brenda; d’Ambrosio, Lori; Gunja, Munira; Yang, Hannah P.; Garcia-Closas, Montserrat; Lacey, James V.; Lissowska, Jolanta; Podratz, Karl; Meltzer, Paul; Shridhar, Viji; Sherman, Mark E.

    2014-01-01

    The prognosis of endometrial cancer is strongly associated with stage at diagnosis, suggesting that early detection may reduce mortality. Women who are diagnosed with endometrial carcinoma often have a lengthy history of vaginal bleeding, which offers an opportunity for early diagnosis and curative treatment. We performed DNA methylation profiling on population-based endometrial cancers to identify early detection biomarkers and replicated top candidates in two independent studies. We compared DNA methylation values of 1500 probes representing 807 genes in 148 population-based endometrial carcinoma samples and 23 benign endometrial tissues. Markers were replicated in another set of 69 carcinomas and 40 benign tissues profiled on the same platform. Further replication was conducted in The Cancer Genome Atlas and in prospectively collected endometrial brushings from women with and without endometrial carcinomas. We identified 114 CpG sites showing methylation differences with p-values of ≤10−7 between endometrial carcinoma and normal endometrium. Eight genes (ADCYAP1, ASCL2, HS3ST2, HTR1B, MME, NPY, and SOX1) were selected for further replication. Age-adjusted odds ratios for endometrial cancer ranged from 3.44 (95%-CI: 1.33–8.91) for ASCL2 to 18.61 (95%-CI: 5.50–62.97) for HTR1B. An area under the curve (AUC) of 0.93 was achieved for discriminating carcinoma from benign endometrium. Replication in The Cancer Genome Atlas and in endometrial brushings from an independent study confirmed the candidate markers. This study demonstrates that methylation markers may be used to evaluate women with abnormal vaginal bleeding to distinguish women with endometrial carcinoma from the majority of women without malignancy. PMID:24623538

  14. Neutrons Flux Distributions of the Pu-Be Source and its Simulation by the MCNP-4B Code

    NASA Astrophysics Data System (ADS)

    Faghihi, F.; Mehdizadeh, S.; Hadad, K.

    The neutron fluence rate of a low-intensity Pu-Be source is measured by Neutron Activation Analysis (NAA) of 197Au foils. The neutron fluence rate distribution versus energy is also calculated using the MCNP-4B code based on the ENDF/B-V library. The theoretical simulation, together with our experimental work, is a new experience for Iranian researchers, intended to establish confidence in the code for further research. In our theoretical investigation, an isotropic Pu-Be source with a cylindrical volume distribution is simulated and the relative neutron fluence rate versus energy is calculated using the MCNP-4B code. The variations of the fast and thermal neutron fluence rates, as measured by the NAA method and calculated by the MCNP code, are compared.

  15. Cloning and sequence determination of the gene coding for the pyruvate phosphate dikinase of Entamoeba histolytica.

    PubMed

    Saavedra-Lira, E; Pérez-Montfort, R

    1994-05-16

    We isolated three overlapping clones from a DNA genomic library of Entamoeba histolytica strain HM1:IMSS, whose translated nucleotide (nt) sequence shows similarities of 51, 48 and 47% with the amino acid (aa) sequences reported for the pyruvate phosphate dikinases from Bacteroides symbiosus, maize and Flaveria trinervia, respectively. The reading frame determined codes for a protein of 886 aa.

  16. Extending Mondrian Memory Protection

    DTIC Science & Technology

    2010-11-01

    a kernel semaphore is locked or unlocked. In addition, we extended the system call interface to receive notifications about user-land locking...operations (such as calls to the mutex and semaphore code provided by the C library). By patching the dynamically loadable GLibC5, we are able to test... semaphores, and spinlocks. ...to loading extension plugins. This prevents any untrusted code

  17. Productivity Enhancement Program (PEP) for the Power Plant Division, Naval Air Rework Facility, North Island, San Diego. Preliminary Data Required.

    DTIC Science & Technology

    1984-02-01

    [garbled OCR fragment] ...Navy Recruiting Command (Code 20), Commanding Officer, Naval Aerospace Medical Institute (Library Code 12) (2), Commanding Officer, Naval Technical

  18. Nursing staff connect libraries with improving patient care but not with achieving organisational objectives: a grounded theory approach.

    PubMed

    Chamberlain, David; Brook, Richard

    2014-03-01

    Health organisations are often driven by specific targets defined by mission statements, aims and objectives to improve patient care. Health libraries need to demonstrate that they contribute to organisational objectives, but it is not clear how nurses view that contribution. To investigate ward nursing staff motivations, their awareness of ward and organisational objectives; and their attitudes towards the contribution of health library services to improving patient care. Qualitative research using focus group data was combined with content analysis of literature evidence and library statistics (quantitative data). Data were analysed using thematic coding, divided into five group themes: understanding of Trust, Ward and Personal objectives, use of Library, use of other information sources, quality and Issues. Four basic social-psychological processes were then developed. Behaviour indicates low awareness of organisational objectives despite patient-centric motivation. High awareness of library services is shown with some connection made by ward staff between improved knowledge and improved patient care. There was a two-tiered understanding of ward objectives and library services, based on level of seniority. However, evidence-based culture needs to be intrinsic in the organisation before all staff benefit. Libraries can actively engage in this at ward and board level and improve patient care by supporting organisational objectives. © 2014 The author. Health Information and Libraries Journal © 2014 Health Libraries Group.

  19. CADNA: a library for estimating round-off error propagation

    NASA Astrophysics Data System (ADS)

    Jézéquel, Fabienne; Chesneaux, Jean-Marie

    2008-06-01

    The CADNA library enables one to estimate round-off error propagation using a probabilistic approach. With CADNA the numerical quality of any simulation program can be controlled. Furthermore, by detecting all the instabilities which may occur at run time, a numerical debugging of the user code can be performed. CADNA provides new numerical types on which round-off errors can be estimated. Slight modifications are required to control a code with CADNA, mainly changes in variable declarations, input and output. This paper describes the features of the CADNA library and shows how to interpret the information it provides concerning round-off error propagation in a code.
    Program summary
    Program title: CADNA
    Catalogue identifier: AEAT_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAT_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 53 420
    No. of bytes in distributed program, including test data, etc.: 566 495
    Distribution format: tar.gz
    Programming language: Fortran
    Computer: PC running LINUX with an i686 or an ia64 processor, UNIX workstations including SUN, IBM
    Operating system: LINUX, UNIX
    Classification: 4.14, 6.5, 20
    Nature of problem: A simulation program which uses floating-point arithmetic generates round-off errors, due to the rounding performed at each assignment and at each arithmetic operation. Round-off error propagation may invalidate the result of a program. The CADNA library enables one to estimate round-off error propagation in any simulation program and to detect all numerical instabilities that may occur at run time.
    Solution method: The CADNA library [1] implements Discrete Stochastic Arithmetic [2-4], which is based on a probabilistic model of round-off errors. The program is run several times with a random rounding mode generating different results each time. From this set of results, CADNA estimates the number of exact significant digits in the result that would have been computed with standard floating-point arithmetic.
    Restrictions: CADNA requires a Fortran 90 (or newer) compiler. In the program to be linked with the CADNA library, round-off errors on complex variables cannot be estimated. Furthermore, array functions such as product or sum must not be used. Only the arithmetic operators and the abs, min, max and sqrt functions can be used for arrays.
    Running time: The version of a code which uses CADNA runs at least three times slower than its floating-point version. This cost depends on the computer architecture and can be higher if the detection of numerical instabilities is enabled. In this case, the cost may be related to the number of instabilities detected.
    References: [1] The CADNA library, URL address: http://www.lip6.fr/cadna. [2] J.-M. Chesneaux, L'arithmétique stochastique et le logiciel CADNA, Habilitation à diriger des recherches, Université Pierre et Marie Curie, Paris, 1995. [3] J. Vignes, A stochastic arithmetic for reliable scientific computation, Math. Comput. Simulation 35 (1993) 233-261. [4] J. Vignes, Discrete stochastic arithmetic for validating results of numerical software, Numer. Algorithms 37 (2004) 377-390.
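
    The random-rounding idea behind Discrete Stochastic Arithmetic can be illustrated with a toy Python sketch: run the computation a few times with each result perturbed at the last bit, then estimate how many significant digits the samples share. This is not the CADNA API (CADNA supplies Fortran stochastic types and operators); the perturbation and the digit estimate below are deliberately simplified.

        import numpy as np

        rng = np.random.default_rng(1)
        EPS = 2.0**-53  # double-precision unit roundoff

        def jitter(x):
            """Mimic a random rounding mode by perturbing a result at the last bit."""
            return x * (1.0 + EPS * rng.choice([-1.0, 1.0]))

        def unstable_sum(n):
            """A summation whose round-off behaviour we want to probe."""
            s = 0.0
            for k in range(1, n + 1):
                s = jitter(s + jitter(1.0 / k**2))
            return s

        # Run the perturbed computation three times, as CESTAC/DSA does.
        samples = np.array([unstable_sum(50000) for _ in range(3)])
        mean, std = samples.mean(), samples.std(ddof=1)

        # Rough estimate of the number of exact significant decimal digits.
        digits = np.log10(abs(mean) / std) if std > 0 else 15
        print(f"result ~ {mean:.15f}, estimated exact digits ~ {digits:.1f}")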

  20. Advanced data acquisition and display techniques for laser velocimetry

    NASA Technical Reports Server (NTRS)

    Kjelgaard, Scott O.; Weston, Robert P.

    1991-01-01

    The Basic Aerodynamics Research Tunnel (BART) has been equipped with state-of-the-art instrumentation for acquiring the data needed for code validation. This paper describes the three-component LDV and the workstation-based data-acquisition system (DAS) which has been developed for the BART. The DAS allows the use of automation and the quick integration of advanced instrumentation, while minimizing the software development time required between investigations. The paper also includes a description of a graphics software library developed to support the windowing environment of the DAS. The real-time displays generated using the graphics library help the researcher ensure the test is proceeding properly. The graphics library also supports the requirements of posttest data analysis. The use of the DAS and graphics libraries is illustrated by presenting examples of the real-time and postprocessing display graphics for LDV investigations.

  1. LEGO - A Class Library for Accelerator Design and Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cai, Yunhai

    1998-11-19

    An object-oriented class library of accelerator design and simulation is designed and implemented in a simple and modular fashion. All physics of single-particle dynamics is implemented based on the Hamiltonian in the local frame of the component. Symplectic integrators are used to approximate the integration of the Hamiltonian. A differential algebra class is introduced to extract a Taylor map up to arbitrary order. Analysis of optics is done in the same way both for the linear and non-linear cases. Recently, Monte Carlo simulation of synchrotron radiation has been added into the library. The code is used to design and simulate the lattices of the PEP-II and SPEAR3. It is also used for the commissioning of the PEP-II. Some examples of how to use the library will be given.
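
    As a hedged illustration of the symplectic-integrator idea mentioned above (not the LEGO class library itself, which is C++), the sketch below applies a second-order leapfrog (drift-kick-drift) step to the one-degree-of-freedom Hamiltonian H = p^2/2 + q^2/2:

        import numpy as np

        def kick(q, p, dt):
            """Momentum update from the potential part of H = p^2/2 + q^2/2."""
            return q, p - dt * q

        def drift(q, p, dt):
            """Position update from the kinetic part."""
            return q + dt * p, p

        def leapfrog_step(q, p, dt):
            """Second-order symplectic (drift-kick-drift) step."""
            q, p = drift(q, p, 0.5 * dt)
            q, p = kick(q, p, dt)
            q, p = drift(q, p, 0.5 * dt)
            return q, p

        q, p, dt = 1.0, 0.0, 0.01
        for _ in range(int(2 * np.pi / dt)):   # roughly one oscillation period
            q, p = leapfrog_step(q, p, dt)

        # The energy error stays bounded, the hallmark of a symplectic scheme.
        print("energy drift:", 0.5 * (p**2 + q**2) - 0.5)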

  2. Comparison of TG-43 dosimetric parameters of brachytherapy sources obtained by three different versions of MCNP codes.

    PubMed

    Zaker, Neda; Zehtabian, Mehdi; Sina, Sedigheh; Koontz, Craig; Meigooni, Ali S

    2016-03-08

    Monte Carlo simulations are widely used for calculation of the dosimetric parameters of brachytherapy sources. MCNP4C2, MCNP5, MCNPX, EGS4, EGSnrc, PTRAN, and GEANT4 are among the most commonly used codes in this field. Each of these codes utilizes a cross-sectional library for the purpose of simulating different elements and materials with complex chemical compositions. The accuracies of the final outcomes of these simulations are very sensitive to the accuracies of the cross-sectional libraries. Several investigators have shown that inaccuracies of some of the cross section files have led to errors in 125I and 103Pd parameters. The purpose of this study is to compare the dosimetric parameters of sample brachytherapy sources, calculated with three different versions of the MCNP code - MCNP4C, MCNP5, and MCNPX. In these simulations for each source type, the source and phantom geometries, as well as the number of the photons, were kept identical, thus eliminating the possible uncertainties. The results of these investigations indicate that for low-energy sources such as 125I and 103Pd there are discrepancies in gL(r) values. Discrepancies up to 21.7% and 28% are observed between MCNP4C and other codes at a distance of 6 cm for 103Pd and 10 cm for 125I from the source, respectively. However, for higher energy sources, the discrepancies in gL(r) values are less than 1.1% for 192Ir and less than 1.2% for 137Cs between the three codes.
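
    The radial dose function gL(r) discussed above is defined in the TG-43 formalism as gL(r) = [D(r,θ0)/D(r0,θ0)] · [GL(r0,θ0)/GL(r,θ0)], with r0 = 1 cm and θ0 = 90°. The sketch below evaluates it from placeholder transverse-axis dose rates, not from actual MCNP output:

        import numpy as np

        def G_L(r, L):
            """TG-43 line-source geometry function at theta = 90 degrees."""
            beta = 2.0 * np.arctan(L / (2.0 * r))   # angle subtended by the source
            return beta / (L * r)

        # Placeholder transverse-axis dose rates D(r) (arbitrary units, illustrative only).
        r = np.array([0.5, 1.0, 2.0, 4.0, 6.0, 10.0])          # cm
        D = np.array([4.1, 1.0, 0.24, 0.055, 0.022, 0.006])

        L_source = 0.35   # active source length (cm), placeholder
        r0 = 1.0
        D0 = D[r == r0][0]

        gL = (D / D0) * (G_L(r0, L_source) / G_L(r, L_source))
        for ri, gi in zip(r, gL):
            print(f"r = {ri:4.1f} cm   gL(r) = {gi:.3f}")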

  3. EMPIRE: Nuclear Reaction Model Code System for Data Evaluation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Herman, M.; Capote, R.; Carlson, B.V.

    EMPIRE is a modular system of nuclear reaction codes, comprising various nuclear models, and designed for calculations over a broad range of energies and incident particles. A projectile can be a neutron, proton, any ion (including heavy-ions) or a photon. The energy range extends from the beginning of the unresolved resonance region for neutron-induced reactions (~keV) and goes up to several hundred MeV for heavy-ion induced reactions. The code accounts for the major nuclear reaction mechanisms, including direct, pre-equilibrium and compound nucleus ones. Direct reactions are described by a generalized optical model (ECIS03) or by the simplified coupled-channels approach (CCFUS). The pre-equilibrium mechanism can be treated by a deformation dependent multi-step direct (ORION + TRISTAN) model, by a NVWY multi-step compound one or by either a pre-equilibrium exciton model with cluster emission (PCROSS) or by another with full angular momentum coupling (DEGAS). Finally, the compound nucleus decay is described by the full featured Hauser-Feshbach model with γ-cascade and width-fluctuations. Advanced treatment of the fission channel takes into account transmission through a multiple-humped fission barrier with absorption in the wells. The fission probability is derived in the WKB approximation within the optical model of fission. Several options for nuclear level densities include the EMPIRE-specific approach, which accounts for the effects of the dynamic deformation of a fast rotating nucleus, the classical Gilbert-Cameron approach and pre-calculated tables obtained with a microscopic model based on HFB single-particle level schemes with collective enhancement. A comprehensive library of input parameters covers nuclear masses, optical model parameters, ground state deformations, discrete levels and decay schemes, level densities, fission barriers, moments of inertia and γ-ray strength functions. The results can be converted into ENDF-6 formatted files using the accompanying code EMPEND and completed with neutron resonances extracted from the existing evaluations. The package contains the full EXFOR (CSISRS) library of experimental reaction data that are automatically retrieved during the calculations. Publication quality graphs can be obtained using the powerful and flexible plotting package ZVView. The graphic user interface, written in Tcl/Tk, provides for easy operation of the system. This paper describes the capabilities of the code, outlines physical models and indicates parameter libraries used by EMPIRE to predict reaction cross sections and spectra, mainly for nucleon-induced reactions. Selected applications of EMPIRE are discussed, the most important being an extensive use of the code in evaluations of neutron reactions for the new US library ENDF/B-VII.0. Future extensions of the system are outlined, including a neutron resonance module as well as capabilities of generating covariances, using both KALMAN and Monte-Carlo methods, that are still being advanced and refined.

  4. PANEL LIBRARY AND EDITOR

    NASA Technical Reports Server (NTRS)

    Raible, E.

    1994-01-01

    The Panel Library and Editor is a graphical user interface (GUI) builder for the Silicon Graphics IRIS workstation family. The toolkit creates "widgets" which can be manipulated by the user. Its appearance is similar to that of the X-Windows System. The Panel Library is written in C and is used by programmers writing user-friendly mouse-driven applications for the IRIS. GUIs built using the Panel Library consist of "actuators" and "panels." Actuators are buttons, dials, sliders, or other mouse-driven symbols. Panels are groups of actuators that occupy separate windows on the IRIS workstation. The application user can alter variables in the graphics program, or fire off functions with a click on a button. The evolution of data values can be tracked with meters and strip charts, and dialog boxes with text processing can be built. Panels can be stored as icons when not in use. The Panel Editor is a program used to interactively create and test panel library interfaces in a simple and efficient way. The Panel Editor itself uses a panel library interface, so all actions are mouse driven. Extensive context-sensitive on-line help is provided. Programmers can graphically create and test the user interface without writing a single line of code. Once an interface is judged satisfactory, the Panel Editor will dump it out as a file of C code that can be used in an application. The Panel Library (v9.8) and Editor (v1.1) are written in C-Language (63%) and Scheme, a dialect of LISP, (37%) for Silicon Graphics 4D series workstations running IRIX 3.2 or higher. Approximately 10Mb of disk space is required once compiled. 1.5Mb of main memory is required to execute the panel editor. This program is available on a .25 inch streaming magnetic tape cartridge in UNIX tar format for an IRIS, and includes a copy of XScheme, the public-domain Scheme interpreter used by the Panel Editor. The Panel Library Programmer's Manual is included on the distribution media. The Panel Library and Editor were released to COSMIC in 1991. Silicon Graphics, IRIS, and IRIX are trademarks of Silicon Graphics, Inc. X-Window System is a trademark of Massachusetts Institute of Technology.

  5. Benchmark calculation for radioactivity inventory using MAXS library based on JENDL-4.0 and JEFF-3.0/A for decommissioning BWR plants

    NASA Astrophysics Data System (ADS)

    Tanaka, Ken-ichi

    2016-06-01

    We performed a benchmark calculation for the radioactivity induced in the Primary Containment Vessel (PCV) of a Boiling Water Reactor (BWR) by using the MAXS library, which was developed by collapsing with neutron energy spectra in the PCV of the BWR. Radioactivities due to neutron irradiation were measured using gold (Au) and nickel (Ni) activation foil detectors at thirty locations in the PCV. We performed activation calculations for the foils with the SCALE5.1/ORIGEN-S code, using the irradiation conditions of each foil location, as the benchmark calculation. We compared the calculations with the measurements to estimate the effectiveness of the MAXS library.

  6. Generation of human scFv antibody libraries: PCR amplification and assembly of light- and heavy-chain coding sequences.

    PubMed

    Andris-Widhopf, Jennifer; Steinberger, Peter; Fuller, Roberta; Rader, Christoph; Barbas, Carlos F

    2011-09-01

    The development of therapeutic antibodies for use in the treatment of human diseases has long been a goal for many researchers in the antibody field. One way to obtain these antibodies is through phage-display libraries constructed from human lymphocytes. This protocol describes the construction of human scFv (single chain antibody fragment) libraries using a short linker (GGSSRSS) or a long linker (GGSSRSSSSGGGGSGGGG). In this method, the individual rearranged heavy- and light-chain variable regions are amplified separately and are linked through a series of overlap polymerase chain reaction (PCR) steps to give the final scFv products that are used for cloning.

  7. Validation of Hansen-Roach library for highly enriched uranium metal systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wenz, T.R.; Busch, R.D.

    The Hansen-Roach 16-group cross-section library has been validated for use in pure uranium metal systems by modeling the Godiva critical assembly using the neutronics transport theory code ONEDANT to perform effective multiplication factor (k_eff) calculations. The cross-section library used contains data for 118 isotopes (34 unique elements), including the revised cross sections for 235U and 238U. The Godiva critical assembly is a 17.4-cm sphere composed of 93.7 wt% 235U, 1.0 wt% 234U, and 5.3 wt% 238U with an effective homogeneous density of 18.7 g/cm3.
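
    A sketch of the standard conversion from the weight fractions and density quoted above to atom number densities, as needed for a transport-code material specification. The isotopic masses are nominal and the snippet is illustrative, not part of the validation study.

        N_A = 6.02214076e23      # Avogadro's number (1/mol)
        BARN_CM = 1.0e-24        # cm^2 per barn

        rho = 18.7               # effective homogeneous density (g/cm^3)
        composition = {          # weight fraction, atomic mass (g/mol)
            "U-235": (0.937, 235.0439),
            "U-234": (0.010, 234.0410),
            "U-238": (0.053, 238.0508),
        }

        # N_i = rho * w_i * N_A / A_i, expressed in atoms/(barn*cm)
        for iso, (w, A) in composition.items():
            N_i = rho * w * N_A / A * BARN_CM
            print(f"{iso}: {N_i:.5e} atoms/(barn*cm)")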

  8. Pteros: fast and easy to use open-source C++ library for molecular analysis.

    PubMed

    Yesylevskyy, Semen O

    2012-07-15

    An open-source Pteros library for molecular modeling and analysis of molecular dynamics trajectories for C++ programming language is introduced. Pteros provides a number of routine analysis operations ranging from reading and writing trajectory files and geometry transformations to structural alignment and computation of nonbonded interaction energies. The library features asynchronous trajectory reading and parallel execution of several analysis routines, which greatly simplifies development of computationally intensive trajectory analysis algorithms. Pteros programming interface is very simple and intuitive while the source code is well documented and easily extendible. Pteros is available for free under open-source Artistic License from http://sourceforge.net/projects/pteros/. Copyright © 2012 Wiley Periodicals, Inc.
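
    Structural alignment of the kind Pteros provides is commonly implemented with the Kabsch algorithm. The numpy sketch below shows that generic algorithm; it is not the Pteros API, and the coordinate arrays are placeholders.

        import numpy as np

        def kabsch_rmsd(P, Q):
            """Optimal-rotation RMSD between two (N, 3) coordinate sets (Kabsch)."""
            Pc = P - P.mean(axis=0)
            Qc = Q - Q.mean(axis=0)
            H = Pc.T @ Qc                            # covariance matrix
            U, S, Vt = np.linalg.svd(H)
            d = np.sign(np.linalg.det(Vt.T @ U.T))   # avoid an improper rotation
            R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
            P_rot = Pc @ R.T
            return np.sqrt(np.mean(np.sum((P_rot - Qc) ** 2, axis=1)))

        rng = np.random.default_rng(2)
        ref = rng.normal(size=(50, 3))               # placeholder reference coordinates
        rot = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
        mob = ref @ rot.T + 0.05 * rng.normal(size=(50, 3))
        print(f"RMSD after alignment: {kabsch_rmsd(mob, ref):.3f}")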

  9. A portable MPI-based parallel vector template library

    NASA Technical Reports Server (NTRS)

    Sheffler, Thomas J.

    1995-01-01

    This paper discusses the design and implementation of a polymorphic collection library for distributed address-space parallel computers. The library provides a data-parallel programming model for C++ by providing three main components: a single generic collection class, generic algorithms over collections, and generic algebraic combining functions. Collection elements are the fourth component of a program written using the library and may be either of the built-in types of C or of user-defined types. Many ideas are borrowed from the Standard Template Library (STL) of C++, although a restricted programming model is proposed because of the distributed address-space memory model assumed. Whereas the STL provides standard collections and implementations of algorithms for uniprocessors, this paper advocates standardizing interfaces that may be customized for different parallel computers. Just as the STL attempts to increase programmer productivity through code reuse, a similar standard for parallel computers could provide programmers with a standard set of algorithms portable across many different architectures. The efficacy of this approach is verified by examining performance data collected from an initial implementation of the library running on an IBM SP-2 and an Intel Paragon.
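
    The collection/combining-function model described above can be illustrated in Python with mpi4py (a sketch of the same programming idea, not the C++ template library). Run with, e.g., mpirun -n 4 python reduce_demo.py; the file name is hypothetical.

        # reduce_demo.py -- a minimal data-parallel "collection + combining function" sketch
        from mpi4py import MPI
        import numpy as np

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        # Each rank owns one block of the distributed collection.
        n_global = 1_000_000
        counts = [n_global // size + (1 if r < n_global % size else 0) for r in range(size)]
        start = sum(counts[:rank])
        local = np.arange(start, start + counts[rank], dtype=np.float64)

        # Apply an elementwise "algorithm over the collection" locally ...
        local_sq_sum = np.sum(local * local)

        # ... and combine the per-rank partial results with an algebraic combining function.
        global_sq_sum = comm.allreduce(local_sq_sum, op=MPI.SUM)

        if rank == 0:
            print(f"sum of squares of 0..{n_global - 1} = {global_sq_sum:.6e}")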

  10. A Portable MPI-Based Parallel Vector Template Library

    NASA Technical Reports Server (NTRS)

    Sheffler, Thomas J.

    1995-01-01

    This paper discusses the design and implementation of a polymorphic collection library for distributed address-space parallel computers. The library provides a data-parallel programming model for C++ by providing three main components: a single generic collection class, generic algorithms over collections, and generic algebraic combining functions. Collection elements are the fourth component of a program written using the library and may be either of the built-in types of C or of user-defined types. Many ideas are borrowed from the Standard Template Library (STL) of C++, although a restricted programming model is proposed because of the distributed address-space memory model assumed. Whereas the STL provides standard collections and implementations of algorithms for uniprocessors, this paper advocates standardizing interfaces that may be customized for different parallel computers. Just as the STL attempts to increase programmer productivity through code reuse, a similar standard for parallel computers could provide programmers with a standard set of algorithms portable across many different architectures. The efficacy of this approach is verified by examining performance data collected from an initial implementation of the library running on an IBM SP-2 and an Intel Paragon.

  11. Comparative Studies on UO2 Fueled HTTR Several Nuclear Data Libraries

    NASA Astrophysics Data System (ADS)

    Hidayati, Anni N.; Prastyo, Puguh A.; Waris, Abdul; Irwanto, Dwi

    2017-07-01

    HTTR (High Temperature Engineering Test Reactor) is one of the Generation IV nuclear reactors and has been developed by JAERI (the former name of JAEA, Japan). HTTR uses a graphite moderator and helium gas coolant with UO2 fuel, and its outlet coolant temperature is 900°C or higher. Several studies of HTTR have been performed using the JENDL 3.2 nuclear data library. In this paper, a comparative evaluation of HTTR with several nuclear data libraries (JENDL 3.3, JENDL 4.0, and JEF 3.1) has been conducted. The 3-D calculation was performed using the CITATION module of the SRAC 2006 code. The results show some differences between the nuclear data libraries. The core effective multiplication factor (k-eff) is about 1.17, 1.18 and 1.19 (JENDL 3.3, JENDL 4.0, and JEF 3.1, respectively) at the beginning of life, and 1.16, 1.17 and 1.17 at the end of life (after two years of operation) for the respective libraries. The k-eff values differ slightly, but the calculated neutron spectra are essentially identical for all three libraries.
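
    Library-to-library differences in k-eff of this size are often expressed as a reactivity difference, Δρ = (1/k_a − 1/k_b) × 10^5 pcm. A short check using the rounded beginning-of-life values quoted above (illustrative only):

        k = {"JENDL 3.3": 1.17, "JENDL 4.0": 1.18, "JEF 3.1": 1.19}   # BOL k-eff values quoted above
        ref = k["JENDL 3.3"]
        for lib, keff in k.items():
            delta_rho_pcm = (1.0 / ref - 1.0 / keff) * 1e5   # reactivity difference vs JENDL 3.3
            print(f"{lib}: k-eff = {keff:.2f}, delta-rho = {delta_rho_pcm:+.0f} pcm")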

  12. Evaluation of vector-primed cDNA library production from microgram quantities of total RNA.

    PubMed

    Kuo, Jonathan; Inman, Jason; Brownstein, Michael; Usdin, Ted B

    2004-12-15

    cDNA sequences are important for defining the coding region of genes, and full-length cDNA clones have proven to be useful for investigation of the function of gene products. We produced cDNA libraries containing 3.5-5 × 10⁵ primary transformants, starting with 5 μg of total RNA prepared from mouse pituitary, adrenal, thymus, and pineal tissue, using a vector-primed cDNA synthesis method. Of approximately 1000 clones sequenced, approximately 20% contained the full open reading frames (ORFs) of known transcripts, based on the presence of the initiating methionine residue codon. The libraries were complex, with 94, 91, 83 and 55% of the clones from the thymus, adrenal, pineal and pituitary libraries, respectively, represented only once. Twenty-five full-length clones, not yet represented in the Mammalian Gene Collection, were identified. Thus, we have produced useful cDNA libraries for the isolation of full-length cDNA clones that are not yet available in the public domain, and demonstrated the utility of a simple method for making high-quality libraries from small amounts of starting material.

  13. Military Interoperable Digital Hospital Testbed (MIDHT)

    DTIC Science & Technology

    2013-10-01

    activities are selected highlights completed by Northrop Grumman during the year. Cycle 4 development: - Increased the max_allowed_packet size in MySQL... deployment with the Java install that is required by CONNECT v3.3.1.3. - Updated the MIDHT code base to work with the CONNECT v.3.3.1.3 Core Libraries... Provided TATRC the CONNECTUniversalClientGUI binaries for use with CONNECT v3.3.1.3 - Created and deployed a common Java library for the CONNECT

  14. New member of the hormone-sensitive lipase family from the permafrost microbial community.

    PubMed

    Petrovskaya, Lada E; Novototskaya-Vlasova, Ksenia A; Gapizov, Sultan Sh; Spirina, Elena V; Durdenko, Ekaterina V; Rivkina, Elizaveta M

    2017-07-04

    Siberian permafrost is a unique environment inhabited with diverse groups of microorganisms. Among them, there are numerous producers of biotechnologically relevant enzymes including lipases and esterases. Recently, we have constructed a metagenomic library from a permafrost sample and identified in it several genes coding for potential lipolytic enzymes. In the current work, properties of the recombinant esterases obtained from this library are compared with the previously characterized lipase from Psychrobacter cryohalolentis and other representatives of the hormone-sensitive lipase family.

  15. CASMO5 JENDL-4.0 and ENDF/B-VII.1beta4 libraries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rhodes, J.; Gheorghiu, N.; Ferrer, R.

    2012-07-01

    This paper details the generation of neutron data libraries for the CASMO5 lattice physics code based on the recently released JENDL-4.0 and ENDF/B-VII.1beta4 nuclear data evaluations. This data represents state-of-the-art nuclear data for late-2011. The key features of the new evaluations are briefly described along with the procedure for processing this data into CASMO5 586-energy-group neutron data libraries. Finally, some CASMO5 results for standard UO2 and MOX critical experiments for the two new libraries and the current ENDF/B-VII.0 CASMO5 library are presented, including the B&W 1810 series, DIMPLE S06A, S06B, TCA reflector criticals with iron plates and the PNL-30-35 MOX criticals. The results show that CASMO5 with the new libraries is performing well for these criticals, with a very slight edge in results to the JENDL-4.0 nuclear data evaluation over the ENDF/B-VII.1beta4 evaluation. Work is currently underway to generate a CASMO5 library based on the final ENDF/B-VII.1 evaluation released Dec. 22, 2011. (authors)

  16. Preparation of highly multiplexed small RNA sequencing libraries.

    PubMed

    Persson, Helena; Søkilde, Rolf; Pirona, Anna Chiara; Rovira, Carlos

    2017-08-01

    MicroRNAs (miRNAs) are ~22-nucleotide-long small non-coding RNAs that regulate the expression of protein-coding genes by base pairing to partially complementary target sites, preferentially located in the 3´ untranslated region (UTR) of target mRNAs. The expression and function of miRNAs have been extensively studied in human disease, as well as the possibility of using these molecules as biomarkers for prognostication and treatment guidance. To identify and validate miRNAs as biomarkers, their expression must be screened in large collections of patient samples. Here, we develop a scalable protocol for the rapid and economical preparation of a large number of small RNA sequencing libraries using dual indexing for multiplexing. Combined with the use of off-the-shelf reagents, more samples can be sequenced simultaneously on large-scale sequencing platforms at a considerably lower cost per sample. Sample preparation is simplified by pooling libraries prior to gel purification, which allows for the selection of a narrow size range while minimizing sample variation. A comparison with publicly available data from benchmarking of miRNA analysis platforms showed that this method captures absolute and differential expression as effectively as commercially available alternatives.

  17. A norming study and library of 203 dance movements.

    PubMed

    Christensen, Julia F; Nadal, Marcos; Cela-Conde, Camilo José

    2014-01-01

    Dance stimuli have been used in experimental studies of (i) how movement is processed in the brain; (ii) how affect is perceived from bodily movement; and (iii) how dance can be a source of aesthetic experience. However, stimulus materials across--and even within--these three domains of research have varied considerably. Thus, integrative conclusions remain elusive. Moreover, concerns have been raised that the movements selected for such stimuli are qualitatively too different from the actual art form dance, potentially introducing noise in the data. We propose a library of dance stimuli which responds to the stimuli requirements and design criteria of these three areas of research, while at the same time respecting a dance art-historical perspective, offering greater ecological validity as compared with previous dance stimulus sets. The stimuli are 5-6 s long video clips, selected from genuine ballet performances. Following a number of coding experiments, the resulting stimulus library comprises 203 ballet dance stimuli coded in (i) 25 qualitative and quantitative movement variables; (ii) affective valence and arousal; and (iii) the aesthetic qualities beauty, liking, and interest. An Excel spreadsheet with these data points accompanies this manuscript, and the stimuli can be obtained from the authors upon request.

  18. Uni10: an open-source library for tensor network algorithms

    NASA Astrophysics Data System (ADS)

    Kao, Ying-Jer; Hsieh, Yun-Da; Chen, Pochung

    2015-09-01

    We present an object-oriented open-source library for developing tensor network algorithms written in C++ called Uni10. With Uni10, users can build a symmetric tensor from a collection of bonds, while the bonds are constructed from a list of quantum numbers associated with different quantum states. It is easy to label and permute the indices of the tensors and access a block associated with a particular quantum number. Furthermore a network class is used to describe arbitrary tensor network structure and to perform network contractions efficiently. We give an overview of the basic structure of the library and the hierarchy of the classes. We present examples of the construction of a spin-1 Heisenberg Hamiltonian and the implementation of the tensor renormalization group algorithm to illustrate the basic usage of the library. The library described here is particularly well suited to explore and fast prototype novel tensor network algorithms and to implement highly efficient codes for existing algorithms.
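
    A plain-numpy sketch of the kind of object Uni10 manipulates: building the two-site spin-1 Heisenberg Hamiltonian from spin operators and contracting it with a two-site state via einsum. This illustrates the tensors and contractions only; it does not use Uni10's bond or symmetry classes.

        import numpy as np

        # Spin-1 operators in the Sz basis {|1>, |0>, |-1>}.
        Sz = np.diag([1.0, 0.0, -1.0])
        Sp = np.sqrt(2.0) * np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]], dtype=float)
        Sm = Sp.T
        Sx = 0.5 * (Sp + Sm)
        Sy = -0.5j * (Sp - Sm)

        # Two-site Heisenberg Hamiltonian H = S1 . S2, reshaped into a rank-4 tensor
        # with indices (out1, out2, in1, in2).
        H = (np.kron(Sx, Sx) + np.kron(Sy, Sy) + np.kron(Sz, Sz)).real
        H4 = H.reshape(3, 3, 3, 3)

        # Contract H with a random normalized two-site state to get <psi|H|psi>.
        rng = np.random.default_rng(3)
        psi = rng.normal(size=(3, 3))
        psi /= np.linalg.norm(psi)
        energy = np.einsum("abcd,cd,ab->", H4, psi, psi)
        print(f"<psi|H|psi> = {energy:.4f}")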

  19. Next-generation libraries for robust RNA interference-based genome-wide screens

    PubMed Central

    Kampmann, Martin; Horlbeck, Max A.; Chen, Yuwen; Tsai, Jordan C.; Bassik, Michael C.; Gilbert, Luke A.; Villalta, Jacqueline E.; Kwon, S. Chul; Chang, Hyeshik; Kim, V. Narry; Weissman, Jonathan S.

    2015-01-01

    Genetic screening based on loss-of-function phenotypes is a powerful discovery tool in biology. Although the recent development of clustered regularly interspaced short palindromic repeats (CRISPR)-based screening approaches in mammalian cell culture has enormous potential, RNA interference (RNAi)-based screening remains the method of choice in several biological contexts. We previously demonstrated that ultracomplex pooled short-hairpin RNA (shRNA) libraries can largely overcome the problem of RNAi off-target effects in genome-wide screens. Here, we systematically optimize several aspects of our shRNA library, including the promoter and microRNA context for shRNA expression, selection of guide strands, and features relevant for postscreen sample preparation for deep sequencing. We present next-generation high-complexity libraries targeting human and mouse protein-coding genes, which we grouped into 12 sublibraries based on biological function. A pilot screen suggests that our next-generation RNAi library performs comparably to current CRISPR interference (CRISPRi)-based approaches and can yield complementary results with high sensitivity and high specificity. PMID:26080438

  20. Raster graphics display library

    NASA Technical Reports Server (NTRS)

    Grimsrud, Anders; Stephenson, Michael B.

    1987-01-01

    The Raster Graphics Display Library (RGDL) is a high-level subroutine package that gives the advanced raster graphics display capabilities needed. The RGDL uses FORTRAN source code routines to build subroutines modular enough to use as stand-alone routines in a black box type of environment. Six examples are presented which will teach the use of RGDL in the fastest, most complete way possible. Routines within the display library that are used to produce raster graphics are presented in alphabetical order, each on a separate page. Each user-callable routine is described by function and calling parameters. All common blocks that are used in the display library are listed and the use of each variable within each common block is discussed. A reference for the include files that are necessary to compile the display library is included. Each include file and its purpose are listed. The link map for MOVIE.BYU version 6, a general purpose computer graphics display system that uses RGDL software, is also included.

  1. MILC Code Performance on High End CPU and GPU Supercomputer Clusters

    NASA Astrophysics Data System (ADS)

    DeTar, Carleton; Gottlieb, Steven; Li, Ruizi; Toussaint, Doug

    2018-03-01

    With recent developments in parallel supercomputing architecture, many core, multi-core, and GPU processors are now commonplace, resulting in more levels of parallelism, memory hierarchy, and programming complexity. It has been necessary to adapt the MILC code to these new processors starting with NVIDIA GPUs, and more recently, the Intel Xeon Phi processors. We report on our efforts to port and optimize our code for the Intel Knights Landing architecture. We consider performance of the MILC code with MPI and OpenMP, and optimizations with QOPQDP and QPhiX. For the latter approach, we concentrate on the staggered conjugate gradient and gauge force. We also consider performance on recent NVIDIA GPUs using the QUDA library.
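
    The staggered conjugate gradient mentioned above is, at its core, the standard CG iteration for a Hermitian positive-definite operator. A generic dense-numpy sketch of that iteration follows; it is not MILC-, QUDA- or QPhiX-specific code.

        import numpy as np

        def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
            """Solve A x = b for symmetric positive-definite A."""
            x = np.zeros_like(b)
            r = b - A @ x
            p = r.copy()
            rs_old = r @ r
            for _ in range(max_iter):
                Ap = A @ p
                alpha = rs_old / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                rs_new = r @ r
                if np.sqrt(rs_new) < tol:
                    break
                p = r + (rs_new / rs_old) * p
                rs_old = rs_new
            return x

        rng = np.random.default_rng(4)
        M = rng.normal(size=(100, 100))
        A = M @ M.T + 100 * np.eye(100)      # well-conditioned SPD test matrix
        b = rng.normal(size=100)
        x = conjugate_gradient(A, b)
        print("residual norm:", np.linalg.norm(b - A @ x))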

  2. FIEStool: Automated data reduction for FIber-fed Echelle Spectrograph (FIES)

    NASA Astrophysics Data System (ADS)

    Stempels, Eric; Telting, John

    2017-08-01

    FIEStool automatically reduces data obtained with the FIber-fed Echelle Spectrograph (FIES) at the Nordic Optical Telescope, a high-resolution spectrograph available on a stand-by basis, while also allowing the basic properties of the reduction to be controlled in real time by the user. It provides a Graphical User Interface and offers bias subtraction, flat-fielding, scattered-light subtraction, and specialized reduction tasks from the external packages IRAF (ascl:9911.002) and NumArray. The core of FIEStool is instrument-independent; the software, written in Python, could with minor modifications also be used for automatic reduction of data from other instruments.
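
    The bias-subtraction and flat-fielding steps FIEStool automates reduce, per pixel, to calibrated = (raw − master_bias) / normalized_flat. The numpy sketch below shows that generic step with synthetic frames; it is not FIEStool's implementation.

        import numpy as np

        rng = np.random.default_rng(5)
        shape = (512, 512)

        # Synthetic frames standing in for real CCD data.
        bias_frames = [rng.normal(300, 5, shape) for _ in range(5)]
        flat_frames = [rng.normal(20000, 150, shape) + 300 for _ in range(5)]
        raw_science = rng.normal(5000, 70, shape) + 300

        # Master bias: median-combine the bias frames.
        master_bias = np.median(bias_frames, axis=0)

        # Master flat: bias-subtract, median-combine, and normalize to unit mean.
        flat = np.median([f - master_bias for f in flat_frames], axis=0)
        master_flat = flat / flat.mean()

        # Calibrated science frame.
        calibrated = (raw_science - master_bias) / master_flat
        print("calibrated frame mean:", calibrated.mean())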

  3. Performance of MCNP4A on seven computing platforms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hendricks, J.S.; Brockhoff, R.C.

    1994-12-31

    The performance of seven computer platforms has been evaluated with the MCNP4A Monte Carlo radiation transport code. For the first time we report timing results using MCNP4A and its new test set and libraries. Comparisons are made on platforms not available to us in previous MCNP timing studies. By using MCNP4A and its 25-problem test set, a widely-used and readily-available physics production code is used; the timing comparison is not limited to a single "typical" problem, demonstrating the problem dependence of timing results; the results are reproducible at the more than 100 installations around the world using MCNP; comparison of performance of other computer platforms to the ones tested in this study is possible because we present raw data rather than normalized results; and a measure of the increase in performance of computer hardware and software over the past two years is possible. The computer platforms reported are the Cray-YMP 8/64, IBM RS/6000-560, Sun Sparc10, Sun Sparc2, HP/9000-735, 4 processor 100 MHz Silicon Graphics ONYX, and Gateway 2000 model 4DX2-66V PC. In 1991 a timing study of MCNP4, the predecessor to MCNP4A, was conducted using ENDF/B-V cross-section libraries, which are export protected. The new study is based upon the new MCNP 25-problem test set which utilizes internationally available data. MCNP4A, its test problems and the test data library are available from the Radiation Shielding and Information Center in Oak Ridge, Tennessee, or from the NEA Data Bank in Saclay, France. Anyone with the same workstation and compiler can get the same test problem sets, the same library files, and the same MCNP4A code from RSIC or NEA and replicate our results. And, because we report raw data, comparison of the performance of other computer platforms and compilers can be made.

  4. The ENSDF Java Package

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sonzogni, A.A.

    2005-05-24

    A package of computer codes has been developed to process and display nuclear structure and decay data stored in the ENSDF (Evaluated Nuclear Structure Data File) library. The codes were written in an object-oriented fashion using the Java language. This allows for an easy implementation across multiple platforms as well as deployment on web pages. The structure of the different Java classes that make up the package is discussed as well as several different implementations.

  5. Carbon Nanotube Growth Rate Regression using Support Vector Machines and Artificial Neural Networks

    DTIC Science & Technology

    2014-03-27

    intensity D peak. Reprinted with permission from [38]. The SVM classifier is trained using custom written Java code leveraging the Sequential Minimal... Society. Encog is a machine learning framework for Java, C++ and .Net applications that supports Bayesian Networks, Hidden Markov Models, SVMs and ANNs [13]... SVM classifiers are trained using Weka libraries and leveraging custom written Java code. The data set is created as an Attribute Relationship File

  6. Arabic Natural Language Processing System Code Library

    DTIC Science & Technology

    2014-06-01

    Code Compilation; 4. Training Instructions; 5. Applying the System to New Examples; 6. License; 7. History; 8. Important Note; 9. Papers to... a slightly different English dependency scheme and contained a variety of improvements. However, the PropBank-style SRL module was not maintained... than those in the http://sourceforge.net/projects/miacp/ release.) 8. Important Note: This release contains a variety of bug fixes and other generally

  7. Wilson and Domainwall Kernels on Oakforest-PACS

    NASA Astrophysics Data System (ADS)

    Kanamori, Issaku; Matsufuru, Hideo

    2018-03-01

    We report the performance of Wilson and Domainwall kernels on a new Intel Xeon Phi Knights Landing based machine named Oakforest-PACS, which is co-hosted by the University of Tokyo and the University of Tsukuba and is currently the fastest in Japan. This machine uses Intel Omni-Path for the internode network. We compare performance for several types of implementation, including one that makes use of the Grid library. The code is incorporated into the code set Bridge++.

  8. Combining Open-Source Packages for Planetary Exploration

    NASA Astrophysics Data System (ADS)

    Schmidt, Albrecht; Grieger, Björn; Völk, Stefan

    2015-04-01

    The science planning of the ESA Rosetta mission has presented challenges which were addressed with combining various open-source software packages, such as the SPICE toolkit, the Python language and the Web graphics library three.js. The challenge was to compute certain parameters from a pool of trajectories and (possible) attitudes to describe the behaviour of the spacecraft. To be able to do this declaratively and efficiently, a C library was implemented that allows to interface the SPICE toolkit for geometrical computations from the Python language and process as much data as possible during one subroutine call. To minimise the lines of code one has to write special care was taken to ensure that the bindings were idiomatic and thus integrate well into the Python language and ecosystem. When done well, this very much simplifies the structure of the code and facilitates the testing for correctness by automatic test suites and visual inspections. For rapid visualisation and confirmation of correctness of results, the geometries were visualised with the three.js library, a popular Javascript library for displaying three-dimensional graphics in a Web browser. Programmatically, this was achieved by generating data files from SPICE sources that were included into templated HTML and displayed by a browser, thus made easily accessible to interested parties at large. As feedback came and new ideas were to be explored, the authors benefited greatly from the design of the Python-to-SPICE library which allowed the expression of algorithms to be concise and easier to communicate. In summary, by combining several well-established open-source tools, we were able to put together a flexible computation and visualisation environment that helped communicate and build confidence in planning ideas.

  9. A Dynamic Finite Element Method for Simulating the Physics of Faults Systems

    NASA Astrophysics Data System (ADS)

    Saez, E.; Mora, P.; Gross, L.; Weatherley, D.

    2004-12-01

    We introduce a dynamic Finite Element method using a novel high-level scripting language to describe the physical equations, boundary conditions and time integration scheme. The library we use is the parallel Finley library: a finite element kernel library designed for solving large-scale problems. It is incorporated as a differential equation solver into a more general library called escript, based on the scripting language Python. This library has been developed to facilitate the rapid development of 3D parallel codes, and is optimised for the Australian Computational Earth Systems Simulator Major National Research Facility (ACcESS MNRF) supercomputer, a 208 processor SGI Altix with a peak performance of 1.1 TFlops. Using the scripting approach we obtain a parallel FE code able to take advantage of the computational efficiency of the Altix 3700. We consider faults as material discontinuities (the displacement, velocity, and acceleration fields are discontinuous at the fault), with elastic behavior. The stress continuity at the fault is achieved naturally through the expression of the fault interactions in the weak formulation. The elasticity problem is solved explicitly in time, using a Verlet scheme. Finally, we specify a suitable frictional constitutive relation and numerical scheme to simulate fault behaviour. Our model is based on previous work on modelling fault friction and multi-fault systems using lattice solid-like models. We adapt the 2D model for simulating the dynamics of parallel fault systems, previously described for the lattice solid model, to the Finite Element method. The approach uses a frictional relation along faults that is slip and slip-rate dependent, and the numerical integration approach introduced by Mora and Place in the lattice solid model. In order to illustrate the new Finite Element model, single and multi-fault simulation examples are presented.
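
    A minimal sketch of the explicit time-integration idea used in such elastodynamic simulations: a 1D chain of lumped masses connected by linear springs, advanced with a velocity-Verlet step. It illustrates the scheme only; it is not the Finley/escript implementation and contains no fault or friction model.

        import numpy as np

        n, k, m, dt, steps = 100, 1.0, 1.0, 0.05, 2000
        u = np.zeros(n)           # displacements
        v = np.zeros(n)           # velocities
        u[0] = 1.0                # initial perturbation at one end

        def internal_force(u):
            """Linear elastic forces of a free-free 1D spring chain."""
            f = np.zeros_like(u)
            strain = np.diff(u)
            f[:-1] += k * strain
            f[1:] -= k * strain
            return f

        for _ in range(steps):
            # Velocity-Verlet / central-difference explicit update.
            a = internal_force(u) / m
            v_half = v + 0.5 * dt * a
            u = u + dt * v_half
            a_new = internal_force(u) / m
            v = v_half + 0.5 * dt * a_new

        print("final displacement at far end:", u[-1])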

  10. Umbra (core)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bradley, Jon David; Oppel III, Fred J.; Hart, Brian E.

    Umbra is a flexible simulation framework for complex systems that can be used by itself for modeling, simulation, and analysis, or to create specific applications. It has been applied to many operations, primarily dealing with robotics and system of system simulations. This version, from 4.8 to 4.8.3b, incorporates bug fixes, refactored code, and new managed C++ wrapper code that can be used to bridge new applications written in C# to the C++ libraries. The new managed C++ wrapper code includes (project/directories) BasicSimulation, CSharpUmbraInterpreter, LogFileView, UmbraAboutBox, UmbraControls, UmbraMonitor and UmbraWrapper.

  11. Rapid Assessment of Agility for Conceptual Design Synthesis

    NASA Technical Reports Server (NTRS)

    Biezad, Daniel J.

    1996-01-01

    This project consists of designing and implementing a real-time graphical interface for a workstation-based flight simulator. It is capable of creating a three-dimensional out-the-window scene of the aircraft's flying environment, with extensive information about the aircraft's state displayed in the form of a heads-up-display (HUD) overlay. The code, written in the C programming language, makes calls to Silicon Graphics' Graphics Library (GL) to draw the graphics primitives. Included in this report is a detailed description of the capabilities of the code, including graphical examples, as well as a printout of the code itself.

  12. Precision Stellar Characterization of FGKM Stars using an Empirical Spectral Library

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yee, Samuel W.; Petigura, Erik A.; Von Braun, Kaspar, E-mail: syee@caltech.edu

    Classification of stars, by comparing their optical spectra to a few dozen spectral standards, has been a workhorse of observational astronomy for more than a century. Here, we extend this technique by compiling a library of optical spectra of 404 touchstone stars observed with Keck/HIRES by the California Planet Search. The spectra have high resolution (R ≈ 60,000), high signal-to-noise ratio (S/N ≈ 150/pixel), and are registered onto a common wavelength scale. The library stars have properties derived from interferometry, asteroseismology, LTE spectral synthesis, and spectrophotometry. To address a lack of well-characterized late-K dwarfs in the literature, we measure stellar radii and temperatures for 23 nearby K dwarfs, using modeling of the spectral energy distribution and Gaia parallaxes. This library represents a uniform data set spanning the spectral types ∼M5–F1 (T_eff ≈ 3000–7000 K, R_⋆ ≈ 0.1–16 R_⊙). We also present “Empirical SpecMatch” (SpecMatch-Emp), a tool for parameterizing unknown spectra by comparing them against our spectral library. For FGKM stars, SpecMatch-Emp achieves accuracies of 100 K in effective temperature (T_eff), 15% in stellar radius (R_⋆), and 0.09 dex in metallicity ([Fe/H]). Because the code relies on empirical spectra it performs particularly well for stars ∼K4 and later, which are challenging to model with existing spectral synthesizers, reaching accuracies of 70 K in T_eff, 10% in R_⋆, and 0.12 dex in [Fe/H]. We also validate the performance of SpecMatch-Emp, finding it to be robust at lower spectral resolution and S/N, enabling the characterization of faint late-type stars. Both the library and stellar characterization code are publicly available.
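
    The core of the library-matching approach can be sketched as a chi-square comparison of a target spectrum against each library spectrum, followed by a weighted average of the best matches' parameters. This is an illustration of the idea only, not the SpecMatch-Emp code; all arrays are placeholders.

        import numpy as np

        rng = np.random.default_rng(6)
        n_pix, n_lib = 4000, 404

        # Placeholder library: spectra plus (Teff, radius, [Fe/H]) for each star.
        lib_spectra = rng.normal(1.0, 0.02, (n_lib, n_pix))
        lib_params = np.column_stack([rng.uniform(3000, 7000, n_lib),
                                      rng.uniform(0.1, 16.0, n_lib),
                                      rng.uniform(-0.5, 0.5, n_lib)])

        # Placeholder target: one library star plus noise.
        truth = 42
        err = 0.005
        target = lib_spectra[truth] + rng.normal(0, err, n_pix)

        # Chi-square of the target against every library spectrum.
        chi2 = np.sum((lib_spectra - target) ** 2 / err**2, axis=1)

        # Parameters from an inverse-chi-square weighted mean of the 5 best matches.
        best = np.argsort(chi2)[:5]
        weights = 1.0 / chi2[best]
        teff, radius, feh = (weights @ lib_params[best]) / weights.sum()
        print(f"Teff ~ {teff:.0f} K, R ~ {radius:.2f} Rsun, [Fe/H] ~ {feh:.2f}")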

  13. New H-band Stellar Spectral Libraries for the SDSS-III/APOGEE Survey

    NASA Astrophysics Data System (ADS)

    Zamora, O.; García-Hernández, D. A.; Allende Prieto, C.; Carrera, R.; Koesterke, L.; Edvardsson, B.; Castelli, F.; Plez, B.; Bizyaev, D.; Cunha, K.; García Pérez, A. E.; Gustafsson, B.; Holtzman, J. A.; Lawler, J. E.; Majewski, S. R.; Manchado, A.; Mészáros, Sz.; Shane, N.; Shetrone, M.; Smith, V. V.; Zasowski, G.

    2015-06-01

    The Sloan Digital Sky Survey-III (SDSS-III) Apache Point Observatory Galactic Evolution Experiment (APOGEE) has obtained high-resolution (R ∼ 22,500), high signal-to-noise ratio (> 100) spectra in the H-band (∼1.5-1.7 μm) for about 146,000 stars in the Milky Way galaxy. We have computed spectral libraries with effective temperature (T_eff) ranging from 3500 to 8000 K for the automated chemical analysis of the survey data. The libraries, used to derive stellar parameters and abundances from the APOGEE spectra in the SDSS-III data release 12 (DR12), are based on ATLAS9 model atmospheres and the ASSɛT spectral synthesis code. We present a second set of libraries based on MARCS model atmospheres and the spectral synthesis code Turbospectrum. The ATLAS9/ASSɛT (T_eff = 3500-8000 K) and MARCS/Turbospectrum (T_eff = 3500-5500 K) grids cover a wide range of metallicity (-2.5 ≤ [M/H] ≤ +0.5 dex), surface gravity (0 ≤ log g ≤ 5 dex), microturbulence (0.5 ≤ ξ ≤ 8 km s-1), carbon (-1 ≤ [C/M] ≤ +1 dex), nitrogen (-1 ≤ [N/M] ≤ +1 dex), and α-element (-1 ≤ [α/M] ≤ +1 dex) variations, having thus seven dimensions. We compare the ATLAS9/ASSɛT and MARCS/Turbospectrum libraries and apply both of them to the analysis of the observed H-band spectra of the Sun and the K2 giant Arcturus, as well as to a selected sample of well-known giant stars observed at very high resolution. The new APOGEE libraries are publicly available and can be employed for chemical studies in the H-band using other high-resolution spectrographs.

  14. Construction and characterization of normalized cDNA libraries by 454 pyrosequencing and estimation of DNA methylation levels in three distantly related termite species.

    PubMed

    Hayashi, Yoshinobu; Shigenobu, Shuji; Watanabe, Dai; Toga, Kouhei; Saiki, Ryota; Shimada, Keisuke; Bourguignon, Thomas; Lo, Nathan; Hojo, Masaru; Maekawa, Kiyoto; Miura, Toru

    2013-01-01

    In termites, division of labor among castes, categories of individuals that perform specialized tasks, increases colony-level productivity and is the key to their ecological success. Although molecular studies on caste polymorphism have been performed in termites, we are far from a comprehensive understanding of the molecular basis of this phenomenon. To facilitate future molecular studies, we aimed to construct expressed sequence tag (EST) libraries covering wide ranges of gene repertoires in three representative termite species, Hodotermopsis sjostedti, Reticulitermes speratus and Nasutitermes takasagoensis. We generated normalized cDNA libraries from whole bodies, except for guts containing microbes, of almost all castes, sexes and developmental stages and sequenced them with the 454 GS FLX titanium system. We obtained >1.2 million quality-filtered reads yielding >400 million bases for each of the three species. Isotigs, which are analogous to individual transcripts, and singletons were produced by assembling the reads and annotated using public databases. Genes related to juvenile hormone, which plays crucial roles in caste differentiation of termites, were identified from the EST libraries by BLAST search. To explore the potential for DNA methylation, which plays an important role in caste differentiation of honeybees, tBLASTn searches for DNA methyltransferases (dnmt1, dnmt2 and dnmt3) and methyl-CpG binding domain (mbd) were performed against the EST libraries. All four of these genes were found in the H. sjostedti library, while all except dnmt3 were found in R. speratus and N. takasagoensis. The ratio of the observed to the expected CpG content (CpG O/E), which is a proxy for DNA methylation level, was calculated for the coding sequences predicted from the isotigs and singletons. In all of the three species, the majority of coding sequences showed depletion of CpG O/E (less than 1), and the distributions of CpG O/E were bimodal, suggesting the presence of DNA methylation.
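
    CpG O/E as used above is the observed CpG dinucleotide frequency divided by the frequency expected from the C and G content, CpG O/E = (N_CpG · L) / (N_C · N_G). A short sketch for one coding sequence (the sequence below is a placeholder):

        def cpg_oe(seq):
            """Observed/expected CpG ratio of a DNA sequence."""
            seq = seq.upper()
            L = len(seq)
            n_c = seq.count("C")
            n_g = seq.count("G")
            n_cpg = seq.count("CG")
            if n_c == 0 or n_g == 0:
                return float("nan")
            return (n_cpg * L) / (n_c * n_g)

        # Placeholder coding sequence; values below 1 indicate CpG depletion,
        # the signature of germline DNA methylation discussed above.
        cds = "ATGGCGTACGTTCCGGATCCATGCGCGTATTAA"
        print(f"CpG O/E = {cpg_oe(cds):.2f}")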

  15. Construction and Characterization of Normalized cDNA Libraries by 454 Pyrosequencing and Estimation of DNA Methylation Levels in Three Distantly Related Termite Species

    PubMed Central

    Hayashi, Yoshinobu; Shigenobu, Shuji; Watanabe, Dai; Toga, Kouhei; Saiki, Ryota; Shimada, Keisuke; Bourguignon, Thomas; Lo, Nathan; Hojo, Masaru; Maekawa, Kiyoto; Miura, Toru

    2013-01-01

    In termites, division of labor among castes, categories of individuals that perform specialized tasks, increases colony-level productivity and is the key to their ecological success. Although molecular studies on caste polymorphism have been performed in termites, we are far from a comprehensive understanding of the molecular basis of this phenomenon. To facilitate future molecular studies, we aimed to construct expressed sequence tag (EST) libraries covering wide ranges of gene repertoires in three representative termite species, Hodotermopsis sjostedti, Reticulitermes speratus and Nasutitermes takasagoensis. We generated normalized cDNA libraries from whole bodies, except for guts containing microbes, of almost all castes, sexes and developmental stages and sequenced them with the 454 GS FLX titanium system. We obtained >1.2 million quality-filtered reads yielding >400 million bases for each of the three species. Isotigs, which are analogous to individual transcripts, and singletons were produced by assembling the reads and annotated using public databases. Genes related to juvenile hormone, which plays crucial roles in caste differentiation of termites, were identified from the EST libraries by BLAST search. To explore the potential for DNA methylation, which plays an important role in caste differentiation of honeybees, tBLASTn searches for DNA methyltransferases (dnmt1, dnmt2 and dnmt3) and methyl-CpG binding domain (mbd) were performed against the EST libraries. All four of these genes were found in the H. sjostedti library, while all except dnmt3 were found in R. speratus and N. takasagoensis. The ratio of the observed to the expected CpG content (CpG O/E), which is a proxy for DNA methylation level, was calculated for the coding sequences predicted from the isotigs and singletons. In all of the three species, the majority of coding sequences showed depletion of CpG O/E (less than 1), and the distributions of CpG O/E were bimodal, suggesting the presence of DNA methylation. PMID:24098800

  16. System and method for integrating and accessing multiple data sources within a data warehouse architecture

    DOEpatents

    Musick, Charles R [Castro Valley, CA; Critchlow, Terence [Livermore, CA; Ganesh, Madhaven [San Jose, CA; Slezak, Tom [Livermore, CA; Fidelis, Krzysztof [Brentwood, CA

    2006-12-19

    A system and method is disclosed for integrating and accessing multiple data sources within a data warehouse architecture. The metadata formed by the present method provide a way to declaratively present domain specific knowledge, obtained by analyzing data sources, in a consistent and useable way. Four types of information are represented by the metadata: abstract concepts, databases, transformations and mappings. A mediator generator automatically generates data management computer code based on the metadata. The resulting code defines a translation library and a mediator class. The translation library provides a data representation for domain specific knowledge represented in a data warehouse, including "get" and "set" methods for attributes that call transformation methods and derive a value of an attribute if it is missing. The mediator class defines methods that take "distinguished" high-level objects as input and traverse their data structures and enter information into the data warehouse.

  17. Preparation macroconstants to simulate the core of VVER-1000 reactor

    NASA Astrophysics Data System (ADS)

    Seleznev, V. Y.

    2017-01-01

    A dynamic model is used in simulators of the VVER-1000 reactor for training operating staff and students. The DYNCO code is used to simulate the neutron-physical characteristics; it allows calculations of stationary, transient and emergency processes in real time for different reactor lattice geometries [1]. To perform calculations with this code, macroconstants must be prepared for each FA. One way of obtaining macroconstants is to use the WIMS code, which is based on its own 69-group constants library. This paper presents the results of FA calculations obtained with the WIMS code for the VVER-1000 reactor with different fuel and coolant parameters, as well as the method used to select energy groups for the subsequent calculation of macroconstants.
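
    Preparing few-group macroconstants from a fine-group library amounts to flux-weighted collapsing, σ_G = Σ_g σ_g φ_g / Σ_g φ_g over the fine groups g belonging to broad group G. A small sketch of that step (the 69-group data below are random placeholders, not WIMS output, and the broad-group boundaries are an arbitrary illustrative choice):

        import numpy as np

        rng = np.random.default_rng(7)
        n_fine = 69
        sigma_fine = rng.uniform(0.1, 10.0, n_fine)   # placeholder fine-group cross sections
        phi_fine = rng.uniform(0.0, 1.0, n_fine)      # placeholder fine-group flux spectrum

        # Broad-group boundaries expressed as fine-group index ranges (placeholder choice).
        broad_groups = [(0, 14), (14, 45), (45, 69)]

        for G, (lo, hi) in enumerate(broad_groups, start=1):
            sigma_G = np.sum(sigma_fine[lo:hi] * phi_fine[lo:hi]) / np.sum(phi_fine[lo:hi])
            print(f"broad group {G}: collapsed sigma = {sigma_G:.3f}")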

  18. Fast Computation of the Two-Point Correlation Function in the Age of Big Data

    NASA Astrophysics Data System (ADS)

    Pellegrino, Andrew; Timlin, John

    2018-01-01

    We present a new code which quickly computes the two-point correlation function for large sets of astronomical data. This code combines the ease of use of Python with the speed of parallel shared libraries written in C. We include the capability to compute the auto- and cross-correlation statistics, and allow the user to calculate the three-dimensional and angular correlation functions. Additionally, the code automatically divides the user-provided sky masks into contiguous subsamples of similar size, using the HEALPix pixelization scheme, for the purpose of resampling. Errors are computed using jackknife and bootstrap resampling in a way that adds negligible extra runtime, even with many subsamples. We demonstrate comparable speed with other clustering codes, and code accuracy compared to known and analytic results.
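
    Two-point correlation functions of this kind are typically estimated from data-data, data-random and random-random pair counts, for example with the Landy-Szalay estimator ξ(r) = (DD − 2DR + RR)/RR. A brute-force numpy/scipy sketch of that estimator with placeholder catalogues (no parallel C extension, no HEALPix resampling):

        import numpy as np
        from scipy.spatial.distance import pdist, cdist

        def pair_counts(d1, d2, bins):
            """Histogram of pair separations; auto-counts if d2 is None."""
            dists = pdist(d1) if d2 is None else cdist(d1, d2).ravel()
            return np.histogram(dists, bins=bins)[0].astype(float)

        rng = np.random.default_rng(8)
        data = rng.uniform(0, 100, (2000, 3))       # placeholder galaxy positions (Mpc)
        rand = rng.uniform(0, 100, (4000, 3))       # random catalogue
        bins = np.linspace(1, 30, 16)

        dd = pair_counts(data, None, bins) / (len(data) * (len(data) - 1) / 2)
        rr = pair_counts(rand, None, bins) / (len(rand) * (len(rand) - 1) / 2)
        dr = pair_counts(data, rand, bins) / (len(data) * len(rand))

        xi = (dd - 2 * dr + rr) / rr                 # Landy-Szalay estimator
        for r_lo, r_hi, x in zip(bins[:-1], bins[1:], xi):
            print(f"{r_lo:5.1f}-{r_hi:5.1f} Mpc: xi = {x:+.3f}")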

  19. FLY MPI-2: a parallel tree code for LSS

    NASA Astrophysics Data System (ADS)

    Becciani, U.; Comparato, M.; Antonuccio-Delogu, V.

    2006-04-01

    New version program summary
    Program title: FLY 3.1
    Catalogue identifier: ADSC_v2_0
    Licensing provisions: yes
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADSC_v2_0
    Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
    No. of lines in distributed program, including test data, etc.: 158 172
    No. of bytes in distributed program, including test data, etc.: 4 719 953
    Distribution format: tar.gz
    Programming language: Fortran 90, C
    Computer: Beowulf cluster, PC, MPP systems
    Operating system: Linux, AIX
    RAM: 100M words
    Catalogue identifier of previous version: ADSC_v1_0
    Journal reference of previous version: Comput. Phys. Comm. 155 (2003) 159
    Does the new version supersede the previous version?: yes
    Nature of problem: FLY is a parallel collisionless N-body code for the calculation of the gravitational force.
    Solution method: FLY is based on the hierarchical oct-tree domain decomposition introduced by Barnes and Hut (1986).
    Reasons for the new version: The new version of FLY is implemented using the MPI-2 standard; the distributed version 3.1 was developed with the MPICH2 library on a PC Linux cluster. The current performance of FLY places it among the most powerful parallel codes for tree N-body simulations. Another important new feature is the availability of an interface with hydrodynamical Paramesh-based codes. Simulations must follow a box large enough to accurately represent the power spectrum of fluctuations on very large scales, so that we may hope to compare them meaningfully with real data. The number of particles then sets the mass resolution of the simulation, which we would like to make as fine as possible. Building an interface between two codes with different and complementary cosmological tasks allows us to execute complex cosmological simulations with FLY, specialized for dark-matter evolution, coupled to a code specialized for the hydrodynamical components that uses a Paramesh block structure.
    Summary of revisions: The parallel communication scheme was completely changed. The new version adopts the MPICH2 library, so FLY can now be executed on all Unix systems that provide an MPI-2 standard library. The main data structure is declared in a module procedure of FLY (the fly_h.F90 routine). FLY creates an MPI window object for one-sided communication for each of the shared arrays, with a call like the following: CALL MPI_WIN_CREATE(POS, SIZE, REAL8, MPI_INFO_NULL, MPI_COMM_WORLD, WIN_POS, IERR). The following main window objects are created: win_pos, win_vel, win_acc (particle positions, velocities and accelerations); win_pos_cell, win_mass_cell, win_quad, win_subp, win_grouping (cell positions, masses, quadrupole moments, tree structure and grouping cells). Other windows are created for dynamic load balancing and global counters.
    Restrictions: The program uses the leapfrog integration scheme, but this can be changed by the user.
    Unusual features: FLY uses the MPI-2 standard; the MPICH2 library on Linux systems was adopted. To run this version of FLY, the working directory must be shared among all the processors that execute FLY.
    Additional comments: Full documentation for the program is included in the distribution in the form of a README file, a User Guide and a Reference manuscript.
    Running time: Performance tests were run on the IBM Linux Cluster 1350 at CINECA (512 nodes with 2 processors per node and 2 GB of RAM per processor). Processor type: Intel Xeon Pentium IV 3.0 GHz with 512 KB cache (128 nodes have Nocona processors). Internal network: Myricom LAN card, "C" and "D" versions. Operating system: Linux SuSE SLES 8. The code was compiled with the mpif90 compiler version 8.1 and basic optimization options, in order to obtain performance figures that can be meaningfully compared with other generic clusters.
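
    For readers more familiar with Python than Fortran, the following mpi4py sketch mirrors the one-sided window creation quoted in the summary of revisions; the array sizes and the read pattern are illustrative assumptions, not FLY's actual data layout.

      from mpi4py import MPI
      import numpy as np

      # Rough mpi4py analogue of the quoted Fortran call
      # CALL MPI_WIN_CREATE(POS, SIZE, REAL8, MPI_INFO_NULL, MPI_COMM_WORLD, WIN_POS, IERR):
      # each rank exposes its particle-position array through an RMA window so that
      # any other rank can read it with one-sided Get operations.
      comm = MPI.COMM_WORLD
      n_local = 1000                                   # particles owned by this rank (illustrative)
      pos = np.zeros(3 * n_local, dtype=np.float64)    # local positions (x, y, z)

      # Expose `pos` for one-sided access; disp_unit is the size of one REAL*8 in bytes.
      win_pos = MPI.Win.Create(pos, disp_unit=8, comm=comm)

      # Read the first particle of rank 0 from any rank (passive-target RMA).
      buf = np.empty(3, dtype=np.float64)
      win_pos.Lock(0, MPI.LOCK_SHARED)
      win_pos.Get([buf, MPI.DOUBLE], 0, target=(0, 3, MPI.DOUBLE))
      win_pos.Unlock(0)

      win_pos.Free()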

  20. VVER-440 and VVER-1000 reactor dosimetry benchmark - BUGLE-96 versus ALPAN VII.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duo, J. I.

    2011-07-01

    Document available in abstract form only; full text of document follows: Analytical results of the vodo-vodyanoi energetichesky reactor (VVER-) 440 and VVER-1000 reactor dosimetry benchmarks, developed from engineering mockups at the Nuclear Research Institute Rez LR-0 reactor, are discussed. These benchmarks provide accurate determination of radiation field parameters in the vicinity and over the thickness of the reactor pressure vessel. Measurements are compared to results calculated with two sets of tools: the TORT discrete ordinates code with the BUGLE-96 cross-section library versus the newly developed Westinghouse RAPTOR-M3G code with ALPAN VII.0. The parallel code RAPTOR-M3G produces detailed neutron distributions in energy and space in reduced computational time. The ALPAN VII.0 cross-section library is based on ENDF/B-VII.0 and is designed for reactor dosimetry applications. It uses a unique broad-group structure to enhance resolution in the thermal-neutron-energy range compared to other analogous libraries. The comparison of fast-neutron (E > 0.5 MeV) results shows good agreement (within 10%) between the BUGLE-96 and ALPAN VII.0 libraries. Furthermore, the results compare well with analogous results of participants of the REDOS program (2005). Finally, the analytical results for fast neutrons agree within 15% with the measurements for most locations in all three mockups. In general, however, the analytical results underestimate the attenuation through the reactor pressure vessel thickness compared to the measurements. (authors)
