Science.gov

Sample records for parallel pcg package

  1. Improving matrix-vector product performance and multi-level preconditioning for the parallel PCG package

    SciTech Connect

    McLay, R.T.; Carey, G.F.

    1996-12-31

    In this study we consider the parallel solution of sparse linear systems arising from discretized PDEs. As part of our continuing work on our parallel PCG solver package, we have made improvements in two areas. The first is improving the performance of the matrix-vector product: on regular finite-difference grids, we are able to use cache memory more efficiently for smaller domains or where there are multiple degrees of freedom. The second problem of interest in the present work is the construction of preconditioners in the context of the parallel PCG solver we are developing. Here the problem is partitioned over a set of processor subdomains and the matrix-vector product for PCG is carried out in parallel for overlapping grid subblocks. For scaled-speedup problems, the actual rate of convergence of the unpreconditioned system deteriorates as the mesh is refined. Multigrid and subdomain strategies provide a logical approach to resolving this problem. We consider the parallel trade-offs between communication and computation and provide a complexity analysis of a representative algorithm. Some preliminary calculations using the parallel package and comparisons with other preconditioners are provided, together with parallel performance results.
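
    The matrix-vector product described above lends itself to a matrix-free stencil formulation. Below is a minimal, illustrative sketch (not code from the PCG package) of a 5-point finite-difference matrix-vector product on a regular 2-D grid; operating on whole array slices keeps memory accesses contiguous, the kind of cache behavior the authors are optimizing.

```python
# Illustrative sketch only: matrix-free matvec for the 2-D 5-point Laplacian
# on a regular grid. Slice operations touch memory contiguously.
import numpy as np

def stencil_matvec(u):
    """Apply the 5-point Laplacian stencil to a 2-D grid of unknowns u."""
    v = 4.0 * u
    v[1:, :]  -= u[:-1, :]   # subtract north neighbor
    v[:-1, :] -= u[1:, :]    # subtract south neighbor
    v[:, 1:]  -= u[:, :-1]   # subtract west neighbor
    v[:, :-1] -= u[:, 1:]    # subtract east neighbor
    return v

u = np.random.rand(64, 64)
print(stencil_matvec(u).shape)   # (64, 64)
```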

  2. PCG: A software package for the iterative solution of linear systems on scalar, vector and parallel computers

    SciTech Connect

    Joubert, W.; Carey, G.F.

    1994-12-31

    A great need exists for high performance numerical software libraries transportable across parallel machines. This talk concerns the PCG package, which solves systems of linear equations by iterative methods on parallel computers. The features of the package are discussed, as well as the techniques used to obtain both high performance and transportability across architectures. Representative numerical results are presented for several machines, including the Connection Machine CM-5, Intel Paragon and Cray T3D parallel computers.

  3. PCG reference manual: A package for the iterative solution of large sparse linear systems on parallel computers. Version 1.0

    SciTech Connect

    Joubert, W.D.; Carey, G.F.; Kohli, H.; Lorber, A.; McLay, R.T.; Shen, Y.; Berner, N.A.; Kalhan, A.

    1995-01-01

    PCG (Preconditioned Conjugate Gradient package) is a system for solving linear equations of the form Au = b, for A a given matrix and b and u vectors. PCG, employing various gradient-type iterative methods coupled with preconditioners, is designed for general linear systems, with emphasis on sparse systems such as those arising from the discretization of partial differential equations in physical applications. It can be used to solve linear equations efficiently on parallel computer architectures. Much of the code is reusable across architectures and the package is portable across different systems; the machines that are currently supported are listed. This manual is intended to be the general-purpose reference describing all features of the package accessible to the user; suggestions are also given regarding which methods to use for a given problem.

  4. A parallel PCG solver for MODFLOW.

    PubMed

    Dong, Yanhui; Li, Guomin

    2009-01-01

    In order to simulate large-scale ground water flow problems more efficiently with MODFLOW, the OpenMP programming paradigm was used in this study to parallelize the preconditioned conjugate-gradient (PCG) solver. Incremental parallelization, the significant advantage afforded by OpenMP on a shared-memory computer, allowed the solver to be transformed into a parallel program smoothly, one block of code at a time. The parallel PCG solver, suitable for both MODFLOW-2000 and MODFLOW-2005, was verified using an 8-processor computer. Both the impact of compilers and different model domain sizes were considered in the numerical experiments. Based on the timing results, the parallel PCG solver typically runs about 1.40 to 5.31 times faster than the serial one. In addition, the simulation results are exactly the same as those of the original PCG solver, because the majority of the serial code was not changed. It is worth noting that this parallelizing approach reduces cost in terms of software maintenance because only a single source PCG solver code needs to be maintained in the MODFLOW source tree. PMID:19563427

  5. HOPSPACK: Hybrid Optimization Parallel Search Package.

    SciTech Connect

    Gray, Genetha Anne; Kolda, Tamara G.; Griffin, Joshua; Taddy, Matt; Martinez-Canales, Monica L.

    2008-12-01

    In this paper, we describe the technical details of HOPSPACK (Hybrid Optimization Parallel Search Package), a new software platform which facilitates combining multiple optimization routines into a single, tightly-coupled, hybrid algorithm that supports parallel function evaluations. The framework is designed such that existing optimization source code can be easily incorporated with minimal code modification. By maintaining the integrity of each individual solver, the strengths and code sophistication of the original optimization package are retained and exploited.

  6. Hybrid Optimization Parallel Search PACKage

    2009-11-10

    HOPSPACK is open source software for solving optimization problems without derivatives. Application problems may have a fully nonlinear objective function, bound constraints, and linear and nonlinear constraints. Problem variables may be continuous, integer-valued, or a mixture of both. The software provides a framework that supports any derivative-free type of solver algorithm. Through the framework, solvers request parallel function evaluation, which may use MPI (multiple machines) or multithreading (multiple processors/cores on one machine). The framework provides a Cache and Pending Cache of saved evaluations that reduces execution time and facilitates restarts. Solvers can dynamically create other algorithms to solve subproblems, a useful technique for handling multiple start points and integer-valued variables. HOPSPACK ships with the Generating Set Search (GSS) algorithm, developed at Sandia as part of the APPSPACK open source software project.
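
    A minimal sketch of the two framework services named above, an evaluation cache combined with multithreaded function evaluation. The objective function and cache layout here are hypothetical stand-ins, not the HOPSPACK C++ API.

```python
# Sketch (assumed design, not HOPSPACK source): cache saved evaluations and
# evaluate only the new points, in parallel threads.
from concurrent.futures import ThreadPoolExecutor

cache = {}  # maps evaluated points to saved objective values

def objective(x):
    # Stand-in objective; the framework treats this as an external black box.
    return sum(xi ** 2 for xi in x)

def evaluate_batch(points):
    """Evaluate a batch in parallel threads, skipping cached points."""
    new = [p for p in set(points) if p not in cache]
    with ThreadPoolExecutor(max_workers=4) as pool:
        for p, f in zip(new, pool.map(objective, new)):
            cache[p] = f
    return [cache[p] for p in points]

print(evaluate_batch([(1.0, 2.0), (0.0, 0.0), (1.0, 2.0)]))  # last point is a cache hit
```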

  7. High density packaging and interconnect of massively parallel image processors

    NASA Technical Reports Server (NTRS)

    Carson, John C.; Indin, Ronald J.

    1991-01-01

    This paper presents conceptual designs for high density packaging of parallel processing systems. The systems fall into two categories: global memory systems where many processors are packaged into a stack, and distributed memory systems where a single processor and many memory chips are packaged into a stack. Thermal behavior and performance are discussed.

  8. AZTEC: A parallel iterative package for solving linear systems

    SciTech Connect

    Hutchinson, S.A.; Shadid, J.N.; Tuminaro, R.S.

    1996-12-31

    We describe a parallel linear system package, AZTEC. The package incorporates a number of parallel iterative methods (e.g. GMRES, biCGSTAB, CGS, TFQMR) and preconditioners (e.g. Jacobi, Gauss-Seidel, polynomial, domain decomposition with LU or ILU within subdomains). Additionally, AZTEC allows for the reuse of previous preconditioning factorizations within Newton schemes for nonlinear methods. Currently, a number of different users are using this package to solve a variety of PDE applications.

  9. Shift: A Massively Parallel Monte Carlo Radiation Transport Package

    SciTech Connect

    Pandya, Tara M; Johnson, Seth R; Davidson, Gregory G; Evans, Thomas M; Hamilton, Steven P

    2015-01-01

    This paper discusses the massively-parallel Monte Carlo radiation transport package, Shift, developed at Oak Ridge National Laboratory. It reviews the capabilities, implementation, and parallel performance of this code package. Scaling results demonstrate very good strong and weak scaling behavior of the implemented algorithms. Benchmark results from various reactor problems show that Shift results compare well to other contemporary Monte Carlo codes and experimental results.

  10. JPARSS: A Java Parallel Network Package for Grid Computing

    SciTech Connect

    Chen, Jie; Akers, Walter; Chen, Ying; Watson, William

    2002-03-01

    The emergence of high speed wide area networks makes grid computing a reality. However, grid applications that need reliable data transfer still have difficulty achieving optimal TCP performance, because doing so requires tuning the TCP window size to improve bandwidth and reduce latency on a high speed wide area network. This paper presents a Java package called JPARSS (Java Parallel Secure Stream (Socket)) that divides data into partitions that are sent over several parallel Java streams simultaneously, allowing Java or Web applications to achieve optimal TCP performance in a grid environment without the necessity of tuning the TCP window size. This package enables single sign-on, certificate delegation and secure or plain-text data transfer using several security components based on X.509 certificates and SSL. Several experiments will be presented to show that using Java parallel streams is more effective than tuning the TCP window size. In addition, a simple architecture using Web services…
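
    A hedged illustration of the core idea (not the JPARSS API): partition one buffer and drive each partition over its own stream in a separate thread. socketpair() stands in for real wide-area connections.

```python
# Sketch: send one payload over several parallel streams, one thread each.
import socket, threading

data = b"x" * (1 << 20)                    # 1 MiB payload to transfer
nstreams = 4
block = len(data) // nstreams
chunks = [data[i * block:(i + 1) * block] for i in range(nstreams)]
pairs = [socket.socketpair() for _ in range(nstreams)]  # stand-ins for WAN links

def send(chunk, sock):
    sock.sendall(chunk)                    # each thread drives one stream
    sock.close()

threads = [threading.Thread(target=send, args=(c, tx))
           for c, (tx, rx) in zip(chunks, pairs)]
for t in threads:
    t.start()

received = b""
for (tx, rx), chunk in zip(pairs, chunks): # reassemble partitions in order
    buf = b""
    while len(buf) < len(chunk):
        buf += rx.recv(65536)
    received += buf
for t in threads:
    t.join()
print(received == data)                    # True: payload reassembled intact
```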

  11. P-SPARSLIB: A parallel sparse iterative solution package

    SciTech Connect

    Saad, Y.

    1994-12-31

    Iterative methods are gaining popularity in engineering and the sciences at a time when the computational environment is changing rapidly. P-SPARSLIB is a project to build a software library for sparse matrix computations on parallel computers. The emphasis is on iterative methods and the use of distributed sparse matrices, an extension of the domain decomposition approach to general sparse matrices. One of the goals of this project is to develop a software package geared towards specific applications. For example, the author will test the performance and usefulness of P-SPARSLIB modules on linear systems arising from CFD applications. Equally important is the goal of portability. In the long run, the author wishes to ensure that this package is portable across a variety of platforms, including SIMD environments and shared memory environments.

  12. (PCG) Protein Crystal Growth Canavalin

    NASA Technical Reports Server (NTRS)

    1989-01-01

    (PCG) Protein Crystal Growth Canavalin. The major storage protein of leguminous plants and a major source of dietary protein for humans and domestic animals. It is studied in efforts to enhance the nutritional value of proteins through protein engineering. It is isolated from the Jack Bean because of its potential as a nutritional substance. Principal Investigator on STS-26 was Alex McPherson.

  13. A Parallel Teaching Package for Special Education/Industrial Arts.

    ERIC Educational Resources Information Center

    Lenti, Donna M., Comp.; And Others

    This teaching package presents information and materials for use by special and industrial arts educators in teaching learning-disabled students. It may also be of use to guidance counselors and administrators for student counseling and placement. The package is comprised of two primary units. Unit 1 overviews the field of learning disabilities to…

  14. VisIt: a component based parallel visualization package

    SciTech Connect

    Ahern, S; Bonnell, K; Brugger, E; Childs, H; Meredith, J; Whitlock, B

    2000-12-18

    We are currently developing a component based, parallel visualization and graphical analysis tool for visualizing and analyzing data on two- and three-dimensional (2D, 3D) meshes. The tool consists of three primary components: a graphical user interface (GUI), a viewer, and a parallel compute engine. The components are designed to be operated in a distributed fashion, with the GUI and viewer typically running on a high performance visualization server and the compute engine running on a large parallel platform. The viewer and compute engine are both based on the Visualization Toolkit (VTK), an open source object oriented data manipulation and visualization library. The compute engine will make use of parallel extensions to VTK, based on MPI, developed by Los Alamos National Laboratory in collaboration with the originators of VTK. The compute engine will make use of meta-data so that it only operates on the portions of the data necessary to generate the image. The meta-data can either be created as the post-processing data is generated or as a pre-processing step to using VisIt. VisIt will be integrated with the VIEWS Tera-Scale Browser, which will provide a high performance visual data browsing capability based on multi-resolution techniques.

  15. (PCG) Protein Crystal Growth Porcine Elastase

    NASA Technical Reports Server (NTRS)

    1989-01-01

    (PCG) Protein Crystal Growth Porcine Elastase. This enzyme is associated with the degradation of lung tissue in people suffering from emphysema. It is useful in studying causes of this disease. Principal Investigator on STS-26 was Charles Bugg.

  16. penORNL: a parallel Monte Carlo photon and electron transport package using PENELOPE

    SciTech Connect

    Bekar, Kursat B.; Miller, Thomas Martin; Patton, Bruce W.; Weber, Charles F.

    2015-01-01

    The parallel Monte Carlo photon and electron transport code package penORNL was developed at Oak Ridge National Laboratory to enable advanced scanning electron microscope (SEM) simulations on high-performance computing systems. This paper discusses the implementations, capabilities and parallel performance of the new code package. penORNL uses PENELOPE for its physics calculations and provides all available PENELOPE features to the users, as well as some new features including source definitions specifically developed for SEM simulations, a pulse-height tally capability for detailed simulations of gamma and x-ray detectors, and a modified interaction forcing mechanism to enable accurate energy deposition calculations. The parallel performance of penORNL was extensively tested with several model problems, and very good linear parallel scaling was observed with up to 512 processors. penORNL, along with its new features, will be available for SEM simulations upon completion of the new pulse-height tally implementation.

  17. Cleanup Verification Package for the 100-F-20, Pacific Northwest Laboratory Parallel Pits

    SciTech Connect

    M. J. Appel

    2007-01-22

    This cleanup verification package documents completion of remedial action for the 100-F-20, Pacific Northwest Laboratory Parallel Pits waste site. This waste site consisted of two earthen trenches thought to have received both radioactive and nonradioactive material related to the 100-F Experimental Animal Farm.

  18. ChIP-seq Data Processing for PcG Proteins and Associated Histone Modifications.

    PubMed

    Bogdanovic, Ozren; van Heeringen, Simon J

    2016-01-01

    Chromatin Immunoprecipitation followed by massively parallel DNA sequencing (ChIP-sequencing) has emerged as an essential technique to study the genome-wide location of DNA- or chromatin-associated proteins, such as the Polycomb group (PcG) proteins. After being generated by the sequencer, raw ChIP-seq sequence reads need to be processed by a data analysis pipeline. Here we describe the computational steps required to process PcG ChIP-seq data, including alignment, peak calling, and downstream analysis. PMID:27659973

  19. JTpack90: A parallel, object-based, Fortran 90 linear algebra package

    SciTech Connect

    Turner, J.A.; Kothe, D.B.; Ferrell, R.C.

    1997-03-01

    The authors have developed an object-based linear algebra package, currently with emphasis on sparse Krylov methods, driven primarily by the needs of the Los Alamos National Laboratory parallel unstructured-mesh casting simulation tool Telluride. Support for a number of sparse storage formats, methods, and preconditioners has been implemented, driven primarily by application needs. The authors describe the object-based Fortran 90 approach, which enhances maintainability, performance, and extensibility; the parallelization approach, which uses a new portable gather/scatter library (PGSLib); and current capabilities and future plans. They also present preliminary performance results on a variety of platforms.

  20. Optimization of a parallel permutation testing function for the SPRINT R package.

    PubMed

    Petrou, Savvas; Sloan, Terence M; Mewissen, Muriel; Forster, Thorsten; Piotrowski, Michal; Dobrzelecki, Bartosz; Ghazal, Peter; Trew, Arthur; Hill, Jon

    2011-12-10

    The statistical language R and its Bioconductor package are favoured by many biostatisticians for processing microarray data. The amount of data produced by some analyses has reached the limits of many common bioinformatics computing infrastructures. High Performance Computing systems offer a solution to this issue. The Simple Parallel R Interface (SPRINT) is a package that provides biostatisticians with easy access to High Performance Computing systems and allows the addition of parallelized functions to R. Previous work has established that the SPRINT implementation of an R permutation testing function has close to optimal scaling on up to 512 processors on a supercomputer. Access to supercomputers, however, is not always possible, and so the work presented here compares the performance of the SPRINT implementation on a supercomputer with benchmarks on a range of platforms including cloud resources and a common desktop machine with multiprocessing capabilities. Copyright © 2011 John Wiley & Sons, Ltd. PMID:23335858
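
    The divide-and-recombine pattern behind the package can be sketched compactly, here with Python multiprocessing rather than SPRINT's actual R/MPI implementation: the permutations are split across worker processes and the counts of extreme statistics are summed.

```python
# Illustrative only (not SPRINT code): a two-sample permutation test whose
# permutations are divided among worker processes.
import numpy as np
from concurrent.futures import ProcessPoolExecutor

rng = np.random.default_rng(0)
x, y = rng.normal(0.3, 1, 50), rng.normal(0.0, 1, 50)
observed = x.mean() - y.mean()
pooled = np.concatenate([x, y])

def count_extreme(args):
    """Count permuted mean differences at least as extreme as observed."""
    n_perm, seed = args
    r = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_perm):
        p = r.permutation(pooled)
        hits += abs(p[:50].mean() - p[50:].mean()) >= abs(observed)
    return hits

if __name__ == "__main__":
    jobs = [(2500, s) for s in range(4)]      # 10,000 permutations over 4 workers
    with ProcessPoolExecutor(max_workers=4) as pool:
        total = sum(pool.map(count_extreme, jobs))
    print("p-value ~", total / 10000)
```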

  1. (PCG) Protein Crystal Growth Isocitrate Lyase

    NASA Technical Reports Server (NTRS)

    1989-01-01

    (PCG) Protein Crystal Growth Isocitrate Lyase. Target enzyme for fungicides. A better understanding of this enzyme should lead to the discovery of more potent fungicides to treat serious crop diseases such as rice blast. It regulates the flow of metabolic intermediates required for cell growth. Principal Investigator for STS-26 was Charles Bugg.

  2. (PCG) Protein Crystal Growth Isocitrate Lyase

    NASA Technical Reports Server (NTRS)

    1989-01-01

    (PCG) Protein Crystal Growth Isocitrate Lyase. Target enzyme for fungicides. A better understanding of this enzyme should lead to the discovery of more potent fungicides to treat serious crop diseases such as rice blast. It regulates the flow of metabolic intermediates required for cell growth. Principal Investigator on STS-26 was Charles Bugg.

  3. affyPara-a Bioconductor Package for Parallelized Preprocessing Algorithms of Affymetrix Microarray Data.

    PubMed

    Schmidberger, Markus; Vicedo, Esmeralda; Mansmann, Ulrich

    2009-07-22

    Microarray data repositories, as well as large clinical applications of gene expression, make it possible to analyse several hundred microarrays at a time. The preprocessing of large numbers of microarrays is still a challenge: the algorithms are limited by the available computer hardware. For example, building classification or prognostic rules from large microarray sets will be very time consuming. Here, preprocessing has to be part of the cross-validation and resampling strategy that is necessary to estimate the rule's prediction quality honestly. This paper proposes the new Bioconductor package affyPara for parallelized preprocessing of Affymetrix microarray data. The data can be partitioned across arrays, and parallelization of the algorithms is a straightforward consequence. The partition of data and its distribution to several nodes solves the main memory problems and accelerates preprocessing by up to a factor of 20 for 200 or more arrays. affyPara is a free and open source package, under the GPL license, available from the Bioconductor project at www.bioconductor.org. A user guide and examples are provided with the package.

  4. parallelMCMCcombine: an R package for Bayesian methods for big data and analytics.

    PubMed

    Miroshnikov, Alexey; Conlon, Erin M

    2014-01-01

    Recent advances in big data and analytics research have provided a wealth of large data sets that are too big to be analyzed in their entirety, due to restrictions on computer memory or storage size. New Bayesian methods have been developed for data sets that are large only due to large sample sizes. These methods partition big data sets into subsets and perform independent Bayesian Markov chain Monte Carlo analyses on the subsets. The methods then combine the independent subset posterior samples to estimate a posterior density given the full data set. These approaches were shown to be effective for Bayesian models including logistic regression models, Gaussian mixture models and hierarchical models. Here, we introduce the R package parallelMCMCcombine which carries out four of these techniques for combining independent subset posterior samples. We illustrate each of the methods using a Bayesian logistic regression model for simulation data and a Bayesian Gamma model for real data; we also demonstrate features and capabilities of the R package. The package assumes the user has carried out the Bayesian analysis and has produced the independent subposterior samples outside of the package. The methods are primarily suited to models with unknown parameters of fixed dimension that exist in continuous parameter spaces. We envision this tool will allow researchers to explore the various methods for their specific applications and will assist future progress in this rapidly developing field.
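
    One combination technique of the kind the package implements is consensus-style precision-weighted averaging of subset posterior draws, sketched below for a one-dimensional parameter with simulated draws. This is an illustration, not the package's R interface.

```python
# Sketch of consensus averaging: draw j from each subset posterior is
# combined as a precision-weighted average (assumes roughly Gaussian
# subposteriors; the draws here are simulated stand-ins).
import numpy as np

rng = np.random.default_rng(1)
draws = [rng.normal(loc=mu, scale=s, size=5000)        # M = 4 subset posteriors
         for mu, s in [(0.9, 0.4), (1.1, 0.5), (1.0, 0.45), (0.95, 0.5)]]

weights = np.array([1.0 / np.var(d) for d in draws])   # precision weights
combined = sum(w * d for w, d in zip(weights, draws)) / weights.sum()
print(combined.mean(), combined.std())  # approximates the full-data posterior
```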

  5. (PCG) Protein Crystal Growth Gamma-Interferon

    NASA Technical Reports Server (NTRS)

    1989-01-01

    (PCG) Protein Crystal Growth Gamma-Interferon. Stimulates the body's immune system and is used clinically in the treatment of cancer. It has potential as an anti-tumor agent against solid tumors as well as leukemias and lymphomas. It has additional utility as an anti-infective agent, with antiviral, anti-bacterial, and anti-parasitic activities. Principal Investigator on STS-26 was Charles Bugg.

  6. (PCG) Protein Crystal Growth Human Serum Albumin

    NASA Technical Reports Server (NTRS)

    1989-01-01

    (PCG) Protein Crystal Growth Human Serum Albumin. Contributes to many transport and regulatory processes and has multifunctional binding properties which range from various metals, to fatty acids, hormones, and a wide spectrum of therapeutic drugs. The most abundant protein of the circulatory system. It binds and transports an incredible variety of biological and pharmaceutical ligands throughout the blood stream. Principal Investigator on STS-26 was Larry DeLucas.

  7. Induction signatures at 67P/CG

    NASA Astrophysics Data System (ADS)

    Constantinescu, Dragos; Heinisch, Philip; Auster, Uli; Richter, Ingo; Przyklenk, Anita; Glassmeier, Karl-Heinz

    2016-04-01

    The Philae landing on the nucleus of Churyumov-Gerasimenko (67P/CG) opens up the opportunity to derive the electrical properties of the comet nucleus by taking advantage of simultaneous measurements made by Philae on the surface and by Rosetta away from the nucleus. This allows the separation of the induced part of the electromagnetic field, which carries information about the electrical conductivity distribution inside the cometary nucleus. Using the transfer function and the phase difference between the magnetic field at the nucleus surface and the magnetic field measured in orbit, we give a lower bound estimate for the mean electrical conductivity of the Churyumov-Gerasimenko nucleus.

  8. 3-D readout-electronics packaging for high-bandwidth massively paralleled imager

    DOEpatents

    Kwiatkowski, Kris; Lyke, James

    2007-12-18

    Dense, massively parallel signal processing electronics are co-packaged behind associated sensor pixels. Microchips containing a linear or bilinear arrangement of photo-sensors, together with associated complex electronics, are integrated into a simple 3-D structure (a "mirror cube"). An array of photo-sensitive cells is disposed on a stacked CMOS chip's surface at a 45.degree. angle from light reflecting mirror surfaces formed on a neighboring CMOS chip surface. Image processing electronics are held within the stacked CMOS chip layers. Electrical connections couple each of said stacked CMOS chip layers and a distribution grid, the connections distributing power and signals to components associated with each stacked CMOS chip layer.

  9. Parallel distributed free-space optoelectronic computer engine using flat plug-on-top optics package

    NASA Astrophysics Data System (ADS)

    Berger, Christoph; Ekman, Jeremy T.; Wang, Xiaoqing; Marchand, Philippe J.; Spaanenburg, Henk; Kiamilev, Fouad E.; Esener, Sadik C.

    2000-05-01

    We report on ongoing work on a free-space optical interconnect system, which will demonstrate a Fast Fourier Transform calculation distributed among six processor chips. Logically, the processors are arranged in two linear chains, where each element communicates optically with its nearest neighbors. Physically, the setup consists of a large motherboard; several multi-chip carrier modules, which hold the processor/driver chips and the optoelectronic chips (arrays of lasers and detectors); and several plug-on-top optics modules, which provide the optical links between the chip carrier modules. The system design tries to satisfy numerous constraints, such as compact size, potential for mass-production, suitability for large arrays (up to 1024 parallel channels), compatibility with standard electronics fabrication and packaging technology, potential for active misalignment compensation by integrating MEMS technology, and suitability for testing different imaging topologies. We present the system architecture together with details of key components and modules, and report on first experiences with prototype modules of the setup.

  10. A package of Linux scripts for the parallelization of Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Badal, Andreu; Sempau, Josep

    2006-09-01

    …also provided.
    Typical running time: The execution time of each script largely depends on the number of computers used, the actions to be performed and, to a lesser extent, on the network connection bandwidth.
    Unusual features of the program: Any computer on the Internet with a Secure Shell client/server program installed can be used as a node of a virtual computer cluster for parallel calculations with the sequential source code. The simplicity of the parallelization scheme makes the use of this package a straightforward task, which does not require installing any additional libraries.

    Program summary 2
    Title of program: seedsMLCG
    Catalogue identifier: ADYE_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADYE_v1_0
    Program obtainable from: CPC Program Library, Queen's University of Belfast, Northern Ireland
    Computer for which the program is designed and others on which it is operable: Any computer with a FORTRAN compiler
    Operating systems under which the program has been tested: Linux (RedHat 8.0, SuSe 8.1, Debian Woody 3.1), MS Windows (2000, XP)
    Compilers: GNU FORTRAN g77 (Linux and Windows); g95 (Linux); Intel Fortran Compiler 7.1 (Linux); Compaq Visual Fortran 6.1 (Windows)
    Programming language used: FORTRAN 77
    No. of bits in a word: 32
    Memory required to execute with typical data: 500 kilobytes
    No. of lines in distributed program, including test data, etc.: 492
    No. of bytes in distributed program, including test data, etc.: 5582
    Distribution format: tar.gz
    Nature of the physical problem: Statistically independent results from different runs of a Monte Carlo code can be obtained using uncorrelated sequences of random numbers on each execution. Multiplicative linear congruential generators (MLCG), or other generators that are based on them such as RANECU, can be adapted to produce these sequences.
    Method of solution: For a given MLCG, the presented program calculates initialization values that produce disjoint, consecutive sequences of pseudo…
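
    The seed-calculation idea summarized under "Method of solution" can be sketched compactly: for an MLCG x_{n+1} = a·x_n mod m, the state N steps ahead is a^N·x_0 mod m, so modular exponentiation yields starting seeds for disjoint, consecutive subsequences. The constants below are those of the first RANECU component generator (L'Ecuyer); the run count and sequence length are arbitrary choices for this example.

```python
# Sketch of the idea behind seedsMLCG (assumed from the summary above, not
# the FORTRAN source): skip-ahead seeds via 3-argument pow().
a, m = 40014, 2147483563        # first RANECU component generator
x0 = 12345                      # user-chosen base seed
runs, length = 8, 10**12        # 8 parallel runs, 10**12 draws reserved each

# Seed for run k is the generator state k*length steps ahead of x0.
seeds = [(pow(a, k * length, m) * x0) % m for k in range(runs)]
print(seeds)
```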

  11. Chromosomal Distribution of PcG Proteins during Drosophila Development

    PubMed Central

    Nègre, Nicolas; Hennetin, Jérôme; Sun, Ling V; Lavrov, Sergey; Bellis, Michel; White, Kevin P

    2006-01-01

    Polycomb group (PcG) proteins are able to maintain the memory of silent transcriptional states of homeotic genes throughout development. In Drosophila, they form multimeric complexes that bind to specific DNA regulatory elements named PcG response elements (PREs). To date, few PREs have been identified and the chromosomal distribution of PcG proteins during development is unknown. We used chromatin immunoprecipitation (ChIP) with genomic tiling path microarrays to analyze the binding profile of the PcG proteins Polycomb (PC) and Polyhomeotic (PH) across 10 Mb of euchromatin. We also analyzed the distribution of GAGA factor (GAF), a sequence-specific DNA binding protein that is found at most previously identified PREs. Our data show that PC and PH often bind to clustered regions within large loci that encode transcription factors which play multiple roles in developmental patterning and in the regulation of cell proliferation. GAF co-localizes with PC and PH to a limited extent, suggesting that GAF is not a necessary component of chromatin at PREs. Finally, the chromosome-association profile of PC and PH changes during development, suggesting that the function of these proteins in the regulation of some of their target genes might be more dynamic than previously anticipated. PMID:16613483

  12. ADaCGH: A Parallelized Web-Based Application and R Package for the Analysis of aCGH Data

    PubMed Central

    Díaz-Uriarte, Ramón; Rueda, Oscar M.

    2007-01-01

    Background: Copy number alterations (CNAs) in genomic DNA have been associated with complex human diseases, including cancer. One of the most common techniques to detect CNAs is array-based comparative genomic hybridization (aCGH). The availability of aCGH platforms and the need for identification of CNAs have resulted in a wealth of methodological studies. Methodology/Principal Findings: ADaCGH is an R package and a web-based application for the analysis of aCGH data. It implements eight methods for the detection of CNAs, gains and losses of genomic DNA, including all of the best performing ones from two recent reviews (CBS, GLAD, CGHseg, HMM). For improved speed, we use parallel computing (via MPI). Additional information (GO terms, PubMed citations, KEGG and Reactome pathways) is available for individual genes, and for sets of genes with altered copy numbers. Conclusions/Significance: ADaCGH represents a qualitative increase in the standards of these types of applications: a) all of the best performing algorithms are included, not just one or two; b) we do not limit ourselves to providing a thin layer of CGI on top of existing BioConductor packages, but instead carefully use parallelization, examining different schemes, and are able to achieve significant decreases in user waiting time (factors of up to 45×); c) we have added functionality not currently available in some methods, to adapt to recent recommendations (e.g., merging of segmentation results in wavelet-based and CGHseg algorithms); d) we incorporate redundancy, fault-tolerance and checkpointing, which are unique among web-based, parallelized applications; e) all of the code is available under open source licenses, allowing others to build upon, copy, and adapt our code for other software projects. PMID:17710137

  13. Polycomb Group (PcG) Proteins and Human Cancers: Multifaceted Functions and Therapeutic Implications

    PubMed Central

    Wang, Wei; Qin, Jiang-Jiang; Voruganti, Sukesh; Nag, Subhasree; Zhou, Jianwei; Zhang, Ruiwen

    2016-01-01

    Polycomb group (PcG) proteins are transcriptional repressors that regulate several crucial developmental and physiological processes in the cell. More recently, they have been found to play important roles in human carcinogenesis and cancer development and progression. The deregulation and dysfunction of PcG proteins often lead to blocking or inappropriate activation of developmental pathways, enhancing cellular proliferation, inhibiting apoptosis, and increasing the cancer stem cell population. Genetic and molecular investigations of PcG proteins have long been focused on their PcG functions. However, PcG proteins have recently been shown to exert non-polycomb functions, contributing to the regulation of diverse cellular functions. We and others have demonstrated that PcG proteins regulate the expression and function of several oncogenes and tumor suppressor genes in a PcG-independent manner, and PcG proteins are associated with the survival of patients with cancer. In this review, we summarize the recent advances in the research on PcG proteins, including both the polycomb-repressive and non-polycomb functions. We specifically focus on the mechanisms by which PcG proteins play roles in cancer initiation, development, and progression. Finally, we discuss the potential value of PcG proteins as molecular biomarkers for the diagnosis and prognosis of cancer, and as molecular targets for cancer therapy. PMID:26227500

  14. Lamin A/C sustains PcG protein architecture, maintaining transcriptional repression at target genes

    PubMed Central

    Cesarini, Elisa; Mozzetta, Chiara; Marullo, Fabrizia; Gregoretti, Francesco; Gargiulo, Annagiusi; Columbaro, Marta; Cortesi, Alice; Antonelli, Laura; Di Pelino, Simona; Squarzoni, Stefano; Palacios, Daniela; Zippo, Alessio; Bodega, Beatrice; Oliva, Gennaro

    2015-01-01

    Beyond its role in providing structure to the nuclear envelope, lamin A/C is involved in transcriptional regulation. However, its cross talk with epigenetic factors—and how this cross talk influences physiological processes—is still unexplored. Key epigenetic regulators of development and differentiation are the Polycomb group (PcG) of proteins, organized in the nucleus as microscopically visible foci. Here, we show that lamin A/C is evolutionarily required for correct PcG protein nuclear compartmentalization. Confocal microscopy supported by new algorithms for image analysis reveals that lamin A/C knock-down leads to PcG protein foci disassembly and PcG protein dispersion. This causes detachment from chromatin and defects in PcG protein–mediated higher-order structures, thereby leading to impaired PcG protein repressive functions. Using myogenic differentiation as a model, we found that reduced levels of lamin A/C at the onset of differentiation led to an anticipation of the myogenic program because of an alteration of PcG protein–mediated transcriptional repression. Collectively, our results indicate that lamin A/C can modulate transcription through the regulation of PcG protein epigenetic factors. PMID:26553927

  15. Exclusion of primary congenital glaucoma (PCG) from two candidate regions of chromosomes 1 and 6

    SciTech Connect

    Sarfarazi, M.; Akarsu, A.N.; Barsoum-Homsy, M.

    1994-09-01

    PCG is a genetically heterogeneous condition in which a significant proportion of families show autosomal recessive inheritance. Although the association of PCG with chromosomal abnormalities has been repeatedly reported in the literature, the chromosomal location of this condition is still unknown. Therefore, this study is designed to identify the chromosomal location of the PCG locus by positional mapping. We have identified 80 PCG families with a total of 261 potentially informative meioses. A group of 19 pedigrees with a minimum of 2 affected children in each pedigree and consanguinity in most of the parental generation was selected as our initial screening panel. This panel consists of a total of 44 affected and 93 unaffected individuals, giving a total of 99 informative meioses, including 5 phase-known. We used the polymerase chain reaction (PCR), denaturing polyacrylamide gels and silver staining to genotype our families. We first screened for markers on 1q21-q31, the reported location for juvenile primary open-angle glaucoma, and excluded a region of 30 cM as the likely site for the PCG locus. Association of PCG with both ring chromosome 6 and HLA-B8 has also been reported. Therefore, we genotyped our PCG panel with PCR-applicable markers from 6p21. Significant negative lod scores were obtained for D6S105 (Z = -18.70) and D6S306 (Z = -5.99) at θ = 0.001. The HLA class I region also contains one of the tubulin genes (TUBB), which is an obvious candidate for PCG. Study of this gene revealed a significant negative lod score with PCG (Z = -16.74, θ = 0.001). A multipoint linkage analysis of markers in this and other regions containing the candidate genes will be presented.

  16. Plots, Calculations and Graphics Tools (PCG2). Software Transfer Request Presentation

    NASA Technical Reports Server (NTRS)

    Richardson, Marilou R.

    2010-01-01

    This slide presentation reviews the development of the Plots, Calculations and Graphics Tools (PCG2) system. PCG2 is an easy to use tool that provides a single user interface to view data in a pictorial, tabular or graphical format. It allows the user to view the same display and data in the Control Room, engineering office area, or remote sites. PCG2 supports extensive and regular engineering needs that are both planned and unplanned and it supports the ability to compare, contrast and perform ad hoc data mining over the entire domain of a program's test data.

  17. PRECONDITIONED CONJUGATE-GRADIENT 2 (PCG2), a computer program for solving ground-water flow equations

    USGS Publications Warehouse

    Hill, Mary C.

    1990-01-01

    This report documents PCG2: a numerical code to be used with the U.S. Geological Survey modular three-dimensional, finite-difference, ground-water flow model. PCG2 uses the preconditioned conjugate-gradient method to solve the equations produced by the model for hydraulic head. Linear or nonlinear flow conditions may be simulated. PCG2 includes two preconditioning options: modified incomplete Cholesky preconditioning, which is efficient on scalar computers; and polynomial preconditioning, which requires less computer storage and, with modifications that depend on the computer used, is most efficient on vector computers. Convergence of the solver is determined using both head-change and residual criteria. Nonlinear problems are solved using Picard iterations. This documentation provides a description of the preconditioned conjugate-gradient method and the two preconditioners, detailed instructions for linking PCG2 to the modular model, sample data inputs, a brief description of PCG2, and a FORTRAN listing.
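
    A compact sketch of the method PCG2 documents (not the FORTRAN code itself): preconditioned conjugate gradients with the dual head-change/residual convergence test described above. A Jacobi (diagonal) preconditioner stands in for PCG2's modified incomplete Cholesky and polynomial options.

```python
# Hedged sketch of preconditioned conjugate gradients with a PCG2-style
# double convergence criterion (max head change and max residual).
import numpy as np

def pcg(A, b, tol_head=1e-6, tol_res=1e-6, maxit=500):
    x = np.zeros_like(b)
    r = b - A @ x
    Minv = 1.0 / np.diag(A)            # Jacobi preconditioner (stand-in)
    z = Minv * r
    p = z.copy()
    for _ in range(maxit):
        Ap = A @ p
        alpha = (r @ z) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        # Converged only when both the head change and the residual are small.
        if np.max(np.abs(alpha * p)) < tol_head and np.max(np.abs(r_new)) < tol_res:
            break
        z_new = Minv * r_new
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
    return x

A = np.array([[4.0, -1, 0], [-1, 4, -1], [0, -1, 4]])
b = np.array([1.0, 2.0, 3.0])
print(pcg(A, b), np.linalg.solve(A, b))    # the two solutions agree
```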

  18. Iterative methods for the WLS state estimation on RISC, vector, and parallel computers

    SciTech Connect

    Nieplocha, J.; Carroll, C.C.

    1993-10-01

    We investigate the suitability and effectiveness of iterative methods for solving the weighted-least-squares (WLS) state estimation problem on RISC, vector, and parallel processors. Several of the most popular iterative methods are tested and evaluated. The best performing method, the preconditioned conjugate gradient (PCG), is very well suited to vector and parallel processing, as is demonstrated for the WLS state estimation of the IEEE standard test systems. A new sparse matrix format for the gain matrix improves the vector performance of the PCG algorithm and makes it competitive with the direct solver. Internal parallelism in the RISC processors used in current multiprocessor systems can also be exploited in an implementation of this algorithm.
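
    The paper's gain-matrix format is its own design; for comparison, the sketch below shows the widely used compressed sparse row (CSR) layout applied to the sparse matrix-vector product that dominates each PCG iteration.

```python
# Illustrative CSR matvec (not the paper's format): y = A @ x with A stored
# as (data, indices, indptr).
import numpy as np

def csr_matvec(data, indices, indptr, x):
    """Sparse matrix-vector product for a CSR-stored matrix."""
    y = np.zeros(len(indptr) - 1)
    for i in range(len(y)):                # one dot product per matrix row
        lo, hi = indptr[i], indptr[i + 1]
        y[i] = data[lo:hi] @ x[indices[lo:hi]]
    return y

# CSR encoding of [[4,-1,0],[-1,4,-1],[0,-1,4]]
data    = np.array([4.0, -1, -1, 4, -1, -1, 4])
indices = np.array([0, 1, 0, 1, 2, 1, 2])
indptr  = np.array([0, 2, 5, 7])
print(csr_matvec(data, indices, indptr, np.array([1.0, 2, 3])))  # [2. 4. 10.]
```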

  19. Parallelization of Four-Component Calculations. I. Integral Generation, SCF, and Four-Index Transformation in the Dirac-Fock Package MOLFDIR.

    SciTech Connect

    Pernpointner, M.; Visscher, Lucas; De Jong, Wibe A.; Broer, R.

    2000-10-01

    The treatment of relativity and electron correlation on an equal footing is essential for the computation of systems containing heavy elements. Correlation treatments that are based on four-component Dirac-Hartree-Fock calculations presently provide the most accurate, albeit costly, way of taking relativity into account. The requirement of having two expansion basis sets for the molecular wave function puts a high demand on computer resources. The treatment of larger systems is thereby often prohibited by the very large run times and files that arise in a conventional Dirac-Hartree-Fock approach. A possible solution to this bottleneck is a parallel approach that not only reduces the turnaround time but also spreads the large files over a number of local disks. Here, we present a distributed-memory parallelization of the program package MOLFDIR for the integral generation, Dirac-Hartree-Fock, and four-index transformation steps. This implementation scales best for large AO spaces and moderately sized active spaces.

  20. BerkeleyGW: A massively parallel computer package for the calculation of the quasiparticle and optical properties of materials and nanostructures

    NASA Astrophysics Data System (ADS)

    Deslippe, Jack; Samsonidze, Georgy; Strubbe, David A.; Jain, Manish; Cohen, Marvin L.; Louie, Steven G.

    2012-06-01

    BerkeleyGW is a massively parallel computational package for electron excited-state properties that is based on the many-body perturbation theory employing the ab initio GW and GW plus Bethe-Salpeter equation methodology. It can be used in conjunction with many density-functional theory codes for ground-state properties, including PARATEC, PARSEC, Quantum ESPRESSO, SIESTA, and Octopus. The package can be used to compute the electronic and optical properties of a wide variety of material systems from bulk semiconductors and metals to nanostructured materials and molecules. The package scales to 10 000s of CPUs and can be used to study systems containing up to 100s of atoms.

    Program summary
    Program title: BerkeleyGW
    Catalogue identifier: AELG_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AELG_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Open source BSD License. See code for licensing details.
    No. of lines in distributed program, including test data, etc.: 576 540
    No. of bytes in distributed program, including test data, etc.: 110 608 809
    Distribution format: tar.gz
    Programming language: Fortran 90, C, C++, Python, Perl, BASH
    Computer: Linux/UNIX workstations or clusters
    Operating system: Tested on a variety of Linux distributions in parallel and serial, as well as AIX and Mac OSX
    RAM: (50-2000) MB per CPU (highly dependent on system size)
    Classification: 7.2, 7.3, 16.2, 18
    External routines: BLAS, LAPACK, FFTW, ScaLAPACK (optional), MPI (optional). All available under open-source licenses.
    Nature of problem: The excited state properties of materials involve the addition or subtraction of electrons as well as the optical excitations of electron-hole pairs. The excited particles interact strongly with other electrons in a material system. This interaction affects the electronic energies, wavefunctions and lifetimes. It is well known that ground-state theories, such as standard methods…

  1. The global surface composition of 67P/CG nucleus by Rosetta/VIRTIS. (I) Prelanding mission phase

    NASA Astrophysics Data System (ADS)

    Filacchione, Gianrico; Capaccioni, Fabrizio; Ciarniello, Mauro; Raponi, Andrea; Tosi, Federico; De Sanctis, Maria Cristina; Erard, Stéphane; Morvan, Dominique Bockelée; Leyrat, Cedric; Arnold, Gabriele; Schmitt, Bernard; Quirico, Eric; Piccioni, Giuseppe; Migliorini, Alessandra; Capria, Maria Teresa; Palomba, Ernesto; Cerroni, Priscilla; Longobardo, Andrea; Barucci, Antonella; Fornasier, Sonia; Carlson, Robert W.; Jaumann, Ralf; Stephan, Katrin; Moroz, Lyuba V.; Kappel, David; Rousseau, Batiste; Fonti, Sergio; Mancarella, Francesca; Despan, Daniela; Faure, Mathilde

    2016-08-01

    The parallel coordinates method (Inselberg [1985] Vis. Comput., 1, 69-91) has been used to identify associations between average values of the spectral indicators and the properties of the geomorphological units as defined by Thomas et al. ([2015] Science, 347, 6220) and El-Maarry et al. ([2015] Astron. Astrophys., 583, A26). Three classes have been identified (smooth/active areas, dust covered areas and depressions), which can be clustered on the basis of the 3.2 μm organic material band depth, while consolidated terrains show high variability of their spectral properties and are distributed across all three classes. These results show that the spectral variability of the nucleus surface is more variegated than the morphological classes and that 67P/CG surface properties are dynamical, changing with heliocentric distance and with activity processes.
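
    The parallel coordinates technique cited above (Inselberg 1985) is straightforward to reproduce; the sketch below uses pandas' built-in plot with made-up spectral-indicator values for the three classes. The column names and numbers are illustrative, not VIRTIS data.

```python
# Sketch: parallel-coordinates view of per-unit spectral indicators,
# colored by surface class (all values invented for illustration).
import pandas as pd
import matplotlib.pyplot as plt
from pandas.plotting import parallel_coordinates

df = pd.DataFrame({
    "class":            ["smooth/active", "dust-covered", "depression"] * 2,
    "band_depth_3.2um": [0.12, 0.08, 0.10, 0.13, 0.07, 0.11],
    "vis_slope":        [0.15, 0.22, 0.18, 0.14, 0.23, 0.17],
    "albedo":           [0.060, 0.050, 0.055, 0.062, 0.049, 0.057],
})
parallel_coordinates(df, "class")   # one polyline per unit, one axis per indicator
plt.show()
```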

  2. The internal density distribution of comet 67P/C-G based on 3D models

    NASA Astrophysics Data System (ADS)

    Jorda, Laurent; Faurschou Hviid, Stubbe; Capanna, Claire; Gaskell, Robert W.; Gutiérrez, Pedro; Preusker, Frank; Scholten, Frank; Rodionov, Sergey; OSIRIS Team

    2016-10-01

    The OSIRIS camera aboard the Rosetta spacecraft has observed the nucleus of comet 67P/C-G from the mapping phase in summer 2014 until now. The images have allowed three-dimensional reconstruction of the nucleus surface with stereophotogrammetry (Preusker et al., Astron. Astrophys.) and stereophotoclinometry (Jorda et al., Icarus) techniques. We use the reconstructed models to constrain the internal density distribution based on: (i) the measurement of the offset between the center of mass and the center of figure of the object, and (ii) the assumption that flat areas observed at the surface of the comet correspond to iso-gravity surfaces. The results of our analysis will be presented, and the consequences for the internal structure and formation of the nucleus of comet 67P/C-G will be discussed.

  3. Epidermal growth factor induces tyrosine hydroxylase in a clonal pheochromocytoma cell line, PC-G2

    SciTech Connect

    Goodman, R.; Slater, E.; Herschman, H.R.

    1980-03-01

    We have previously described the isolation of a clonal cell line (PC-G2) in which the level of tyrosine hydroxylase (TH), the enzyme catalyzing the rate-limiting step in the synthesis of the catecholamine neurotransmitters, is induced by nerve growth factor (NGF). We now report that epidermal growth factor (EGF) also induces TH in the PC-G2 cell line. Although EGF has been shown to be mitogenic for many cultured cells, no neuronal function has previously been reported for this protein. The TH response to EGF is elicited in a dose-dependent fashion at concentrations as low as 0.1 ng/ml and is maximal at 10 ng/ml EGF. The maximal response is observed after 3 to 4 d of exposure to 10 ng/ml EGF. The induction by NGF and EGF is inhibited by their respective antisera. Dexamethasone, a synthetic glucocorticoid which we have previously shown modulates the response of PC-G2 cells to NGF, also modulates the TH induction elicited by EGF.

  4. Cosmochemical implications of CONSERT permittivity characterization of 67P/C-G

    NASA Astrophysics Data System (ADS)

    Levasseur-Regourd, A.; Hérique, Alain; Kofman, Wlodek; Beck, Pierre; Bonal, Lydie; Buttarazzi, Ilaria; Heggy, Essam; Lasue, Jeremie; Quirico, Eric; Zine, Sonia

    2016-10-01

    Unique information about the internal structure of the nucleus of comet 67P/C-G was provided by the CONSERT bistatic radar on board Rosetta and Philae [1]. Analysis of the propagation of its signal through the small lobe indicated that the real part of the permittivity at 90 MHz is 1.27±0.05. The first interpretation of this value, using the dielectric properties of mixtures of dust and ices (H2O, CO2), led to the conclusion that the comet porosity ranges between 75% and 85%. In addition, the dust/ice ratio was found to range between 0.4 and 2.6, and the permittivity of the dust (including 30% porosity) was determined to be lower than 2.9. The dust permittivity estimate is now reduced by taking into account the updated values of the nucleus density and of the dust/ice ratio, in order to provide further insights into the nature of the constituents of comet 67P/C-G [2]. We adopt a systematic approach: i) determination of the dust permittivity as a function of the ice (I) to dust (D) and vacuum (V) volume fractions; ii) comparison with the permittivity of meteoritic, mineral and organic materials from the literature and laboratory measurements; iii) tests of several composition models of the nucleus, corresponding to cosmochemical end members of 67P/C-G. For each of these models the location in the ternary I/D/V diagram is calculated based on available dielectric measurements and confronted with the locus of 67P/C-G. The number of compliant models is small, and the cosmochemical implications of each are discussed [2]. An important fraction of carbonaceous material is required in the dust in order to match the CONSERT permittivity observations, establishing that comets represent a massive carbon reservoir. Support from the Centre National d'Études Spatiales (CNES, France) for this work, based on observations with CONSERT on board Rosetta, is acknowledged. The CONSERT instrument was designed, built and operated by IPAG, LATMOS and MPS and was financially supported by CNES, CNRS, UJF/UGA, DLR and…

  5. Identification and Characterization of γ-Aminobutyric Acid Uptake System GabPCg (NCgl0464) in Corynebacterium glutamicum

    PubMed Central

    Zhao, Zhi; Ma, Wen-hua; Zhou, Ning-Yi

    2012-01-01

    Corynebacterium glutamicum is widely used for industrial production of various amino acids and vitamins, and there is growing interest in engineering this bacterium for more commercial bioproducts such as γ-aminobutyric acid (GABA). In this study, a C. glutamicum GABA-specific transporter (GabPCg) encoded by ncgl0464 was identified and characterized. GabPCg plays a major role in GABA uptake and is essential to C. glutamicum growing on GABA. GABA uptake by GabPCg was weakly competed by l-Asn and l-Gln and stimulated by sodium ion (Na+). The Km and Vmax values were determined to be 41.1 ± 4.5 μM and 36.8 ± 2.6 nmol min−1 (mg dry weight [DW])−1, respectively, at pH 6.5 and 34.2 ± 1.1 μM and 67.3 ± 1.0 nmol min−1 (mg DW)−1, respectively, at pH 7.5. GabPCg has 29% amino acid sequence identity to a previously and functionally identified aromatic amino acid transporter (TyrP) of Escherichia coli but low identities to the currently known GABA transporters (17% and 15% to E. coli GabP and Bacillus subtilis GabP, respectively). The mutant RES167 Δncgl0464/pGXKZ9 with the GabPCg deletion showed 12.5% higher productivity of GABA than RES167/pGXKZ9. It is concluded that GabPCg represents a new type of GABA transporter and is potentially important for engineering GABA-producing C. glutamicum strains. PMID:22307305

  6. Can Rosetta IES Measure Charged Dust Grains at Comet 67P/C-G?

    NASA Astrophysics Data System (ADS)

    Clark, G. B.; Pollock, C. J.; Goldstein, R.; Samara, M.; Broiles, T. W.; Mandt, K.; Burch, J. L.; Sternovsky, Z.

    2014-12-01

    Comet 67P/C-G provides us with a natural laboratory in which to study the many open questions pertaining to dust-plasma interactions. The Rosetta spacecraft will follow Comet 67P/C-G along its trajectory through the inner solar system, giving us an unprecedented view of this dusty plasma environment. On board Rosetta is the Ion and Electron Sensor (IES), intended to measure plasma ions and electrons between ~4 eV/q and 22 keV/q. However, it has also been speculated that IES may be able to measure charged dust grains of the appropriate energy-per-charge (E/q). Preliminary results [Skego et al., 2014] show that some dust grains originating from the comet and then becoming charged likely possess the correct E/q for IES detection. Until now, however, the question of the microchannel plate (MCP) detection system's effectiveness and efficiency in detecting these grains has been neglected. Lacking experimental results, we use current MCP models to explore the detection efficiency of Rosetta IES for charged dust grains. We present our results, estimate fluxes, and provide a strong case for future experimental work in this field.

  7. PCG: A prototype incremental compilation facility for the SAGA environment, appendix F

    NASA Technical Reports Server (NTRS)

    Kimball, Joseph John

    1985-01-01

    A programming environment supports the activity of developing and maintaining software. New environments provide language-oriented tools such as syntax-directed editors, whose usefulness is enhanced because they embody language-specific knowledge. When syntactic and semantic analysis occur early in the cycle of program production, that is, during editing, the use of a standard compiler is inefficient, for it must re-analyze the program before generating code. Likewise, it is inefficient to recompile an entire file, when the editor can determine that only portions of it need updating. The pcg, or Pascal code generation, facility described here generates code directly from the syntax trees produced by the SAGA syntax directed Pascal editor. By preserving the intermediate code used in the previous compilation, it can limit recompilation to the routines actually modified by editing.
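
    The incremental strategy pcg uses can be illustrated with a toy fingerprint cache (a sketch, not SAGA code): code is regenerated only for routines whose syntax trees have changed since the last compilation.

```python
# Toy illustration of incremental recompilation: keep a fingerprint per
# routine and regenerate code only for routines that changed.
import hashlib

compiled = {}                                    # routine name -> fingerprint

def recompile_changed(routines):
    """routines maps name -> source/tree text; returns names recompiled."""
    out = []
    for name, text in routines.items():
        h = hashlib.sha256(text.encode()).hexdigest()
        if compiled.get(name) != h:              # new or edited routine
            compiled[name] = h                   # ...code generation goes here...
            out.append(name)
    return out

print(recompile_changed({"main": "begin end.", "f": "x := 1"}))  # both compile
print(recompile_changed({"main": "begin end.", "f": "x := 2"}))  # only 'f'
```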

  8. A comparison of different feature extraction methods for diagnosis of valvular heart diseases using PCG signals.

    PubMed

    Rouhani, M; Abdoli, R

    2012-01-01

    This article presents a novel method for the diagnosis of valvular heart disease (VHD) based on phonocardiography (PCG) signals. The application of pattern classification and of feature selection and reduction methods in analysing normal and pathological heart sounds was investigated. After signal preprocessing using independent component analysis (ICA), 32 features are extracted. These include carefully selected linear and nonlinear time domain, wavelet and entropy features. By examining different feature selection and feature reduction methods such as principal component analysis (PCA), genetic algorithms (GA), genetic programming (GP) and generalized discriminant analysis (GDA), the four most informative features are extracted. Furthermore, support vector machine (SVM) and neural network classifiers are compared for the diagnosis of pathological heart sounds. Three valvular heart diseases are considered: aortic stenosis (AS), mitral stenosis (MS) and mitral regurgitation (MR). An overall accuracy of 99.47% was achieved by the proposed algorithm.
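
    The classification stage described above can be approximated with off-the-shelf tools; the sketch below uses scikit-learn (not the authors' implementation) for dimensionality reduction followed by an SVM, with random numbers standing in for the 32 extracted PCG features.

```python
# Hedged sketch of a PCA -> SVM pipeline of the kind the paper evaluates.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 32))        # 120 recordings x 32 extracted features
y = rng.integers(0, 4, size=120)      # classes: normal, AS, MS, MR

clf = make_pipeline(PCA(n_components=4), SVC(kernel="rbf"))
print(cross_val_score(clf, X, y, cv=5).mean())  # chance-level on random data
```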

  9. Fine structure of the "PcG body" in human U-2 OS cells established by correlative light-electron microscopy.

    PubMed

    Smigová, Jana; Juda, Pavel; Cmarko, Dušan; Raška, Ivan

    2011-01-01

    Polycomb group (PcG) proteins of Polycomb repressive complex 1 (PRC1) are found diffusely distributed in the nuclei of cells from various species. However, they can also be localized in intensely fluorescent foci, whether imaged using GFP fusions to proteins of the PRC1 complex or by conventional immunofluorescence microscopy. Such foci are termed PcG bodies and are believed to be situated in the nuclear interchromatin compartment. However, an ultrastructural description of the PcG body has not been reported to date. To establish the ultrastructure of PcG bodies in human U-2 OS cells stably expressing recombinant polycomb BMI1-GFP protein, we used correlative light-electron microscopy (CLEM) implemented with high-pressure freezing, cryosubstitution and on-section labeling of BMI1 protein with immunogold. This approach allowed us to clearly identify fluorescent PcG bodies not as distinct nuclear bodies, but as nuclear domains enriched in separated heterochromatin fascicles. Importantly, high-pressure freezing and cryosubstitution allowed for high and clear-cut immunogold BMI1 labeling of heterochromatin structures throughout the nucleus. The density of immunogold-labeled BMI1 in the heterochromatin fascicles corresponding to fluorescent "PcG bodies" did not differ from the density of labeling of heterochromatin fascicles outside of the "PcG bodies". Accordingly, the appearance of the fluorescent "PcG bodies" seems to reflect a local accumulation of the labeled heterochromatin structures in the investigated cells. The results of this study should allow expansion of the knowledge about the biological relevance of the "PcG bodies" in human cells.

  10. Electronic Packaging Techniques

    NASA Technical Reports Server (NTRS)

    1979-01-01

    A characteristic of aerospace system design is that equipment size and weight must always be kept to a minimum, even in small components such as electronic packages. The dictates of spacecraft design have spawned a number of high-density packaging techniques, among them methods of connecting circuits in printed wiring boards by processes called stitchbond welding and parallel gap welding. These processes help designers compress more components into less space; they also afford weight savings and lower production costs.

  11. Reptin and Pontin function antagonistically with PcG and TrxG complexes to mediate Hox gene control

    PubMed Central

    Diop, Soda Balla; Bertaux, Karine; Vasanthi, Dasari; Sarkeshik, Ali; Goirand, Benjamin; Aragnol, Denise; Tolwinski, Nicholas S; Cole, Michael D; Pradel, Jacques; Yates, John R; Mishra, Rakesh K; Graba, Yacine; Saurin, Andrew J

    2008-01-01

    Pontin (Pont) and Reptin (Rept) are paralogous ATPases that are evolutionarily conserved from yeast to human. They are recruited in multiprotein complexes that function in various aspects of DNA metabolism. They are essential for viability and have antagonistic roles in tissue growth, cell signalling and regulation of the tumour metastasis suppressor gene, KAI1, indicating that the balance of Pont and Rept regulates epigenetic programmes critical for development and cancer progression. Here, we describe Pont and Rept as antagonistic mediators of Drosophila Hox gene transcription, functioning with Polycomb group (PcG) and Trithorax group proteins to maintain correct patterns of expression. We show that Rept is a component of the PRC1 PcG complex, whereas Pont purifies with the Brahma complex. Furthermore, the enzymatic functions of Rept and Pont are indispensable for maintaining Hox gene expression states, highlighting the importance of these two antagonistic factors in transcriptional output. PMID:18259215

  12. Growing protein crystals in microgravity - The NASA Microgravity Science and Applications Division (MSAD) Protein Crystal Growth (PCG) program

    NASA Technical Reports Server (NTRS)

    Herren, B.

    1992-01-01

    In collaboration with a medical researcher at the University of Alabama at Birmingham, NASA's Marshall Space Flight Center in Huntsville, Alabama, under the sponsorship of the Microgravity Science and Applications Division (MSAD) at NASA Headquarters, is continuing a series of space experiments in protein crystal growth which could lead to innovative new drugs as well as basic science data on protein molecular structures. From 1985 through 1992, Protein Crystal Growth (PCG) experiments will have been flown on the Space Shuttle a total of 14 times. The first four hand-held experiments were used to test hardware concepts; later flights incorporated these concepts for vapor diffusion protein crystal growth with temperature control. This article provides an overview of the PCG program: its evolution, objectives, and plans for future experiments on NASA's Space Shuttle and Space Station Freedom.

  13. Disruptive collisions as the origin of 67P/C-G and small bilobate comets

    NASA Astrophysics Data System (ADS)

    Michel, Patrick; Schwartz, Stephen R.; Jutzi, Martin; Marchi, Simone; Richardson, Derek C.; Zhang, Yun

    2016-10-01

    Images of comets sent by spacecraft have shown us that bilobate shapes seem to be common in the cometary population. This has been most recently evidenced by the images of comet 67P/C-G obtained by the ESA Rosetta mission, which show a low-density elongated body interpreted as a contact binary. The origin of such bilobate comets has been thought to be primordial because it requires the slow accretion of two bodies that become the two main components of the final object. However, slow accretion does not only occur during the primordial phase of the Solar System, but also later during the reaccumulation processes immediately following collisional disruptions of larger bodies. We perform numerical simulations of disruptions of large bodies. We demonstrate that during the ensuing gravitational phase, in which the generated fragments interact under their mutual gravity, aggregates with bi-lobed or elongated shapes form by reaccumulation at speeds that are at or below the range of those assumed in primordial accretion scenarios [1]. The same scenario has been demonstrated to occur in the asteroid belt to explain the origin of asteroid families [2] and has provided insight into the shapes of thus-far observed asteroids such as 25143 Itokawa [3]. Here we show that it is also a more general outcome that applies to disruption events in the outer Solar System. Moreover, we show that high temperature regions are very localized during the impact process, which solves the problem of the survival of organics and volatiles in the collisional process. The advantage of this scenario for the formation of small bilobate shapes, including 67P/C-G, is that it does not necessitate a primordial origin, as such disruptions can occur at later stages of the Solar System. This demonstrates how such comets can be relatively young, consistent with other studies that show that these shapes are unlikely to be formed early on and survive the entire history of the Solar System [4

  14. Jpetra Kernel Package

    SciTech Connect

    Heroux, Michael A.

    2004-03-01

    A package of classes for constructing and using distributed sparse and dense matrices, vectors and graphs, written in Java. Jpetra is intended to provide the foundation for basic matrix and vector operations for Java developers. Jpetra provides distributed memory operations via an abstract parallel machine interface. The most common implementation of this interface will be Java sockets.

  15. Block-bordered diagonalization and parallel iterative solvers

    SciTech Connect

    Alvarado, F.; Dag, H.; Bruggencate, M. ten

    1994-12-31

    One of the most common techniques for enhancing parallelism in direct sparse matrix methods is the reorganization of a matrix into a blocked-bordered structure. Incomplete LDU factorization is a very good preconditioner for PCG in serial environments. However, the inherent sequential nature of the preconditioning step makes it less desirable in parallel environments. This paper explores the use of BBD (Blocked Bordered Diagonalization) in connection with ILU preconditioners. The paper shows that BBD-based ILU preconditioners are quite amenable to parallel processing. Neglecting entries from the entire border results in a block-diagonal matrix. The result is a great increase in parallelism at the expense of additional iterations. Experiments on the Sequent Symmetry shared memory machine using (mostly) power system matrices indicate that the method is generally better than conventional ILU preconditioners and in many cases even better than partitioned inverse preconditioners, without the initial setup disadvantages of partitioned inverse preconditioners.
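
    A small serial sketch of the idea (not the paper's code): discard the border coupling, factor each diagonal block independently with ILU, and use the independent block solves, which are what parallelize, as a preconditioner for conjugate gradients. The matrix, block count and sizes below are arbitrary placeholders.

        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import LinearOperator, cg, spilu

        n, nb = 200, 4                      # problem size and number of diagonal blocks
        A = sp.diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
        step = n // nb
        blocks = [spilu(A[i:i + step, i:i + step].tocsc()) for i in range(0, n, step)]

        def apply_prec(r):
            z = np.empty_like(r)
            for k, ilu in enumerate(blocks):  # block solves are independent -> parallel
                s = slice(k * step, (k + 1) * step)
                z[s] = ilu.solve(r[s])
            return z

        M = LinearOperator((n, n), matvec=apply_prec)
        b = np.ones(n)
        x, info = cg(A, b, M=M)
        print("cg info:", info, "residual:", np.linalg.norm(A @ x - b))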

  16. How primordial is the structure of comet 67P/C-G (and of comets in general)?

    NASA Astrophysics Data System (ADS)

    Morbidelli, Alessandro; Jutzi, Martin; Benz, Willy; Toliou, Anastasia; Rickman, Hans; Bottke, William; Brasser, Ramon

    2016-10-01

    Several properties of the comet 67P-CG suggest that it is a primordial planetesimal. On the other hand, the size-frequency distribution (SFD) of the craters detected by the New Horizons mission at the surface of Pluto and Charon reveals that the SFD of trans-Neptunian objects smaller than 100 km in diameter is very similar to that of the asteroid belt. Because the asteroid belt SFD is at collisional equilibrium, this observation suggests that the SFD of the trans-Neptunian population is at collisional equilibrium as well, implying that comet-size bodies should be the product of collisional fragmentation and not primordial objects. To test whether comet 67P-CG could be a (possibly lucky) survivor of the original population, we conducted a series of numerical impact experiments, where an object with the shape and the density of 67P-CG, and material strength varying from 10 to 1,000 Pa, is hit on the "head" by a 100 m projectile at different speeds. From these experiments we derive the impact energy required to disrupt the body catastrophically, or destroy its bi-lobed shape, as a function of impact speed. Next, we consider a dynamical model where the original trans-Neptunian disk is dispersed during a phase of temporary dynamical instability of the giant planets, which successfully reproduces the scattered disk and Oort cloud populations inferred from the current fluxes of Jupiter-family and long-period comets. We find that, if the dynamical dispersal of the disk occurs late, as in the Late Heavy Bombardment hypothesis, a 67P-CG-like body has a negligible probability of avoiding all catastrophic collisions. During this phase, however, the collisional equilibrium SFD measured by the New Horizons mission can be established. Instead, if the dispersal of the disk occurred as soon as gas was removed, a 67P-CG-like body has about a 20% chance to avoid catastrophic collisions. Nevertheless, it would still undergo tens of reshaping collisions. We estimate that, statistically, the

  17. Monitoring Comet 67P/C-G Micrometer Dust Flux: GIADA onboard Rosetta.

    NASA Astrophysics Data System (ADS)

    Della Corte, Vincenzo; Rotundi, Alessandra; Ivanovski, Stavro; Accolla, Mario; Ferrari, Marco; Sordini, Roberto; Lucarelli, Francesca; Zakharov, Vladimir; Fulle, Marco; Mazzotta Epifani, Elena; López-Moreno, José J.; Rodríguez, Julio; Colangeli, Luigi; Palumbo, Pasquale; Bussoletti, Ezio; Crifo, Jean-Francois; Esposito, Francesca; Green, Simon F.; Grün, Eberhard; Lamy, Philippe L.

    2015-04-01

    The MicroBalance System (MBS) is one of the three measurement subsystems of GIADA, the Grain Impact Analyzer and Dust Accumulator on board the Rosetta/ESA spacecraft (S/C). It consists of five Quartz Crystal Microbalances (QCMs) in roughly orthogonal directions providing the cumulative dust flux of grains smaller than 10 microns. The MBS has been continuously monitoring comet 67P/CG since the beginning of May 2014. During the first 4 months of measurements, before the insertion of the S/C in the bound orbit phase, there was no evidence of dust accumulation on the QCMs. Starting from the beginning of October, three out of five QCMs measured an increase of the deposited dust. The measured fluxes show, as expected, a strong anisotropy. In particular, the dust flux appears to be much higher from the Sun direction with respect to the comet direction. Acknowledgment: GIADA was built by a consortium led by the Univ. Napoli "Parthenope" & INAF-Oss. Astr. Capodimonte, in collaboration with the Inst. de Astrofisica de Andalucia, Selex-ES, FI and SENER. GIADA is presently managed & operated by Ist. di Astrofisica e Planetologia Spaziali-INAF, IT. GIADA was funded and managed by the Agenzia Spaziale Italiana, IT, with the support of the Spanish Ministry of Education and Science MEC, ES. GIADA was developed from a PI proposal from the University of Kent; sci. & tech. contributions were provided by CISAS, IT, Lab. d'Astr. Spat., FR, and institutions from UK, IT, FR, DE and USA. We thank the RSGS/ESAC, RMOC/ESOC & Rosetta Project/ESTEC for their outstanding work. Science support was provided by NASA through the US Rosetta Project managed by the Jet Propulsion Laboratory/California Institute of Technology. GIADA calibrated data will be available through ESA's PSA web site (www.rssd.esa.int/index.php?project=PSA&page=index). We would like to thank Angioletta

  18. The impact of Polycomb group (PcG) and Trithorax group (TrxG) epigenetic factors in plant plasticity.

    PubMed

    de la Paz Sanchez, Maria; Aceves-García, Pamela; Petrone, Emilio; Steckenborn, Stefan; Vega-León, Rosario; Álvarez-Buylla, Elena R; Garay-Arroyo, Adriana; García-Ponce, Berenice

    2015-11-01

    Current advances indicate that epigenetic mechanisms play important roles in the regulatory networks involved in plant developmental responses to environmental conditions. Hence, understanding the role of such components becomes crucial to understanding the mechanisms underlying the plasticity and variability of plant traits, and thus the ecology and evolution of plant development. We now know that important components of phenotypic variation may result from heritable and reversible epigenetic mechanisms without genetic alterations. The epigenetic factors Polycomb group (PcG) and Trithorax group (TrxG) are involved in developmental processes that respond to environmental signals, playing important roles in plant plasticity. In this review, we discuss current knowledge of TrxG and PcG functions in different developmental processes in response to internal and environmental cues and we also integrate the emerging evidence concerning their function in plant plasticity. Many such plastic responses rely on meristematic cell behavior, including stem cell niche maintenance, cellular reprogramming, flowering and dormancy as well as stress memory. This information will help to determine how to integrate the role of epigenetic regulation into models of gene regulatory networks, which have mostly included transcriptional interactions underlying various aspects of plant development and its plastic response to environmental conditions.

  19. Scoring Package

    National Institute of Standards and Technology Data Gateway

    NIST Scoring Package (PC database for purchase)   The NIST Scoring Package (Special Database 1) is a reference implementation of the draft Standard Method for Evaluating the Performance of Systems Intended to Recognize Hand-printed Characters from Image Data Scanned from Forms.

  20. GIADA on-board Rosetta: comet 67P/C-G dust coma characterization

    NASA Astrophysics Data System (ADS)

    Rotundi, Alessandra; Della Corte, Vincenzo; Fulle, Marco; Sordini, Roberto; Ivanovski, Stavro; Accolla, Mario; Ferrari, Marco; Lucarelli, Francesca; Zakharov, Vladimir; Mazzotta Epifani, Elena; López-Moreno, José J.; Rodríguez, Julio; Colangeli, Luigi; Palumbo, Pasquale; Bussoletti, Ezio; Crifo, Jean-Francois; Esposito, Francesca; Green, Simon F.; Grün, Eberhard; Lamy, Philippe L.

    2015-04-01

    GIADA consists of three subsystems: 1) the Grain Detection System (GDS) to detect dust grains as they pass through a laser curtain, 2) the Impact Sensor (IS) to measure grain momentum derived from the impact on a plate connected to five piezoelectric sensors, and 3) the MicroBalances System (MBS): five quartz crystal microbalances in roughly orthogonal directions providing the cumulative dust flux of grains smaller than 10 microns. GDS provides data on grain speed and optical cross section. The IS grain momentum measurement, when combined with the GDS detection time, provides a direct measurement of grain speed and mass. These combined measurements characterize single-grain dust dynamics in the coma of 67P/CG. No prior in situ dust dynamical measurements at such close distances from the nucleus, and starting from such high heliocentric distances, are available to date. We present here the results obtained by GIADA, which began operating in continuous mode on 18 July 2014 when the comet was at a heliocentric distance of 3.7 AU. The first grain detection occurred when the spacecraft was 814 km from the nucleus on 1 August 2014. From 1 August to 11 December, GIADA detected more than 800 grains, for which the 3D spatial distribution was determined. About 700 out of 800 are GDS-only detections: "dust clouds", i.e. slow dust grains (≈ 0.5 m/s) crossing the laser curtain very close in time (e.g. 129 grains in 11 s), probably fluffy grains. IS-only detections number about 70, i.e. ≈ 1/10 of the GDS-only ones. This ratio is quite different from what we got for the early detections (August - September), when the ratio was ≈ 3, suggesting the presence of different types of particles (bigger, brighter, less dense). The combined GDS+IS detections, i.e. those measured by both the GDS and IS detectors, number about 70 and allowed us to extract the

  1. Parallel time integration software

    SciTech Connect

    2014-07-01

    This package implements an optimal-scaling multigrid solver for the (non)linear systems that arise from the discretization of problems with evolutionary behavior. Typically, solution algorithms for evolution equations are based on a time-marching approach, solving sequentially for one time step after the other. Parallelism in these traditional time-integration techniques is limited to spatial parallelism. However, current trends in computer architectures are leading towards systems with more, but not faster, processors. Therefore, faster compute speeds must come from greater parallelism. One approach to achieving parallelism in time is with multigrid, but extending classical multigrid methods for elliptic operators to this setting is a significant achievement. In this software, we implement a non-intrusive, optimal-scaling time-parallel method based on multigrid reduction techniques. The examples in the package demonstrate optimality of our multigrid-reduction-in-time algorithm (MGRIT) for solving a variety of parabolic equations in two and three spatial dimensions. These examples can also be used to show that MGRIT can achieve significant speedup in comparison to sequential time marching on modern architectures.
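
    The MGRIT algorithm itself is beyond a short sketch, but the parareal iteration below, a simpler relative of multigrid reduction in time, shows how work across time steps becomes parallel: a cheap serial coarse propagator corrects expensive fine propagations that are independent per interval. The test equation y' = -y and all step counts are illustrative only, not this package's implementation.

        import numpy as np

        T, N, y0, lam = 2.0, 10, 1.0, -1.0
        dt = T / N

        def coarse(y, dt):                  # one cheap backward-Euler step
            return y / (1.0 - lam * dt)

        def fine(y, dt, m=20):              # m accurate substeps (2nd-order Taylor)
            h = dt / m
            for _ in range(m):
                y = y * (1.0 + lam * h + (lam * h) ** 2 / 2.0)
            return y

        Y = np.zeros(N + 1); Y[0] = y0
        for n in range(N):                  # serial coarse prediction
            Y[n + 1] = coarse(Y[n], dt)

        for k in range(5):                  # parareal correction sweeps
            F = [fine(Y[n], dt) for n in range(N)]  # independent -> parallel in time
            Ynew = np.zeros(N + 1); Ynew[0] = y0
            for n in range(N):              # serial coarse correction
                Ynew[n + 1] = coarse(Ynew[n], dt) + F[n] - coarse(Y[n], dt)
            Y = Ynew

        print("error vs exact:", abs(Y[-1] - y0 * np.exp(lam * T)))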

  3. Parallel computers

    SciTech Connect

    Treleaven, P.

    1989-01-01

    This book presents an introduction to object-oriented, functional, and logic parallel computing on which the fifth generation of computer systems will be based. Coverage includes concepts for parallel computing languages, a parallel object-oriented system (DOOM) and its language (POOL), an object-oriented multilevel VLSI simulator using POOL, and implementation of lazy functional languages on parallel architectures.

  4. Parallel Adaptive Mesh Refinement Library

    NASA Technical Reports Server (NTRS)

    MacNeice, Peter; Olson, Kevin

    2005-01-01

    Parallel Adaptive Mesh Refinement Library (PARAMESH) is a package of Fortran 90 subroutines designed to provide a computer programmer with an easy route to extension of (1) a previously written serial code that uses a logically Cartesian structured mesh into (2) a parallel code with adaptive mesh refinement (AMR). Alternatively, in its simplest use, and with minimal effort, PARAMESH can operate as a domain-decomposition tool for users who want to parallelize their serial codes but who do not wish to utilize adaptivity. The package builds a hierarchy of sub-grids to cover the computational domain of a given application program, with spatial resolution varying to satisfy the demands of the application. The sub-grid blocks form the nodes of a tree data structure (a quad-tree in two or an oct-tree in three dimensions). Each grid block has a logically Cartesian mesh. The package supports one-, two- and three-dimensional models.
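
    The tree-of-blocks structure described above can be sketched in a few lines: each node covers a square patch of the domain and is either a leaf block or is split into four children (the 2D, quad-tree case). The refinement test and class names are invented for illustration and are not PARAMESH's Fortran 90 API.

        from dataclasses import dataclass, field

        @dataclass
        class Block:
            x0: float
            y0: float
            size: float
            level: int
            children: list = field(default_factory=list)

            def refine(self, needs_refinement, max_level=4):
                if self.level < max_level and needs_refinement(self):
                    h = self.size / 2
                    self.children = [Block(self.x0 + i * h, self.y0 + j * h, h, self.level + 1)
                                     for i in (0, 1) for j in (0, 1)]
                    for c in self.children:
                        c.refine(needs_refinement, max_level)

            def leaves(self):
                if not self.children:
                    yield self
                else:
                    for c in self.children:
                        yield from c.leaves()

        # Refine toward a feature near the origin; in a real AMR code each leaf
        # would carry a small logically Cartesian mesh.
        root = Block(0.0, 0.0, 1.0, 0)
        root.refine(lambda b: (b.x0 ** 2 + b.y0 ** 2) ** 0.5 < b.size)
        print(sum(1 for _ in root.leaves()), "leaf blocks")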

  5. Monitoring 67P/C-G coma dust environment from 3.6 AU in-bound to the Sun to 2 AU out-bound

    NASA Astrophysics Data System (ADS)

    Della Corte, Vincenzo; Rotundi, Alessandra; Fulle, Marco

    2016-04-01

    GIADA, on board the Rosetta/ESA space mission, is an instrument devoted to monitoring the dynamical and physical properties of the dust particles emitted by comet 67P/Churyumov-Gerasimenko (hereafter 67P/C-G) along its orbit, from 3.6 AU in-bound to the Sun to 2 AU out-bound. Since the 17th of July 2014 GIADA has been fully operative, measuring the speed and mass of individual dust particles. GIADA's capability of detecting dust particles with a high time resolution, together with the accurate characterization of the physical properties of each detected particle, allowed the identification of two different families of dust particles emitted by the 67P/C-G nucleus: compact particles with densities varying from about 100 kg/m3 to 3000 kg/m3, and fluffy particles with densities down to 1 kg/m3. GIADA's continuous monitoring of the coma dust environment of comet 67P/C-G along its orbit, accounting for the different observation geometries along the Rosetta trajectories, enabled us to: 1) investigate how the dust fluxes for each particle family evolve with respect to heliocentric distance; 2) identify the nucleus/coma regions with high dust emission/density; 3) observe the changes that these regions undergo along the comet orbit; 4) measure and monitor the dust production rate; and 5) evaluate the 67P/C-G dust-to-gas ratio by coupling GIADA measurements with the results of the Rosetta instruments devoted to gas measurements (MIRO and ROSINA).

  6. Parallel rendering

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas W.

    1995-01-01

    This article provides a broad introduction to the subject of parallel rendering, encompassing both hardware and software systems. The focus is on the underlying concepts and the issues which arise in the design of parallel rendering algorithms and systems. We examine the different types of parallelism and how they can be applied in rendering applications. Concepts from parallel computing, such as data decomposition, task granularity, scalability, and load balancing, are considered in relation to the rendering problem. We also explore concepts from computer graphics, such as coherence and projection, which have a significant impact on the structure of parallel rendering algorithms. Our survey covers a number of practical considerations as well, including the choice of architectural platform, communication and memory requirements, and the problem of image assembly and display. We illustrate the discussion with numerous examples from the parallel rendering literature, representing most of the principal rendering methods currently used in computer graphics.

  7. corto genetically interacts with Pc-G and trx-G genes and maintains the anterior boundary of Ultrabithorax expression in Drosophila larvae.

    PubMed

    Lopez, A; Higuet, D; Rosset, R; Deutsch, J; Peronnet, F

    2001-12-01

    In Drosophila melanogaster, segment identity is determined by specific expression of homeotic genes (Hox). The Hox expression pattern is first initiated by gap and pair-rule genes and then maintained by genes of the Polycomb-group (Pc-G) and the trithorax-group (trx-G). The corto gene is a putative regulator of the Hox genes since mutants exhibit homeotic transformations. We show here that, in addition to previously reported genetic interactions with the Pc-G genes Enhancer of zeste, Polycomb and polyhomeotic, mutations in corto enhance the extra-sex-comb phenotype of multi sex combs, Polycomb-like and Sex combs on midleg. corto also genetically interacts with a number of trx-G genes (ash1, kismet, kohtalo, moira, osa, Trithorax-like and Vha55). The interactions with genes of the trx-G lead to phenotypes displayed in the wing, in the postpronotum or in the thoracic mechanosensory bristles. In addition, we analyzed the regulation of the Hox gene Ultrabithorax (Ubx) in corto mutants. Our results provide evidence that corto maintains the anterior border of Ubx expression in third-instar larvae. We suggest that this regulation is accomplished through an interaction with the products of the Pc-G and trx-G genes.

  8. Dust Impact Monitor DIM Onboard Philae: Measurements at Comet 67P/C-G

    NASA Astrophysics Data System (ADS)

    Krüger, Harald; Albin, Thomas; Apathy, Istvan; Arnold, Walter; Flandes, Alberto; Fischer, Hans-Herbert; Hirn, Attila; Loose, Alexander; Peter, Attila; Seidensticker, Klaus J.; Sperl, Matthias

    2015-04-01

    The Rosetta lander Philae landed successfully on the nucleus surface of comet 67P/Churyumov-Gerasimenko on 12 November 2014. Philae is equipped with the Dust Impact Monitor (DIM) which is part of the SESAME experiment package onboard. DIM employs piezoelectric PZT sensors to detect impacts by sub-millimetre and millimetre-sized ice and dust particles that are emitted from the nucleus and transported into the cometary coma. DIM was operated during Philae's descent to its nominal landing site at 4 different altitudes above the comet surface, and at Philae's final landing site. During descent to the nominal landing site, DIM measured the impact of one rather big particle that probably had a size of a few millimeters. No impacts were detected at the final landing site which may be due to low cometary activity or due to shadowing from obstacles close to Philae, or both. We will present the results from our measurements at the comet and compare them with laboratory calibration experiments with ice/dust particles performed with a DIM flight spare sensor.

  9. CH Packaging Operations Manual

    SciTech Connect

    None, None

    2008-09-11

    This document provides the user with instructions for assembling a payload. All the steps in Subsections 1.2, Preparing 55-Gallon Drum Payload Assembly; 1.3, Preparing "Short" 85-Gallon Drum Payload Assembly (TRUPACT-II and HalfPACT); 1.4, Preparing "Tall" 85-gallon Drum Payload Assembly (HalfPACT only); 1.5, Preparing 100-Gallon Drum Payload Assembly; 1.6, Preparing SWB Payload Assembly; and 1.7, Preparing TDOP Payload Assembly, must be completed, but may be performed in any order as long as radiological control steps are not bypassed. Transport trailer operations, package loading and unloading from transport trailers, hoisting and rigging activities such as ACGLF operations, equipment checkout and shutdown, and component inspection activities must be performed, but may be performed in any order and in parallel with other activities as long as radiological control steps are not bypassed. Steps involving OCA/ICV lid removal/installation and payload removal/loading may be performed in parallel if there are multiple operators working on the same packaging. Steps involving removal/installation of OCV/ICV upper and lower main O-rings must be performed in sequence.

  10. CH Packaging Operations Manual

    SciTech Connect

    None, None

    2009-05-27

    This document provides the user with instructions for assembling a payload. All the steps in Subsections 1.2, Preparing 55-Gallon Drum Payload Assembly; 1.3, Preparing "Short" 85-Gallon Drum Payload Assembly (TRUPACT-II and HalfPACT); 1.4, Preparing "Tall" 85-Gallon Drum Payload Assembly (HalfPACT only); 1.5, Preparing 100-Gallon Drum Payload Assembly; 1.6, Preparing Shielded Container Payload Assembly; 1.7, Preparing SWB Payload Assembly; and 1.8, Preparing TDOP Payload Assembly, must be completed, but may be performed in any order as long as radiological control steps are not bypassed. Transport trailer operations, package loading and unloading from transport trailers, hoisting and rigging activities such as ACGLF operations, equipment checkout and shutdown, and component inspection activities must be performed, but may be performed in any order and in parallel with other activities as long as radiological control steps are not bypassed. Steps involving OCA/ICV lid removal/installation and payload removal/loading may be performed in parallel if there are multiple operators working on the same packaging. Steps involving removal/installation of OCV/ICV upper and lower main O-rings must be performed in sequence, except as noted.

  11. Application of Russian Thermo-Electric Devices (TEDS) for the US Microgravity Program Protein Crystal Growth (PCG) Project

    NASA Technical Reports Server (NTRS)

    Aksamentov, Valery

    1996-01-01

    Changes in the former Soviet Union have opened the gate for the exchange of new technology. Interest in this work has been particularly related to Thermo-Electric Cooling Devices (TEDs), which have an application in the Thermal Enclosure System (TES) developed by NASA. Preliminary information received by NASA/MSFC indicates that Russian TEDs have higher efficiency. Based on that assumption, NASA/MSFC awarded a contract to the University of Alabama in Huntsville (UAH) to study Russian TED technology. Fulfilling this requires a few steps: (1) define potential specifications and configurations for the use of TEDs in Protein Crystal Growth (PCG) thermal control hardware; and (2) work closely with the identified Russian source to define and identify potential Russian TEDs that exceed the performance of available domestic TEDs. Based on the data from Russia, it is possible to plan further steps such as buying and testing high-performance TEDs. To accomplish this goal, two subcontracts have been released: one to the Automated Sciences Group (ASG), located in Huntsville, AL, and one to the International Center for Advanced Studies 'Cosmos', located in Moscow, Russia.

  12. Packaged Food

    NASA Technical Reports Server (NTRS)

    1976-01-01

    After studies found that many elderly persons don't eat adequately because they can't afford to, they have limited mobility, or they just don't bother, Innovated Foods, Inc. and JSC developed shelf-stable foods processed and packaged for home preparation with minimum effort. Various food-processing techniques and delivery systems are under study and freeze dried foods originally used for space flight are being marketed. (See 77N76140)

  13. Seafood Packaging

    NASA Technical Reports Server (NTRS)

    1996-01-01

    NASA's Technology Transfer Office at Stennis Space Center worked with a New Orleans seafood packaging company to develop a container to improve the shipping longevity of seafood, primarily frozen and fresh fish, while preserving the taste. A NASA engineer developed metalized heat resistant polybags with thermal foam liners using an enhanced version of the metalized mylar commonly known as 'space blanket material,' which was produced during the Apollo era.

  14. Software For Diagnosis Of Parallel Processing

    NASA Technical Reports Server (NTRS)

    Hontalas, Philip; Yan, Jerry; Fineman, Charles

    1995-01-01

    Ames Instrumentation System (AIMS) is a computer program package of software tools for measuring and analyzing the performance of parallel-processing application programs. It helps the programmer to debug and refine, and to monitor and visualize the execution of, parallel-processing application software for the Intel iPSC/860 (or equivalent) multicomputer. Performance data collected are displayed graphically on computer workstations supporting X-Windows.

  15. Overview of the DOE packaging certification process

    SciTech Connect

    Liu, Y.Y.; Carlson, R.D.; Carlson, R.W.; Kapoor, A.

    1995-12-31

    This paper gives an overview of the DOE packaging certification process, which is implemented by the Office of Facility Safety Analysis, under the Assistant Secretary for Environment, Safety and Health, for packagings that are used neither for weapons and weapons components nor for naval nuclear propulsion. The overview will emphasize Type B packagings and the Safety Analysis Report for Packaging (SARP) review that parallels the NRC packaging review. Other important elements in the DOE packaging certification program, such as training, methods development, data bases, and technical assistance, are also emphasized, because they have contributed significantly to the improvement of the certification process since DOE consolidated its packaging certification function in 1985. The paper finishes with a discussion of the roles and functions of the DOE Packaging Safety Review Steering Committee, which is chartered to address issues and concerns of interest to the DOE packaging and transportation safety community. Two articles related to DOE packaging certification were published earlier on the SARP review procedures and the DOE Packaging Review Guide. These articles may be consulted for additional information.

  16. Massively parallel visualization: Parallel rendering

    SciTech Connect

    Hansen, C.D.; Krogh, M.; White, W.

    1995-12-01

    This paper presents rendering algorithms, developed for massively parallel processors (MPPs), for polygonal, sphere, and volumetric data. The polygon algorithm uses a data-parallel approach, whereas the sphere and volume renderers use a MIMD approach. Implementations of these algorithms are presented for the Thinking Machines Corporation CM-5 MPP.

  17. Reflective Packaging

    NASA Technical Reports Server (NTRS)

    1994-01-01

    The aluminized polymer film used in spacecraft as a radiation barrier to protect both astronauts and delicate instruments has led to a number of spinoff applications. Among them are aluminized shipping bags, food cart covers and medical bags. Radiant Technologies purchases component materials and assembles a barrier made of layers of aluminized foil. The packaging reflects outside heat away from the product inside the container. The company is developing new aluminized lines, express mailers, large shipping bags, gel packs and insulated panels for the building industry.

  18. Global and Spatially Resolved Photometric Properties of the Nucleus of Comet 67P/C-G from OSIRIS Images

    NASA Astrophysics Data System (ADS)

    Lamy, P.

    2014-04-01

    Following the successful wake-up of the ROSETTA spacecraft on 20 January 2014, the OSIRIS imaging system was fully re-commissioned at the end of March 2014, confirming its initial excellent performance. The OSIRIS instrument includes two cameras: the Narrow Angle Camera (NAC) and the Wide Angle Camera (WAC), with respective fields of view of 2.2° and 12°, both equipped with 2K by 2K CCD detectors and dual filter wheels. The NAC filters allow a spectral coverage of 270 to 990 nm tailored to the investigation of the mineralogical composition of the nucleus of comet 67P/Churyumov-Gerasimenko, whereas those of the WAC (245-632 nm) aim at characterizing its coma [1]. The NAC has already secured a set of four complete light curves of the nucleus of 67P/C-G between 3 March and 24 April 2014 with the primary purpose of characterizing its rotational state. A preliminary spin period of 12.4 hours has been obtained, similar to its very first determination from a light curve obtained in 2003 with the Hubble Space Telescope [2]. The NAC and WAC will be recalibrated in the forthcoming weeks using the same stellar calibrators, VEGA and the solar analog 16 Cyg B, as for past in-flight calibration campaigns in support of the flybys of asteroids Steins and Lutetia. This will allow comparing the pre- and post-hibernation performances of the cameras and correcting the quantum efficiency response of the two CCDs and the throughput for all channels (i.e., filters) if required. The accurate photometric analysis of the images requires utmost care due to several instrumental problems, the most severe and complex to handle being the presence of optical ghosts, which result from multiple reflections on the two filters inserted in the optical beam and on the thick window which protects the CCD detector from cosmic ray impacts. These ghosts prominently appear as either slightly defocused images offset from the primary images or large round or elliptical halos. We will first present results on the global

  19. Rosetta/VIRTIS-M spectral data: Comet 67P/CG compared to other primitive small bodies.

    NASA Astrophysics Data System (ADS)

    De Sanctis, M. C.; Capaccioni, F.; Filacchione, G.; Erard, S.; Tosi, F.; Ciarniello, M.; Raponi, A.; Piccioni, G.; Leyrat, C.; Bockelée-Morvan, D.; Drossart, P.; Fornasier, S.

    2014-12-01

    VIRTIS-M, the Visible InfraRed Thermal Imaging Spectrometer, onboard the Rosetta Mission orbiter (Coradini et al., 2007), acquired data on comet 67P/Churyumov-Gerasimenko in the 0.25-5.1 µm spectral range. The initial data, obtained during the first mission phases at the comet, allow us to derive the albedo and global spectral properties of the comet nucleus as well as spectra of different areas on the nucleus. The characterization of cometary nucleus surfaces and their comparison with those of related populations such as extinct comet candidates, Centaurs, near-Earth asteroids (NEAs), trans-Neptunian objects (TNOs), and primitive asteroids is critical to understanding the origin and evolution of small solar system bodies. The acquired VIRTIS data are used to compare the global spectral properties of comet 67P/CG to published spectra of other cometary nuclei observed from the ground or visited by space missions. Moreover, the spectra of 67P/Churyumov-Gerasimenko are also compared to those of primitive asteroids and Centaurs. The comparison can give us clues on the possible common formation and evolutionary environment of primitive asteroids, Centaurs and Jupiter-family comets. Authors acknowledge the funding from the Italian and French Space Agencies. References: Coradini, A., Capaccioni, F., Drossart, P., Arnold, G., Ammannito, E., Angrilli, F., Barucci, A., Bellucci, G., Benkhoff, J., Bianchini, G., Bibring, J. P., Blecka, M., Bockelee-Morvan, D., Capria, M. T., Carlson, R., Carsenty, U., Cerroni, P., Colangeli, L., Combes, M., Combi, M., Crovisier, J., De Sanctis, M. C., Encrenaz, E. T., Erard, S., Federico, C., Filacchione, G., Fink, U., Fonti, S., Formisano, V., Ip, W. H., Jaumann, R., Kuehrt, E., Langevin, Y., Magni, G., McCord, T., Mennella, V., Mottola, S., Neukum, G., Palumbo, P., Piccioni, G., Rauer, H., Saggin, B., Schmitt, B., Tiphene, D., Tozzi, G., Space Science Reviews, Volume 128, Issue 1-4, 529-559, 2007.

  20. Challenges in the Packaging of MEMS

    SciTech Connect

    Malshe, A.P.; Singh, S.B.; Eaton, W.P.; O'Neal, C.; Brown, W.D.; Miller, W.M.

    1999-03-26

    The packaging of Micro-Electro-Mechanical Systems (MEMS) is a field of great importance to anyone using or manufacturing sensors, consumer products, or military applications. Currently much work has been done in the design and fabrication of MEMS devices, but insufficient research and few publications have been completed on the packaging of these devices. This is despite the fact that packaging is a very large percentage of the total cost of MEMS devices. The main difference between IC packaging and MEMS packaging is that MEMS packaging is almost always application specific and greatly affected by its environment and by packaging techniques such as die handling, die attach processes, and lid sealing. Many of these aspects are directly related to the materials used in the packaging processes. MEMS devices that are functional in wafer form can be rendered inoperable after packaging. MEMS dies must be handled only from the chip sides so features on the top surface are not damaged. This eliminates most current die pick-and-place fixtures. Die attach materials are key to MEMS packaging. Using hard die attach solders can create high stresses in the MEMS devices, which can affect their operation greatly. Low-stress epoxies can be high-outgassing, which can also affect device performance. Also, a low-modulus die attach can allow the die to move during ultrasonic wirebonding, resulting in low wirebond strength. Another source of residual stress is the lid sealing process. Most MEMS-based sensors and devices require a hermetically sealed package. This can be done by parallel seam welding the package lid, but at the cost of further induced stress on the die. Another issue of MEMS packaging is the media compatibility of the packaged device. MEMS, unlike ICs, often interface with their environment, which could be high pressure or corrosive. The main conclusion we can draw about MEMS packaging is that the package affects the performance and reliability of the MEMS devices. There is a

  1. Parallel pipelining

    SciTech Connect

    Joseph, D.D.; Bai, R.; Liao, T.Y.; Huang, A.; Hu, H.H.

    1995-09-01

    In this paper the authors introduce the idea of parallel pipelining for water lubricated transportation of oil (or other viscous material). A parallel system can have major advantages over a single pipe with respect to the cost of maintenance and continuous operation of the system, to the pressure gradients required to restart a stopped system and to the reduction and even elimination of the fouling of pipe walls in continuous operation. The authors show that the action of capillarity in small pipes is more favorable for restart than in large pipes. In a parallel pipeline system, they estimate the number of small pipes needed to deliver the same oil flux as in one larger pipe as N = (R/r)^α, where r and R are the radii of the small and large pipes, respectively, and α = 4 or 19/7 when the lubricating water flow is laminar or turbulent.
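
    The quoted estimate is easy to evaluate: replacing one pipe of radius R = 0.5 m with pipes of radius r = 0.05 m (so R/r = 10) requires 10^4 small pipes for laminar lubricating flow but only about 518 for turbulent flow. The radii here are arbitrary illustrative values, not the paper's.

        R, r = 0.5, 0.05                    # large- and small-pipe radii (illustrative)
        for alpha, regime in [(4.0, "laminar"), (19.0 / 7.0, "turbulent")]:
            print(regime, round((R / r) ** alpha), "small pipes")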

  2. A parallel implementation of an EBE solver for the finite element method

    SciTech Connect

    Silva, R.P.; Las Casas, E.B.; Carvalho, M.L.B.

    1994-12-31

    A parallel implementation, using PVM on a cluster of workstations, of an Element-By-Element (EBE) solver using the Preconditioned Conjugate Gradient (PCG) method is described, along with an application to the solution of the linear systems generated from finite element analysis of a problem in three-dimensional linear elasticity. The PVM (Parallel Virtual Machine) system, developed at Oak Ridge National Laboratory, allows the construction of a parallel MIMD machine by connecting heterogeneous computers linked through a network. In this implementation, version 3.1 of PVM is used, and 11 Sun SLC workstations and a Sun SPARC-2 model are connected through Ethernet. The finite element program is based on SDP, the System for Finite Element Based Software Development, created at the Brazilian National Laboratory for Scientific Computation (LNCC). SDP provides the basic routines for a finite element application program, as well as a standard for programming and documentation, intended to allow exchanges between research groups in different centers.
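
    For concreteness, here is a minimal serial PCG iteration with a Jacobi preconditioner. The element-by-element assembly and the PVM message passing of the paper are omitted; in the EBE setting the A @ p product would instead be accumulated from per-element products, and that is the kernel one distributes across workstations.

        import numpy as np

        def pcg(A, b, M_inv, tol=1e-10, maxiter=500):
            x = np.zeros_like(b)
            r = b - A @ x
            z = M_inv * r                   # apply the diagonal (Jacobi) preconditioner
            p = z.copy()
            rz = r @ z
            for _ in range(maxiter):
                Ap = A @ p                  # the kernel parallelized element by element
                alpha = rz / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                if np.linalg.norm(r) < tol:
                    break
                z = M_inv * r
                rz, rz_old = r @ z, rz
                p = z + (rz / rz_old) * p
            return x

        n = 100                             # a small SPD test matrix (placeholder)
        A = (np.diag(np.full(n, 4.0)) + np.diag(np.full(n - 1, -1.0), 1)
             + np.diag(np.full(n - 1, -1.0), -1))
        b = np.ones(n)
        x = pcg(A, b, M_inv=1.0 / np.diag(A))
        print("residual norm:", np.linalg.norm(A @ x - b))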

  3. Tpetra Kernel Package

    2004-03-01

    A package of classes for constructing and using distributed sparse and dense matrices, vectors and graphs. Templated on the scalar and ordinal types so that any valid floating-point type, as well as any valid integer type, can be used with these classes. Other non-standard types, such as 3-by-3 matrices for the scalar type and mod-based integers for ordinal types, can also be used. Tpetra is intended to provide the foundation for basic matrix and vector operations for the next generation of Trilinos preconditioners and solvers. It can be considered the follow-on to Epetra. Tpetra provides distributed memory operations via an abstract parallel machine interface. The most common implementation of this interface will be MPI.

  4. Reflectance spectroscopy of natural organic solids, iron sulfides and their mixtures as refractory analogues for Rosetta/VIRTIS' surface composition analysis of 67P/CG

    NASA Astrophysics Data System (ADS)

    Moroz, Lyuba V.; Markus, Kathrin; Arnold, Gabriele; Henckel, Daniela; Kappel, David; Schade, Ulrich; Rousseau, Batiste; Quirico, Eric; Schmitt, Bernard; Capaccioni, Fabrizio; Bockelee-Morvan, Dominique; Filacchione, Gianrico; Érard, Stéphane; Leyrat, Cedric; VIRTIS Team

    2016-10-01

    Analysis of 0.25-5 µm reflectance spectra provided by the Visible and InfraRed Thermal Imaging Spectrometer (VIRTIS) onboard Rosetta orbiter revealed that the surface of 67P/CG is dark from the near-UV to the IR and is enriched in refractory phases such as organic and opaque components. The broadness and complexity of the ubiquitous absorption feature around 3.2 µm suggest a variety of cometary organic constituents. For example, complex hydrocarbons (aliphatic and polycyclic aromatic) can contribute to the feature between 3.3 and 3.5 µm and to the low reflectance of the surface in the visible. Here we present the 0.25-5 µm reflectance spectra of well-characterized terrestrial hydrocarbon materials (solid oil bitumens, coals) and discuss their relevance as spectral analogues for a hydrocarbon part of 67P/CG's complex organics. However, the expected low degree of thermal processing of cometary hydrocarbons (high (H+O+N+S)/C ratios and low carbon aromaticities) suggests high IR reflectance, intense 3.3-3.5 µm absorption bands and steep red IR slopes that are not observed in the VIRTIS spectra. Fine-grained opaque refractory phases (e.g., iron sulfides, Fe-Ni alloys) intimately mixed with other surface components are likely responsible for the low IR reflectance and low intensities of absorption bands in the VIRTIS spectra of the 67P/CG surface. In particular, iron sulfides are common constituents of cometary dust, "cometary" chondritic IDPs, and efficient darkening agents in primitive carbonaceous chondrites. Their effect on reflectance spectra of an intimate mixture is strongly affected by grain size. We report and discuss the 0.25-5 µm reflectance spectra of iron sulfides (meteoritic troilite and several terrestrial pyrrhotites) ground and sieved to various particle sizes. In addition, we present reflectance spectra of several intimate mixtures of powdered iron sulfides and solid oil bitumens. Based on the reported laboratory data, we discuss the ability of

  5. Isorropia Partitioning and Load Balancing Package

    2006-09-01

    Isorropia is a partitioning and load balancing package which interfaces with the Zoltan library. Isorropia can accept input objects such as matrices and matrix-graphs, and repartition/redistribute them into a better data distribution on parallel computers. Isorropia is primarily an interface package, utilizing graph and hypergraph partitioning algorithms that are in the Zoltan library, which is a third-party library to Trilinos.

  6. Packaging Your Training Materials

    ERIC Educational Resources Information Center

    Espeland, Pamela

    1977-01-01

    The types of packaging and packaging materials to use for training materials should be determined during the planning of the training programs, according to the packaging market. Five steps to follow in shopping for packaging are presented, along with a list of packaging manufacturers. (MF)

  7. Package inspection using inverse diffraction

    NASA Astrophysics Data System (ADS)

    McAulay, Alastair D.

    2008-08-01

    More efficient, cost-effective hand-held methods of inspecting packages without opening them are in demand for security. Recent work on terahertz sources [1], i.e. millimeter waves, presents new possibilities. Millimeter waves pass through cardboard and styrofoam, common packing materials, and also pass through most materials except those with high conductivity, like metals, which block the light and are easily spotted. Estimating the refractive index along the path of the beam through the package from observations of the beam passing out of the package provides the necessary information to inspect the package and is a nonlinear problem. So we use a generalized linear inverse technique that we first developed for finding oil by reflection in geophysics [2]. The computation models the package as parallel slices of homogeneous material for which the refractive index is estimated. A beam is propagated through this model in a forward computation. The output is compared with the actual observations for the package, and an update is computed for the refractive indices. The loop is repeated until convergence. The approach can be modified for a reflection system or to include estimation of absorption.
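
    The loop is simple enough to sketch. The toy below uses the reflection variant mentioned at the end of the abstract, since layered echoes (as in the geophysics analogy) make the slice indices recoverable: a forward model maps slice refractive indices to primary Fresnel echo amplitudes, and a linearized least-squares update is repeated until convergence. The stack and numbers are invented; this is not the authors' code.

        import numpy as np

        def forward(n):
            # Primary echo amplitude from each interface of an air|slices|air stack.
            prof = np.concatenate(([1.0], n, [1.0]))
            r = (prof[:-1] - prof[1:]) / (prof[:-1] + prof[1:])  # Fresnel coefficients
            two_way = np.cumprod(np.concatenate(([1.0], 1.0 - r[:-1] ** 2)))
            return two_way * r

        n_true = np.array([1.5, 1.2, 1.8])  # "unknown" slice refractive indices
        obs = forward(n_true)               # simulated observation of the package

        n_est = np.ones(3)                  # initial guess: an empty package
        for _ in range(20):
            resid = obs - forward(n_est)
            J = np.array([(forward(n_est + 1e-7 * e) - forward(n_est)) / 1e-7
                          for e in np.eye(3)]).T  # numerical Jacobian, shape (4, 3)
            step, *_ = np.linalg.lstsq(J, resid, rcond=None)
            n_est += step
            if np.linalg.norm(step) < 1e-12:  # converged
                break

        print("recovered indices:", n_est.round(4))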

  8. 67P/CG morphological units and VIS-IR spectral classes: a Rosetta/VIRTIS-M perspective

    NASA Astrophysics Data System (ADS)

    Filacchione, Gianrico; Capaccioni, Fabrizio; Ciarniello, Mauro; Raponi, Andrea; De Sanctis, Maria Cristina; Tosi, Federico; Piccioni, Giuseppe; Cerroni, Priscilla; Capria, Maria Teresa; Palomba, Ernesto; Longobardo, Andrea; Migliorini, Alessandra; Erard, Stephane; Arnold, Gabriele; Bockelee-Morvan, Dominique; Leyrat, Cedric; Schmitt, Bernard; Quirico, Eric; Barucci, Antonella; McCord, Thomas B.; Stephan, Katrin; Kappel, David

    2015-11-01

    VIRTIS-M, the 0.25-5.1 µm imaging spectrometer on Rosetta (Coradini et al., 2007), has mapped the surface of the 67P/CG nucleus since July 2014 from a wide range of distances. Spectral analysis of global-scale data indicates that the nucleus presents different terrains uniformly covered by a very dark (Ciarniello et al., 2015) and dehydrated organic-rich material (Capaccioni et al., 2015). The morphological units identified so far (Thomas et al., 2015; El-Maarry et al., 2015) include dust-covered brittle-material regions (like Ash, Ma'at), exposed material regions (Seth), large-scale depressions (like Hatmehit, Aten, Nut), smooth terrain units (like Hapi, Anubis, Imhotep) and consolidated surfaces (like Hathor, Anuket, Aker, Apis, Khepry, Bastet, Maftet). For each of these regions, average VIRTIS-M spectra were derived with the aim of exploring possible connections between morphology and spectral properties. Photometric correction (Ciarniello et al., 2015), thermal emission removal in the 3.5-5 micron range and georeferencing have been applied to I/F data in order to derive spectral indicators, e.g. VIS-IR spectral slopes, their crossing wavelength (CW) and the 3.2 µm organic material band's depth (BD), suitable for identifying and mapping compositional variations. Our analysis shows that smooth terrains have the lowest slopes in the VIS (<1.7E-3 1/µm) and IR (0.4E-3 1/µm), CW=0.75 µm and BD=8-12%. Intermediate VIS slope=1.7-1.9E-3 1/µm, and higher BD=10-12.8%, are typical of consolidated surfaces, some dust-covered regions and Seth, where the maximum BD=13% has been observed. Large-scale depressions and Imhotep are redder, with a VIS slope of 1.9-2.1E-3 1/µm, CW at 0.85-0.9 µm and BD=8-11%. The minimum VIS-IR slopes are observed above Hapi, in agreement with the presence of water ice sublimation and recondensation processes observed by VIRTIS in this region (De Sanctis et al., 2015). Authors acknowledge ASI, CNES, DLR and NASA financial support. References: Coradini et al

  9. Search for regional variations of thermal and electrical properties of comet 67P/CG probed by MIRO/Rosetta

    NASA Astrophysics Data System (ADS)

    Leyrat, Cedric; Blain, Doriann; Lellouch, Emmanuel; von Allmen, Paul; Keihm, Stephen; Choukroun, Matthieu; Schloerb, Pete; Biver, Nicolas; Gulkis, Samuel; Hofstadter, Mark

    2015-11-01

    Since June 2014, the MIRO (Microwave Instrument for Rosetta Orbiter) on board the Rosetta (ESA) spacecraft has observed comet 67P-CG along its heliocentric orbit from 3.25 AU to 1.24 AU. MIRO operates at millimeter and submillimeter wavelengths, respectively at 190 GHz (1.56 mm) and 562 GHz (0.5 mm). While the submillimeter channel is coupled to a Chirp Transform Spectrometer (CTS) for spectroscopic analysis of the coma, both bands provide a broad-band continuum channel for sensing the thermal emission of the nucleus itself. Continuum measurements of the nucleus probe the subsurface thermal emission from two different depths. The first analysis (Schloerb et al., 2015) of data obtained essentially in the Northern hemisphere revealed large temperature variations with latitude, as well as distinct diurnal curves, most prominent in the 0.5 mm channel, indicating that the electrical penetration depth for this channel is comparable to the diurnal thermal skin depth. Initial modelling of these data has indicated a low surface thermal inertia, in the range 10-30 J K^-1 m^-2 s^-1/2, and probed depths of order 1-4 cm. We here investigate potential spatial variations of thermal and electrical properties by analysing separately the geomorphological regions described by Thomas et al. (2015). For each region, we select measurements corresponding to those areas, obtained at different local times and effective latitudes. We model the thermal profiles with depth and the outgoing mm and submm radiation for different values of the thermal inertia and of the ratio of the electrical to the thermal skin depth. We will present the best estimates of thermal inertia and electrical/thermal depth ratios for each region selected. Additional information on subsurface temperature gradients may be inferred by using observations at varying emergence angles. The thermal emission from southern regions has been analysed by Choukroun et al (2015) during the polar night. Now that the comet has reached

  10. GIADA On-Board Rosetta: Early Dust Grain Detections and Dust Coma Characterization of Comet 67P/C-G

    NASA Astrophysics Data System (ADS)

    Rotundi, A.; Della Corte, V.; Accolla, M.; Ferrari, M.; Ivanovski, S.; Lucarelli, F.; Mazzotta Epifani, E.; Sordini, R.; Palumbo, P.; Colangeli, L.; Lopez-Moreno, J. J.; Rodriguez, J.; Fulle, M.; Bussoletti, E.; Crifo, J. F.; Esposito, F.; Green, S.; Grün, E.; Lamy, P. L.; McDonnell, T.; Mennella, V.; Molina, A.; Moreno, F.; Ortiz, J. L.; Palomba, E.; Perrin, J. M.; Rodrigo, R.; Weissman, P. R.; Zakharov, V.; Zarnecki, J.

    2014-12-01

    GIADA (Grain Impact Analyzer and Dust Accumulator), flying on board Rosetta, is devoted to studying the cometary dust environment of 67P/Churyumov-Gerasimenko. GIADA is composed of 3 subsystems: the GDS (Grain Detection System), based on grain detection through light scattering; the IS (Impact Sensor), giving a momentum measurement by detecting the impact on a sensed plate connected with 5 piezoelectric sensors; and the MBS (MicroBalances System), constituted of 5 Quartz Crystal Microbalances (QCMs), giving the cumulative deposited dust mass by measuring the variations of the sensors' frequency. The combination of the measurements performed by these 3 subsystems provides the number, mass, momentum and velocity distribution of dust grains emitted from the cometary nucleus. No prior in situ dust dynamical measurements at such close distances from the nucleus, and starting from such large heliocentric distances, are available to date. We present here the first results obtained from the beginning of the Rosetta scientific phase. We report the early detection of dust grains at about 800 km from the nucleus in August 2014 and the following measurements that allowed us to characterize the 67P/C-G dust environment at distances less than 100 km from the nucleus, as well as single-grain dynamical properties. Acknowledgements. GIADA was built by a consortium led by the Univ. Napoli "Parthenope" & INAF-Oss. Astr. Capodimonte, IT, in collaboration with the Inst. de Astrofisica de Andalucia, ES, Selex-ES s.p.a. and SENER. GIADA is presently managed & operated by Ist. di Astrofisica e Planetologia Spaziali-INAF, IT. GIADA was funded and managed by the Agenzia Spaziale Italiana, IT, with the support of the Spanish Ministry of Education and Science MEC, ES. GIADA was developed from a PI proposal supported by the University of Kent; sci. & tech. contributions were given by CISAS, IT, Lab. d'Astr. Spat., FR, and institutions from UK, IT, FR, DE and USA. We thank the RSGS/ESAC, RMOC/ESOC & Rosetta Project

  11. Science packages

    NASA Astrophysics Data System (ADS)

    1997-01-01

    Primary science teachers in Scotland have a new updating method at their disposal with the launch of a package of CD-i (Compact Disc Interactive) materials developed by the BBC and the Scottish Office. These were a response to the claim that many primary teachers felt they had been inadequately trained in science and lacked the confidence to teach it properly. Consequently, they felt the need for more in-service training to equip them with the personal understanding required. The pack contains five disks and a printed user's guide divided up as follows: disk 1 Investigations; disk 2 Developing understanding; disks 3, 4 and 5 Primary Science staff development videos. It was produced by the Scottish Interactive Technology Centre (Moray House Institute) and is available from BBC Education at £149.99 including VAT. Free Internet distribution of science education materials has also begun as part of the Global Schoolhouse (GSH) scheme. The US National Science Teachers' Association (NSTA) and Microsoft Corporation are making available field-tested comprehensive curriculum material including 'Micro-units' on more than 80 topics in biology, chemistry, earth and space science and physics. The latter are the work of the Scope, Sequence and Coordination of High School Science project, which can be found at http://www.gsh.org/NSTA_SSandC/. More information on NSTA can be obtained from its Web site at http://www.nsta.org.

  12. Parallel computations and control of adaptive structures

    NASA Technical Reports Server (NTRS)

    Park, K. C.; Alvin, Kenneth F.; Belvin, W. Keith; Chong, K. P. (Editor); Liu, S. C. (Editor); Li, J. C. (Editor)

    1991-01-01

    The equations of motion for structures with adaptive elements for vibration control are presented in a form suited to parallel computation, for use as a software package for real-time control of flexible space structures. A brief introduction to state-of-the-art parallel computational capability is also presented. Time marching strategies are developed for effective use of massively parallel mapping, partitioning, and the necessary arithmetic operations. An example is offered for the simulation of control-structure interaction on a parallel computer, and the impact of the presented approach on applications in disciplines other than the aerospace industry is assessed.
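
    As a concrete illustration of the time-marching kernel such a package would distribute, here is a minimal sketch (not the authors' code) of an explicit central-difference step for M·a + K·u = f with a lumped (diagonal) mass matrix and damping omitted. With lumped mass, everything except the K @ u product is an independent per-degree-of-freedom update, which is what makes massive parallel mapping attractive; the toy system and all names are illustrative.

      # Hypothetical sketch: explicit central-difference time marching for
      # an undamped structure with a lumped (diagonal) mass matrix. The
      # K @ u product is the only coupled (communication-heavy) operation;
      # the remaining arithmetic is purely local per degree of freedom.
      import numpy as np

      def central_difference_step(u, u_prev, m_diag, K, f, dt):
          """Advance the displacement vector u(t) to u(t + dt)."""
          accel = (f - K @ u) / m_diag          # local once K @ u is known
          return 2.0 * u - u_prev + dt * dt * accel

      # Toy 3-DOF spring chain driven by a constant tip force.
      K = np.array([[ 2.0, -1.0,  0.0],
                    [-1.0,  2.0, -1.0],
                    [ 0.0, -1.0,  1.0]])
      m_diag = np.ones(3)
      f = np.array([0.0, 0.0, 0.1])
      u = np.zeros(3)
      u_prev = np.zeros(3)
      for _ in range(1000):
          u, u_prev = central_difference_step(u, u_prev, m_diag, K, f, 0.01), u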

  13. Java Parallel Secure Stream for Grid Computing

    SciTech Connect

    Chen, Jie; Akers, Walter; Chen, Ying; Watson, William

    2001-09-01

    The emergence of high speed wide area networks makes grid computing a reality. However, grid applications that need reliable data transfer still have difficulty achieving optimal TCP performance, because doing so normally requires tuning the TCP window size to improve bandwidth and reduce latency on a high speed wide area network. This paper presents a pure Java package called JPARSS (Java Parallel Secure Stream) that divides data into partitions that are sent over several parallel Java streams simultaneously, allowing Java or Web applications to achieve optimal TCP performance in a grid environment without the necessity of tuning the TCP window size. Several experimental results are provided to show that using parallel streams is more effective than tuning the TCP window size. In addition, an X.509 certificate based single sign-on mechanism and SSL based connection establishment are integrated into this package. Finally, a few applications using this package are discussed.
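
    The core idea of JPARSS is simple: divide the payload into partitions, push each partition down its own stream concurrently, and reassemble by partition index. The sketch below is a hypothetical Python analogue, not the Java package's API; local socketpairs stand in for the multiple wide-area TCP connections, and the X.509/SSL machinery is omitted.

      # Illustrative parallel-stream transfer: split the payload into
      # partitions, send each over its own stream in a separate thread,
      # then reassemble in partition order on the receiving side.
      import socket
      import threading

      def send_chunk(sock, chunk):
          sock.sendall(chunk)
          sock.shutdown(socket.SHUT_WR)   # signal end-of-stream

      def recv_all(sock):
          buf = bytearray()
          while True:
              data = sock.recv(4096)
              if not data:                # EOF: sender shut down
                  return bytes(buf)
              buf.extend(data)

      payload = bytes(range(256)) * 1000
      n_streams = 4
      size = -(-len(payload) // n_streams)   # ceiling division
      chunks = [payload[i:i + size] for i in range(0, len(payload), size)]

      pairs = [socket.socketpair() for _ in chunks]   # stand-in connections
      senders = [threading.Thread(target=send_chunk, args=(tx, c))
                 for (tx, _), c in zip(pairs, chunks)]
      for t in senders:
          t.start()
      received = [recv_all(rx) for _, rx in pairs]    # ordered reassembly
      for t in senders:
          t.join()
      for tx, rx in pairs:
          tx.close()
          rx.close()
      assert b"".join(received) == payload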

  14. Drosophila O-GlcNAc transferase (OGT) is encoded by the Polycomb group (PcG) gene, super sex combs (sxc)

    PubMed Central

    Sinclair, Donald A. R.; Syrzycka, Monika; Macauley, Matthew S.; Rastgardani, Tara; Komljenovic, Ivana; Vocadlo, David J.; Brock, Hugh W.; Honda, Barry M.

    2009-01-01

    O-linked N-acetylglucosamine transferase (OGT) reversibly modifies serine and threonine residues of many intracellular proteins with a single β-O-linked N-acetylglucosamine residue (O-GlcNAc), and has been implicated in insulin signaling, neurodegenerative disease, cellular stress response, and other important processes in mammals. OGT also glycosylates RNA polymerase II and various transcription factors, which suggests that it might be directly involved in transcriptional regulation. We report here that the Drosophila OGT is encoded by the Polycomb group (PcG) gene, super sex combs (sxc). Furthermore, major sites of O-GlcNAc modification on polytene chromosomes correspond to PcG protein binding sites. Our results thus suggest a direct role for O-linked glycosylation by OGT in PcG-mediated epigenetic gene silencing, which is important in developmental regulation, stem cell maintenance, genomic imprinting, and cancer. In addition, we observe rescue of sxc lethality by a human Ogt cDNA transgene; thus Drosophila may provide an ideal model to study important functional roles of OGT in mammals. PMID:19666537

  16. High level language memory management on parallel architectures

    SciTech Connect

    Lebrun, P.; Kreymer, A.

    1989-05-01

    HEP memory management packages such as YBOS and ZEBRA have been implemented and are currently running on a variety of mainframe computers. These packages were originally designed to run on single-CPU engines. Implementation of these packages on parallel machines with loosely or tightly coupled architectures is discussed. ZEBRA (CERN package) on ACP (Fermilab) is presented in detail. The design of memory management systems for the new generation of ACP systems and similar parallel architectures is presented. The future of packages such as ZEBRA is linked not only to system architecture but also to language issues. We briefly mention penalties in using F77 with respect to other increasingly popular languages in HEP, such as C, on parallel systems. 9 refs.

  17. Parallel Information Processing.

    ERIC Educational Resources Information Center

    Rasmussen, Edie M.

    1992-01-01

    Examines parallel computer architecture and the use of parallel processors for text. Topics discussed include parallel algorithms; performance evaluation; parallel information processing; parallel access methods for text; parallel and distributed information retrieval systems; parallel hardware for text; and network models for information…

  18. Tolerant (parallel) Programming

    NASA Technical Reports Server (NTRS)

    DiNucci, David C.; Bailey, David H. (Technical Monitor)

    1997-01-01

    In order to be truly portable, a program must be tolerant of a wide range of development and execution environments, and a parallel program is just one which must be tolerant of a very wide range. This paper first defines the term "tolerant programming", then describes many layers of tools to accomplish it. The primary focus is on F-Nets, a formal model for expressing computation as a folded partial-ordering of operations, thereby providing an architecture-independent expression of tolerant parallel algorithms. For implementing F-Nets, Cooperative Data Sharing (CDS) is a subroutine package for implementing communication efficiently in a large number of environments (e.g. shared memory and message passing). Software Cabling (SC), a very-high-level graphical programming language for building large F-Nets, possesses many of the features normally expected from today's computer languages (e.g. data abstraction, array operations). Finally, L2³ is a CASE tool which facilitates the construction, compilation, execution, and debugging of SC programs.

  19. Genome packaging in viruses.

    PubMed

    Sun, Siyang; Rao, Venigalla B; Rossmann, Michael G

    2010-02-01

    Genome packaging is a fundamental process in a viral life cycle. Many viruses assemble preformed capsids into which the genomic material is subsequently packaged. These viruses use a packaging motor protein that is driven by the hydrolysis of ATP to condense the nucleic acids into a confined space. How these motor proteins package viral genomes had been poorly understood until recently, when a few X-ray crystal structures and cryo-electron microscopy (cryo-EM) structures became available. Here we discuss various aspects of genome packaging and compare the mechanisms proposed for packaging motors on the basis of structural information. PMID:20060706

  20. Packaging for Food Service

    NASA Technical Reports Server (NTRS)

    Stilwell, E. J.

    1985-01-01

    Most of the key areas of concern in packaging the three principal food forms for the space station were covered. It can generally be concluded that there are no significant voids in packaging materials availability or in current packaging technology. However, it must also be concluded that the process by which packaging decisions are made for the space station feeding program will be very synergistic. Packaging selection will depend heavily on the preparation mechanics, the preferred presentation and the achievable disposal systems. It will be important that packaging be considered as an integral part of each decision as these systems are developed.

  1. Waste Package Lifting Calculation

    SciTech Connect

    H. Marr

    2000-05-11

    The objective of this calculation is to evaluate the structural response of the waste package during the horizontal and vertical lifting operations in order to support the waste package lifting feature design. The scope of this calculation includes the evaluation of the 21 PWR UCF (pressurized water reactor uncanistered fuel) waste package, naval waste package, 5 DHLW/DOE SNF (defense high-level waste/Department of Energy spent nuclear fuel)--short waste package, and 44 BWR (boiling water reactor) UCF waste package. Procedure AP-3.12Q, Revision 0, ICN 0, Calculations, is used to develop and document this calculation.

  2. Linked-View Parallel Coordinate Plot Renderer

    2011-06-28

    This software allows multiple linked views for interactive querying via map-based data selection, bar chart analytic overlays, and high dynamic range (HDR) line renderings. The major component of the visualization package is a parallel coordinate renderer with binning, curved layouts, shader-based rendering, and other techniques to allow interactive visualization of multidimensional data.
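
    As a rough, static stand-in for the kind of display this renderer produces (without the binning, shader-based rendering, or interactive linked views described above), a minimal parallel coordinate plot can be drawn with pandas' built-in helper; the data below are synthetic.

      # Minimal non-interactive parallel coordinate plot using the
      # pandas.plotting helper; synthetic data, illustrative only.
      import numpy as np
      import pandas as pd
      import matplotlib.pyplot as plt
      from pandas.plotting import parallel_coordinates

      rng = np.random.default_rng(0)
      df = pd.DataFrame(rng.normal(size=(200, 4)), columns=list("abcd"))
      df["cluster"] = np.where(df["a"] > 0, "high", "low")  # class column

      parallel_coordinates(df, "cluster", alpha=0.25)  # one line per row
      plt.title("Parallel coordinates (synthetic data)")
      plt.show()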

  3. Computing contingency statistics in parallel.

    SciTech Connect

    Bennett, Janine Camille; Thompson, David; Pebay, Philippe Pierre

    2010-09-01

    Statistical analysis is typically used to reduce the dimensionality of and infer meaning from data. A key challenge of any statistical analysis package aimed at large-scale, distributed data is to address the orthogonal issues of parallel scalability and numerical stability. Many statistical techniques, e.g., descriptive statistics or principal component analysis, are based on moments and co-moments and, using robust online update formulas, can be computed in an embarrassingly parallel manner, amenable to a map-reduce style implementation. In this paper we focus on contingency tables, through which numerous derived statistics such as joint and marginal probability, point-wise mutual information, information entropy, and χ² independence statistics can be directly obtained. However, contingency tables can become large as data size increases, requiring a correspondingly large amount of communication between processors. This potential increase in communication prevents optimal parallel speedup and is the main difference from moment-based statistics, where the amount of inter-processor communication is independent of data size. Here we present the design trade-offs which we made to implement the computation of contingency tables in parallel. We also study the parallel speedup and scalability properties of our open source implementation. In particular, we observe optimal speed-up and scalability when the contingency statistics are used in their appropriate context, namely, when the data input is not quasi-diffuse.
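
    The map-reduce structure described above is easy to sketch: each worker counts category pairs in its block, and the per-block tables are summed cell-wise. The following minimal Python sketch (illustrative data and names, not the paper's implementation) also derives one of the statistics mentioned, the joint probability of each cell.

      # Map-reduce sketch of a parallel contingency table: workers count
      # (x, y) category pairs per block; the reduce step sums the tables.
      from collections import Counter
      from multiprocessing import Pool

      def count_block(block):
          """Map step: contingency counts for one block of (x, y) pairs."""
          return Counter(block)

      if __name__ == "__main__":
          data = [("sunny", "hot"), ("sunny", "mild"), ("rainy", "mild"),
                  ("rainy", "mild"), ("sunny", "hot")] * 1000
          blocks = [data[i::4] for i in range(4)]   # 4-way partition
          with Pool(4) as pool:
              partials = pool.map(count_block, blocks)
          table = sum(partials, Counter())          # reduce: cell-wise sum
          total = sum(table.values())
          # Derived statistic: joint probability of each (x, y) cell.
          joint_p = {cell: n / total for cell, n in table.items()}
          print(joint_p)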

  4. Comparative Packaging Study

    NASA Technical Reports Server (NTRS)

    Perchonok, Michele; Antonini, David

    2008-01-01

    This viewgraph presentation describes a comparative packaging study for use on long duration space missions. The topics include: 1) Purpose; 2) Deliverables; 3) Food Sample Selection; 4) Experimental Design Matrix; 5) Permeation Rate Comparison; and 6) Packaging Material Information.

  5. CH Packaging Operations Manual

    SciTech Connect

    Washington TRU Solutions LLC

    2005-06-13

    This procedure provides instructions for assembling the CH Packaging Drum payload assembly and Standard Waste Box (SWB) assembly, for abnormal operations, and for ICV and OCV Preshipment Leakage Rate Tests on the packaging seals, using a nondestructive Helium (He) Leak Test.

  6. Creative Thinking Package

    ERIC Educational Resources Information Center

    Jones, Clive

    1972-01-01

    A look at the latest package from a British management training organization, which explains and demonstrates creative thinking techniques, including brainstorming. The package, designed for groups of twelve or more, consists of tapes, visuals, and associated exercises. (Editor/JB)

  7. Trends in Food Packaging.

    ERIC Educational Resources Information Center

    Ott, Dana B.

    1988-01-01

    This article discusses developments in food packaging, processing, and preservation techniques in terms of packaging materials, technologies, consumer benefits, and current and potential food product applications. Covers implications due to consumer life-style changes, cost-effectiveness of packaging materials, and the ecological impact of…

  8. Packaging of electronic modules

    NASA Technical Reports Server (NTRS)

    Katzin, L.

    1966-01-01

    Study of design approaches that are taken toward optimizing the packaging of electronic modules with respect to size, shape, component orientation, interconnections, and structural support. The study does not present a solution to specific packaging problems, but rather the factors to be considered to achieve optimum packaging designs.

  9. Parallel hypergraph partitioning for scientific computing.

    SciTech Connect

    Heaphy, Robert; Devine, Karen Dragon; Catalyurek, Umit; Bisseling, Robert; Hendrickson, Bruce Alan; Boman, Erik Gunnar

    2005-07-01

    Graph partitioning is often used for load balancing in parallel computing, but it is known that hypergraph partitioning has several advantages. First, hypergraphs more accurately model communication volume, and second, they are more expressive and can better represent nonsymmetric problems. Hypergraph partitioning is particularly suited to parallel sparse matrix-vector multiplication, a common kernel in scientific computing. We present a parallel software package for hypergraph (and sparse matrix) partitioning developed at Sandia National Labs. The algorithm is a variation on multilevel partitioning. Our parallel implementation is novel in that it uses a two-dimensional data distribution among processors. We present empirical results that show our parallel implementation achieves good speedup on several large problems (up to 33 million nonzeros) with up to 64 processors on a Linux cluster.
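
    The claim that hypergraphs model communication volume exactly can be made concrete. For sparse matrix-vector multiplication with rows distributed across parts, each matrix column is a hyperedge, and the number of words communicated for that column is (lambda - 1), with lambda the number of distinct parts its rows touch. Below is a small illustrative sketch (not the Sandia package's API) that evaluates this metric for a given row partition.

      # Evaluate the (lambda - 1) communication-volume metric for a row
      # partition of a sparse matrix; matrix and partition are made up.
      import numpy as np
      from scipy.sparse import csc_matrix

      def communication_volume(A_csc, part_of_row):
          volume = 0
          for j in range(A_csc.shape[1]):
              rows = A_csc.indices[A_csc.indptr[j]:A_csc.indptr[j + 1]]
              parts = np.unique(part_of_row[rows])   # parts needing x[j]
              volume += max(len(parts) - 1, 0)       # (lambda - 1) per column
          return volume

      A = csc_matrix(np.array([[1, 1, 0, 0],
                               [0, 1, 1, 0],
                               [0, 0, 1, 1],
                               [1, 0, 0, 1]]))
      print(communication_volume(A, np.array([0, 0, 1, 1])))  # -> 2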

  10. Parallel Algebraic Multigrid Methods - High Performance Preconditioners

    SciTech Connect

    Yang, U M

    2004-11-11

    The development of high performance, massively parallel computers and the increasing demands of computationally challenging applications have necessitated the development of scalable solvers and preconditioners. One of the most effective ways to achieve scalability is the use of multigrid or multilevel techniques. Algebraic multigrid (AMG) is a very efficient algorithm for solving large problems on unstructured grids. While much of it can be parallelized in a straightforward way, some components of the classical algorithm, particularly the coarsening process and some of the most efficient smoothers, are highly sequential, and require new parallel approaches. This chapter presents the basic principles of AMG and gives an overview of various parallel implementations of AMG, including descriptions of parallel coarsening schemes and smoothers, some numerical results as well as references to existing software packages.
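
    To make the multigrid structure concrete, here is a serial two-grid cycle for a 1-D Poisson matrix: a weighted-Jacobi smoother plus a Galerkin coarse-grid correction. This is a geometric toy, not AMG (which selects coarse points and interpolation algebraically), and parallel AMG must additionally parallelize the coarsening and the smoother; all names are illustrative.

      # Serial two-grid sketch: weighted-Jacobi smoothing plus a Galerkin
      # coarse-grid correction for the 1-D Poisson matrix.
      import numpy as np

      def poisson(n):
          return (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
                  - np.diag(np.ones(n - 1), -1))

      def jacobi(A, u, b, sweeps=2, omega=2.0 / 3.0):
          D = np.diag(A)
          for _ in range(sweeps):
              u = u + omega * (b - A @ u) / D
          return u

      def two_grid(A, b, u, P):
          u = jacobi(A, u, b)                    # pre-smooth
          r_c = P.T @ (b - A @ u)                # restrict the residual
          A_c = P.T @ A @ P                      # Galerkin coarse operator
          u = u + P @ np.linalg.solve(A_c, r_c)  # coarse-grid correction
          return jacobi(A, u, b)                 # post-smooth

      n = 31
      A, b = poisson(n), np.ones(n)
      P = np.zeros((n, n // 2))                  # linear interpolation
      for i in range(n // 2):
          P[2 * i, i], P[2 * i + 1, i], P[2 * i + 2, i] = 0.5, 1.0, 0.5
      u = np.zeros(n)
      for _ in range(10):
          u = two_grid(A, b, u, P)
      print(np.linalg.norm(b - A @ u))           # residual shrinks rapidly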

  11. Edible packaging materials.

    PubMed

    Janjarasskul, Theeranun; Krochta, John M

    2010-01-01

    Research groups and the food and pharmaceutical industries recognize edible packaging as a useful alternative or addition to conventional packaging to reduce waste and to create novel applications for improving product stability, quality, safety, variety, and convenience for consumers. Recent studies have explored the ability of biopolymer-based food packaging materials to carry and control-release active compounds. As diverse edible packaging materials derived from various by-products or waste from the food industry are being developed, the dry thermoplastic process is advancing rapidly as a feasible commercial edible packaging manufacturing process. The employment of nanocomposite concepts in edible packaging materials promises to improve barrier and mechanical properties and facilitate effective incorporation of bioactive ingredients and other designed functions. In addition to the need for a more fundamental understanding to enable design to desired specifications, edible packaging has to overcome challenges such as regulatory requirements, consumer acceptance, and scaling-up research concepts to commercial applications.

  12. Parallel distributed computing using Python

    NASA Astrophysics Data System (ADS)

    Dalcin, Lisandro D.; Paz, Rodrigo R.; Kler, Pablo A.; Cosimo, Alejandro

    2011-09-01

    This work presents two software components aimed at relieving the cost of accessing high-performance parallel computing resources within a Python programming environment: MPI for Python and PETSc for Python. MPI for Python is a general-purpose Python package that provides bindings for the Message Passing Interface (MPI) standard using any back-end MPI implementation. Its facilities allow parallel Python programs to easily exploit multiple processors using the message passing paradigm. PETSc for Python provides access to the Portable, Extensible Toolkit for Scientific Computation (PETSc) libraries. Its facilities allow sequential and parallel Python applications to exploit state-of-the-art algorithms and data structures readily available in PETSc for the solution of large-scale problems in science and engineering. MPI for Python and PETSc for Python are fully integrated with PETSc-FEM, an MPI and PETSc based parallel, multiphysics, finite element code developed at CIMEC laboratory. This software infrastructure supports research activities related to simulation of fluid flows with applications ranging from the design of microfluidic devices for biochemical analysis to modeling of large-scale stream/aquifer interactions.
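
    MPI for Python's basic usage pattern is compact. The toy below (an illustrative example, not taken from the paper) computes a distributed dot product: each rank handles its slice, and an allreduce combines the partial sums.

      # Minimal mpi4py example: distributed dot product.
      # Run with e.g.: mpiexec -n 4 python dot.py
      import numpy as np
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      n = 1_000_000                      # global vector length
      lo = rank * n // size              # this rank's slice [lo, hi)
      hi = (rank + 1) * n // size
      x = np.ones(hi - lo)
      y = np.full(hi - lo, 2.0)

      local = float(x @ y)               # local partial dot product
      total = comm.allreduce(local, op=MPI.SUM)
      if rank == 0:
          print(total)                   # -> 2000000.0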

  13. Parallel Computing in SCALE

    SciTech Connect

    DeHart, Mark D; Williams, Mark L; Bowman, Stephen M

    2010-01-01

    The SCALE computational architecture has remained basically the same since its inception 30 years ago, although constituent modules and capabilities have changed significantly. This SCALE concept was intended to provide a framework whereby independent codes can be linked to provide a more comprehensive capability than possible with the individual programs - allowing flexibility to address a wide variety of applications. However, the current system was designed originally for mainframe computers with a single CPU and with significantly less memory than today's personal computers. It has been recognized that the present SCALE computation system could be restructured to take advantage of modern hardware and software capabilities, while retaining many of the modular features of the present system. Preliminary work is being done to define specifications and capabilities for a more advanced computational architecture. This paper describes the state of current SCALE development activities and plans for future development. With the release of SCALE 6.1 in 2010, a new phase of evolutionary development will be available to SCALE users within the TRITON and NEWT modules. The SCALE (Standardized Computer Analyses for Licensing Evaluation) code system developed by Oak Ridge National Laboratory (ORNL) provides a comprehensive and integrated package of codes and nuclear data for a wide range of applications in criticality safety, reactor physics, shielding, isotopic depletion and decay, and sensitivity/uncertainty (S/U) analysis. Over the last three years, since the release of version 5.1 in 2006, several important new codes have been introduced within SCALE, and significant advances applied to existing codes. Many of these new features became available with the release of SCALE 6.0 in early 2009. However, beginning with SCALE 6.1, a first generation of parallel computing is being introduced. In addition to near-term improvements, a plan for longer term SCALE enhancement

  14. Using the scalable nonlinear equations solvers package

    SciTech Connect

    Gropp, W.D.; McInnes, L.C.; Smith, B.F.

    1995-02-01

    SNES (Scalable Nonlinear Equations Solvers) is a software package for the numerical solution of large-scale systems of nonlinear equations on both uniprocessors and parallel architectures. SNES also contains a component for the solution of unconstrained minimization problems, called SUMS (Scalable Unconstrained Minimization Solvers). Newton-like methods, which are known for their efficiency and robustness, constitute the core of the package. As part of the multilevel PETSc library, SNES incorporates many features and options from other parts of PETSc. In keeping with the spirit of the PETSc library, the nonlinear solution routines are data-structure-neutral, making them flexible and easily extensible. This user's guide contains a detailed description of uniprocessor usage of SNES, with some added comments regarding multiprocessor usage. At this time the parallel version is undergoing refinement and extension, as we work toward a common interface for the uniprocessor and parallel cases. Thus, forthcoming versions of the software will contain additional features, and changes to the parallel interface may occur at any time. The new parallel version will employ the MPI (Message Passing Interface) standard for interprocessor communication. Since most of these details will be hidden, users will need to perform only minimal message-passing programming.
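
    Newton-like methods, the core of SNES, have a simple skeleton: repeatedly linearize F(u) = 0 and solve for an update. The sketch below is a standalone illustration in plain NumPy, not the SNES interface (where F and the Jacobian are registered as callbacks); the tiny two-equation system is made up for the example.

      # Standalone Newton iteration for F(u) = 0 on a toy 2x2 system.
      import numpy as np

      def F(u):
          x, y = u
          return np.array([x * x + y * y - 4.0, x - y])

      def J(u):                              # analytic Jacobian of F
          x, y = u
          return np.array([[2.0 * x, 2.0 * y],
                           [1.0, -1.0]])

      u = np.array([1.0, 2.0])               # initial guess
      for it in range(50):
          r = F(u)
          if np.linalg.norm(r) < 1e-12:      # convergence test
              break
          u = u - np.linalg.solve(J(u), r)   # Newton step
      print(it, u)                           # -> (sqrt(2), sqrt(2))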

  15. Special parallel processing workshop

    SciTech Connect

    1994-12-01

    This report contains viewgraphs from the Special Parallel Processing Workshop. These viewgraphs deal with topics such as parallel processing performance, message passing, queue structure, and other basic concepts dealing with parallel processing.

  16. Packaged die heater

    SciTech Connect

    Spielberger, Richard; Ohme, Bruce Walker; Jensen, Ronald J.

    2011-06-21

    A heater for heating packaged die for burn-in and heat testing is described. The heater may be a ceramic-type heater with a metal filament. The heater may be incorporated into the integrated circuit package as an additional ceramic layer of the package, or may be an external heater placed in contact with the package to heat the die. Many different types of integrated circuit packages may be accommodated. The method provides increased energy efficiency for heating the die while reducing temperature stresses on testing equipment. The method allows the use of multiple heaters to heat die to different temperatures. Faulty die may be heated to weaken die attach material to facilitate removal of the die. The heater filament or a separate temperature thermistor located in the package may be used to accurately measure die temperature.

  17. Smart packaging for photonics

    SciTech Connect

    Smith, J.H.; Carson, R.F.; Sullivan, C.T.; McClellan, G.; Palmer, D.W.

    1997-09-01

    Unlike silicon microelectronics, photonics packaging has proven to be low yield and expensive. One approach to make photonics packaging practical for low cost applications is the use of "smart" packages. "Smart" in this context means the ability of the package to actuate a mechanical change based on either a measurement taken by the package itself or by an input signal based on an external measurement. One avenue of smart photonics packaging, the use of polysilicon micromechanical devices integrated with photonic waveguides, was investigated in this research (LDRD 3505.340). The integration of optical components with polysilicon surface micromechanical actuation mechanisms shows significant promise for signal switching, fiber alignment, and optical sensing applications. The optical and stress properties of the oxides and nitrides considered for optical waveguides, and how they are integrated with micromechanical devices, were investigated.

  18. The ZOOM minimization package

    SciTech Connect

    Fischler, Mark S.; Sachs, D.; /Fermilab

    2004-11-01

    A new object-oriented Minimization package is available for distribution in the same manner as CLHEP. This package, designed for use in HEP applications, has all the capabilities of Minuit, but is a re-write from scratch, adhering to modern C++ design principles. A primary goal of this package is extensibility in several directions, so that its capabilities can be kept fresh with as little maintenance effort as possible. This package is distinguished by the priority that was assigned to C++ design issues, and the focus on producing an extensible system that will resist becoming obsolete.

  19. DMA Modulus as a Screening Parameter for Compatibility of Polymeric Containment Materials with Various Solutions for use in Space Shuttle Microgravity Protein Crystal Growth (PCG) Experiments

    NASA Technical Reports Server (NTRS)

    Wingard, Charles Doug; Munafo, Paul M. (Technical Monitor)

    2002-01-01

    Protein crystals are grown in microgravity experiments inside the Space Shuttle during orbit. Such crystals are basically grown in a five-component system containing a salt, buffer, polymer, organic and water. During these experiments, a number of different polymeric containment materials must be compatible with up to hundreds of different PCG solutions in various concentrations for durations up to 180 days. When such compatibility experiments are performed at NASA/MSFC (Marshall Space Flight Center) simultaneously on containment material samples immersed in various solutions in vials, the samples are rather small out of necessity. DMA modulus was often used as the primary screening parameter for such small samples, as a pass/fail criterion for incompatibility issues. In particular, the TA Instruments DMA 2980 film tension clamp was used to test rubber O-rings with an I.D. as small as 0.091 in. by cutting through the cross-section at one place, then clamping the stretched linear cord stock at each end. The film tension clamp was also used to successfully test short-length samples of medical/surgical grade tubing with an O.D. of 0.125 in.

  20. Parallel rendering techniques for massively parallel visualization

    SciTech Connect

    Hansen, C.; Krogh, M.; Painter, J.

    1995-07-01

    As the resolution of simulation models increases, scientific visualization algorithms which take advantage of the large memory and parallelism of Massively Parallel Processors (MPPs) are becoming increasingly important. For large applications, rendering on the MPP tends to be preferable to rendering on a graphics workstation due to the MPP's abundant resources: memory, disk, and numerous processors. The challenge becomes developing algorithms that can exploit these resources while minimizing overhead, typically communication costs. This paper describes recent efforts in parallel rendering for polygonal primitives as well as parallel volumetric techniques, and presents rendering algorithms, developed for massively parallel processors (MPPs), for polygons, spheres, and volumetric data. The polygon algorithm uses a data parallel approach, whereas the sphere and volume renderers use a MIMD approach. Implementations for these algorithms are presented for the Thinking Machines Corporation CM-5 MPP.

  1. The LCDROOT Analysis Package

    SciTech Connect

    Abe, Toshinori

    2001-10-18

    The North American Linear Collider Detector group has developed simulation and analysis program packages. LCDROOT is one of the packages, and is based on ROOT and the C++ programming language to maximally benefit from object oriented programming techniques. LCDROOT is constantly improved and now has a new topological vertex finder, ZVTOP3. In this paper, the features of the LCDROOT simulation are briefly described.

  2. The West: Curriculum Package.

    ERIC Educational Resources Information Center

    Public Broadcasting Service, Alexandria, VA.

    This document consists of the printed components only of a PBS curriculum package intended to be used with the 9-videotape PBS documentary series entitled "The West." The complete curriculum package includes a teacher's guide, lesson plans, a student guide, audio tapes, a video index, and promotional poster. The teacher's guide and lesson plans…

  3. Developing Large CAI Packages.

    ERIC Educational Resources Information Center

    Reed, Mary Jac M.; Smith, Lynn H.

    1983-01-01

    When developing large computer-assisted instructional (CAI) courseware packages, it is suggested that there be more attentive planning to the overall package design before actual lesson development is begun. This process has been simplified by modifying the systems approach used to develop single CAI lessons, followed by planning for the…

  4. Nutrition. Learning Activity Package.

    ERIC Educational Resources Information Center

    Lee, Carolyn

    This learning activity package on nutrition is one of a series of 12 titles developed for use in health occupations education programs. Materials in the package include objectives, a list of materials needed, a list of definitions, information sheets, reviews (self evaluations) of portions of the content, and answers to reviews. These topics are…

  5. Grooming. Learning Activity Package.

    ERIC Educational Resources Information Center

    Stark, Pamela

    This learning activity package on grooming for health workers is one of a series of 12 titles developed for use in health occupations education programs. Materials in the package include objectives, a list of materials needed, information sheets, reviews (self evaluations) of portions of the content, and answers to reviews. These topics are…

  6. WASTE PACKAGE TRANSPORTER DESIGN

    SciTech Connect

    D.C. Weddle; R. Novotny; J. Cron

    1998-09-23

    The purpose of this Design Analysis is to develop preliminary design of the waste package transporter used for waste package (WP) transport and related functions in the subsurface repository. This analysis refines the conceptual design that was started in Phase I of the Viability Assessment. This analysis supports the development of a reliable emplacement concept and a retrieval concept for license application design. The scope of this analysis includes the following activities: (1) Assess features of the transporter design and evaluate alternative design solutions for mechanical components. (2) Develop mechanical equipment details for the transporter. (3) Prepare a preliminary structural evaluation for the transporter. (4) Identify and recommend the equipment design for waste package transport and related functions. (5) Investigate transport equipment interface tolerances. This analysis supports the development of the waste package transporter for the transport, emplacement, and retrieval of packaged radioactive waste forms in the subsurface repository. Once the waste containers are closed and accepted, the packaged radioactive waste forms are termed waste packages (WP). This terminology was finalized as this analysis neared completion; therefore, the term disposal container is used in several references (i.e., the System Description Document (SDD)) (Ref. 5.6). In this analysis and the applicable reference documents, the term ''disposal container'' is synonymous with ''waste package''.

  7. TRNSYS for windows packages

    SciTech Connect

    Blair, N.J.; Beckman, W.A.; Klein, S.A.; Mitchell, J.W.

    1996-09-01

    TRNSYS 14.1 was released in 1994. This package represents a significant step forward in usability due to several graphical utility programs for DOS. These programs include TRNSHELL, which encapsulates TRNSYS functions, PRESIM, which allows the graphical creation of a simulation system, and TRNSED, which allows the easy sharing of simulations. The increase in usability leads to a decrease in the time necessary to prepare the simulation. Most TRNSYS users operate on PC computers with the Windows operating system. Therefore, the next logical step in increased usability was to port the current TRNSYS package to the Windows operating system. Several organizations worked on this conversion that has resulted in two distinct Windows packages. One package closely resembles the DOS version and includes TRNSHELL for Windows and PRESIM for Windows. The other package incorporates a general front-end, called IISIBat, that is a general simulation tool front-end. 8 figs.

  8. RH Packaging Operations Manual

    SciTech Connect

    Washington TRU Solutions LLC

    2003-09-17

    This procedure provides operating instructions for the RH-TRU 72-B Road Cask, Waste Shipping Package. In this document, "Packaging" refers to the assembly of components necessary to ensure compliance with the packaging requirements (not loaded with a payload). "Package" refers to a Type B packaging that, with its radioactive contents, is designed to retain the integrity of its containment and shielding when subject to the normal conditions of transport and hypothetical accident test conditions set forth in 10 CFR Part 71. Loading of the RH 72-B cask can be done in two ways: on the RH cask trailer in the vertical position, or by removing the cask from the trailer and loading it in a facility designed for remote handling (RH). Before loading the 72-B cask, loading procedures and changes to the loading procedures for the 72-B cask must be sent to CBFO at sitedocuments@wipp.ws for approval.

  9. Modular electronics packaging system

    NASA Technical Reports Server (NTRS)

    Hunter, Don J. (Inventor)

    2001-01-01

    A modular electronics packaging system includes multiple packaging slices that are mounted horizontally to a base structure. The slices interlock to provide added structural support. Each packaging slice includes a rigid and thermally conductive housing having four side walls that together form a cavity to house an electronic circuit. The chamber is enclosed on one end by an end wall, or web, that isolates the electronic circuit from a circuit in an adjacent packaging slice. The web also provides a thermal path between the electronic circuit and the base structure. Each slice also includes a mounting bracket that connects the packaging slice to the base structure. Four guide pins protrude from the slice into four corresponding receptacles in an adjacent slice. A locking element, such as a set screw, protrudes into each receptacle and interlocks with the corresponding guide pin. A conduit is formed in the slice to allow electrical connection to the electronic circuit.

  10. Portable parallel programming in a Fortran environment

    SciTech Connect

    May, E.N.

    1989-01-01

    Experience using the Argonne-developed PARMACs macro package to implement a portable parallel programming environment is described. Fortran programs with intrinsic parallelism of coarse and medium granularity are easily converted to parallel programs which are portable among a number of commercially available parallel processors in the class of shared-memory bus-based and local-memory network based MIMD processors. The parallelism is implemented using standard UNIX tools and a small number of easily understood synchronization concepts (monitors and message-passing techniques) to construct and coordinate multiple cooperating processes on one or many processors. Benchmark results are presented for parallel computers such as the Alliant FX/8, the Encore MultiMax, the Sequent Balance, the Intel iPSC/2 Hypercube and a network of Sun 3 workstations. These parallel machines are typical MIMD types with from 8 to 30 processors, each rated at from 1 to 10 MIPS processing power. The demonstration code used for this work is a Monte Carlo simulation of the response to photons of a "nearly realistic" lead, iron and plastic electromagnetic and hadronic calorimeter, using the EGS4 code system. 6 refs., 2 figs., 2 tabs.
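
    Of the two synchronization concepts mentioned, the monitor is the less familiar today. The sketch below illustrates the idea with a bounded buffer whose shared state is only touched while holding the monitor's lock; Python threads stand in for the macro package's cooperating processes, and this is an illustration of the concept, not the PARMACs macros themselves.

      # Monitor-style synchronization sketch: a bounded buffer guarded by
      # one condition variable; all shared state is accessed under its lock.
      import threading
      from collections import deque

      class BoundedBufferMonitor:
          def __init__(self, capacity):
              self.items = deque()
              self.capacity = capacity
              self.cond = threading.Condition()   # the monitor's lock

          def put(self, item):
              with self.cond:
                  while len(self.items) >= self.capacity:
                      self.cond.wait()            # buffer full: block
                  self.items.append(item)
                  self.cond.notify_all()

          def get(self):
              with self.cond:
                  while not self.items:
                      self.cond.wait()            # buffer empty: block
                  item = self.items.popleft()
                  self.cond.notify_all()
                  return item

      buf = BoundedBufferMonitor(4)
      consumer = threading.Thread(target=lambda: [buf.get() for _ in range(8)])
      consumer.start()
      for i in range(8):
          buf.put(i)
      consumer.join()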

  11. Packaging Concerns/Techniques for Large Devices

    NASA Technical Reports Server (NTRS)

    Sampson, Michael J.

    2009-01-01

    This slide presentation reviews packaging challenges and options for electronic parts. The presentation includes information about non-hermetic packages, space challenges for packaging and complex package variations.

  12. CFD Optimization on Network-Based Parallel Computer System

    NASA Technical Reports Server (NTRS)

    Cheung, Samson H.; VanDalsem, William (Technical Monitor)

    1994-01-01

    Combining multiple engineering workstations into a network-based heterogeneous parallel computer allows application of aerodynamic optimization with advanced computational fluid dynamics codes, which is computationally expensive on mainframe supercomputers. This paper introduces a nonlinear quasi-Newton optimizer designed for this network-based heterogeneous parallel computer, using software called Parallel Virtual Machine. This paper also introduces the methodology behind coupling a Parabolized Navier-Stokes flow solver to the nonlinear optimizer. This parallel optimization package has been applied to reduce the wave drag of a body of revolution and a wing/body configuration, with results of 5% to 6% drag reduction.

  13. Parallel CFD design on network-based computer

    NASA Technical Reports Server (NTRS)

    Cheung, Samson

    1995-01-01

    Combining multiple engineering workstations into a network-based heterogeneous parallel computer allows application of aerodynamic optimization with advanced computational fluid dynamics codes, which can be computationally expensive on mainframe supercomputers. This paper introduces a nonlinear quasi-Newton optimizer designed for this network-based heterogeneous parallel computing environment, utilizing software called Parallel Virtual Machine. This paper introduces the methodology behind coupling a Parabolized Navier-Stokes flow solver to the nonlinear optimizer. This parallel optimization package is applied to reduce the wave drag of a body of revolution and a wing/body configuration, with results of 5% to 6% drag reduction.

  14. Optoelectronic packaging: A review

    SciTech Connect

    Carson, R.F.

    1993-09-01

    Optoelectronics and photonics hold great potential for high data-rate communication and computing. Wide use in computing applications was limited first by device technologies and now suffers from the need for high-precision, mass-produced packaging. The use of photons as a medium of communication and control implies a unique set of packaging constraints that were not present in traditional telecommunications applications. The state of the art in optoelectronic packaging is now driven by microelectronic techniques that have potential for low-cost, high-volume manufacturing.

  15. Seawater Chemistry Package

    2005-11-23

    The SeaChem Seawater Chemistry package provides routines to calculate pH, carbonate chemistry, density, and other quantities for seawater, based on the latest community standards. The chemistry is adapted from Fortran routines provided by the OCMIP3/NOCES project, details of which are available at http://www.ipsl.jussieu.fr/OCMIP/. The SeaChem package can generate Fortran subroutines as well as Python wrappers for those routines. Thus the same code can be used by Python or Fortran analysis packages and Fortran ocean models alike.

  16. MPP parallel forth

    NASA Technical Reports Server (NTRS)

    Dorband, John E.

    1987-01-01

    Massively Parallel Processor (MPP) Parallel FORTH is a derivative of FORTH-83 and Unified Software Systems' Uni-FORTH. The extension of FORTH into the realm of parallel processing on the MPP is described. With few exceptions, Parallel FORTH was made to follow the description of Uni-FORTH as closely as possible. Likewise, the Parallel FORTH extensions were designed to be as philosophically similar to serial FORTH as possible. The MPP hardware characteristics, as viewed by the FORTH programmer, are discussed, followed by a description of how Parallel FORTH is implemented on the MPP.

  17. A survey of packages for large linear systems

    SciTech Connect

    Wu, Kesheng; Milne, Brent

    2000-02-11

    This paper evaluates portable software packages for the iterative solution of very large sparse linear systems on parallel architectures. While we cannot hope to tell individual users which package will best suit their needs, we do hope that our systematic evaluation provides essential unbiased information about the packages, and the evaluation process may serve as an example of how to evaluate these packages. The information contained here includes feature comparisons, usability evaluations and performance characterizations. This review is primarily focused on self-contained packages that can be easily integrated into an existing program and are capable of computing solutions to very large sparse linear systems of equations. More specifically, it concentrates on portable parallel linear system solution packages that provide iterative solution schemes and related preconditioning schemes, because iterative methods are more frequently used than competing schemes such as direct methods. The eight packages evaluated are: Aztec, BlockSolve, ISIS++, LINSOL, P-SPARSLIB, PARASOL, PETSc, and PINEAPL. Among the eight portable parallel iterative linear system solvers reviewed, we recommend PETSc and Aztec for most application programmers because they have well-designed user interfaces, extensive documentation and very responsive user support. Both PETSc and Aztec are written in the C language and are callable from Fortran. For those users interested in using Fortran 90, PARASOL is a good alternative. ISIS++ is a good alternative for those who prefer the C++ language. Both PARASOL and ISIS++ are relatively new and are continuously evolving, so their user interfaces may change. In general, those packages written in Fortran 77 are more cumbersome to use because the user may need to deal directly with a number of arrays of varying sizes. Languages like C++ and Fortran 90 offer more convenient data encapsulation mechanisms which make it easier to implement a clean and intuitive user
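
    Since every package in this review centers on preconditioned iterative methods, it may help to show the serial kernel they parallelize. Below is a minimal Jacobi-preconditioned conjugate gradient in NumPy (an illustrative sketch, not code from any of the reviewed packages); in the parallel setting, the A @ p products and the dot products become distributed operations.

      # Minimal Jacobi-preconditioned conjugate gradient for SPD systems.
      import numpy as np

      def pcg(A, b, tol=1e-10, max_iter=1000):
          M_inv = 1.0 / np.diag(A)          # Jacobi preconditioner
          x = np.zeros_like(b)
          r = b - A @ x                     # initial residual
          z = M_inv * r                     # preconditioned residual
          p = z.copy()
          rz = r @ z
          for _ in range(max_iter):
              Ap = A @ p
              alpha = rz / (p @ Ap)
              x += alpha * p
              r -= alpha * Ap
              if np.linalg.norm(r) < tol:
                  break
              z = M_inv * r
              rz_new = r @ z
              p = z + (rz_new / rz) * p     # new search direction
              rz = rz_new
          return x

      # SPD test system: diagonally dominant random matrix.
      rng = np.random.default_rng(1)
      B = rng.normal(size=(50, 50))
      A = B @ B.T + 50.0 * np.eye(50)
      b = rng.normal(size=50)
      x = pcg(A, b)
      print(np.linalg.norm(A @ x - b))      # small residual norm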

  18. Packaging for Posterity.

    ERIC Educational Resources Information Center

    Sias, Jim

    1990-01-01

    A project in which students designed environmentally responsible food packaging is described. The problem definition; research on topics such as waste paper, plastic, metal, glass, incineration, recycling, and consumer preferences; and the presentation design are provided. (KR)

  19. Parallel flow diffusion battery

    DOEpatents

    Yeh, H.C.; Cheng, Y.S.

    1984-01-01

    A parallel flow diffusion battery for determining the mass distribution of an aerosol has a plurality of diffusion cells mounted in parallel to an aerosol stream, each diffusion cell including a stack of mesh wire screens of different density.

  20. Parallel flow diffusion battery

    DOEpatents

    Yeh, Hsu-Chi; Cheng, Yung-Sung

    1984-08-07

    A parallel flow diffusion battery for determining the mass distribution of an aerosol has a plurality of diffusion cells mounted in parallel to an aerosol stream, each diffusion cell including a stack of mesh wire screens of different density.

  1. The ENSDF Java Package

    SciTech Connect

    Sonzogni, A.A.

    2005-05-24

    A package of computer codes has been developed to process and display nuclear structure and decay data stored in the ENSDF (Evaluated Nuclear Structure Data File) library. The codes were written in an object-oriented fashion using the Java language. This allows for easy implementation across multiple platforms as well as deployment on web pages. The structure of the different Java classes that make up the package is discussed, as well as several different implementations.

  2. Battery packaging - Technology review

    SciTech Connect

    Maiser, Eric

    2014-06-16

    This paper gives a brief overview of battery packaging concepts, their specific advantages and drawbacks, as well as the importance of packaging for performance and cost. Production processes, scaling and automation are discussed in detail to reveal opportunities for cost reduction. Module standardization as an additional path to drive down cost is introduced. A comparison to electronics and photovoltaics production shows 'lessons learned' in those related industries and how they can accelerate learning curves in battery production.

  3. Comparative Packaging Study

    NASA Technical Reports Server (NTRS)

    Perchonok, Michele H.; Oziomek, Thomas V.

    2009-01-01

    Future long duration manned space flights beyond low earth orbit will require the food system to remain safe, acceptable and nutritious. Development of high barrier food packaging will enable this requirement by preventing the ingress and egress of gases and moisture. New high barrier food packaging materials have been identified through a trade study. Practical application of this packaging material within a shelf life test will allow for better determination of whether this material will allow the food system to meet given requirements after the package has undergone processing. The reason to conduct shelf life testing, using a variety of packaging materials, stems from the need to preserve food used for mission durations of several years. Chemical reactions that take place during longer durations may decrease food quality to a point where crew physical or psychological well-being is compromised. This can result in a reduction or loss of mission success. The rate of chemical reactions, including oxidative rancidity and staling, can be controlled by limiting the reactants, reducing the amount of energy available to drive the reaction, and minimizing the amount of water available. Water not only acts as a media for microbial growth, but also as a reactant and means by which two reactants may come into contact with each other. The objective of this study is to evaluate three packaging materials for potential use in long duration space exploration missions.

  4. Modular optimization code package: MOZAIK

    NASA Astrophysics Data System (ADS)

    Bekar, Kursat B.

    This dissertation addresses the development of a modular optimization code package, MOZAIK, for geometric shape optimization problems in nuclear engineering applications. MOZAIK's first mission, determining the optimal shape of the D2O moderator tank for the current and new beam tube configurations for the Penn State Breazeale Reactor's (PSBR) beam port facility, is used to demonstrate its capabilities and test its performance. MOZAIK was designed as a modular optimization sequence including three primary independent modules: the initializer, the physics and the optimizer, each having a specific task. By using fixed interface blocks among the modules, the code attains its two most important characteristics: generic form and modularity. The benefit of this modular structure is that the contents of the modules can be switched depending on the requirements of accuracy, computational efficiency, or compatibility with the other modules. Oak Ridge National Laboratory's discrete ordinates transport code TORT was selected as the transport solver in the physics module of MOZAIK, and two different optimizers, Min-max and Genetic Algorithms (GA), were implemented in the optimizer module of the code package. A distributed memory parallelism was also applied to MOZAIK via MPI (Message Passing Interface) to execute the physics module concurrently on a number of processors for various states in the same search. Moreover, dynamic scheduling was enabled to enhance load balance among the processors while running MOZAIK's physics module thus improving the parallel speedup and efficiency. In this way, the total computation time consumed by the physics module is reduced by a factor close to M, where M is the number of processors. This capability also encourages the use of MOZAIK for shape optimization problems in nuclear applications because many traditional codes related to radiation transport do not have parallel execution capability. A set of computational models based on the
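
    The dynamic scheduling idea (hand the next task to whichever processor becomes free, rather than assigning tasks statically) can be sketched without MPI. In the illustrative Python sketch below, multiprocessing stands in for the MPI-based dispatch described above, and evaluate_state is a hypothetical stand-in for running the transport solver on one candidate geometry.

      # Dynamic scheduling sketch: tasks of uneven cost go to the first
      # idle worker, so fast and slow evaluations interleave for balance.
      import time
      from multiprocessing import Pool

      def evaluate_state(state):
          time.sleep(0.01 * (state % 7))   # uneven, state-dependent cost
          return state, state * state     # (state id, objective value)

      if __name__ == "__main__":
          states = list(range(40))
          with Pool(4) as pool:
              # imap_unordered dispatches the next state to whichever
              # worker frees up first, rather than pre-assigning blocks.
              results = dict(pool.imap_unordered(evaluate_state, states))
          print(results[7])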

  5. Parallel simulation today

    NASA Technical Reports Server (NTRS)

    Nicol, David; Fujimoto, Richard

    1992-01-01

    This paper surveys topics that presently define the state of the art in parallel simulation. Included in the tutorial are discussions on new protocols, mathematical performance analysis, time parallelism, hardware support for parallel simulation, load balancing algorithms, and dynamic memory management for optimistic synchronization.

  6. Verbal and Visual Parallelism

    ERIC Educational Resources Information Center

    Fahnestock, Jeanne

    2003-01-01

    This study investigates the practice of presenting multiple supporting examples in parallel form. The elements of parallelism and its use in argument were first illustrated by Aristotle. Although real texts may depart from the ideal form for presenting multiple examples, rhetorical theory offers a rationale for minimal, parallel presentation. The…

  7. Hypertext-based computer vision teaching packages

    NASA Astrophysics Data System (ADS)

    Marshall, A. David

    1994-10-01

    The World Wide Web Initiative has provided a means of delivering hypertext and multimedia based information across the whole INTERNET. Many applications have been developed on such http servers. At Cardiff we have developed an http hypertext-based multimedia server, the Cardiff Information Server, using the widely available Mosaic system. The server provides a variety of information, ranging from teaching modules, on-line documentation and timetables for departmental activities to more light-hearted hobby interests. One important and novel development of the server has been the provision of courseware facilities, ranging from on-line lecture notes, exercises and their solutions to more interactive teaching packages. A variety of disciplines have benefited, notably Computer Vision and Image Processing, but also C programming, X Windows, Computer Graphics and Parallel Computing. This paper addresses the implementation of the Computer Vision and Image Processing packages, the advantages gained from using a hypertext-based system, and practical experiences of using the packages in a class environment. The paper considers how best to provide information in such a hypertext-based system and how interactive image processing packages can be developed and integrated into courseware. The suite of tools developed facilitates a flexible and powerful courseware package that has proved popular in the classroom and over the Internet. The paper also details many possible future developments. One of the key points raised in the paper is that Mosaic's hypertext language (html) is extremely powerful and yet relatively straightforward to use. It is also possible to link in Unix calls so that programs and shells can be executed. This provides a powerful suite of utilities that can be exploited to develop many packages.

  8. PARAMESH: A Parallel Adaptive Mesh Refinement Community Toolkit

    NASA Technical Reports Server (NTRS)

    MacNeice, Peter; Olson, Kevin M.; Mobarry, Clark; deFainchtein, Rosalinda; Packer, Charles

    1999-01-01

    In this paper, we describe a community toolkit which is designed to provide parallel support with adaptive mesh capability for a large and important class of computational models, those using structured, logically cartesian meshes. The package of Fortran 90 subroutines, called PARAMESH, is designed to provide an application developer with an easy route to extend an existing serial code which uses a logically cartesian structured mesh into a parallel code with adaptive mesh refinement. Alternatively, in its simplest use, and with minimal effort, it can operate as a domain decomposition tool for users who want to parallelize their serial codes, but who do not wish to use adaptivity. The package can provide them with an incremental evolutionary path for their code, converting it first to uniformly refined parallel code, and then later if they so desire, adding adaptivity.

  9. CH Packaging Program Guidance

    SciTech Connect

    None, None

    2009-06-01

    The purpose of this document is to provide the technical requirements for preparation for use, operation, inspection, and maintenance of a Transuranic Package Transporter Model II (TRUPACT-II), a HalfPACT shipping package, and directly related components. This document complies with the minimum requirements as specified in the TRUPACT-II Safety Analysis Report for Packaging (SARP), HalfPACT SARP, and U.S. Nuclear Regulatory Commission (NRC) Certificates of Compliance (C of C) 9218 and 9279, respectively. In the event of a conflict between this document and the SARP or C of C, the C of C shall govern. The C of Cs state: "each package must be prepared for shipment and operated in accordance with the procedures described in Chapter 7.0, Operating Procedures, of the application." They further state: "each package must be tested and maintained in accordance with the procedures described in Chapter 8.0, Acceptance Tests and Maintenance Program of the Application." Chapter 9.0 of the SARP charges the U.S. Department of Energy (DOE) or the Waste Isolation Pilot Plant (WIPP) management and operating (M&O) contractor with assuring packaging is used in accordance with the requirements of the C of C. Because the packaging is NRC-approved, users need to be familiar with Title 10 Code of Federal Regulations (CFR) §71.8. Any time a user suspects or has indications that the conditions of approval in the C of C were not met, the Carlsbad Field Office (CBFO) shall be notified immediately. The CBFO will evaluate the issue and notify the NRC if required. In accordance with 10 CFR Part 71, certificate holders, packaging users, and contractors or subcontractors who use, design, fabricate, test, maintain, or modify the packaging shall post copies of (1) 10 CFR Part 21 regulations, (2) Section 206 of the Energy Reorganization Act of 1974, and (3) NRC Form 3, Notice to Employees. These documents must be posted in a conspicuous location where the activities subject to these regulations are

  10. CH Packaging Program Guidance

    SciTech Connect

    None, None

    2008-09-11

    The purpose of this document is to provide the technical requirements for preparation for use, operation, inspection, and maintenance of a Transuranic Package Transporter Model II (TRUPACT-II), a HalfPACT shipping package, and directly related components. This document complies with the minimum requirements as specified in the TRUPACT-II Safety Analysis Report for Packaging (SARP), HalfPACT SARP, and U.S. Nuclear Regulatory Commission (NRC) Certificates of Compliance (C of C) 9218 and 9279, respectively. In the event of a conflict between this document and the SARP or C of C, the C of C shall govern. The C of Cs state: "each package must be prepared for shipment and operated in accordance with the procedures described in Chapter 7.0, Operating Procedures, of the application." They further state: "each package must be tested and maintained in accordance with the procedures described in Chapter 8.0, Acceptance Tests and Maintenance Program of the Application." Chapter 9.0 of the SARP charges the U.S. Department of Energy (DOE) or the Waste Isolation Pilot Plant (WIPP) management and operating (M&O) contractor with assuring packaging is used in accordance with the requirements of the C of C. Because the packaging is NRC-approved, users need to be familiar with Title 10 Code of Federal Regulations (CFR) §71.8. Any time a user suspects or has indications that the conditions of approval in the C of C were not met, the Carlsbad Field Office (CBFO) shall be notified immediately. The CBFO will evaluate the issue and notify the NRC if required. In accordance with 10 CFR Part 71, certificate holders, packaging users, and contractors or subcontractors who use, design, fabricate, test, maintain, or modify the packaging shall post copies of (1) 10 CFR Part 21 regulations, (2) Section 206 of the Energy Reorganization Act of 1974, and (3) NRC Form 3, Notice to Employees. These documents must be posted in a conspicuous location where the activities subject to these regulations are

  11. RH Packaging Program Guidance

    SciTech Connect

    Washington TRU Solutions LLC

    2008-01-12

    The purpose of this program guidance document is to provide the technical requirements for use, operation, inspection, and maintenance of the RH-TRU 72-B Waste Shipping Package (also known as the "RH-TRU 72-B cask") and directly related components. This document complies with the requirements as specified in the RH-TRU 72-B Safety Analysis Report for Packaging (SARP), and Nuclear Regulatory Commission (NRC) Certificate of Compliance (C of C) 9212. If there is a conflict between this document and the SARP and/or C of C, the C of C shall govern. The C of C states: "...each package must be prepared for shipment and operated in accordance with the procedures described in Chapter 7.0, Operating Procedures, of the application." It further states: "...each package must be tested and maintained in accordance with the procedures described in Chapter 8.0, Acceptance Tests and Maintenance Program of the Application." Chapter 9.0 of the SARP tasks the Waste Isolation Pilot Plant (WIPP) Management and Operating (M&O) Contractor with assuring the packaging is used in accordance with the requirements of the C of C. Because the packaging is NRC-approved, users need to be familiar with Title 10 Code of Federal Regulations (CFR) §71.8, "Deliberate Misconduct." Any time a user suspects or has indications that the conditions of approval in the C of C were not met, the U.S. Department of Energy (DOE) Carlsbad Field Office (CBFO) shall be notified immediately. The CBFO will evaluate the issue and notify the NRC if required. In accordance with 10 CFR Part 71, "Packaging and Transportation of Radioactive Material," certificate holders, packaging users, and contractors or subcontractors who use, design, fabricate, test, maintain, or modify the packaging shall post copies of (1) 10 CFR Part 21, "Reporting of Defects and Noncompliance," regulations, (2) Section 206 of the Energy Reorganization Act of 1974, and (3) NRC Form 3, Notice to Employees. These documents must be posted in a

  12. Parallel algorithm development

    SciTech Connect

    Adams, T.F.

    1996-06-01

    Rapid changes in parallel computing technology are causing significant changes in the strategies being used for parallel algorithm development. One approach is simply to write computer code in a standard language like FORTRAN 77 with the expectation that the compiler will produce executable code that will run in parallel. The alternatives are: (1) to build explicit message passing directly into the source code; or (2) to write source code without explicit reference to message passing or parallelism, but use a general communications library to provide efficient parallel execution. Application of these strategies is illustrated with examples of codes currently under development.
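
    A sketch of strategy (1), explicit message passing built directly into the source code, is given below. It uses the mpi4py Python bindings rather than the FORTRAN 77 setting of the abstract, and the ranks, tags, and chunk sizes are invented for illustration; it shows the style of the approach, not the codes under development in the record.

        # Explicit message passing: the root rank hands each worker a chunk of
        # work; workers compute a partial result locally and send it back.
        # Run with e.g.: mpirun -n 4 python sum_squares.py
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()

        if rank == 0:
            for dest in range(1, comm.Get_size()):
                comm.send(list(range(dest * 10, dest * 10 + 10)), dest=dest, tag=0)
            total = sum(comm.recv(source=s, tag=1) for s in range(1, comm.Get_size()))
            print("sum of squares from workers:", total)
        else:
            chunk = comm.recv(source=0, tag=0)
            comm.send(sum(x * x for x in chunk), dest=0, tag=1)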

  13. Food packages for Space Shuttle

    NASA Technical Reports Server (NTRS)

    Fohey, M. F.; Sauer, R. L.; Westover, J. B.; Rockafeller, E. F.

    1978-01-01

    The paper reviews food packaging techniques used in space flight missions and describes the system developed for the Space Shuttle. Attention is directed to bite-size food cubes used in Gemini, Gemini rehydratable food packages, Apollo spoon-bowl rehydratable packages, thermostabilized flex pouch for Apollo, tear-top commercial food cans used in Skylab, polyethylene beverage containers, Skylab rehydratable food package, Space Shuttle food package configuration, duck-bill septum rehydration device, and a drinking/dispensing nozzle for Space Shuttle liquids. Constraints and testing of packaging are considered, a comparison of food package materials is presented, and typical Shuttle foods and beverages are listed.

  14. CH Packaging Program Guidance

    SciTech Connect

    Washington TRU Solutions LLC

    2005-02-28

    The purpose of this document is to provide the technical requirements for preparation for use, operation, inspection, and maintenance of a Transuranic Package Transporter Model II (TRUPACT-II), a HalfPACT shipping package, and directly related components. This document complies with the minimum requirements as specified in the TRUPACT-II Safety Analysis Report for Packaging (SARP), HalfPACT SARP, and U.S. Nuclear Regulatory Commission (NRC) Certificates of Compliance (C of C) 9218 and 9279, respectively. In the event of a conflict between this document and the SARP or C of C, the C of C shall govern. The C of Cs state: "each package must be prepared for shipment and operated in accordance with the procedures described in Chapter 7.0, Operating Procedures, of the application." They further state: "each package must be tested and maintained in accordance with the procedures described in Chapter 8.0, Acceptance Tests and Maintenance Program of the Application." Chapter 9.0 of the SARP charges the Waste Isolation Pilot Plant (WIPP) management and operating (M&O) contractor with assuring packaging is used in accordance with the requirements of the C of C. Because the packaging is NRC-approved, users need to be familiar with Title 10 Code of Federal Regulations (CFR) §71.8. Any time a user suspects or has indications that the conditions of approval in the C of C were not met, the Carlsbad Field Office (CBFO) shall be notified immediately. The CBFO will evaluate the issue and notify the NRC if required.

  15. Food Packaging Materials

    NASA Technical Reports Server (NTRS)

    1978-01-01

    The photos show a few of the food products packaged in Alure, a metallized plastic material developed and manufactured by St. Regis Paper Company's Flexible Packaging Division, Dallas, Texas. The material incorporates a metallized film originally developed for space applications. Among the suppliers of the film to St. Regis is King-Seeley Thermos Company, Winchester, Massachusetts. Initially used by NASA as a signal-bouncing reflective coating for the Echo 1 communications satellite, the film was developed by a company later absorbed by King-Seeley. The metallized film was also used as insulating material for components of a number of other spacecraft. St. Regis developed Alure to meet a multiple packaging material need: good eye appeal, product protection for long periods and the ability to be used successfully on a wide variety of food packaging equipment. When the cost of aluminum foil skyrocketed, packagers sought substitute metallized materials but experiments with a number of them uncovered problems; some were too expensive, some did not adequately protect the product, some were difficult for the machinery to handle. Alure offers a solution. St. Regis created Alure by sandwiching the metallized film between layers of plastics. The resulting laminated metallized material has the superior eye appeal of foil but is less expensive and more easily machined. Alure effectively blocks out light, moisture and oxygen and therefore gives the packaged food long shelf life. A major packaging firm conducted its own tests of the material and confirmed the advantages of machinability and shelf life, adding that it runs faster on machines than materials used in the past and it decreases product waste; the net effect is increased productivity.

  16. Detecting small holes in packages

    DOEpatents

    Kronberg, James W.; Cadieux, James R.

    1996-01-01

    A package containing a tracer gas, and a method for determining the presence of a hole in the package by sensing the presence of the gas outside the package. The preferred tracer gas, especially for food packaging, is sulfur hexafluoride. A quantity of the gas is added to the package and the package is closed. The concentration of the gas in the atmosphere outside the package is measured and compared to a predetermined value of the concentration of the gas in the absence of the package. A measured concentration greater than the predetermined value indicates the presence of a hole in the package. Measuring may be done in a chamber having a lower pressure than that in the package.

  17. Detecting small holes in packages

    DOEpatents

    Kronberg, J.W.; Cadieux, J.R.

    1996-03-19

    A package containing a tracer gas, and a method for determining the presence of a hole in the package by sensing the presence of the gas outside the package are disclosed. The preferred tracer gas, especially for food packaging, is sulfur hexafluoride. A quantity of the gas is added to the package and the package is closed. The concentration of the gas in the atmosphere outside the package is measured and compared to a predetermined value of the concentration of the gas in the absence of the package. A measured concentration greater than the predetermined value indicates the presence of a hole in the package. Measuring may be done in a chamber having a lower pressure than that in the package. 3 figs.

  18. 78 FR 13083 - Products Having Laminated Packaging, Laminated Packaging, and Components Thereof; Notice of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-26

    ... COMMISSION Products Having Laminated Packaging, Laminated Packaging, and Components Thereof; Notice of... Commission has received a complaint entitled Products Having Laminated Packaging, Laminated Packaging, and... having laminated packaging, laminated packaging, and components thereof. The complaint names...

  19. CH Packaging Program Guidance

    SciTech Connect

    None, None

    2006-04-25

    The purpose of this document is to provide the technical requirements for preparation for use, operation, inspection, and maintenance of a Transuranic Package Transporter Model II (TRUPACT-II), a HalfPACT shipping package, and directly related components. This document complies with the minimum requirements as specified in the TRUPACT-II Safety Analysis Report for Packaging (SARP), HalfPACT SARP, and U.S. Nuclear Regulatory Commission (NRC) Certificates of Compliance (C of C) 9218 and 9279, respectively. In the event of a conflict between this document and the SARP or C of C, the C of C shall govern. The C of Cs state: "each package must be prepared for shipment and operated in accordance with the procedures described in Chapter 7.0, Operating Procedures, of the application." They further state: "each package must be tested and maintained in accordance with the procedures described in Chapter 8.0, Acceptance Tests and Maintenance Program of the Application." Chapter 9.0 of the SARP charges the U.S. Department of Energy (DOE) or the Waste Isolation Pilot Plant (WIPP) management and operating (M&O) contractor with assuring packaging is used in accordance with the requirements of the C of C. Because the packaging is NRC-approved, users need to be familiar with Title 10 Code of Federal Regulations (CFR) §71.8. Any time a user suspects or has indications that the conditions of approval in the C of C were not met, the Carlsbad Field Office (CBFO) shall be notified immediately. The CBFO will evaluate the issue and notify the NRC if required. In accordance with 10 CFR Part 71, certificate holders, packaging users, and contractors or subcontractors who use, design, fabricate, test, maintain, or modify the packaging shall post copies of (1) 10 CFR Part 21 regulations, (2) Section 206 of the Energy Reorganization Act of 1974, and (3) NRC Form 3, Notice to Employees. These documents must be posted in a conspicuous location where the activities subject to these regulations are

  20. CH Packaging Program Guidance

    SciTech Connect

    None, None

    2007-12-13

    The purpose of this document is to provide the technical requirements for preparation for use, operation, inspection, and maintenance of a Transuranic Package Transporter Model II (TRUPACT-II), a HalfPACT shipping package, and directly related components. This document complies with the minimum requirements as specified in the TRUPACT-II Safety Analysis Report for Packaging (SARP), HalfPACT SARP, and U.S. Nuclear Regulatory Commission (NRC) Certificates of Compliance (C of C) 9218 and 9279, respectively. In the event of a conflict between this document and the SARP or C of C, the C of C shall govern. The C of Cs state: "each package must be prepared for shipment and operated in accordance with the procedures described in Chapter 7.0, Operating Procedures, of the application." They further state: "each package must be tested and maintained in accordance with the procedures described in Chapter 8.0, Acceptance Tests and Maintenance Program of the Application." Chapter 9.0 of the SARP charges the U.S. Department of Energy (DOE) or the Waste Isolation Pilot Plant (WIPP) management and operating (M&O) contractor with assuring packaging is used in accordance with the requirements of the C of C. Because the packaging is NRC-approved, users need to be familiar with Title 10 Code of Federal Regulations (CFR) §71.8. Any time a user suspects or has indications that the conditions of approval in the C of C were not met, the Carlsbad Field Office (CBFO) shall be notified immediately. The CBFO will evaluate the issue and notify the NRC if required. In accordance with 10 CFR Part 71, certificate holders, packaging users, and contractors or subcontractors who use, design, fabricate, test, maintain, or modify the packaging shall post copies of (1) 10 CFR Part 21 regulations, (2) Section 206 of the Energy Reorganization Act of 1974, and (3) NRC Form 3, Notice to Employees. These documents must be posted in a conspicuous location where the activities subject to these regulations are

  1. Packaging the MAMA module

    NASA Astrophysics Data System (ADS)

    Seals, J. Dennis

    1994-10-01

    The MAMA (Mixed Arithmetic, Multiprocessing Array) module is being developed to evaluate new packaging technologies and processing paradigms for advanced military processing systems. The architecture supports a tight mix of signal, data, and I/O processing at GFLOP throughput rates. It is fabricated using only commercial-off-the-shelf (COTS) chips and will provide a high level of durability. Its attributes are largely the result of two new interconnection and packaging technologies. Chip-in-board packaging is used to reduce local x-y communication delays and solder joints, while significantly improving board-level packaging density. A unique 3-D interconnection technology called a cross-over cell has been developed to reduce board-to-board communication delays, drive power, glue logic, and card-edge pin-outs. These technologies enable true 3-D structures that are form, fit and connector compatible with conventional line-replaceable modules. The module's design rationale, packaging technology, and basic architecture will be presented in this paper.

  2. Laboratory Measurements of Synthetic Pyroxenes and their Mixtures with Iron Sulfides as Inorganic Refractory Analogues for Rosetta/VIRTIS' Surface Composition Analysis of 67P/CG

    NASA Astrophysics Data System (ADS)

    Markus, Kathrin; Arnold, Gabriele; Moroz, Ljuba; Henckel, Daniela; Kappel, David; Capaccioni, Fabrizio; Filacchione, Gianrico; Schmitt, Bernard; Tosi, Federico; Érard, Stéphane; Bockelee-Morvan, Dominique; Leyrat, Cedric; VIRTIS Team

    2016-10-01

    The Visible and InfraRed Thermal Imaging Spectrometer VIRTIS on board Rosetta provided 0.25-5.1 µm spectra of 67P/CG's surface (Capaccioni et al., 2015). Thermally corrected reflectance spectra display a low albedo of 0.06 at 0.65 µm, different red VIS and IR spectral slopes, and a broad 3.2 µm band. This absorption feature is due to refractory surface constituents attributed to organic components, but other refractory constituents influence albedo and spectral slopes. Possible contributions of inorganic components to spectral characteristics and spectral variations across the surface should be understood based on laboratory studies and spectral modeling. Although a wide range of silicate compositions was found in "cometary" anhydrous IDPs and cometary dust, Mg-rich crystalline mafic minerals are dominant silicate components. A large fraction of silicate grains are Fe-free enstatites and forsterites that are not found in terrestrial rocks but can be synthesized in order to provide a basis for laboratory studies and comparison with VIRTIS data. We report the results of the synthesis, analyses, and spectral reflectance measurements of Fe-free low-Ca pyroxenes (ortho- and clinoenstatites). These minerals are generally very bright and almost spectrally featureless. However, even trace amounts of Fe-ions produce a significant decrease in the near-UV reflectance and hence can contribute to slope variations. Iron sulfides (troilite, pyrrhotite) are among the most plausible phases responsible for the low reflectance of 67P's surface from the VIS to the NIR. The darkening efficiency of these opaque phases is strongly particle-size dependent. Here we present a series of reflectance spectra of fine-grained synthetic enstatite powders mixed in various proportions with iron sulfide powders. The influence of dark sulfides on reflectance in the near-UV to near-IR spectral ranges is investigated. This study can contribute to understanding the shape of reflectance spectra of 67P

  3. Distribution of H2O and CO2 in the inner coma of 67P/CG as observed by VIRTIS-M onboard Rosetta

    NASA Astrophysics Data System (ADS)

    Capaccioni, F.

    2015-10-01

    VIRTIS (Visible, Infrared and Thermal Imaging Spectrometers) is a dual channel spectrometer; VIRTIS-M (M for Mapper) is a hyper-spectral imager covering a wide spectral range with two detectors: a CCD (VIS) ranging from 0.25 through 1.0 μm and an HgCdTe detector (IR) covering the 1.0 through 5.1 μm region. VIRTIS-M uses a slit and a scan mirror to generate images with spatial resolution of 250 μrad over a FOV of 64 mrad. The second channel is VIRTIS-H (H for High resolution), a point spectrometer with high spectral resolution (λ/Δλ=3000@3 μm) in the range 2-5 μm [1].The VIRTIS instrument has been used to investigate the molecular composition of the coma of 67P/CG by observing resonant fluorescent excitation in the 2 to 5 μm spectral region. The spectrum consists of emission bands superimposed on a background continuum. The strongest features are the bands of H2O at 2.7 μm and the CO2 band at 4.27 μm [1]. The high spectral resolution of VIRTIS-H obtains a detailed description of the fluorescent bands, while the mapping capability of VIRTIS-M extends the coverage in the spatial dimension to map and monitor the abundance of water and carbon dioxide in space and time. We have already reported [2,3,4] some preliminary observations by VIRTIS of H2O and CO2 in the coma. In the present work we perform a systematic mapping of the distribution and variability of these molecules using VIRTIS-M measurements of their band areas. All the spectra were carefully selected to avoid contamination due to nucleus radiance. A median filter is applied on the spatial dimensions of each data cube to minimise the pixel-to-pixel residual variability. This is at the expense of some reduction in the spatial resolution, which is still in the order of few tens of metres and thus adequate for the study of the spatial distribution of the volatiles. Typical spectra are shown in Figure 1

  4. TSF Interface Package

    2004-03-01

    A collection of packages of classes for interfacing to sparse and dense matrices, vectors and graphs, and to linear operators. TSF (via TSFCore, TSFCoreUtils and TSFExtended) provides the application programmer interface to any number of solvers, linear algebra libraries and preconditioner packages, providing also a sophisticated technique for combining multiple packages to solve a single problem. TSF provides a collection of abstract base classes that define the interfaces to abstract vector, matrix and linear operator objects. By using abstract interfaces, users of TSF are not limiting themselves to any one concrete library and can in fact easily combine multiple libraries to solve a single problem.
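
    The abstract-interface idea can be made concrete with a short sketch. The Python analogue below (class and function names are hypothetical, not the TSF C++ API) shows solver code written against an abstract linear-operator base class, so that any concrete backend satisfying the interface can be plugged in or combined.

        from abc import ABC, abstractmethod

        class LinearOperator(ABC):
            """Abstract interface: each concrete library implements apply()."""
            @abstractmethod
            def apply(self, x):
                ...

        class DiagonalOperator(LinearOperator):
            # One possible concrete backend: a diagonal matrix stored as a list.
            def __init__(self, diag):
                self.diag = diag
            def apply(self, x):
                return [d * xi for d, xi in zip(self.diag, x)]

        def power_iteration_step(op, x):
            # Solver code sees only the abstract interface, never the backend.
            y = op.apply(x)
            norm = max(abs(v) for v in y) or 1.0
            return [v / norm for v in y]

        print(power_iteration_step(DiagonalOperator([1.0, 2.0, 3.0]), [1.0, 1.0, 1.0]))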

  5. System packager strategies

    SciTech Connect

    Hennagir, T.

    1995-03-01

    Advances in combined equipment technologies, the ability to supply fuel flexibility and new financial support structures are helping power systems packagers meet a diverse series of client and project needs. Systems packagers continue to capture orders for various size power plants around the globe. A competitive buyer's market remains the order of the day. In cogeneration markets, clients continue to search for efficiency rather than specific output for inside-the-fence projects. Letter-perfect service remains a requisite as successful suppliers strive to meet customers' ever-changing needs for thermal and power applications.

  6. SPHINX experimenters information package

    SciTech Connect

    Zarick, T.A.

    1996-08-01

    This information package was prepared for both new and experienced users of the SPHINX (Short Pulse High Intensity Nanosecond X-radiator) flash X-Ray facility. It was compiled to help facilitate experiment design and preparation for both the experimenter(s) and the SPHINX operational staff. The major areas covered include: Recording Systems Capabilities, Recording System Cable Plant, Physical Dimensions of SPHINX and the SPHINX Test cell, SPHINX Operating Parameters and Modes, Dose Rate Map, Experiment Safety Approval Form, and a Feedback Questionnaire. This package will be updated as the SPHINX facilities and capabilities are enhanced.

  7. Parallel digital forensics infrastructure.

    SciTech Connect

    Liebrock, Lorie M.; Duggan, David Patrick

    2009-10-01

    This report documents the architecture and implementation of a Parallel Digital Forensics infrastructure. This infrastructure is necessary for supporting the design, implementation, and testing of new classes of parallel digital forensics tools. Digital Forensics has become extremely difficult with data sets of one terabyte and larger. The only way to overcome the processing time of these large sets is to identify and develop new parallel algorithms for performing the analysis. To support algorithm research, a flexible base infrastructure is required. A candidate architecture for this base infrastructure was designed, instantiated, and tested by this project, in collaboration with New Mexico Tech. Previous infrastructures were not designed and built specifically for the development and testing of parallel algorithms. With the size of forensics data sets only expected to increase significantly, this type of infrastructure support is necessary for continued research in parallel digital forensics. This report documents the architecture and implementation of the parallel digital forensics (PDF) infrastructure.

  8. The gputools package enables GPU computing in R

    PubMed Central

    Buckner, Joshua; Wilson, Justin; Seligman, Mark; Athey, Brian; Watson, Stanley; Meng, Fan

    2010-01-01

    Motivation: By default, the R statistical environment does not make use of parallelism. Researchers may resort to expensive solutions such as cluster hardware for large analysis tasks. Graphics processing units (GPUs) provide an inexpensive and computationally powerful alternative. Using R and the CUDA toolkit from Nvidia, we have implemented several functions commonly used in microarray gene expression analysis for GPU-equipped computers. Results: R users can take advantage of the better performance provided by an Nvidia GPU. Availability: The package is available from CRAN, the R project's repository of packages, at http://cran.r-project.org/web/packages/gputools More information about our gputools R package is available at http://brainarray.mbni.med.umich.edu/brainarray/Rgpgpu Contact: bucknerj@umich.edu PMID:19850754

  9. AN ADA NAMELIST PACKAGE

    NASA Technical Reports Server (NTRS)

    Klumpp, A. R.

    1994-01-01

    The Ada Namelist Package, developed for the Ada programming language, enables a calling program to read and write FORTRAN-style namelist files. A namelist file consists of any number of assignment statements in any order. Features of the Ada Namelist Package are: the handling of any combination of user-defined types; the ability to read vectors, matrices, and slices of vectors and matrices; the handling of mismatches between variables in the namelist file and those in the programmed list of namelist variables; and the ability to avoid searching the entire input file for each variable. The principal user benefits of this software are the following: the ability to write namelist-readable files, the ability to detect most file errors in the initialization phase, a package organization that reduces the number of instantiated units to a few packages rather than to many subprograms, a reduced number of restrictions, and an increased execution speed. The Ada Namelist Package reads data from an input file into variables declared within a user program. It then writes data from the user program to an output file, printer, or display. The input file contains a sequence of assignment statements in arbitrary order. The output is in namelist-readable form. There is a one-to-one correspondence between namelist I/O statements executed in the user program and variables read or written. Nevertheless, in the input file, mismatches are allowed between assignment statements in the file and the namelist read procedure statements in the user program. The Ada Namelist Package itself is non-generic. However, it has a group of nested generic packages following the nongeneric opening portion. The opening portion declares a variety of user-accessible constants, variables and subprograms. The subprograms are procedures for initializing namelists for reading, reading and writing strings. The subprograms are also functions for analyzing the content of the current dataset and diagnosing errors. Two nested
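
    For readers unfamiliar with the format, a FORTRAN-style namelist file of the kind the package reads is simply a named group of assignment statements, in any order, terminated by a slash. The group and variable names below are hypothetical, purely for illustration:

        &FLIGHT
          ALTITUDE  = 10000.0,
          VELOCITY  = 250.0, 240.5, 238.0,
          ENGINE_ON = .TRUE.
        /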

  10. PCLIPS: Parallel CLIPS

    NASA Technical Reports Server (NTRS)

    Hall, Lawrence O.; Bennett, Bonnie H.; Tello, Ivan

    1994-01-01

    A parallel version of CLIPS 5.1 has been developed to run on Intel Hypercubes. The user interface is the same as that for CLIPS with some added commands to allow for parallel calls. A complete version of CLIPS runs on each node of the hypercube. The system has been instrumented to display the time spent in the match, recognize, and act cycles on each node. Only rule-level parallelism is supported. Parallel commands enable the assertion and retraction of facts to/from remote nodes' working memory. Parallel CLIPS was used to implement a knowledge-based command, control, communications, and intelligence (C(sup 3)I) system to demonstrate the fusion of high-level, disparate sources. We discuss the nature of the information fusion problem, our approach, and implementation. Parallel CLIPS has also been used to run several benchmark parallel knowledge bases such as one to set up a cafeteria. Results from running Parallel CLIPS with parallel knowledge base partitions indicate that significant speed increases, including superlinear in some cases, are possible.

  11. Parallel MR Imaging

    PubMed Central

    Deshmane, Anagha; Gulani, Vikas; Griswold, Mark A.; Seiberlich, Nicole

    2015-01-01

    Parallel imaging is a robust method for accelerating the acquisition of magnetic resonance imaging (MRI) data, and has made possible many new applications of MR imaging. Parallel imaging works by acquiring a reduced amount of k-space data with an array of receiver coils. These undersampled data can be acquired more quickly, but the undersampling leads to aliased images. One of several parallel imaging algorithms can then be used to reconstruct artifact-free images from either the aliased images (SENSE-type reconstruction) or from the under-sampled data (GRAPPA-type reconstruction). The advantages of parallel imaging in a clinical setting include faster image acquisition, which can be used, for instance, to shorten breath-hold times resulting in fewer motion-corrupted examinations. In this article the basic concepts behind parallel imaging are introduced. The relationship between undersampling and aliasing is discussed and two commonly used parallel imaging methods, SENSE and GRAPPA, are explained in detail. Examples of artifacts arising from parallel imaging are shown and ways to detect and mitigate these artifacts are described. Finally, several current applications of parallel imaging are presented and recent advancements and promising research in parallel imaging are briefly reviewed. PMID:22696125
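
    The undersampling/aliasing relationship described above can be demonstrated in a few lines. In the hedged Python sketch below (numpy assumed; the synthetic image and the R = 2 pattern are illustrative, not from the article), discarding every other k-space line makes the object fold over at half the field of view, which is exactly the artifact that SENSE- and GRAPPA-type reconstructions unwrap.

        import numpy as np

        # Synthetic "image": a bright square on a dark background.
        img = np.zeros((128, 128))
        img[48:80, 48:80] = 1.0

        k = np.fft.fft2(img)        # fully sampled k-space
        k_us = k.copy()
        k_us[1::2, :] = 0           # skip every other phase-encode line (R = 2)

        alias = np.abs(np.fft.ifft2(k_us))
        # The square now appears twice, shifted by half the field of view:
        print(img[120, 64], alias[120, 64].round(2))   # 0.0 versus ~0.5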

  12. CALTRANS: A parallel, deterministic, 3D neutronics code

    SciTech Connect

    Carson, L.; Ferguson, J.; Rogers, J.

    1994-04-01

    Our efforts to parallelize the deterministic solution of the neutron transport equation have culminated in a new neutronics code CALTRANS, which has full 3D capability. In this article, we describe the layout and algorithms of CALTRANS and present performance measurements of the code on a variety of platforms. Explicit implementation of the parallel algorithms of CALTRANS using both the function calls of the Parallel Virtual Machine software package (PVM 3.2) and the Meiko CS-2 tagged message passing library (based on the Intel NX/2 interface) are provided in appendices.

  13. Packaging Materials Properties Data

    SciTech Connect

    Leduc, D.

    1991-10-30

    Several energy absorbing materials are used in nuclear weapons component shipping containers recently designed for the Y-12 Plant Program Management Packaging Group. As a part of the independent review procedure leading to Certificates of Compliance, the U.S. Department of Energy Technical Safety Review Panels requested compression versus deflection data on these materials. This report is a compilation of that data.

  14. Packaging materials properties data

    SciTech Connect

    Walker, M.S.

    1991-01-01

    Several energy absorbing materials are used in nuclear weapons component shipping containers recently designed for the Y-12 Plant Program Management Packaging Group. As a part of the independent review procedure leading to Certificates of Compliance, the US Department of Energy Technical Safety Review Panels requested compression versus deflection data on these materials. This report is a compilation of that data.

  15. Electro-Microfluidic Packaging

    NASA Astrophysics Data System (ADS)

    Benavides, G. L.; Galambos, P. C.

    2002-06-01

    There are many examples of electro-microfluidic products that require cost effective packaging solutions. Industry has responded to a demand for products such as drop ejectors, chemical sensors, and biological sensors. Drop ejectors have consumer applications such as ink jet printing and scientific applications such as patterning self-assembled monolayers or ejecting picoliters of expensive analytes/reagents for chemical analysis. Drop ejectors can be used to perform chemical analysis, combinatorial chemistry, drug manufacture, drug discovery, drug delivery, and DNA sequencing. Chemical and biological micro-sensors can sniff the ambient environment for traces of dangerous materials such as explosives, toxins, or pathogens. Other biological sensors can be used to improve world health by providing timely diagnostics and applying corrective measures to the human body. Electro-microfluidic packaging can easily represent over fifty percent of the product cost and, as with Integrated Circuits (IC), the industry should evolve to standard packaging solutions. Standard packaging schemes will minimize cost and bring products to market sooner.

  16. Automatic Differentiation Package

    SciTech Connect

    Gay, David M.; Phipps, Eric; Bratlett, Roscoe

    2007-03-01

    Sacado is an automatic differentiation package for C++ codes using operator overloading and C++ templating. Sacado provides forward, reverse, and Taylor polynomial automatic differentiation classes and utilities for incorporating these classes into C++ codes. Users can compute derivatives of computations arising in engineering and scientific applications, including nonlinear equation solving, time integration, sensitivity analysis, stability analysis, optimization and uncertainty quantification.

  17. Radioactive waste disposal package

    DOEpatents

    Lampe, Robert F.

    1986-01-01

    A radioactive waste disposal package comprising a canister for containing vitrified radioactive waste material and a sealed outer shell encapsulating the canister. A solid block of filler material is supported in said shell and convertible into a liquid state for flow into the space between the canister and outer shell and subsequently hardened to form a solid, impervious layer occupying such space.

  18. Radioactive waste disposal package

    DOEpatents

    Lampe, Robert F.

    1986-11-04

    A radioactive waste disposal package comprising a canister for containing vitrified radioactive waste material and a sealed outer shell encapsulating the canister. A solid block of filler material is supported in said shell and convertible into a liquid state for flow into the space between the canister and outer shell and subsequently hardened to form a solid, impervious layer occupying such space.

  19. Waste disposal package

    DOEpatents

    Smith, M.J.

    1985-06-19

    This is a claim for a waste disposal package including an inner or primary canister for containing hazardous and/or radioactive wastes. The primary canister is encapsulated by an outer or secondary barrier formed of a porous ceramic material to control ingress of water to the canister and the release rate of wastes upon breach of the canister. 4 figs.

  20. CH Packaging Maintenance Manual

    SciTech Connect

    Washington TRU Solutions

    2002-01-02

    This procedure provides instructions for performing inner containment vessel (ICV) and outer containment vessel (OCV) maintenance and periodic leakage rate testing on the following packaging seals and corresponding seal surfaces using a nondestructive helium (He) leak test. In addition, this procedure provides instructions for performing ICV and OCV structural pressure tests.

  1. Metric Education Evaluation Package.

    ERIC Educational Resources Information Center

    Kansky, Bob; And Others

    This document was developed out of a need for a complete, carefully designed set of evaluation instruments and procedures that might be applied in metric inservice programs across the nation. Components of this package were prepared in such a way as to permit local adaptation to the evaluation of a broad spectrum of metric education activities.…

  2. Printer Graphics Package

    NASA Technical Reports Server (NTRS)

    Blanchard, D. C.

    1986-01-01

    Printer Graphics Package (PGP) is tool for making two-dimensional symbolic plots on line printer. PGP created to support development of Heads-Up Display (HUD) simulation. Standard symbols defined with HUD in mind. Available symbols include circle, triangle, quadrangle, window, line, numbers, and text. Additional symbols easily added or built up from available symbols.

  3. High Efficiency Integrated Package

    SciTech Connect

    Ibbetson, James

    2013-09-15

    Solid-state lighting based on LEDs has emerged as a superior alternative to inefficient conventional lighting, particularly incandescent. LED lighting can lead to 80 percent energy savings; can last 50,000 hours – 2-50 times longer than most bulbs; and contains no toxic lead or mercury. However, to enable mass adoption, particularly at the consumer level, the cost of LED luminaires must be reduced by an order of magnitude while achieving superior efficiency, light quality and lifetime. To become viable, energy-efficient replacement solutions must deliver system efficacies of ≥ 100 lumens per watt (LPW) with excellent color rendering (CRI > 85) at a cost that enables payback cycles of two years or less for commercial applications. This development will enable significant site energy savings as it targets commercial and retail lighting applications that are most sensitive to the lifetime operating costs with their extended operating hours per day. If costs are reduced substantially, dramatic energy savings can be realized by replacing incandescent lighting in the residential market as well. In light of these challenges, Cree proposed to develop a multi-chip integrated LED package with an output of > 1000 lumens of warm white light operating at an efficacy of at least 128 LPW with a CRI > 85. This product will serve as the light engine for replacement lamps and luminaires. At the end of the proposed program, this integrated package was to be used in a proof-of-concept lamp prototype to demonstrate the component’s viability in a common form factor. During this project Cree SBTC developed an efficient, compact warm-white LED package with an integrated remote color down-converter. Via a combination of intensive optical, electrical, and thermal optimization, a package design was obtained that met nearly all project goals. This package emitted 1295 lm under instant-on, room-temperature testing conditions, with an efficacy of 128.4 lm/W at a color temperature of ~2873

  4. Eclipse Parallel Tools Platform

    2005-02-18

    Designing and developing parallel programs is an inherently complex task. Developers must choose from the many parallel architectures and programming paradigms that are available, and face a plethora of tools that are required to execute, debug, and analyze parallel programs in these environments. Few, if any, of these tools provide any degree of integration, or indeed any commonality in their user interfaces at all. This further complicates the parallel developer's task, hampering software engineering practices, and ultimately reducing productivity. One consequence of this complexity is that best practice in parallel application development has not advanced to the same degree as more traditional programming methodologies. The result is that there is currently no open-source, industry-strength platform that provides a highly integrated environment specifically designed for parallel application development. Eclipse is a universal tool-hosting platform that is designed to provide a robust, full-featured, commercial-quality, industry platform for the development of highly integrated tools. It provides a wide range of core services for tool integration that allow tool producers to concentrate on their tool technology rather than on platform-specific issues. The Eclipse Integrated Development Environment is an open-source project that is supported by over 70 organizations, including IBM, Intel and HP. The Eclipse Parallel Tools Platform (PTP) plug-in extends the Eclipse framework by providing support for a rich set of parallel programming languages and paradigms, and a core infrastructure for the integration of a wide variety of parallel tools. The first version of the PTP is a prototype that provides only minimal functionality for parallel tool integration and support for a small number of parallel architectures

  5. Packaging design criteria for the Hanford Ecorok Packaging

    SciTech Connect

    Mercado, M.S.

    1996-01-19

    The Hanford Ecorok Packaging (HEP) will be used to ship contaminated water purification filters from K Basins to the Central Waste Complex. This packaging design criteria documents the design of the HEP, its intended use, and the transportation safety criteria it is required to meet. This information will serve as a basis for the safety analysis report for packaging.

  6. Parallel Lisp simulator

    SciTech Connect

    Weening, J.S.

    1988-05-01

    CSIM is a simulator for parallel Lisp, based on a continuation passing interpreter. It models a shared-memory multiprocessor executing programs written in Common Lisp, extended with several primitives for creating and controlling processes. This paper describes the structure of the simulator, measures its performance, and gives an example of its use with a parallel Lisp program.

  7. Parallel computing works

    SciTech Connect

    Not Available

    1991-10-23

    An account of the Caltech Concurrent Computation Program (C{sup 3}P), a five-year project that focused on answering the question: "Can parallel computers be used to do large-scale scientific computations?" As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C{sup 3}P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C{sup 3}P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

  8. Totally parallel multilevel algorithms

    NASA Technical Reports Server (NTRS)

    Frederickson, Paul O.

    1988-01-01

    Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, Robust Multigrid (RMG) of Hackbusch, the FFT based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, which is referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.
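
    The spectral-radius remark can be made precise with the standard linear-iteration error bound (a general fact about stationary iterations, not specific to this abstract). With iteration matrix $M$ and error $e_k$,

        e_{k+1} = M e_k, \qquad \lVert e_k \rVert \le C \left( \rho(M) + \epsilon \right)^{k} \quad \text{for any } \epsilon > 0,

    so $\rho(M) = 0$ makes $M$ nilpotent and the error vanishes exactly after at most $n$ steps for an $n \times n$ system, i.e., the iteration is a direct method; a small nonzero $\rho(M)$ is why a single iteration per timestep can keep the local error within tolerance.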

  9. Massively parallel mathematical sieves

    SciTech Connect

    Montry, G.R.

    1989-01-01

    The Sieve of Eratosthenes is a well-known algorithm for finding all prime numbers in a given subset of integers. A parallel version of the Sieve is described that produces computational speedups over 800 on a hypercube with 1,024 processing elements for problems of fixed size. Computational speedups as high as 980 are achieved when the problem size per processor is fixed. The method of parallelization generalizes to other sieves and will be efficient on any ensemble architecture. We investigate two highly parallel sieves using scattered decomposition and compare their performance on a hypercube multiprocessor. A comparison of different parallelization techniques for the sieve illustrates the trade-offs necessary in the design and implementation of massively parallel algorithms for large ensemble computers.
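
    The decomposition idea can be sketched in a few lines. The hedged Python analogue below parallelizes a segmented sieve over disjoint ranges with process-level workers (segment layout, worker count, and helper names are invented for illustration; it is not the hypercube implementation measured in the record):

        from multiprocessing import Pool

        def base_primes(limit):
            """Serial sieve up to sqrt(N); every worker needs these."""
            flags = [True] * (limit + 1)
            flags[0:2] = [False, False]
            for p in range(2, int(limit ** 0.5) + 1):
                if flags[p]:
                    flags[p * p::p] = [False] * len(flags[p * p::p])
            return [p for p, f in enumerate(flags) if f]

        def sieve_segment(args):
            lo, hi, primes = args    # mark composites in [lo, hi) only
            flags = [True] * (hi - lo)
            for p in primes:
                start = max(p * p, (lo + p - 1) // p * p)
                for m in range(start, hi, p):
                    flags[m - lo] = False
            return [lo + i for i, f in enumerate(flags) if f and lo + i > 1]

        if __name__ == "__main__":
            N, workers = 1_000_000, 4
            primes = base_primes(int(N ** 0.5))
            seg = (N + workers - 1) // workers
            tasks = [(i * seg, min((i + 1) * seg, N), primes) for i in range(workers)]
            with Pool(workers) as pool:
                count = sum(len(part) for part in pool.map(sieve_segment, tasks))
            print(count, "primes below", N)   # expect 78498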

  10. Parallel Analog-to-Digital Image Processor

    NASA Technical Reports Server (NTRS)

    Lokerson, D. C.

    1987-01-01

    Proposed integrated-circuit network of many identical units convert analog outputs of imaging arrays of x-ray or infrared detectors to digital outputs. Converter located near imaging detectors, within cryogenic detector package. Because converter output digital, lends itself well to multiplexing and to postprocessing for correction of gain and offset errors peculiar to each picture element and its sampling and conversion circuits. Analog-to-digital image processor is massively parallel system for processing data from array of photodetectors. System built as compact integrated circuit located near local plane. Buffer amplifier for each picture element has different offset.

  11. Sustainable Library Development Training Package

    ERIC Educational Resources Information Center

    Peace Corps, 2012

    2012-01-01

    This Sustainable Library Development Training Package supports Peace Corps' Focus In/Train Up strategy, which was implemented following the 2010 Comprehensive Agency Assessment. Sustainable Library Development is a technical training package in Peace Corps programming within the Education sector. The training package addresses the Volunteer…

  12. Parallel State Estimation Assessment with Practical Data

    SciTech Connect

    Chen, Yousu; Jin, Shuangshuang; Rice, Mark J.; Huang, Zhenyu

    2013-07-31

    This paper presents a parallel state estimation (PSE) implementation using a preconditioned gradient algorithm and an orthogonal decomposition-based algorithm. Preliminary tests against a commercial Energy Management System (EMS) State Estimation (SE) tool using real-world data are performed. The results show that while the preconditioned gradient algorithm can solve the SE problem more quickly with the help of parallel computing techniques, it might not be suitable for real-world data due to the large condition number of the gain matrix introduced by the wide range of measurement weights. With the help of the PETSc package, and considering one iteration of the SE process, the orthogonal decomposition-based PSE algorithm can achieve a 5-20 times speedup compared against the commercial EMS tool. The developed PSE is thus promising for solving the SE problem for large power systems at the SCADA rate, to improve grid reliability.
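
    As an illustration of the orthogonal-decomposition route for a single SE iteration, the hedged Python sketch below solves a toy linearized weighted least-squares step by QR factorization of the weighted Jacobian (numpy assumed; the measurement model, weights, and values are invented). Factoring W^(1/2)H directly avoids forming the gain matrix H^T W H, whose condition number is the square of that of W^(1/2)H, which is the conditioning problem the abstract attributes to the wide range of measurement weights.

        import numpy as np

        # Toy linearized measurement model z = H x + noise, weights w = 1/sigma^2.
        H = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 1.0], [2.0, 1.0]])
        z = np.array([1.02, 2.98, 2.01, 4.95])
        w = np.array([1e4, 1.0, 1e4, 1.0])   # deliberately wide weight range

        Hw = np.sqrt(w)[:, None] * H         # W^(1/2) H
        zw = np.sqrt(w) * z                  # W^(1/2) z

        # Orthogonal decomposition works on the weighted Jacobian directly.
        Q, R = np.linalg.qr(Hw)
        x = np.linalg.solve(R, Q.T @ zw)
        print("state estimate:", x)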

  13. KAPPA -- Kernel Application Package

    NASA Astrophysics Data System (ADS)

    Currie, Malcolm J.; Berry, David. S.

    KAPPA is an applications package comprising about 180 general-purpose commands for image processing, data visualisation, and manipulation of the standard Starlink data format---the NDF. It is intended to work in conjunction with Starlink's various specialised packages. In addition to the NDF, KAPPA can also process data in other formats by using the `on-the-fly' conversion scheme. Many commands can process data arrays of arbitrary dimension, and others work on both spectra and images. KAPPA operates from both the UNIX C-shell and the ICL command language. This document describes how to use KAPPA and its features. There is some description of techniques too, including a section on writing scripts. This document includes several tutorials and is illustrated with numerous examples. The bulk of this document comprises detailed descriptions of each command as well as classified and alphabetical summaries.

  14. TIDEV: Tidal Evolution package

    NASA Astrophysics Data System (ADS)

    Cuartas-Restrepo, P.; Melita, M.; Zuluaga, J.; Portilla, B.; Sucerquia, M.; Miloni, O.

    2016-09-01

    TIDEV (Tidal Evolution package) calculates the evolution of rotation for tidally interacting bodies using Efroimsky-Makarov-Williams (EMW) formalism. The package integrates tidal evolution equations and computes the rotational and dynamical evolution of a planet under tidal and triaxial torques. TIDEV accounts for the perturbative effects due to the presence of the other planets in the system, especially the secular variations of the eccentricity. Bulk parameters include the mass and radius of the planet (and those of the other planets involved in the integration), the size and mass of the host star, the Maxwell time and Andrade's parameter. TIDEV also calculates the time scale that a planet takes to be tidally locked as well as the periods of rotation reached at the end of the spin-orbit evolution.

  15. Anticounterfeit packaging technologies

    PubMed Central

    Shah, Ruchir Y.; Prajapati, Prajesh N.; Agrawal, Y. K.

    2010-01-01

    Packaging is the coordinated system that encloses and protects the dosage form. Counterfeit drugs are the major cause of morbidity, mortality, and failure of public interest in the healthcare system. High price and well-known brands make the pharma market most vulnerable, which accounts for top priority cardiovascular, obesity, and antihyperlipidemic drugs and drugs like sildenafil. Packaging includes overt and covert technologies like barcodes, holograms, sealing tapes, and radio frequency identification devices to preserve the integrity of the pharmaceutical product. To date, however, all the available techniques are synthetic and, although they provide considerable protection against counterfeiting, have certain limitations which can be overcome by the application of natural approaches and utilization of the principles of nanotechnology. PMID:22247875

  16. The Ettention software package.

    PubMed

    Dahmen, Tim; Marsalek, Lukas; Marniok, Nico; Turoňová, Beata; Bogachev, Sviatoslav; Trampert, Patrick; Nickels, Stefan; Slusallek, Philipp

    2016-02-01

    We present a novel software package for the problem "reconstruction from projections" in electron microscopy. The Ettention framework consists of a set of modular building-blocks for tomographic reconstruction algorithms. The well-known block iterative reconstruction method based on the Kaczmarz algorithm is implemented using these building-blocks, including adaptations specific to electron tomography. Ettention simultaneously features (1) a modular, object-oriented software design, (2) optimized access to high-performance computing (HPC) platforms such as graphic processing units (GPU) or many-core architectures like Xeon Phi, and (3) accessibility to microscopy end-users via integration in the IMOD package and eTomo user interface. We also provide developers with a clean and well-structured application programming interface (API) that allows for extending the software easily and thus makes it an ideal platform for algorithmic research while hiding most of the technical details of high-performance computing. PMID:26686659
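
    The Kaczmarz iteration at the core of the block method is simple to state: each step projects the current estimate onto the hyperplane defined by one measurement row. The toy dense Python sketch below (numpy assumed; it is not Ettention's building-block or GPU implementation) shows the row-action update:

        import numpy as np

        def kaczmarz(A, b, sweeps=50):
            """Row-action solve of A x = b by successive projections."""
            x = np.zeros(A.shape[1])
            for _ in range(sweeps):
                for i in range(A.shape[0]):
                    a = A[i]
                    x += (b[i] - a @ x) / (a @ a) * a   # project onto row i
            return x

        A = np.array([[2.0, 1.0], [1.0, 3.0], [1.0, -1.0]])
        x_true = np.array([1.0, 2.0])
        print(kaczmarz(A, A @ x_true).round(4))   # approaches [1, 2]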

  17. Anticounterfeit packaging technologies.

    PubMed

    Shah, Ruchir Y; Prajapati, Prajesh N; Agrawal, Y K

    2010-10-01

    Packaging is the coordinated system that encloses and protects the dosage form. Counterfeit drugs are the major cause of morbidity, mortality, and failure of public interest in the healthcare system. High price and well-known brands make the pharma market most vulnerable, which accounts for top priority cardiovascular, obesity, and antihyperlipidemic drugs and drugs like sildenafil. Packaging includes overt and covert technologies like barcodes, holograms, sealing tapes, and radio frequency identification devices to preserve the integrity of the pharmaceutical product. To date, however, all the available techniques are synthetic and, although they provide considerable protection against counterfeiting, have certain limitations which can be overcome by the application of natural approaches and utilization of the principles of nanotechnology. PMID:22247875

  18. Bilingual parallel programming

    SciTech Connect

    Foster, I.; Overbeek, R.

    1990-01-01

    Numerous experiments have demonstrated that computationally intensive algorithms support adequate parallelism to exploit the potential of large parallel machines. Yet successful parallel implementations of serious applications are rare. The limiting factor is clearly programming technology. None of the approaches to parallel programming that have been proposed to date -- whether parallelizing compilers, language extensions, or new concurrent languages -- seem to adequately address the central problems of portability, expressiveness, efficiency, and compatibility with existing software. In this paper, we advocate an alternative approach to parallel programming based on what we call bilingual programming. We present evidence that this approach provides an effective solution to parallel programming problems. The key idea in bilingual programming is to construct the upper levels of applications in a high-level language while coding selected low-level components in low-level languages. This approach permits the advantages of a high-level notation (expressiveness, elegance, conciseness) to be obtained without the cost in performance normally associated with high-level approaches. In addition, it provides a natural framework for reusing existing code.
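
    A minimal modern analogue of the bilingual idea (a hedged sketch, not the authors' framework): high-level Python expresses the program logic while a low-level C library supplies the numeric kernel. The library lookup assumes a typical POSIX system.

        import ctypes
        import ctypes.util

        # Load the low-level component: the C math library.
        libm = ctypes.CDLL(ctypes.util.find_library("m"))
        libm.cos.restype = ctypes.c_double
        libm.cos.argtypes = [ctypes.c_double]

        # The high-level layer stays expressive; the inner kernel runs in C.
        samples = [i * 0.1 for i in range(10)]
        print([round(libm.cos(x), 4) for x in samples])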

  19. Aquaculture information package

    SciTech Connect

    Boyd, T.; Rafferty, K.

    1998-08-01

    This package of information is intended to provide background information to developers of geothermal aquaculture projects. The material is divided into eight sections and includes information on market and price information for typical species, aquaculture water quality issues, typical species culture information, pond heat loss calculations, an aquaculture glossary, regional and university aquaculture offices and state aquaculture permit requirements. A bibliography containing 68 references is also included.

  20. Software packager user's guide

    NASA Technical Reports Server (NTRS)

    Callahan, John R.

    1995-01-01

    Software integration is a growing area of concern for many programmers and software managers because the need to build new programs quickly from existing components is greater than ever. This includes building versions of software products for multiple hardware platforms and operating systems, building programs from components written in different languages, and building systems from components that must execute on different machines in a distributed network. The goal of software integration is to make building new programs from existing components more seamless -- programmers should pay minimal attention to the underlying configuration issues involved. Libraries of reusable components and classes are important tools but only partial solutions to software development problems. Even though software components may have compatible interfaces, there may be other reasons, such as differences between execution environments, why they cannot be integrated. Often, components must be adapted or reimplemented to fit into another application because of implementation differences -- they are implemented in different programming languages, dependent on different operating system resources, or must execute on different physical machines. The software packager is a tool that allows programmers to deal with interfaces between software components and ignore complex integration details. The packager takes modular descriptions of the structure of a software system written in the package specification language and produces an integration program in the form of a makefile. If complex integration tools are needed to integrate a set of components, such as remote procedure call stubs, their use is implied by the packager automatically and stub generation tools are invoked in the corresponding makefile. The programmer deals only with the components themselves and not the details of how to build the system on any given platform.

  1. ISSUES ASSOCIATED WITH SAFE PACKAGING AND TRANSPORT OF NANOPARTICLES

    SciTech Connect

    Gupta, N.; Smith, A.

    2011-02-14

    Nanoparticles have long been recognized as hazardous substances by personnel working in the field. They are not, however, listed as a separate, distinct category of dangerous goods at present. As dangerous goods or hazardous substances, they require packaging and transportation practices which parallel the established practices for hazardous materials transport. Pending establishment of a distinct category for such materials by the Department of Transportation, existing consensus or industrial protocols must be followed. Action by DOT to establish appropriate packaging and transport requirements is recommended.

  2. Navy packaging standardization thrusts

    NASA Astrophysics Data System (ADS)

    Kidwell, J. R.

    1982-11-01

    Standardization is a concept that is basic to our world today. The idea of reducing costs through the economics of mass production is an easy one to grasp. Henry Ford started the process of large scale standardization in this country with the Detroit production lines for his automobiles. In the process additional benefits accrued, such as improved reliability through design maturity, off-the-shelf repair parts, faster repair time, and a resultant lower cost of ownership (lower life-cycle cost). The need to attain standardization benefits with military equipments exists now. Defense budgets, although recently increased, are not going to permit us to continue the tremendous investment required to maintain even the status quo and develop new hardware at the same time. Needed are more reliable, maintainable, testable hardware in the Fleet. It is imperative to recognize the obsolescence problems created by the use of high technology devices in our equipments, and find ways to combat these shortfalls. The Navy has two packaging standardization programs that will be addressed in this paper; the Standard Electronic Modules and the Modular Avionics Packaging programs. Following a brief overview of the salient features of each program, the packaging technology aspects of the program will be addressed, and developmental areas currently being investigated will be identified.

  3. The NAS parallel benchmarks

    NASA Technical Reports Server (NTRS)

    Bailey, David (Editor); Barton, John (Editor); Lasinski, Thomas (Editor); Simon, Horst (Editor)

    1993-01-01

    A new set of benchmarks was developed for the performance evaluation of highly parallel supercomputers. These benchmarks consist of a set of kernels, the 'Parallel Kernels,' and a simulated application benchmark. Together they mimic the computation and data movement characteristics of large scale computational fluid dynamics (CFD) applications. The principal distinguishing feature of these benchmarks is their 'pencil and paper' specification - all details of these benchmarks are specified only algorithmically. In this way many of the difficulties associated with conventional benchmarking approaches on highly parallel systems are avoided.

  4. Plutonium stabilization and packaging system

    SciTech Connect

    1996-05-01

    This document describes the functional design of the Plutonium Stabilization and Packaging System (Pu SPS). The objective of this system is to stabilize and package plutonium metals and oxides of greater than 50 wt%, as well as other selected isotopes, in accordance with the requirements of the DOE standard for safe storage of these materials for 50 years. This system will support completion of stabilization and packaging campaigns of the inventory at a number of affected sites before the year 2002. The package will be standard for all sites and will provide a minimum of two uncontaminated, organics-free confinement barriers for the packaged material.

  5. 21 CFR 355.20 - Packaging conditions.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... (toothpastes and tooth powders) packages shall not contain more than 276 milligrams (mg) total fluorine per... packages shall not contain more than 120 mg total fluorine per package. (3) Exception. Package...

  6. 21 CFR 355.20 - Packaging conditions.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... (toothpastes and tooth powders) packages shall not contain more than 276 milligrams (mg) total fluorine per... packages shall not contain more than 120 mg total fluorine per package. (3) Exception. Package...

  7. 21 CFR 355.20 - Packaging conditions.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... (toothpastes and tooth powders) packages shall not contain more than 276 milligrams (mg) total fluorine per... packages shall not contain more than 120 mg total fluorine per package. (3) Exception. Package...

  8. 21 CFR 355.20 - Packaging conditions.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... (toothpastes and tooth powders) packages shall not contain more than 276 milligrams (mg) total fluorine per... packages shall not contain more than 120 mg total fluorine per package. (3) Exception. Package...

  9. Heating and sterilization technology for long-duration space missions: transport processes in a reusable package.

    PubMed

    Sastry, Sudhir K; Jun, Soojin; Somavat, Romel; Samaranayake, Chaminda; Yousef, Ahmed; Pandit, Ram B

    2009-04-01

    Long-duration space missions require a high-quality, shelf-stable food supply but must also contend with packaging waste after use. We have developed a package, adapted from a military pouch, that enables heating of foods to serving temperature. After the food is consumed, the package may be reused for containment and sterilization of waste, and, potentially, for packaging and sterilizing foods grown on a Mars base. Packages are equipped with electrodes to permit ohmic heating of internal constituents. Heat transfer within the package was modeled using the energy transport equation, coupled with the Laplace equation for electric field strength distribution. The model was verified by temperature measurements during a sample experimental run, and it was used to optimize the package design. Waste sterilization within the package was also studied and confirmed. Mass transfer (electrode component migration) was studied by inductively coupled plasma mass spectrometry; the findings have shown concentrations within products to be well below current daily dietary exposure levels. Microbiological studies for sterilization indicated the need for package redesign to ensure parallel electrode configuration, as well as the use of supplemental external heaters along the nonelectrode walls of the package. Temperature profiles during heating of these packages have been determined.
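
    The coupled model named above can be written schematically as follows; this is the textbook form of an ohmic-heating formulation, with the boundary conditions and temperature-dependent material properties of the actual study omitted:

    ```latex
    % Electric potential V from current continuity
    % (sigma = electrical conductivity):
    \nabla \cdot \left( \sigma \nabla V \right) = 0
    % Resulting ohmic (Joule) volumetric heat source:
    \dot{q} = \sigma \, \lvert \nabla V \rvert^{2}
    % Energy transport with the ohmic source term
    % (rho = density, c_p = specific heat, k = thermal conductivity):
    \rho c_{p} \frac{\partial T}{\partial t} = \nabla \cdot \left( k \nabla T \right) + \sigma \lvert \nabla V \rvert^{2}
    ```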

  10. Optimal segmentation and packaging process

    DOEpatents

    Kostelnik, Kevin M.; Meservey, Richard H.; Landon, Mark D.

    1999-01-01

    A process for improving packaging efficiency uses three-dimensional, computer-simulated models with various optimization algorithms to determine the optimal segmentation process and packaging configurations based on constraints including container limitations. The present invention is applied to a process for decontaminating, decommissioning (D&D), and remediating a nuclear facility involving the segmentation and packaging of contaminated items in waste containers in order to minimize the number of cuts, maximize packaging density, and reduce worker radiation exposure. A three-dimensional, computer-simulated facility model of the contaminated items is created. The contaminated items are differentiated. The optimal location, orientation, and sequence of the segmentation and packaging of the contaminated items are determined using the simulated model, the algorithms, and various constraints including container limitations. The cut locations and orientations are transposed to the simulated model. The contaminated items are actually segmented and packaged. The segmentation and packaging may be simulated beforehand. In addition, the contaminated items may be cataloged and recorded.
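
    As a greatly simplified stand-in for the optimization step, the sketch below packs pre-segmented items first-fit decreasing under a container volume limit and a per-container dose budget. The greedy heuristic, the data fields, and the omission of cut-location optimization against the 3-D facility model are all illustrative assumptions, not the patented process.

    ```cpp
    // First-fit-decreasing packing under volume and dose constraints:
    // a toy proxy for "maximize packaging density, limit worker exposure".
    #include <algorithm>
    #include <cstddef>
    #include <vector>

    struct Item      { double volume, doseRate; };
    struct Container { double freeVolume, doseBudget; };

    // Returns the number of containers used (fewer is denser packing).
    std::size_t pack(std::vector<Item> items, double capVol, double capDose) {
        std::sort(items.begin(), items.end(),
                  [](const Item& a, const Item& b) { return a.volume > b.volume; });
        std::vector<Container> bins;
        for (const auto& it : items) {
            bool placed = false;
            for (auto& b : bins) {  // first container satisfying both limits
                if (b.freeVolume >= it.volume && b.doseBudget >= it.doseRate) {
                    b.freeVolume -= it.volume;
                    b.doseBudget -= it.doseRate;
                    placed = true;
                    break;
                }
            }
            if (!placed)            // otherwise open a new container
                bins.push_back({capVol - it.volume, capDose - it.doseRate});
        }
        return bins.size();
    }
    ```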

  11. The Parallel Axiom

    ERIC Educational Resources Information Center

    Rogers, Pat

    1972-01-01

    Criteria for a reasonable axiomatic system are discussed. A discussion of the historical attempts to prove the independence of Euclid's parallel postulate introduces non-Euclidean geometries. Poincare's model for a non-Euclidean geometry is defined and analyzed. (LS)

  12. Parallel programming with PCN

    SciTech Connect

    Foster, I.; Tuecke, S.

    1991-12-01

    PCN is a system for developing and executing parallel programs. It comprises a high-level programming language, tools for developing and debugging programs in this language, and interfaces to Fortran and C that allow the reuse of existing code in multilingual parallel programs. Programs developed using PCN are portable across many different workstations, networks, and parallel computers. This document provides all the information required to develop parallel programs with the PCN programming system. It includes both tutorial and reference material. It also presents the basic concepts that underlie PCN, particularly where these are likely to be unfamiliar to the reader, and provides pointers to other documentation on the PCN language, programming techniques, and tools. PCN is in the public domain. The latest version of both the software and this manual can be obtained by anonymous FTP from Argonne National Laboratory in the directory pub/pcn at info.mcs.anl.gov (c.f. Appendix A).

  13. Scalable parallel communications

    NASA Technical Reports Server (NTRS)

    Maly, K.; Khanna, S.; Overstreet, C. M.; Mukkamala, R.; Zubair, M.; Sekhar, Y. S.; Foudriat, E. C.

    1992-01-01

    Coarse-grain parallelism in networking (that is, the use of multiple protocol processors running replicated software sending over several physical channels) can be used to provide gigabit communications for a single application. Since parallel network performance is highly dependent on real issues such as hardware properties (e.g., memory speeds and cache hit rates), operating system overhead (e.g., interrupt handling), and protocol performance (e.g., effect of timeouts), we have performed detailed simulation studies of both a bus-based multiprocessor workstation node (based on the Sun Galaxy MP multiprocessor) and a distributed-memory parallel computer node (based on the Touchstone DELTA) to evaluate the behavior of coarse-grain parallelism. Our results indicate: (1) coarse-grain parallelism can deliver multiple 100 Mbps with currently available hardware platforms and existing networking protocols (such as Transmission Control Protocol/Internet Protocol (TCP/IP) and parallel Fiber Distributed Data Interface (FDDI) rings); (2) scale-up is near linear in n, the number of protocol processors and channels (for small n and up to a few hundred Mbps); and (3) since these results are based on existing hardware without specialized devices (except perhaps for some simple modifications of the FDDI boards), this is a low-cost solution to providing multiple 100 Mbps on current machines. In addition, from both the performance analysis and the properties of these architectures, we conclude: (1) multiple processors providing identical services and the use of space division multiplexing for the physical channels can provide better reliability than monolithic approaches (it also provides graceful degradation and low-cost load balancing); (2) coarse-grain parallelism supports running several transport protocols in parallel to provide different types of service (for example, one TCP handles small messages for many users, other TCPs running in parallel provide high bandwidth

  14. Teuchos Utility Package

    2004-03-01

    Teuchos is designed to provide portable, object-oriented tools for Trilinos developers and users. These include templated wrappers to BLAS/LAPACK, a serial dense matrix class, a parameter list, XML parsing utilities, reference-counted pointer (smart pointer) utilities, and more. These tools are designed to run on both serial and parallel computers.
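
    A minimal sketch of two of the utilities listed above, the reference-counted pointer and the parameter list, follows. The class names match the public Teuchos API; the surrounding program is illustrative only.

    ```cpp
    #include "Teuchos_RCP.hpp"
    #include "Teuchos_ParameterList.hpp"

    struct Solver { double tol; };  // stand-in for a real solver object

    int main() {
      // Reference-counted (smart) pointer: the object is freed when the
      // last RCP referencing it goes away.
      Teuchos::RCP<Solver> solver = Teuchos::rcp(new Solver{1.0e-8});

      // Hierarchical parameter list for passing options between packages.
      Teuchos::ParameterList params("Linear Solver");
      params.set("Max Iterations", 500);
      params.set("Tolerance", solver->tol);

      int maxIters = params.get<int>("Max Iterations");
      return maxIters > 0 ? 0 : 1;
    }
    ```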

  15. Parallel image compression

    NASA Technical Reports Server (NTRS)

    Reif, John H.

    1987-01-01

    A parallel compression algorithm for the 16,384-processor MPP machine was developed. The serial version of the algorithm can be viewed as a combination of on-line dynamic lossless text compression techniques (which employ simple learning strategies) and vector quantization. These concepts are described. How these concepts are combined to form a new strategy for performing dynamic on-line lossy compression is discussed. Finally, the implementation of this algorithm in a massively parallel fashion on the MPP is discussed.
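
    The vector-quantization half of the scheme can be pictured with a toy nearest-codeword search: each image block is transmitted as the index of its closest codebook entry. Codebook training, the adaptive lossless stage, and the distribution of blocks across the MPP's processors are omitted; this sketch only fixes the core idea.

    ```cpp
    // Nearest-codeword search for vector quantization (serial toy version).
    #include <cstddef>
    #include <limits>
    #include <vector>

    using Block = std::vector<double>;  // one flattened image block

    // Assumes every codebook entry has the same length as the input block.
    std::size_t nearestCode(const Block& b, const std::vector<Block>& codebook) {
        std::size_t best = 0;
        double bestDist = std::numeric_limits<double>::max();
        for (std::size_t k = 0; k < codebook.size(); ++k) {
            double d = 0.0;
            for (std::size_t i = 0; i < b.size(); ++i) {
                double diff = b[i] - codebook[k][i];
                d += diff * diff;               // squared Euclidean distance
            }
            if (d < bestDist) { bestDist = d; best = k; }
        }
        return best;                            // only this index is transmitted
    }
    ```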

  16. Artificial intelligence in parallel

    SciTech Connect

    Waldrop, M.M.

    1984-08-10

    The current rage in the Artificial Intelligence (AI) community is parallelism: the idea is to build machines with many independent processors doing many things at once. The upshot is that about a dozen parallel machines are now under development for AI alone. As might be expected, the approaches are diverse yet there are a number of fundamental issues in common: granularity, topology, control, and algorithms.

  17. Continuous parallel coordinates.

    PubMed

    Heinrich, Julian; Weiskopf, Daniel

    2009-01-01

    Typical scientific data is represented on a grid with appropriate interpolation or approximation schemes, defined on a continuous domain. The visualization of such data in parallel coordinates may reveal patterns latently contained in the data and thus can improve the understanding of multidimensional relations. In this paper, we adopt the concept of continuous scatterplots for the visualization of spatially continuous input data to derive a density model for parallel coordinates. Based on the point-line duality between scatterplots and parallel coordinates, we propose a mathematical model that maps density from a continuous scatterplot to parallel coordinates and present different algorithms for both numerical and analytical computation of the resulting density field. In addition, we show how the 2-D model can be used to successively construct continuous parallel coordinates with an arbitrary number of dimensions. Since continuous parallel coordinates interpolate data values within grid cells, a scalable and dense visualization is achieved, which will be demonstrated for typical multi-variate scientific data.
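
    The point-line duality that the model builds on can be written schematically as below; the paper's actual density model includes the appropriate normalization factors, which this sketch omits:

    ```latex
    % A 2-D data point (x_1, x_2) maps to the line segment joining its
    % coordinates on the two parallel axes:
    \ell_{(x_1, x_2)}(t) = (1 - t)\, x_1 + t\, x_2, \qquad t \in [0, 1]
    % A scatterplot density s(x_1, x_2) then induces a density at (t, y)
    % in parallel-coordinates space by accumulating all data points whose
    % dual lines pass through (t, y):
    \eta(t, y) = \int_{\{(x_1, x_2)\,:\,(1 - t)\, x_1 + t\, x_2 = y\}} s \;\mathrm{d}\sigma
    ```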

  18. Packaging - Materials review

    SciTech Connect

    Herrmann, Matthias

    2014-06-16

    Nowadays, a large number of different electrochemical energy storage systems are known. In the last two decades the development was strongly driven by a continuously growing market of portable electronic devices (e.g. cellular phones, laptop computers, camcorders, cameras, tools). Current intensive efforts are under way to develop systems for the automotive industry within the framework of electrically propelled mobility (e.g. hybrid electric vehicles, plug-in hybrid electric vehicles, full electric vehicles) and also for the energy storage market (e.g. electrical grid stability, renewable energies). Besides the different systems (cell chemistries), electrochemical cells and batteries were developed and are offered in many shapes, sizes and designs, in order to meet performance and design requirements of the widespread applications. Proper packaging is thereby one important technological step for designing optimum, reliable and safe batteries for operation. In this contribution, current packaging approaches of cells and batteries together with the corresponding materials are discussed. The focus is laid on rechargeable systems for industrial applications (i.e. alkaline systems, lithium-ion, lead-acid). In principle, four different cell types (shapes) can be identified - button, cylindrical, prismatic and pouch. Cell size can be either in accordance with international (e.g. International Electrotechnical Commission, IEC) or other standards or can meet application-specific dimensions. Since cell housing or container, terminals and, if necessary, safety installations as inactive (non-reactive) materials reduce energy density of the battery, the development of low-weight packages is a challenging task. In addition to that, other requirements have to be fulfilled: mechanical stability and durability, sealing (e.g. high permeation barrier against humidity for lithium-ion technology), high packing efficiency, possible installation of safety devices (current interrupt device

  19. Packaging - Materials review

    NASA Astrophysics Data System (ADS)

    Herrmann, Matthias

    2014-06-01

    Nowadays, a large number of different electrochemical energy storage systems are known. In the last two decades the development was strongly driven by a continuously growing market of portable electronic devices (e.g. cellular phones, laptop computers, camcorders, cameras, tools). Current intensive efforts are under way to develop systems for the automotive industry within the framework of electrically propelled mobility (e.g. hybrid electric vehicles, plug-in hybrid electric vehicles, full electric vehicles) and also for the energy storage market (e.g. electrical grid stability, renewable energies). Besides the different systems (cell chemistries), electrochemical cells and batteries were developed and are offered in many shapes, sizes and designs, in order to meet performance and design requirements of the widespread applications. Proper packaging is thereby one important technological step for designing optimum, reliable and safe batteries for operation. In this contribution, current packaging approaches of cells and batteries together with the corresponding materials are discussed. The focus is laid on rechargeable systems for industrial applications (i.e. alkaline systems, lithium-ion, lead-acid). In principle, four different cell types (shapes) can be identified - button, cylindrical, prismatic and pouch. Cell size can be either in accordance with international (e.g. International Electrotechnical Commission, IEC) or other standards or can meet application-specific dimensions. Since cell housing or container, terminals and, if necessary, safety installations as inactive (non-reactive) materials reduce energy density of the battery, the development of low-weight packages is a challenging task. In addition to that, other requirements have to be fulfilled: mechanical stability and durability, sealing (e.g. high permeation barrier against humidity for lithium-ion technology), high packing efficiency, possible installation of safety devices (current interrupt device

  20. The LEOS Interpolation Package

    SciTech Connect

    Fritsch, F N

    2003-03-12

    This report describes the interpolation package in the Livermore Equation of State (LEOS) system. It is an updated and expanded version of report [1], which described the status of the package as of May 1998, and of [2], which described its status as of the August 2001 release of the LEOS access library, and of [3], which described its status as of library version 7.02, released April 2002. This corresponds to library version 7.11, released March 2003. The main change since [3] has been the addition of the monotone bicubic Hermite (bimond) interpolation method. Throughout this report we assume that data has been given for some function f(rho, T) on a rectangular mesh rho = rho_0, rho_1, ..., rho_{nr-1}; T = T_0, T_1, ..., T_{nt-1}. Subscripting is from zero to be consistent with the C code. (Although we use this notation throughout, there is nothing in the package that assumes that the independent variables are actually density and temperature.) The data values are f_{ij} = f(rho_j, T_i). (This subscript order is historical and reflects the notation used in the program.) There are nr x nt data values and (nr-1) x (nt-1) mesh rectangles (boxes). In the C code, the data array is one-dimensional, with data[i*(nr-1)+j] = f(rho_j, T_i). In the case of the few univariate functions supported by LEOS, the T variable is omitted, as well as the associated index on the data array: data[j] = f(rho_j).

  1. Mother-baby package.

    PubMed

    Tamburlini, G

    1995-07-01

    The World Health Organization (WHO) Maternal Health and Safe Motherhood Programme developed the Mother-Baby Package to facilitate the development of national strategies and plans of action. It was presented at an international meeting in Geneva in April 1994. The goals of the package are, by the year 2000, to reduce maternal mortality by half and perinatal and neonatal mortality by 30-40% of 1990 levels. The package comprises: 1) a section on the technical basis and underlying strategies, 2) a section describing interventions before and during pregnancy, and during and after delivery, and 3) detailed recommendations on operating the program. The underlying strategy aims to reduce the number of high-risk and unwanted pregnancies; the number of obstetric complications; and the case fatality rate in women with complications. Interventions are based on a fourfold approach of family planning, quality antenatal care, clean and safe delivery, and access to essential obstetric care for high-risk pregnancies and complications. The district health system is the basic unit for planning and implementing the interventions. Midwives who live in the community are best equipped to provide appropriate community-based care to pregnant women. Care for pregnancy and obstetric complications requiring surgery and anesthesia should be available in the district hospital, with an adequate referral system. Upgrading the skills of traditional birth attendants is also essential. National authorities should undertake a series of steps to carry out the interventions. A basic infrastructure, the upgrading of peripheral facilities, the development of human resources for safe motherhood, the effective delegation of responsibility, information, education, and communication (IEC), the involvement of nongovernmental organizations and women's groups, and the monitoring of results are other important elements in carrying out the interventions.

  2. New package for CMOS sensors

    NASA Astrophysics Data System (ADS)

    Diot, Jean-Luc; Loo, Kum Weng; Moscicki, Jean-Pierre; Ng, Hun Shen; Tee, Tong Yan; Teysseyre, Jerome; Yap, Daniel

    2004-02-01

    Cost is the main drawback of existing packages for CMOS sensors (mainly the CLCC family), so alternative packages are being developed worldwide. In particular, STMicroelectronics has studied a low-cost alternative package based on a QFN structure, still with a cavity. Intensive work was done to optimize the over-molding operation forming the cavity onto a metallic lead-frame (a metallic lead-frame is a low-cost substrate allowing very good mechanical definition of the final package). Material selection (thermo-set resin and glue for glass sealing) was done through standard reliability tests for cavity packages (Moisture Sensitivity Level 3 followed by temperature cycling, humidity storage, and high-temperature storage). As this package concept is new (without leads protruding from the molded cavity), the effect of variations in package dimensions, as well as board layout design, is simulated on package lifetime (during temperature cycling, thermal mismatch between board and package leads to thermal fatigue of solder joints). These simulations are correlated with an experimental temperature cycling test with daisy-chain packages.

  3. Components of Adenovirus Genome Packaging

    PubMed Central

    Ahi, Yadvinder S.; Mittal, Suresh K.

    2016-01-01

    Adenoviruses (AdVs) are icosahedral viruses with double-stranded DNA (dsDNA) genomes. Genome packaging in AdV is thought to be similar to that seen in dsDNA-containing icosahedral bacteriophages and herpesviruses. Specific recognition of the AdV genome is mediated by a packaging domain located close to the left end of the viral genome, acting together with the viral packaging machinery. Our understanding of the role of various components of the viral packaging machinery in AdV genome packaging has greatly advanced in recent years. Characterization of empty capsids assembled in the absence of one or more components involved in packaging, identification of the unique vertex, and demonstration of the role of IVa2, the putative packaging ATPase, in genome packaging have provided compelling evidence that AdVs follow a sequential assembly pathway. This review provides a detailed discussion of the functions of the various viral and cellular factors involved in AdV genome packaging. We conclude by briefly discussing the roles of the empty capsids, assembly intermediates, scaffolding proteins, portal vertex and DNA encapsidating enzymes in AdV assembly and packaging. PMID:27721809

  4. Aristos Optimization Package

    SciTech Connect

    Ridzal, Danis

    2007-03-01

    Aristos is a Trilinos package for nonlinear continuous optimization, based on full-space sequential quadratic programming (SQP) methods. Aristos is specifically designed for the solution of large-scale constrained optimization problems in which the linearized constraint equations require iterative (i.e. inexact) linear solver techniques. Aristos' unique feature is an efficient handling of inexactness in linear system solves. Aristos currently supports the solution of equality-constrained convex and nonconvex optimization problems. It has been used successfully in the area of PDE-constrained optimization, for the solution of nonlinear optimal control, optimal design, and inverse problems.
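
    The structure Aristos exploits can be sketched with the standard SQP step for an equality-constrained problem, minimize f(x) subject to c(x) = 0; "inexactness in linear system solves" refers to solving this KKT system only approximately with an iterative solver. This is the generic textbook form, not Aristos' exact formulation:

    ```latex
    % A_k = \nabla c(x_k)^T is the constraint Jacobian and H_k an
    % approximation to the Hessian of the Lagrangian; p_k is the step.
    \begin{pmatrix} H_k & A_k^{T} \\ A_k & 0 \end{pmatrix}
    \begin{pmatrix} p_k \\ \lambda_{k+1} \end{pmatrix}
    = - \begin{pmatrix} \nabla f(x_k) \\ c(x_k) \end{pmatrix}
    ```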

  5. KAPPA: Kernel Applications Package

    NASA Astrophysics Data System (ADS)

    Currie, Malcolm J.; Berry, David S.

    2014-03-01

    KAPPA comprises about 180 general-purpose commands for image processing, data visualization, and manipulation of the standard Starlink data format, the NDF. It works with Starlink's various specialized packages; in addition to the NDF, KAPPA can also process data in other formats by using the "on-the-fly" conversion scheme. Many commands can process data arrays of arbitrary dimension, and others work on both spectra and images. KAPPA operates from both the UNIX C-shell and the ICL command language. KAPPA uses the Starlink environment (ascl:1110.012).

  6. Aristos Optimization Package

    2007-03-01

    Aristos is a Trilinos package for nonlinear continuous optimization, based on full-space sequential quadratic programming (SQP) methods. Aristos is specifically designed for the solution of large-scale constrained optimization problems in which the linearized constraint equations require iterative (i.e. inexact) linear solver techniques. Aristos' unique feature is an efficient handling of inexactness in linear system solves. Aristos currently supports the solution of equality-constrained convex and nonconvex optimization problems. It has been used successfully in the area of PDE-constrained optimization, for the solution of nonlinear optimal control, optimal design, and inverse problems.

  7. Safety Analysis Report for packaging (onsite) steel waste package

    SciTech Connect

    BOEHNKE, W.M.

    2000-07-13

    The steel waste package is used primarily for the shipment of remote-handled radioactive waste from the 324 Building to the 200 Area for interim storage. The steel waste package is authorized for shipment of transuranic isotopes. The maximum allowable radioactive material that is authorized is 500,000 Ci. This exceeds the highway route controlled quantity (3,000 A2s) and is a Type B packaging.

  8. Parallelization of Rocket Engine System Software (Press)

    NASA Technical Reports Server (NTRS)

    Cezzar, Ruknet

    1996-01-01

    The main goal is to assess parallelization requirements for the Rocket Engine Numeric Simulator (RENS) project, which, aside from gathering information on liquid-propelled rocket engines and setting forth requirements, involves a large FORTRAN-based package at NASA Lewis Research Center and TDK software developed by SUBR/UWF. The ultimate aim is to develop, test, integrate, and suitably deploy a family of software packages on various aspects and facets of rocket engines using liquid propellants. At present, all project efforts by the funding agency, NASA Lewis Research Center, and the HBCU participants are disseminated over the internet using World Wide Web home pages. Considering the obviously expensive methods of actual field trials, the benefits of software simulators are potentially enormous. When realized, these benefits will be analogous to those provided by numerous CAD/CAM packages and flight-training simulators. According to the overall task assignments, Hampton University's role is to collect all available software, place it in a common format, assess and evaluate, define interfaces, and provide integration. Most importantly, HU's mission is to see to it that real-time performance is assured. This involves source code translations, porting, and distribution. The porting will be done in two phases: first, place all software on the Cray X-MP platform using FORTRAN. After testing and evaluation on the Cray X-MP, the code will be translated to C++ and ported to the parallel nCUBE platform. At present, we are evaluating another option of distributed processing over local area networks using Sun NFS, Ethernet, and TCP/IP. Considering the heterogeneous nature of the present software (e.g., first started as an expert system using LISP machines), which now involves FORTRAN code, the effort is expected to be quite challenging.

  9. Japan's electronic packaging technologies

    NASA Technical Reports Server (NTRS)

    Tummala, Rao R.; Pecht, Michael

    1995-01-01

    The JTEC panel found Japan to have significant leadership over the United States in the strategic area of electronic packaging. Many technologies and products once considered the 'heart and soul' of U.S. industry have been lost over the past decades to Japan and other Asian countries. The loss of consumer electronics technologies and products is the most notable of these losses, because electronics is the United States' largest employment sector and is critical for growth businesses in consumer products, computers, automobiles, aerospace, and telecommunications. In the past there was a distinction between consumer and industrial product technologies. While Japan concentrated on the consumer market, the United States dominated the industrial sector. No such distinction is anticipated in the future; the consumer-oriented technologies Japan has dominated are expected to characterize both domains. The future of U.S. competitiveness will, therefore, depend on the ability of the United States to rebuild its technological capabilities in the area of portable electronic packaging.

  10. Japan's electronic packaging technologies

    NASA Astrophysics Data System (ADS)

    Tummala, Rao R.; Pecht, Michael

    1995-02-01

    The JTEC panel found Japan to have significant leadership over the United States in the strategic area of electronic packaging. Many technologies and products once considered the 'heart and soul' of U.S. industry have been lost over the past decades to Japan and other Asian countries. The loss of consumer electronics technologies and products is the most notable of these losses, because electronics is the United States' largest employment sector and is critical for growth businesses in consumer products, computers, automobiles, aerospace, and telecommunications. In the past there was a distinction between consumer and industrial product technologies. While Japan concentrated on the consumer market, the United States dominated the industrial sector. No such distinction is anticipated in the future; the consumer-oriented technologies Japan has dominated are expected to characterize both domains. The future of U.S. competitiveness will, therefore, depend on the ability of the United States to rebuild its technological capabilities in the area of portable electronic packaging.

  11. Tamper indicating packaging

    SciTech Connect

    Baumann, M.J.; Bartberger, J.C.; Welch, T.D.

    1994-08-01

    Protecting sensitive items from undetected tampering in an unattended environment is crucial to the success of non-proliferation efforts relying on the verification of critical activities. Tamper Indicating Packaging (TIP) technologies are applied to containers, packages, and equipment that require an indication of a tamper attempt. Examples include: the transportation and storage of nuclear material, the operation and shipment of surveillance equipment and monitoring sensors, and the retail storage of medicine and food products. The spectrum of adversarial tampering ranges from attempted concealment of a pin-hole-sized penetration to complete container replacement, which would involve counterfeiting efforts of various degrees. Sandia National Laboratories (SNL) has developed a technology base for advanced TIP materials, sensors, designs, and processes which can be adapted to various future monitoring systems. The purpose of this technology base is to investigate potential new technologies and to perform basic research on advanced technologies. This paper will describe the theory of TIP technologies and recent investigations of TIP technologies at SNL.

  12. Performance characteristics of a cosmology package on leading HPC architectures

    SciTech Connect

    Carter, Jonathan; Borrill, Julian; Oliker, Leonid

    2004-01-01

    The Cosmic Microwave Background (CMB) is a snapshot of the Universe some 400,000 years after the Big Bang. The pattern of anisotropies in the CMB carries a wealth of information about the fundamental parameters of cosmology. Extracting this information is an extremely computationally expensive endeavor, requiring massively parallel computers and software packages capable of exploiting them. One such package is the Microwave Anisotropy Dataset Computational Analysis Package (MADCAP) which has been used to analyze data from a number of CMB experiments. In this work, we compare MADCAP performance on the vector-based Earth Simulator (ES) and Cray X1 architectures and two leading superscalar systems, the IBM Power3 and Power4. Our results highlight the complex interplay between the problem size, architectural paradigm, interconnect, and vendor-supplied numerical libraries, while isolating the I/O file system as the key bottleneck across all the platforms.

  13. The reduction of packaging waste

    SciTech Connect

    Raney, E.A.; McCollom, M.; Hogan, J.

    1993-04-01

    Nationwide, packaging waste comprises approximately one third of the waste being sent to our solid waste landfills. These wastes range from product and shipping containers made from plastic, glass, wood, and corrugated cardboard to packaging fillers and wraps made from a variety of plastic materials such as shrink wrap and polystyrene peanuts. The amount of packaging waste generated is becoming an important issue for manufacturers, retailers, and consumers. Elimination of packaging not only conserves precious landfill space, it also reduces consumption of raw materials and energy, all of which result in important economic and environmental benefits. At the US Department of Energy-Richland Field Office's (DOE-RL) Hanford Site as well as other DOE sites the generation of packaging waste has added importance. By reducing the amount of packaging waste, DOE also reduces the costs and liabilities associated with waste handling, treatment, storage, and disposal.

  14. The reduction of packaging waste

    SciTech Connect

    Raney, E.A.; McCollom, M.; Hogan, J.

    1993-04-01

    Nationwide, packaging waste comprises approximately one third of the waste being sent to our solid waste landfills. These wastes range from product and shipping containers made from plastic, glass, wood, and corrugated cardboard to packaging fillers and wraps made from a variety of plastic materials such as shrink wrap and polystyrene peanuts. The amount of packaging waste generated is becoming an important issue for manufacturers, retailers, and consumers. Elimination of packaging not only conserves precious landfill space, it also reduces consumption of raw materials and energy, all of which result in important economic and environmental benefits. At the US Department of Energy-Richland Field Office's (DOE-RL) Hanford Site as well as other DOE sites the generation of packaging waste has added importance. By reducing the amount of packaging waste, DOE also reduces the costs and liabilities associated with waste handling, treatment, storage, and disposal.

  15. Space station power semiconductor package

    NASA Technical Reports Server (NTRS)

    Balodis, Vilnis; Berman, Albert; Devance, Darrell; Ludlow, Gerry; Wagner, Lee

    1987-01-01

    A package of high-power switching semiconductors for the space station has been designed and fabricated. The package includes a high-voltage (600 volt), high-current (50 amp) NPN fast-switching power transistor and a high-voltage (1200 volt), high-current (50 amp) fast-recovery diode. The package features an isolated collector for the transistor and an isolated anode for the diode. Beryllia is used as the isolation material, resulting in a thermal resistance for both devices of 0.2 degrees per watt. Additional features include a hermetic seal for long life -- greater than 10 years in a space environment. Also, the package design resulted in low electrical energy loss through the reduction of eddy currents, stray inductance, circuit inductance, and capacitance. The required package design and device parameters have been achieved. Test results for the transistor and diode utilizing the space station package are given.

  16. Parallel Analysis Tools for Ultra-Large Climate Data Sets

    NASA Astrophysics Data System (ADS)

    Jacob, Robert; Krishna, Jayesh; Xu, Xiabing; Mickelson, Sheri; Wilde, Mike; Peterson, Kara; Bochev, Pavel; Latham, Robert; Tautges, Tim; Brown, David; Brownrigg, Richard; Haley, Mary; Shea, Dennis; Huang, Wei; Middleton, Don; Schuchardt, Karen; Yin, Jian

    2013-04-01

    While climate models have used parallelism for several years, the post-processing tools are still mostly single-threaded applications and many are closed source. These tools are becoming a bottleneck in the production of new climate knowledge when they confront terabyte-sized output from high-resolution climate models. The ParVis project is using and creating Free and Open Source tools that bring data and task parallelism to climate model analysis to enable analysis of large climate data sets. ParVis is using the Swift task-parallel language to implement a diagnostic suite that generates over 600 plots of atmospheric quantities. ParVis has also created a Parallel Gridded Analysis Library (ParGAL) which implements many common climate analysis operations in a data-parallel fashion using the Message Passing Interface. ParGAL has in turn been built on sophisticated packages for describing grids in parallel (the Mesh-Oriented database, MOAB), performing vector operations on arbitrary grids (Intrepid), and reading data in parallel (PnetCDF). ParGAL is being used to implement a parallel version of the NCAR Command Language (NCL) called ParNCL. ParNCL/ParGAL not only speeds up analysis of large datasets but also allows operations to be performed on native grids, eliminating the need to transform data to latitude-longitude grids. All of the tools ParVis is creating are available as free and open source software.

  17. IN-PACKAGE CHEMISTRY ABSTRACTION

    SciTech Connect

    E. Thomas

    2005-07-14

    This report was developed in accordance with the requirements in "Technical Work Plan for Postclosure Waste Form Modeling" (BSC 2005 [DIRS 173246]). The purpose of the in-package chemistry model is to predict the bulk chemistry inside of a breached waste package and to provide simplified expressions of that chemistry as a function of time after breach to Total Systems Performance Assessment for the License Application (TSPA-LA). The scope of this report is to describe the development and validation of the in-package chemistry model. The in-package model is a combination of two models: a batch reactor model, which uses the EQ3/6 geochemistry-modeling tool, and a surface complexation model, which is applied to the results of the batch reactor model. The batch reactor model considers chemical interactions of water with the waste package materials and the waste form for commercial spent nuclear fuel (CSNF) waste packages and codisposed (CDSP) waste packages containing high-level waste glass (HLWG) and DOE spent fuel. The surface complexation model includes the impact of fluid-surface interactions (i.e., surface complexation) on the resulting fluid composition. The model examines two types of water influx: (1) the condensation of water vapor diffusing into the waste package, and (2) seepage water entering the waste package as a liquid from the drift. (1) Vapor-Influx Case: The condensation of vapor onto the waste package internals is simulated as pure H2O and enters at a rate determined by the water vapor pressure for representative temperature and relative humidity conditions. (2) Liquid-Influx Case: The water entering a waste package from the drift is simulated as typical groundwater and enters at a rate determined by the amount of seepage available to flow through openings in a breached waste package.

  18. User's Guide for ENSAERO_FE Parallel Finite Element Solver

    NASA Technical Reports Server (NTRS)

    Eldred, Lloyd B.; Guruswamy, Guru P.

    1999-01-01

    A high-fidelity parallel static structural analysis capability is created and interfaced to the multidisciplinary analysis package ENSAERO-MPI of Ames Research Center. This new module replaces ENSAERO's lower-fidelity simple finite element and modal modules. Full aircraft structures may be more accurately modeled using the new finite element capability. Parallel computation is performed by breaking the full structure into multiple substructures. This approach is conceptually similar to ENSAERO's multizonal fluid analysis capability. The new substructure code is used to solve the structural finite element equations for each substructure in parallel. NASTRAN/COSMIC is utilized as a front end for this code. Its full library of elements can be used to create an accurate and realistic aircraft model. It is used to create the stiffness matrices for each substructure. The new parallel code then uses an iterative preconditioned conjugate gradient method to solve the global structural equations for the substructure boundary nodes.
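
    For concreteness, below is a serial sketch of the kind of Jacobi-preconditioned conjugate gradient iteration the abstract describes. Dense storage and the absence of substructure distribution are simplifications for illustration; this is not the ENSAERO_FE implementation.

    ```cpp
    // Jacobi-preconditioned conjugate gradient for A x = b (A symmetric
    // positive definite). Dense storage for brevity; real structural
    // solvers use sparse, distributed matrices.
    #include <cmath>
    #include <cstddef>
    #include <vector>

    using Vec = std::vector<double>;
    using Mat = std::vector<Vec>;  // dense row-major matrix

    static Vec matvec(const Mat& A, const Vec& x) {
        Vec y(x.size(), 0.0);
        for (std::size_t i = 0; i < A.size(); ++i)
            for (std::size_t j = 0; j < x.size(); ++j)
                y[i] += A[i][j] * x[j];
        return y;
    }

    static double dot(const Vec& a, const Vec& b) {
        double s = 0.0;
        for (std::size_t i = 0; i < a.size(); ++i) s += a[i] * b[i];
        return s;
    }

    Vec pcg(const Mat& A, const Vec& b, double tol, int maxIter) {
        Vec x(b.size(), 0.0), r = b;            // x0 = 0, so r0 = b
        Vec z(b.size());
        for (std::size_t i = 0; i < r.size(); ++i)
            z[i] = r[i] / A[i][i];              // Jacobi preconditioner M = diag(A)
        Vec p = z;
        double rz = dot(r, z);
        for (int k = 0; k < maxIter && std::sqrt(dot(r, r)) > tol; ++k) {
            Vec Ap = matvec(A, p);
            double alpha = rz / dot(p, Ap);
            for (std::size_t i = 0; i < x.size(); ++i) {
                x[i] += alpha * p[i];           // update solution
                r[i] -= alpha * Ap[i];          // update residual
            }
            for (std::size_t i = 0; i < r.size(); ++i)
                z[i] = r[i] / A[i][i];          // apply preconditioner
            double rzNew = dot(r, z);
            double beta = rzNew / rz;           // conjugate direction ratio
            rz = rzNew;
            for (std::size_t i = 0; i < p.size(); ++i)
                p[i] = z[i] + beta * p[i];      // new search direction
        }
        return x;
    }
    ```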

  19. Parallelism in System Tools

    SciTech Connect

    Matney, Sr., Kenneth D; Shipman, Galen M

    2010-01-01

    The Cray XT, when employed in conjunction with the Lustre filesystem, has provided the ability to generate huge amounts of data in the form of many files. Typically, this is accommodated by satisfying the requests of large numbers of Lustre clients in parallel. In contrast, a single service node (Lustre client) cannot adequately service such datasets. This means that the use of traditional UNIX tools like cp, tar, et al. (which have no parallel capability) can result in substantial impact to user productivity. For example, to copy a 10 TB dataset from the service node using cp would take about 24 hours, under more or less ideal conditions. During production operation, this could easily extend to 36 hours. In this paper, we introduce the Lustre User Toolkit for Cray XT, developed at the Oak Ridge Leadership Computing Facility (OLCF). We will show that Linux commands, implementing highly parallel I/O algorithms, provide orders of magnitude greater performance, greatly reducing the impact to user productivity.
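
    The principle behind such tools can be illustrated with a minimal chunked copy: split the file into byte ranges and copy each range in its own thread using positional reads and writes. This sketch (POSIX calls, fixed thread count, error handling elided) only demonstrates the idea; the actual OLCF toolkit is far more sophisticated.

    ```cpp
    // Parallel file copy: each thread copies one byte range with
    // pread/pwrite, which take explicit offsets and so need no locking.
    #include <algorithm>
    #include <fcntl.h>
    #include <sys/stat.h>
    #include <thread>
    #include <unistd.h>
    #include <vector>

    static void copyRange(int in, int out, off_t off, off_t len) {
        std::vector<char> buf(1 << 20);  // 1 MiB buffer per thread
        while (len > 0) {
            size_t chunk = static_cast<size_t>(
                std::min<off_t>(len, static_cast<off_t>(buf.size())));
            ssize_t n = pread(in, buf.data(), chunk, off);
            if (n <= 0) return;          // error handling elided
            pwrite(out, buf.data(), static_cast<size_t>(n), off);
            off += n;
            len -= n;
        }
    }

    int main(int argc, char** argv) {
        if (argc != 3) return 1;
        int in = open(argv[1], O_RDONLY);
        struct stat st;
        fstat(in, &st);
        int out = open(argv[2], O_WRONLY | O_CREAT | O_TRUNC, 0644);

        const int nThreads = 8;          // one stripe per thread
        off_t stripe = (st.st_size + nThreads - 1) / nThreads;
        std::vector<std::thread> workers;
        for (int t = 0; t < nThreads; ++t) {
            off_t off = static_cast<off_t>(t) * stripe;
            if (off >= st.st_size) break;
            workers.emplace_back(copyRange, in, out, off,
                                 std::min(stripe, st.st_size - off));
        }
        for (auto& w : workers) w.join();
        close(in);
        close(out);
        return 0;
    }
    ```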

  20. Parallel optical sampler

    DOEpatents

    Tauke-Pedretti, Anna; Skogen, Erik J; Vawter, Gregory A

    2014-05-20

    An optical sampler includes first and second 1×n optical beam splitters splitting an input optical sampling signal and an optical analog input signal into n parallel channels, respectively, a plurality of optical delay elements providing n parallel delayed input optical sampling signals, n photodiodes converting the n parallel optical analog input signals into n respective electrical output signals, and n optical modulators modulating the input optical sampling signal or the optical analog input signal by the respective electrical output signals, and providing n successive optical samples of the optical analog input signal. A plurality of output photodiodes and eADCs convert the n successive optical samples to n successive digital samples. The optical modulator may be a photodiode-interconnected Mach-Zehnder modulator. A method of sampling the optical analog input signal is disclosed.

  1. Naval Waste Package Design Report

    SciTech Connect

    M.M. Lewis

    2004-03-15

    A design methodology for the waste packages and ancillary components, viz., the emplacement pallets and drip shields, has been developed to provide designs that satisfy the safety and operational requirements of the Yucca Mountain Project. This methodology is described in the "Waste Package Design Methodology Report" (Mecham 2004 [DIRS 166168]). To demonstrate the practicability of this design methodology, four waste package design configurations have been selected to illustrate the application of the methodology. These four design configurations are the 21-pressurized water reactor (PWR) Absorber Plate waste package, the 44-boiling water reactor (BWR) waste package, the 5-defense high-level waste (DHLW)/United States (U.S.) Department of Energy (DOE) spent nuclear fuel (SNF) Co-disposal Short waste package, and the Naval Canistered SNF Long waste package. Also included in this demonstration are the emplacement pallet and continuous drip shield. The purpose of this report is to document how that design methodology has been applied to the waste package design configurations intended to accommodate naval canistered SNF. This demonstrates that the design methodology can be applied successfully to this waste package design configuration and supports the License Application for construction of the repository.

  2. Hazardous materials package performance regulations

    SciTech Connect

    Russell, N. A.; Glass, R. E.; McClure, J. D.; Finley, N. C.

    1991-01-01

    This paper discusses a Hazmat Packaging Performance Evaluation (HPPE) project being conducted at Sandia National Laboratories for the US Department of Transportation Research and Special Programs Administration (DOT-RSPA) to look at the subset of bulk packagings that are larger than 2000 gallons. The objectives of this project are to evaluate current hazmat specification packagings and develop supporting documentation for determining performance requirements for packagings in excess of 2000 gallons that transport hazardous materials classified as materials extremely toxic by inhalation (METBI).

  3. Parallelizing AT with MatlabMPI

    SciTech Connect

    Li, Evan Y.; /Brown U. /SLAC

    2011-06-22

    The Accelerator Toolbox (AT) is a high-level collection of tools and scripts specifically oriented toward solving problems dealing with computational accelerator physics. It is integrated into the MATLAB environment, which provides an accessible, intuitive interface for accelerator physicists, allowing researchers to focus the majority of their efforts on simulations and calculations, rather than programming and debugging difficulties. Efforts toward parallelization of AT have been put in place to upgrade its performance to modern standards of computing. We utilized the packages MatlabMPI and pMatlab, which were developed by MIT Lincoln Laboratory, to set up a message-passing environment that could be called within MATLAB, which established the necessary prerequisites for multithread processing capabilities. On local quad-core CPUs, we were able to demonstrate processor efficiencies of roughly 95% and speed increases of nearly 380%. By exploiting the efficacy of modern-day parallel computing, we were able to demonstrate highly efficient per-processor speed increases in AT's beam-tracking functions. Extrapolating from prediction, we can expect to reduce week-long computation runtimes to less than 15 minutes. This is a huge performance improvement and has enormous implications for the future computing power of the accelerator physics group at SSRL. However, one of the drawbacks of parringpass is its current lack of transparency; the pMatlab and MatlabMPI packages must first be well understood by the user before the system can be configured to run the scripts. In addition, the instantiation of argument parameters requires internal modification of the source code. Thus, parringpass cannot be directly run from the MATLAB command line, which detracts from its flexibility and user-friendliness. Future work in AT's parallelization will focus on development of external functions and scripts that can be called from within MATLAB and configured on multiple nodes, while

  4. Parallel programming with Ada

    SciTech Connect

    Kok, J.

    1988-01-01

    To the human programmer the ease of coding distributed computing is highly dependent on the suitability of the employed programming language. But with a particular language it is also important whether the possibilities of one or more parallel architectures can efficiently be addressed by available language constructs. In this paper the possibilities are discussed of the high-level language Ada and in particular of its tasking concept as a descriptional tool for the design and implementation of numerical and other algorithms that allow execution of parts in parallel. Language tools are explained and their use for common applications is shown. Conclusions are drawn about the usefulness of several Ada concepts.

  5. The NAS Parallel Benchmarks

    SciTech Connect

    Bailey, David H.

    2009-11-15

    The NAS Parallel Benchmarks (NPB) are a suite of parallel computer performance benchmarks. They were originally developed at the NASA Ames Research Center in 1991 to assess high-end parallel supercomputers. Although they are no longer used as widely as they once were for comparing high-end system performance, they continue to be studied and analyzed a great deal in the high-performance computing community. The acronym 'NAS' originally stood for the Numerical Aeronautical Simulation Program at NASA Ames. The name of this organization was subsequently changed to the Numerical Aerospace Simulation Program, and more recently to the NASA Advanced Supercomputing Center, although the acronym remains 'NAS.' The developers of the original NPB suite were David H. Bailey, Eric Barszcz, John Barton, David Browning, Russell Carter, LeoDagum, Rod Fatoohi, Samuel Fineberg, Paul Frederickson, Thomas Lasinski, Rob Schreiber, Horst Simon, V. Venkatakrishnan and Sisira Weeratunga. The original NAS Parallel Benchmarks consisted of eight individual benchmark problems, each of which focused on some aspect of scientific computing. The principal focus was in computational aerophysics, although most of these benchmarks have much broader relevance, since in a much larger sense they are typical of many real-world scientific computing applications. The NPB suite grew out of the need for a more rational procedure to select new supercomputers for acquisition by NASA. The emergence of commercially available highly parallel computer systems in the late 1980s offered an attractive alternative to parallel vector supercomputers that had been the mainstay of high-end scientific computing. However, the introduction of highly parallel systems was accompanied by a regrettable level of hype, not only on the part of the commercial vendors but even, in some cases, by scientists using the systems. As a result, it was difficult to discern whether the new systems offered any fundamental performance advantage

  6. SPINning parallel systems software.

    SciTech Connect

    Matlin, O.S.; Lusk, E.; McCune, W.

    2002-03-15

    We describe our experiences in using Spin to verify parts of the Multi Purpose Daemon (MPD) parallel process management system. MPD is a distributed collection of processes connected by Unix network sockets. MPD is dynamic: processes and connections among them are created and destroyed as MPD is initialized, runs user processes, recovers from faults, and terminates. This dynamic nature is easily expressible in the Spin/Promela framework but poses performance and scalability challenges. We present here the results of expressing some of the parallel algorithms of MPD and executing both simulation and verification runs with Spin.

  7. Adaptive parallel logic networks

    NASA Technical Reports Server (NTRS)

    Martinez, Tony R.; Vidal, Jacques J.

    1988-01-01

    Adaptive, self-organizing concurrent systems (ASOCS) that combine self-organization with massive parallelism for such applications as adaptive logic devices, robotics, process control, and system malfunction management, are presently discussed. In ASOCS, an adaptive network composed of many simple computing elements operating in combinational and asynchronous fashion is used and problems are specified by presenting if-then rules to the system in the form of Boolean conjunctions. During data processing, which is a different operational phase from adaptation, the network acts as a parallel hardware circuit.

  8. Packaging Design Criteria for the Steel Waste Package

    SciTech Connect

    BOEHNKE, W.M.

    2000-10-19

    This packaging design criteria document provides the criteria for the design, fabrication, safety evaluation, and use of the steel waste package (SWP) to transport remote-handled waste and special-case waste from the 324 Facility to the Central Waste Complex (CWC) for interim storage.

  9. Anhydrous Ammonia Training Module. Trainer's Package. Participant's Package.

    ERIC Educational Resources Information Center

    Beaudin, Bart; And Others

    This document contains a trainer's and a participant's package for teaching employees safe on-site handling procedures for working with anhydrous ammonia, especially on farms. The trainer's package includes the following: a description of the module; a competency; objectives; suggested instructional aids; a training outline (or lesson plan) for…

  10. Package Up Your Troubles--An Introduction to Package Libraries

    ERIC Educational Resources Information Center

    Frank, Colin

    1978-01-01

    Discusses a "package deal" library--a prefabricated building including interior furnishing--in terms of costs, fitness for purpose, and interior design, i.e., shelving, flooring, heating, lighting, and humidity. Advantages and disadvantages of the package library are also considered. (Author/MBR)

  11. Praxis I/O package

    SciTech Connect

    Holloway, F.W.; Sherman, T.A.

    1988-04-07

    The Praxis language specification, like Algol and Ada, does not specify any I/O statements. The intent was to provide a standard I/O package as a companion to the compiler. This would allow the user to substitute, or supplement, the I/O package, as needed, for specialized applications. Like Algol, however, Praxis provided only limited (text) I/O for several years. Ada, in contrast, provided a comprehensive standard I/O package from its inception. Digital Equipment Corporation's (DEC's) implementation of Ada, on their VAX family of computers, further supplemented this package with other packages which exploit the I/O facilities available under the VMS operating system. The Praxis I/O package described in this document has been modeled after DEC's implementation of Ada and provides a similar set of I/O facilities. Currently, the I/O package is supported only under VAX/VMS. The design of the package, however, is essentially independent of any operating system (with the exception of the module COMMAND IO). The VAX/VMS version of the I/O package fully exploits the vast I/O facilities which are provided under VAX/VMS and makes them directly available to the Praxis programmer. The design, prototype implementation, and draft documentation of the Praxis I/O package was done by Tim Sherman as part of a university project in computer science. Subsequent work by both Tim and Fred Holloway led to a more complete implementation, testing and development of example programs, and inclusion of the package into the Praxis compilers as their principal interface to RMS and VMS.

  12. Piecewise Cubic Interpolation Package

    1982-04-23

    PCHIP (Piecewise Cubic Interpolation Package) is a set of subroutines for piecewise cubic Hermite interpolation of data. It features software to produce a monotone and "visually pleasing" interpolant to monotone data. Such an interpolant may be more reasonable than a cubic spline if the data contain both 'steep' and 'flat' sections. Interpolation of cumulative probability distribution functions is another application. In PCHIP, all piecewise cubic functions are represented in cubic Hermite form; that is, f(x) is determined by its values f(i) and derivatives d(i) at the breakpoints x(i), i=1(1)N. PCHIP contains three routines - PCHIM, PCHIC, and PCHSP - to determine derivative values, six routines - CHFEV, PCHFE, CHFDV, PCHFD, PCHID, and PCHIA - to evaluate, differentiate, or integrate the resulting cubic Hermite function, and one routine to check for monotonicity. A FORTRAN 77 version and SLATEC version of PCHIP are included.
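
    A minimal analogue of PCHIP's interval evaluator (the role CHFEV plays) can be sketched in C++: given the values and derivatives at the two ends of one interval, evaluate the cubic Hermite there. Locating the containing interval and the monotonicity-preserving derivative construction (PCHIM/PCHIC) are separate steps omitted here; the function name is illustrative.

    ```cpp
    // Evaluate the cubic Hermite on [xi, xi1] from endpoint values
    // (fi, fi1) and endpoint derivatives (di, di1) at a point x inside.
    double hermiteEval(double xi, double xi1, double fi, double fi1,
                       double di, double di1, double x) {
        double h = xi1 - xi;
        double t = (x - xi) / h;              // normalized coordinate in [0, 1]
        double h00 =  2*t*t*t - 3*t*t + 1;    // standard Hermite basis functions
        double h10 =    t*t*t - 2*t*t + t;
        double h01 = -2*t*t*t + 3*t*t;
        double h11 =    t*t*t -   t*t;
        return fi*h00 + h*di*h10 + fi1*h01 + h*di1*h11;
    }
    ```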

  13. Anasazi Block Eigensolvers Package

    2004-03-01

    ANASAZI is an extensible and interoperable framework for large-scale eigenvalue algorithms. The motivation for this framework is to provide a generic interface to a collection of algorithms for solving large-scale eigenvalue problems. ANASAZI is interoperable because both the matrix and vectors (defining the eigenspace) are considered to be opaque objects -- only knowledge of the matrix and vectors via elementary operations is necessary. An implementation of Anasazi is accomplished via the use of interfaces. One of the goals of ANASAZI is to allow the user the flexibility to specify the data representation for the matrix and vectors and so leverage any existing software investment. The algorithms that will be included in the package are Krylov-based and preconditioned eigensolvers.

  14. Tritium waste package

    DOEpatents

    Rossmassler, Rich; Ciebiera, Lloyd; Tulipano, Francis J.; Vinson, Sylvester; Walters, R. Thomas

    1995-01-01

    A containment and waste package system for processing and shipping tritium oxide waste received from a process gas includes an outer drum and an inner drum containing a disposable molecular sieve bed (DMSB) seated within the outer drum. The DMSB includes an inlet diffuser assembly, an outlet diffuser assembly, and a hydrogen catalytic recombiner. The DMSB absorbs tritium oxide from the process gas and converts it to a solid form so that the tritium is contained during shipment to a disposal site. The DMSB is filled with type 4A molecular sieve pellets capable of adsorbing up to 1000 curies of tritium. The recombiner contains a sufficient amount of catalyst to cause any hydrogen and oxygen present in the process gas to recombine to form water vapor, which is then adsorbed onto the DMSB.

  15. Balloon gondola diagnostics package

    NASA Technical Reports Server (NTRS)

    Cantor, K. M.

    1986-01-01

    In order to define a new gondola structural specification and to quantify the balloon termination environment, NASA developed a balloon gondola diagnostics package (GDP). This addition to the balloon flight train comprises a large array of electronic sensors employed to define the forces and accelerations imposed on a gondola during the termination event. These sensors include the following: a load cell, a three-axis accelerometer, two three-axis rate gyros, two magnetometers, and a two-axis inclinometer. A transceiver couple allows the data to be telemetered across any in-line rotator to the gondola-mounted memory system. The GDP is commanded 'ON' just prior to parachute deployment in order to record the entire event.

  16. Tritium waste package

    DOEpatents

    Rossmassler, R.; Ciebiera, L.; Tulipano, F.J.; Vinson, S.; Walters, R.T.

    1995-11-07

    A containment and waste package system for processing and shipping tritium oxide waste received from a process gas includes an outer drum and an inner drum containing a disposable molecular sieve bed (DMSB) seated within the outer drum. The DMSB includes an inlet diffuser assembly, an outlet diffuser assembly, and a hydrogen catalytic recombiner. The DMSB absorbs tritium oxide from the process gas and converts it to a solid form so that the tritium is contained during shipment to a disposal site. The DMSB is filled with type 4A molecular sieve pellets capable of adsorbing up to 1000 curies of tritium. The recombiner contains a sufficient amount of catalyst to cause any hydrogen and oxygen present in the process gas to recombine to form water vapor, which is then adsorbed onto the DMSB. 1 fig.

  17. Meros Preconditioner Package

    2004-04-01

    Meros uses the compositional, aggregation, and overload operator capabilities of TSF to provide an object-oriented package providing segregated/block preconditioners for linear systems related to fully-coupled Navier-Stokes problems. This class of preconditioners exploits the special properties of these problems to segregate the equations and use multi-level preconditioners (through ML) on the matrix sub-blocks. Several preconditioners are provided, including the Fp and BFB preconditioners of Kay & Loghin and Silvester, Elman, Kay & Wathen. The overall performance and scalability of these preconditioners approaches that of multigrid for certain types of problems. Meros also provides more traditional pressure projection methods including SIMPLE and SIMPLEC.
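
    The segregated/block idea can be sketched on a generic saddle-point system K = [[A, B^T], [B, 0]] of the kind Navier-Stokes discretizations produce: precondition a Krylov method with separate solves on the velocity block and on an approximate pressure Schur complement. The sketch below is plain SciPy, not Meros's TSF-based interface, and the SIMPLE-style diagonal Schur approximation is an illustrative assumption:

      import numpy as np
      import scipy.sparse as sp
      import scipy.sparse.linalg as spla

      # Small saddle-point system K = [[A, B^T], [B, 0]]
      nu, npr = 80, 40
      A = sp.diags([-1.0, 2.1, -1.0], [-1, 0, 1], shape=(nu, nu)).tocsc()
      B = sp.random(npr, nu, density=0.2, random_state=0).tocsc()
      K = sp.bmat([[A, B.T], [B, None]]).tocsc()

      Ainv = spla.splu(A)
      # SIMPLE-style Schur complement approximation: S ~ B diag(A)^-1 B^T
      S = (B @ sp.diags(1.0 / A.diagonal()) @ B.T).tocsc()
      Sinv = spla.splu(S)

      def apply_prec(r):
          ru, rp = r[:nu], r[nu:]
          p = Sinv.solve(rp)            # segregated pressure solve
          u = Ainv.solve(ru - B.T @ p)  # segregated velocity solve
          return np.concatenate([u, p])

      M = spla.LinearOperator(K.shape, matvec=apply_prec)
      b = np.ones(nu + npr)
      x, info = spla.gmres(K, b, M=M)
      print("converged" if info == 0 else f"gmres info={info}")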

  18. Thyra Abstract Interface Package

    2005-09-01

    Thyra primarily defines a set of abstract C++ class interfaces needed for the development of abstract numerical algorithms (ANAs) such as iterative linear solvers and transient solvers, all the way up to optimization. At the foundation of these interfaces are abstract C++ classes for vectors, vector spaces, linear operators and multi-vectors. Also included in the Thyra package is C++ code for creating concrete vector, vector space, linear operator, and multi-vector subclasses, as well as other utilities to aid in the development of ANAs. Currently, very general and efficient concrete subclass implementations exist for serial and SPMD in-core vectors and multi-vectors. Code also currently exists for testing objects and providing composite objects such as product vectors.
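
    The pattern, numerical algorithms written only against abstract vector and operator interfaces so that serial, SPMD, or otherwise specialized implementations can be swapped in, can be sketched as follows (a hypothetical Python analogue; Thyra's actual interfaces are C++):

      import numpy as np
      from abc import ABC, abstractmethod

      class Vector(ABC):
          @abstractmethod
          def clone(self): ...
          @abstractmethod
          def dot(self, x): ...
          @abstractmethod
          def scale(self, alpha): ...

      class LinearOp(ABC):
          @abstractmethod
          def apply(self, x, y): ...    # y := A x

      def power_iteration(A, x, iters=50):
          # An "abstract numerical algorithm": only interface methods are
          # used, so any conforming implementation can be substituted.
          for _ in range(iters):
              y = x.clone()
              A.apply(x, y)
              y.scale(1.0 / y.dot(y) ** 0.5)
              x = y
          return x

      class NumpyVector(Vector):
          def __init__(self, a): self.a = np.asarray(a, float)
          def clone(self): return NumpyVector(self.a.copy())
          def dot(self, x): return float(self.a @ x.a)
          def scale(self, alpha): self.a *= alpha

      class NumpyOp(LinearOp):
          def __init__(self, m): self.m = np.asarray(m, float)
          def apply(self, x, y): y.a[:] = self.m @ x.a

      v = power_iteration(NumpyOp([[2, 1], [1, 3]]), NumpyVector([1.0, 0.0]))
      print(v.a)   # direction of the dominant eigenvector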

  19. Chip packaging technique

    NASA Technical Reports Server (NTRS)

    Jayaraj, Kumaraswamy (Inventor); Noll, Thomas E. (Inventor); Lockwood, Harry F. (Inventor)

    2001-01-01

    A hermetically sealed package for at least one semiconductor chip is provided which is formed of a substrate having electrical interconnects thereon to which the semiconductor chips are selectively bonded, and a lid which preferably functions as a heat sink, with a hermetic seal being formed around the chips between the substrate and the heat sink. The substrate is either formed of or includes a layer of a thermoplastic material having low moisture permeability which material is preferably a liquid crystal polymer (LCP) and is a multiaxially oriented LCP material for preferred embodiments. Where the lid is a heat sink, the heat sink is formed of a material having high thermal conductivity and preferably a coefficient of thermal expansion which substantially matches that of the chip. A hermetic bond is formed between the side of each chip opposite that connected to the substrate and the heat sink. The thermal bond between the substrate and the lid/heat sink may be a pinched seal or may be provided, for example by an LCP frame which is hermetically bonded or sealed on one side to the substrate and on the other side to the lid/heat sink. The chips may operate in the RF or microwave bands with suitable interconnects on the substrate and the chips may also include optical components with optical fibers being sealed into the substrate and aligned with corresponding optical components to transmit light in at least one direction. A plurality of packages may be physically and electrically connected together in a stack to form a 3D array.

  20. Electro-Microfluidic Packaging

    SciTech Connect

    BENAVIDES, GILBERT L.; GALAMBOS, PAUL C.

    2002-06-01

    Electro-microfluidics is experiencing explosive growth in new product developments. There are many commercial applications for electro-microfluidic devices such as chemical sensors, biological sensors, and drop ejectors for both printing and chemical analysis. The number of silicon surface micromachined electro-microfluidic products is likely to increase. Manufacturing efficiency and integration of microfluidics with electronics will become important. Surface micromachined microfluidic devices are manufactured with the same tools as ICs (integrated circuits) and their fabrication can be incorporated into the IC fabrication process. In order to realize applications for surface micromachined electro-microfluidic devices, a practical method for getting fluid into these devices must be developed. An Electro-Microfluidic Dual In-line Package (EMDIP™) was developed to be a standard solution that allows for both the electrical and the fluidic connections needed to operate a great variety of electro-microfluidic devices. The EMDIP™ includes a fan-out manifold that, on one side, mates directly with the 200 micron diameter Bosch-etched holes found on the device, and, on the other side, mates to larger 1 mm diameter holes. To minimize cost the EMDIP™ can be injection molded in a great variety of thermoplastics which also serve to optimize fluid compatibility. The EMDIP™ plugs directly into a fluidic printed wiring board using a standard dual in-line package pattern for the electrical connections and having a grid of multiple 1 mm diameter fluidic connections to mate to the underside of the EMDIP™.

  1. Parallel Total Energy

    2004-10-21

    This is a total energy electronic structure code using the Local Density Approximation (LDA) of density functional theory. It uses plane waves as the wave function basis set. It can use both norm-conserving pseudopotentials and ultrasoft pseudopotentials. It can relax the atomic positions according to the total energy. It is a parallel code using MPI.

  2. High performance parallel architectures

    SciTech Connect

    Anderson, R.E. )

    1989-09-01

    In this paper the author describes current high performance parallel computer architectures. A taxonomy is presented to show computer architecture from the user programmer's point of view. The effects of the taxonomy upon the programming model are described. Some current architectures are described with respect to the taxonomy. Finally, some predictions about future systems are presented. 5 refs., 1 fig.

  3. Parallel Multigrid Equation Solver

    2001-09-07

    Prometheus is a fully parallel multigrid equation solver for matrices that arise in unstructured grid finite element applications. It includes a geometric and an algebraic multigrid method and has solved problems of up to 76 million degrees of freedom, problems in linear elasticity on the ASCI Blue Pacific and ASCI Red machines.

  4. Optical parallel selectionist systems

    NASA Astrophysics Data System (ADS)

    Caulfield, H. John

    1993-01-01

    There are at least two major classes of computers in nature and technology: connectionist and selectionist. A subset of connectionist systems (Turing Machines) dominates modern computing, although another subset (Neural Networks) is growing rapidly. Selectionist machines have unique capabilities which should allow them to do truly creative operations. It is possible to make a parallel optical selectionist system using methods described in this paper.

  5. Optimizing parallel reduction operations

    SciTech Connect

    Denton, S.M.

    1995-06-01

    A parallel program consists of sets of concurrent and sequential tasks. Often, a reduction (such as array sum) sequentially combines values produced by a parallel computation. Because reductions occur so frequently in otherwise parallel programs, they are good candidates for optimization. Since reductions may introduce dependencies, most languages separate computation and reduction. The Sisal functional language is unique in that reduction is a natural consequence of loop expressions; the parallelism is implicit in the language. Unfortunately, the original language supports only seven reduction operations. To generalize these expressions, the Sisal 90 definition adds user-defined reductions at the language level. Applicable optimizations depend upon the mathematical properties of the reduction. Compilation and execution speed, synchronization overhead, memory use and maximum size influence the final implementation. This paper (1) Defines reduction syntax and compares with traditional concurrent methods; (2) Defines classes of reduction operations; (3) Develops analysis of classes for optimized concurrency; (4) Incorporates reductions into Sisal 1.2 and Sisal 90; (5) Evaluates performance and size of the implementations.
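
    The essence of a reduction, an associative combining function whose associativity is exactly what licenses a parallel, tree-shaped evaluation order, can be shown generically (Python, not Sisal syntax; the (count, mean) reduction is an illustrative user-defined example):

      from functools import reduce

      values = [3, 1, 4, 1, 5, 9, 2, 6]

      # Built-in style reductions: array sum and array max
      print(reduce(lambda a, b: a + b, values))
      print(reduce(max, values))

      # A user-defined reduction in the Sisal 90 sense: (count, mean) pairs
      # combine associatively, so partial results from concurrent loop
      # slices can be merged in any tree order.
      def merge(p, q):
          n = p[0] + q[0]
          return (n, (p[0] * p[1] + q[0] * q[1]) / n)

      parts = [(1, float(v)) for v in values]
      print(reduce(merge, parts))   # (8, 3.875)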

  6. Parallel fast gauss transform

    SciTech Connect

    Sampath, Rahul S; Sundar, Hari; Veerapaneni, Shravan

    2010-01-01

    We present fast adaptive parallel algorithms to compute the sum of N Gaussians at N points. Direct sequential computation of this sum would take O(N^2) time. The parallel time complexity estimates for our algorithms are O(N/n_p) for uniform point distributions and O((N/n_p) log(N/n_p) + n_p log n_p) for non-uniform distributions using n_p CPUs. We incorporate a plane-wave representation of the Gaussian kernel which permits 'diagonal translation'. We use parallel octrees and a new scheme for translating the plane-waves to efficiently handle non-uniform distributions. Computing the transform to six-digit accuracy at 120 billion points took approximately 140 seconds using 4096 cores on the Jaguar supercomputer. Our implementation is 'kernel-independent' and can handle other 'Gaussian-type' kernels even when explicit analytic expression for the kernel is not known. These algorithms form a new class of core computational machinery for solving parabolic PDEs on massively parallel architectures.
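
    The quantity being accelerated is the discrete Gauss transform G(y_j) = sum_i q_i exp(-|y_j - x_i|^2 / h^2); a naive O(N^2) reference implementation (NumPy, useful for validating a fast transform at small N) is:

      import numpy as np

      def direct_gauss_sum(x, y, q, h):
          # O(N*M) direct evaluation: G[j] = sum_i q[i] exp(-|y[j]-x[i]|^2 / h^2)
          d2 = np.sum((y[:, None, :] - x[None, :, :]) ** 2, axis=-1)
          return np.exp(-d2 / h**2) @ q

      rng = np.random.default_rng(0)
      x = rng.random((500, 3))     # source points
      y = rng.random((400, 3))     # target points
      q = rng.random(500)          # source weights
      print(direct_gauss_sum(x, y, q, h=0.2).shape)   # (400,)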

  7. Parallel programming with PCN

    SciTech Connect

    Foster, I.; Tuecke, S.

    1993-01-01

    PCN is a system for developing and executing parallel programs. It comprises a high-level programming language, tools for developing and debugging programs in this language, and interfaces to Fortran and C that allow the reuse of existing code in multilingual parallel programs. Programs developed using PCN are portable across many different workstations, networks, and parallel computers. This document provides all the information required to develop parallel programs with the PCN programming system. It includes both tutorial and reference material. It also presents the basic concepts that underlie PCN, particularly where these are likely to be unfamiliar to the reader, and provides pointers to other documentation on the PCN language, programming techniques, and tools. PCN is in the public domain. The latest version of both the software and this manual can be obtained by anonymous ftp from Argonne National Laboratory in the directory pub/pcn at info.mcs.anl.gov (cf. Appendix A). This version of this document describes PCN version 2.0, a major revision of the PCN programming system. It supersedes earlier versions of this report.

  8. Massively parallel processor computer

    NASA Technical Reports Server (NTRS)

    Fung, L. W. (Inventor)

    1983-01-01

    An apparatus for processing multidimensional data with strong spatial characteristics, such as raw image data, characterized by a large number of parallel data streams in an ordered array is described. It comprises a large number (e.g., 16,384 in a 128 x 128 array) of parallel processing elements operating simultaneously and independently on single bit slices of a corresponding array of incoming data streams under control of a single set of instructions. Each of the processing elements comprises a bidirectional data bus in communication with a register for storing single bit slices together with a random access memory unit and associated circuitry, including a binary counter/shift register device, for performing logical and arithmetical computations on the bit slices, and an I/O unit for interfacing the bidirectional data bus with the data stream source. The massively parallel processor architecture enables very high speed processing of large amounts of ordered parallel data, including spatial translation by shifting or sliding of bits vertically or horizontally to neighboring processing elements.

  9. Parallel hierarchical global illumination

    SciTech Connect

    Snell, Q.O.

    1997-10-08

    Solving the global illumination problem is equivalent to determining the intensity of every wavelength of light in all directions at every point in a given scene. The complexity of the problem has led researchers to use approximation methods for solving the problem on serial computers. Rather than using an approximation method, such as backward ray tracing or radiosity, the authors have chosen to solve the Rendering Equation by direct simulation of light transport from the light sources. This paper presents an algorithm that solves the Rendering Equation to any desired accuracy, and can be run in parallel on distributed memory or shared memory computer systems with excellent scaling properties. It appears superior in both speed and physical correctness to recent published methods involving bidirectional ray tracing or hybrid treatments of diffuse and specular surfaces. Like progressive radiosity methods, it dynamically refines the geometry decomposition where required, but does so without the excessive storage requirements for ray histories. The algorithm, called Photon, produces a scene which converges to the global illumination solution. This amounts to a huge task for a 1997-vintage serial computer, but using the power of a parallel supercomputer significantly reduces the time required to generate a solution. Currently, Photon can be run on most parallel environments from a shared memory multiprocessor to a parallel supercomputer, as well as on clusters of heterogeneous workstations.

  10. Parallel hierarchical radiosity rendering

    SciTech Connect

    Carter, M.

    1993-07-01

    In this dissertation, the step-by-step development of a scalable parallel hierarchical radiosity renderer is documented. First, a new look is taken at the traditional radiosity equation, and a new form is presented in which the matrix of linear system coefficients is transformed into a symmetric matrix, thereby simplifying the problem and enabling a new solution technique to be applied. Next, the state-of-the-art hierarchical radiosity methods are examined for their suitability to parallel implementation, and scalability. Significant enhancements are also discovered which both improve their theoretical foundations and improve the images they generate. The resultant hierarchical radiosity algorithm is then examined for sources of parallelism, and for an architectural mapping. Several architectural mappings are discussed. A few key algorithmic changes are suggested during the process of making the algorithm parallel. Next, the performance, efficiency, and scalability of the algorithm are analyzed. The dissertation closes with a discussion of several ideas which have the potential to further enhance the hierarchical radiosity method, or provide an entirely new forum for the application of hierarchical methods.

  11. Parallel Dislocation Simulator

    2006-10-30

    ParaDiS is software capable of simulating the motion, evolution, and interaction of dislocation networks in single crystals using massively parallel computer architectures. The software is capable of outputting the stress-strain response of a single crystal whose plastic deformation is controlled by the dislocation processes.

  12. Xyce(™) Parallel Electronic Simulator

    2013-10-03

    The Xyce Parallel Electronic Simulator simulates electronic circuit behavior in DC, AC, HB, MPDE and transient mode using standard analog (DAE) and/or device (PDE) models, including several age- and radiation-aware devices. It supports a variety of computing platforms, both serial and parallel. Lastly, it uses a variety of modern solution algorithms, including dynamic parallel load-balancing and iterative solvers. Xyce is primarily used to simulate the voltage and current behavior of a circuit network (a network of electronic devices connected via a conductive network). As a tool, it is mainly used for the design and analysis of electronic circuits. Kirchhoff's conservation laws are enforced over the network using modified nodal analysis. This results in a set of differential-algebraic equations (DAEs). The resulting nonlinear problem is solved iteratively using a fully coupled Newton method, which in turn results in a linear system that is solved by either a standard sparse-direct solver or iteratively using Trilinos linear solver packages, also developed at Sandia National Laboratories.
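
    The solve chain can be seen in miniature on a one-node toy circuit (a hypothetical example, not Xyce netlist input): a current source I feeding a resistor R and a diode to ground gives the nodal equation f(v) = v/R + Is*(exp(v/Vt) - 1) - I = 0, which Newton's method drives to zero just as Xyce's fully coupled Newton loop does for the full MNA system:

      import math

      def solve_node(I=1e-3, R=1e3, Is=1e-12, Vt=0.025, tol=1e-12):
          # Newton iteration on f(v) = v/R + Is*(exp(v/Vt) - 1) - I = 0
          v = 0.5                                        # initial guess
          for _ in range(100):
              f = v / R + Is * math.expm1(v / Vt) - I
              J = 1.0 / R + (Is / Vt) * math.exp(v / Vt)  # df/dv
              dv = -f / J
              v += dv
              if abs(dv) < tol:
                  return v
          raise RuntimeError("Newton did not converge")

      print(f"node voltage = {solve_node():.6f} V")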

  13. Xyce(™) Parallel Electronic Simulator

    SciTech Connect

    2013-10-03

    The Xyce Parallel Electronic Simulator simulates electronic circuit behavior in DC, AC, HB, MPDE and transient mode using standard analog (DAE) and/or device (PDE) models, including several age- and radiation-aware devices. It supports a variety of computing platforms, both serial and parallel. Lastly, it uses a variety of modern solution algorithms, including dynamic parallel load-balancing and iterative solvers. Xyce is primarily used to simulate the voltage and current behavior of a circuit network (a network of electronic devices connected via a conductive network). As a tool, it is mainly used for the design and analysis of electronic circuits. Kirchhoff's conservation laws are enforced over the network using modified nodal analysis. This results in a set of differential-algebraic equations (DAEs). The resulting nonlinear problem is solved iteratively using a fully coupled Newton method, which in turn results in a linear system that is solved by either a standard sparse-direct solver or iteratively using Trilinos linear solver packages, also developed at Sandia National Laboratories.

  14. Recent progress and advances in iterative software (including parallel aspects)

    SciTech Connect

    Carey, G.; Young, D.M.; Kincaid, D.

    1994-12-31

    The purpose of the workshop is to provide a forum for discussion of the current state of iterative software packages. Of particular interest is software for large scale engineering and scientific applications, especially for distributed parallel systems. However, the authors will also review the state of software development for conventional architectures. This workshop will complement the other proposed workshops on iterative BLAS kernels and applications. The format for the workshop is as follows: To provide some structure, there will be brief presentations, each of less than five minutes duration and dealing with specific facets of the subject. These will be designed to focus the discussion and to stimulate an exchange with the participants. Issues to be covered include: The evolution of iterative packages, current state of the art, the parallel computing challenge, applications viewpoint, standards, and future directions and open problems.

  15. Chemical Energy: A Learning Package.

    ERIC Educational Resources Information Center

    Cohen, Ita; Ben-Zvi, Ruth

    1982-01-01

    A comprehensive teaching/learning chemical energy package was developed to overcome conceptual/experimental difficulties and the time required for calculation of enthalpy changes. The package consists of five types of activities occurring in repeated cycles: group activities, laboratory experiments, inquiry questionnaires, teacher-led class…

  16. The Macro - TIPS Course Package.

    ERIC Educational Resources Information Center

    Heriot-Watt Univ., Edinburgh (Scotland). Esmee Fairbairn Economics Research Centre.

    The TIPS (Teaching Information Processing System) Course Package was designed to be used with the Macro-Games Course Package (SO 011 930) in order to train college students to apply the tools of economic analysis to current problems. TIPS is used to provide feedback and individualized assignments to students, as well as information about the…

  17. Floriculture. Selected Learning Activity Packages.

    ERIC Educational Resources Information Center

    Clemson Univ., SC. Vocational Education Media Center.

    This series of learning activity packages is based on a catalog of performance objectives, criterion-referenced measures, and performance guides for gardening/groundskeeping developed by the Vocational Education Consortium of States (V-TECS). Learning activity packages are presented in four areas: (1) preparation of soils and planting media, (2)…

  18. Oral Hygiene. Learning Activity Package.

    ERIC Educational Resources Information Center

    Hime, Kirsten

    This learning activity package on oral hygiene is one of a series of 12 titles developed for use in health occupations education programs. Materials in the package include objectives, a list of materials needed, a list of definitions, information sheets, reviews (self evaluations) of portions of the content, and answers to reviews. These topics…

  19. Packaging Software Assets for Reuse

    NASA Astrophysics Data System (ADS)

    Mattmann, C. A.; Marshall, J. J.; Downs, R. R.

    2010-12-01

    The reuse of existing software assets such as code, architecture, libraries, and modules in current software and systems development projects can provide many benefits, including reduced costs, in time and effort, and increased reliability. Many reusable assets are currently available in various online catalogs and repositories, usually broken down by disciplines such as programming language (Ibiblio for Maven/Java developers, PyPI for Python developers, CPAN for Perl developers, etc.). The way these assets are packaged for distribution can play a role in their reuse - an asset that is packaged simply and logically is typically easier to understand, install, and use, thereby increasing its reusability. A well-packaged asset has advantages in being more reusable and thus more likely to provide benefits through its reuse. This presentation will discuss various aspects of software asset packaging and how they can affect the reusability of the assets. The characteristics of well-packaged software will be described. A software packaging domain model will be introduced, and some existing packaging approaches examined. An example case study of a Reuse Enablement System (RES), currently being created by near-term Earth science decadal survey missions, will provide information about the use of the domain model. Awareness of these factors will help software developers package their reusable assets so that they can provide the most benefits for software reuse.

  20. Solar water heater design package

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Package describes commercial domestic-hot-water heater with roof or rack mounted solar collectors. System is adaptable to pre-existing gas or electric hot-water house units. Design package includes drawings, description of automatic control logic, evaluation measurements, possible design variations, list of materials and installation tools, and trouble-shooting guide and manual.

  1. Sterility of packaged implant components.

    PubMed

    Worthington, Philip

    2005-01-01

    Several implant components in their original glass vial and peel-back packages were subjected to sterility testing to determine whether the contents remained sterile after the expiration date marked on the package had passed. The results from a university microbiology laboratory showed that the contents remained sterile for 6 to 11 years after the expiration dates. PMID:15973959

  2. Blood Pressure. Learning Activity Package.

    ERIC Educational Resources Information Center

    Hime, Kirsten

    This learning activity package on blood pressure is one of a series of 12 titles developed for use in health occupations education programs. Materials in the package include objectives, a list of materials needed, a list of definitions, information sheets, reviews (self evaluations) of portions of the content, and answers to reviews. These topics…

  3. 19 CFR 191.13 - Packaging materials.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 19 Customs Duties 2 2012-04-01 2012-04-01 false Packaging materials. 191.13 Section 191.13 Customs... (CONTINUED) DRAWBACK General Provisions § 191.13 Packaging materials. (a) Imported packaging material... packaging material when used to package or repackage merchandise or articles exported or destroyed...

  4. 19 CFR 191.13 - Packaging materials.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 19 Customs Duties 2 2011-04-01 2011-04-01 false Packaging materials. 191.13 Section 191.13 Customs... (CONTINUED) DRAWBACK General Provisions § 191.13 Packaging materials. (a) Imported packaging material... packaging material when used to package or repackage merchandise or articles exported or destroyed...

  5. A portable implementation of ARPACK for distributed memory parallel architectures

    SciTech Connect

    Maschhoff, K.J.; Sorensen, D.C.

    1996-12-31

    ARPACK is a package of Fortran 77 subroutines which implement the Implicitly Restarted Arnoldi Method used for solving large sparse eigenvalue problems. A parallel implementation of ARPACK is presented which is portable across a wide range of distributed memory platforms and requires minimal changes to the serial code. The communication layers used for message passing are the Basic Linear Algebra Communication Subprograms (BLACS) developed for the ScaLAPACK project and Message Passing Interface (MPI).
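
    ARPACK remains the workhorse behind many higher-level tools; SciPy's scipy.sparse.linalg.eigs, for instance, wraps it, so the Implicitly Restarted Arnoldi Method can be exercised in a few lines (a serial illustration, not the parallel BLACS/MPI version described above):

      import scipy.sparse as sp
      from scipy.sparse.linalg import eigs

      # Sparse 2-D Laplacian on an n-by-n grid
      n = 100
      T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
      A = sp.kronsum(T, T).tocsr()

      # Six eigenvalues of largest magnitude via Implicitly Restarted Arnoldi
      vals, vecs = eigs(A, k=6, which='LM')
      print(vals.real)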

  6. NOTE: Practical and dosimetric implications of a new type of packaging for radiographic film

    NASA Astrophysics Data System (ADS)

    Gillis, S.; DeWagter, C.

    2005-04-01

    Recently, Kodak introduced new light-tight packages (vacuum packaging, aluminium layer under black polyethylene and different paper) for their oncology films (EDR-2, X-Omat V and PPL-2). In order to avoid additional uncertainty and to ensure transferability of previously published results, we assessed in this study the effect of the old and new packages on the dosimetric response of EDR-2 radiographic film. Therefore, sensitometric measurements were performed for different film assemblies (new envelope + new paper, old envelope + old paper, new envelope without paper and old envelope without paper). In addition, to assess possible effects of the package on the film depth dose response, packaged films were irradiated in parallel geometry, and central depth dose curves were retrieved. For the perpendicular geometry, on the other hand, the effect of the package was assessed at large depth for a high intensity-modulated inverse-pyramid beam. The results of the sensitometric measurements reveal no difference between the packages. However, the white colour of the paper in both the packages induces a dose-dependent increase in optical density (0 to 0.12) of the film. The depth dose curves show better reproducibility for the new package, and the new paper improves the accuracy of film dosimetry, but despite the company's effort to evacuate the air out of the new envelope, it remains necessary to clamp the films in the phantom for the parallel irradiation geometry. At 5 cm depth, the films irradiated in parallel geometry show an under-response of 3-5% compared to films irradiated perpendicularly. Finally, even at locations of large photon scatter, no filtration effect from the aluminium layer incorporated in the new envelope has been observed for perpendicular irradiation geometry.

  7. Parallel computers and parallel algorithms for CFD: An introduction

    NASA Astrophysics Data System (ADS)

    Roose, Dirk; Vandriessche, Rafael

    1995-10-01

    This text presents a tutorial on those aspects of parallel computing that are important for the development of efficient parallel algorithms and software for computational fluid dynamics. We first review the main architectural features of parallel computers and we briefly describe some parallel systems on the market today. We introduce some important concepts concerning the development and the performance evaluation of parallel algorithms. We discuss how work load imbalance and communication costs on distributed memory parallel computers can be minimized. We present performance results for some CFD test cases. We focus on applications using structured and block structured grids, but the concepts and techniques are also valid for unstructured grids.

  8. Parallel Consensual Neural Networks

    NASA Technical Reports Server (NTRS)

    Benediktsson, J. A.; Sveinsson, J. R.; Ersoy, O. K.; Swain, P. H.

    1993-01-01

    A new neural network architecture is proposed and applied in classification of remote sensing/geographic data from multiple sources. The new architecture is called the parallel consensual neural network and its relation to hierarchical and ensemble neural networks is discussed. The parallel consensual neural network architecture is based on statistical consensus theory. The input data are transformed several times and the different transformed data are applied as if they were independent inputs and are classified using stage neural networks. Finally, the outputs from the stage networks are then weighted and combined to make a decision. Experimental results based on remote sensing data and geographic data are given. The performance of the consensual neural network architecture is compared to that of a two-layer (one hidden layer) conjugate-gradient backpropagation neural network. The results with the proposed neural network architecture compare favorably in terms of classification accuracy to the backpropagation method.
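
    A minimal numerical sketch of the consensus step (NumPy; the stage classifiers, input transforms, and weights are made-up stand-ins for trained networks):

      import numpy as np

      rng = np.random.default_rng(1)
      X = rng.random((5, 8))                         # 5 samples, 8 features

      transforms = [lambda a: a, np.sqrt, np.log1p]  # per-stage input transforms

      def stage_classifier(seed):
          W = np.random.default_rng(seed).random((8, 3))
          def predict_proba(a):
              z = a @ W
              e = np.exp(z - z.max(axis=1, keepdims=True))
              return e / e.sum(axis=1, keepdims=True)
          return predict_proba

      stages = [stage_classifier(s) for s in range(3)]
      weights = np.array([0.5, 0.3, 0.2])   # consensus weights, e.g. from the
                                            # validation accuracy of each stage
      proba = sum(w * clf(t(X)) for w, t, clf in zip(weights, transforms, stages))
      print(proba.argmax(axis=1))           # consensual class decision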

  9. Parallel Subconvolution Filtering Architectures

    NASA Technical Reports Server (NTRS)

    Gray, Andrew A.

    2003-01-01

    These architectures are based on methods of vector processing and the discrete-Fourier-transform/inverse-discrete- Fourier-transform (DFT-IDFT) overlap-and-save method, combined with time-block separation of digital filters into frequency-domain subfilters implemented by use of sub-convolutions. The parallel-processing method implemented in these architectures enables the use of relatively small DFT-IDFT pairs, while filter tap lengths are theoretically unlimited. The size of a DFT-IDFT pair is determined by the desired reduction in processing rate, rather than on the order of the filter that one seeks to implement. The emphasis in this report is on those aspects of the underlying theory and design rules that promote computational efficiency, parallel processing at reduced data rates, and simplification of the designs of very-large-scale integrated (VLSI) circuits needed to implement high-order filters and correlators.
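
    The DFT-IDFT overlap-and-save mechanism at the core of these architectures can be sketched directly (NumPy; the block size N and the moving-average subfilter are illustrative choices):

      import numpy as np

      def overlap_save(x, h, N=256):
          # FIR filtering via overlap-save: each N-point FFT/IFFT pair yields
          # N - M + 1 valid output samples, where M = len(h) (requires M <= N).
          M = len(h)
          step = N - M + 1
          H = np.fft.rfft(h, N)
          x_pad = np.concatenate([np.zeros(M - 1), x])
          out = []
          for start in range(0, len(x), step):
              block = x_pad[start:start + N]
              if len(block) < N:
                  block = np.pad(block, (0, N - len(block)))
              yb = np.fft.irfft(np.fft.rfft(block, N) * H, N)
              out.append(yb[M - 1:])        # discard the M-1 aliased samples
          return np.concatenate(out)[:len(x)]

      x = np.random.default_rng(0).standard_normal(1000)
      h = np.ones(32) / 32.0
      assert np.allclose(overlap_save(x, h), np.convolve(x, h)[:len(x)])

    In this plain form the filter length M is capped by the DFT size N; the subconvolution decomposition described in the report is what removes that cap, splitting a long filter into frequency-domain subfilters so the DFT size tracks the desired processing-rate reduction rather than the filter order.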

  10. Parallel grid population

    SciTech Connect

    Wald, Ingo; Ize, Santiago

    2015-07-28

    Parallel population of a grid with a plurality of objects using a plurality of processors. One example embodiment is a method for parallel population of a grid with a plurality of objects using a plurality of processors. The method includes a first act of dividing a grid into n distinct grid portions, where n is the number of processors available for populating the grid. The method also includes acts of dividing a plurality of objects into n distinct sets of objects, assigning a distinct set of objects to each processor such that each processor determines by which distinct grid portion(s) each object in its distinct set of objects is at least partially bounded, and assigning a distinct grid portion to each processor such that each processor populates its distinct grid portion with any objects that were previously determined to be at least partially bounded by its distinct grid portion.
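
    The two phases of the claim can be sketched as follows (Python with a process pool; the 1-D interval "objects" and portion bounds are simplified stand-ins for real grid geometry):

      import numpy as np
      from concurrent.futures import ProcessPoolExecutor

      N_PORTIONS = 4                  # one distinct grid portion per processor
      BOUNDS = np.linspace(0.0, 1.0, N_PORTIONS + 1)

      def portions_for(obj):
          # Phase 1: which grid portion(s) at least partially bound this object?
          lo, hi = obj
          return [p for p in range(N_PORTIONS)
                  if lo < BOUNDS[p + 1] and hi > BOUNDS[p]]

      def populate(item):
          # Phase 2: one processor populates its own distinct grid portion.
          portion, objs = item
          return portion, len(objs)

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          lows = rng.random(1000)
          objs = list(zip(lows, lows + 0.01))

          with ProcessPoolExecutor(N_PORTIONS) as ex:
              buckets = {p: [] for p in range(N_PORTIONS)}
              for obj, ps in zip(objs, ex.map(portions_for, objs, chunksize=250)):
                  for p in ps:
                      buckets[p].append(obj)
              for p, count in ex.map(populate, buckets.items()):
                  print(f"portion {p}: {count} objects")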

  11. Parallel Anisotropic Tetrahedral Adaptation

    NASA Technical Reports Server (NTRS)

    Park, Michael A.; Darmofal, David L.

    2008-01-01

    An adaptive method that robustly produces high aspect ratio tetrahedra to a general 3D metric specification without introducing hybrid semi-structured regions is presented. The elemental operators and higher-level logic are described with their respective domain-decomposed parallelizations. An anisotropic tetrahedral grid adaptation scheme is demonstrated for 1000:1 stretching for a simple cube geometry. This form of adaptation is applicable to more complex domain boundaries via a cut-cell approach as demonstrated by a parallel 3D supersonic simulation of a complex fighter aircraft. To avoid the assumptions and approximations required to form a metric to specify adaptation, an approach is introduced that directly evaluates interpolation error. The grid is adapted to reduce and equidistribute this interpolation error calculation without the use of an intervening anisotropic metric. Direct interpolation error adaptation is illustrated for 1D and 3D domains.

  12. Parallel multilevel preconditioners

    SciTech Connect

    Bramble, J.H.; Pasciak, J.E.; Xu, Jinchao.

    1989-01-01

    In this paper, we shall report on some techniques for the development of preconditioners for the discrete systems which arise in the approximation of solutions to elliptic boundary value problems. Here we shall only state the resulting theorems. It has been demonstrated that preconditioned iteration techniques often lead to the most computationally effective algorithms for the solution of the large algebraic systems corresponding to boundary value problems in two and three dimensional Euclidean space. The use of preconditioned iteration will become even more important on computers with parallel architecture. This paper discusses an approach for developing completely parallel multilevel preconditioners. In order to illustrate the resulting algorithms, we shall describe the simplest application of the technique to a model elliptic problem.

  13. Ultrascalable petaflop parallel supercomputer

    DOEpatents

    Blumrich, Matthias A.; Chen, Dong; Chiu, George; Cipolla, Thomas M.; Coteus, Paul W.; Gara, Alan G.; Giampapa, Mark E.; Hall, Shawn; Haring, Rudolf A.; Heidelberger, Philip; Kopcsay, Gerard V.; Ohmacht, Martin; Salapura, Valentina; Sugavanam, Krishnan; Takken, Todd

    2010-07-20

    A massively parallel supercomputer of petaOPS-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC) having up to four processing elements. The ASIC nodes are interconnected by multiple independent networks that optimally maximize the throughput of packet communications between nodes with minimal latency. The multiple networks may include three high-speed networks for parallel algorithm message passing including a Torus, collective network, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be collaboratively or independently utilized according to the needs or phases of an algorithm for optimizing algorithm processing performance. The use of a DMA engine is provided to facilitate message passing among the nodes without the expenditure of processing resources at the node.

  14. PCLIPS: Parallel CLIPS

    NASA Technical Reports Server (NTRS)

    Gryphon, Coranth D.; Miller, Mark D.

    1991-01-01

    PCLIPS (Parallel CLIPS) is a set of extensions to the C Language Integrated Production System (CLIPS) expert system language. PCLIPS is intended to provide an environment for the development of more complex, extensive expert systems. Multiple CLIPS expert systems are now capable of running simultaneously on separate processors, or separate machines, thus dramatically increasing the scope of solvable tasks within the expert systems. As a tool for parallel processing, PCLIPS allows for an expert system to add to its fact-base information generated by other expert systems, thus allowing systems to assist each other in solving a complex problem. This allows individual expert systems to be more compact and efficient, and thus run faster or on smaller machines.

  15. Homology, convergence and parallelism.

    PubMed

    Ghiselin, Michael T

    2016-01-01

    Homology is a relation of correspondence between parts of larger wholes. It is used when tracking objects of interest through space and time and in the context of explanatory historical narratives. Homologues can be traced through a genealogical nexus back to a common ancestral precursor. Homology being a transitive relation, homologues remain homologous however much they may come to differ. Analogy is a relationship of correspondence between parts of members of classes having no relationship of common ancestry. Although homology is often treated as an alternative to convergence, the latter is not a kind of correspondence: rather, it is one of a class of processes that also includes divergence and parallelism. These often give rise to misleading appearances (homoplasies). Parallelism can be particularly hard to detect, especially when not accompanied by divergences in some parts of the body. PMID:26598721

  16. Collisionless parallel shocks

    NASA Technical Reports Server (NTRS)

    Khabibrakhmanov, I. KH.; Galeev, A. A.; Galinskii, V. L.

    1993-01-01

    Consideration is given to a collisionless parallel shock based on solitary-type solutions of the modified derivative nonlinear Schroedinger equation (MDNLS) for parallel Alfven waves. The standard derivative nonlinear Schroedinger equation is generalized in order to include the possible anisotropy of the plasma distribution and higher-order Korteweg-de Vries-type dispersion. Stationary solutions of MDNLS are discussed. The anisotropic nature of 'adiabatic' reflections leads to the asymmetric particle distribution in the upstream as well as in the downstream regions of the shock. As a result, nonzero heat flux appears near the front of the shock. It is shown that this causes the stochastic behavior of the nonlinear waves, which can significantly contribute to the shock thermalization.

  17. ASSEMBLY OF PARALLEL PLATES

    DOEpatents

    Groh, E.F.; Lennox, D.H.

    1963-04-23

    This invention is concerned with a rigid assembly of parallel plates in which keyways are stamped out along the edges of the plates and a self-retaining key is inserted into aligned keyways. Spacers having similar keyways are included between adjacent plates. The entire assembly is locked into a rigid structure by fastening only the outermost plates to the ends of the keys. (AEC)

  18. Parallel sphere rendering

    SciTech Connect

    Krogh, M.; Painter, J.; Hansen, C.

    1996-10-01

    Sphere rendering is an important method for visualizing molecular dynamics data. This paper presents a parallel algorithm that is almost 90 times faster than current graphics workstations. To render extremely large data sets and large images, the algorithm uses the MIMD features of the supercomputers to divide up the data, render independent partial images, and then finally composite the multiple partial images using an optimal method. The algorithm and performance results are presented for the CM-5 and the M.

  19. Xyce parallel electronic simulator.

    SciTech Connect

    Keiter, Eric R; Mei, Ting; Russo, Thomas V.; Rankin, Eric Lamont; Schiek, Richard Louis; Thornquist, Heidi K.; Fixel, Deborah A.; Coffey, Todd S; Pawlowski, Roger P; Santarelli, Keith R.

    2010-05-01

    This document is a reference guide to the Xyce Parallel Electronic Simulator, and is a companion document to the Xyce Users Guide. The focus of this document is to list, as exhaustively as possible, device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial. Users who are new to circuit simulation are better served by the Xyce Users Guide.

  20. In-Package Chemistry Abstraction

    SciTech Connect

    E. Thomas

    2004-11-09

    This report was developed in accordance with the requirements in ''Technical Work Plan for: Regulatory Integration Modeling and Analysis of the Waste Form and Waste Package'' (BSC 2004 [DIRS 171583]). The purpose of the in-package chemistry model is to predict the bulk chemistry inside of a breached waste package and to provide simplified expressions of that chemistry as a function of time after breach to Total Systems Performance Assessment for the License Application (TSPA-LA). The scope of this report is to describe the development and validation of the in-package chemistry model. The in-package model is a combination of two models, a batch reactor model that uses the EQ3/6 geochemistry-modeling tool, and a surface complexation model that is applied to the results of the batch reactor model. The batch reactor model considers chemical interactions of water with the waste package materials and the waste form for commercial spent nuclear fuel (CSNF) waste packages and codisposed waste packages that contain both high-level waste glass (HLWG) and DOE spent fuel. The surface complexation model includes the impact of fluid-surface interactions (i.e., surface complexation) on the resulting fluid composition. The model examines two types of water influx: (1) the condensation of water vapor that diffuses into the waste package, and (2) seepage water that enters the waste package from the drift as a liquid. (1) Vapor Influx Case: The condensation of vapor onto the waste package internals is simulated as pure H2O and enters at a rate determined by the water vapor pressure for representative temperature and relative humidity conditions. (2) Water Influx Case: The water entering a waste package from the drift is simulated as typical groundwater and enters at a rate determined by the amount of seepage available to flow through openings in a breached waste package. TSPA-LA uses the vapor influx case for the nominal scenario for simulations where the waste package has been

  1. Amesos Solver Package

    SciTech Connect

    Stanley, Vendall S.; Heroux, Michael A.; Hoekstra, Robert J.; Sala, Marzio

    2004-03-01

    Amesos is the Direct Sparse Solver Package in Trilinos. The goal of Amesos is to make Ax=b as easy as it sounds, at least for direct methods. Amesos provides interfaces to a number of third party sparse direct solvers, including SuperLU, SuperLU MPI, DSCPACK, UMFPACK and KLU. Amesos provides a common object oriented interface to the best sparse direct solvers in the world. A sparse direct solver solves for x in Ax = b, where A is a matrix and x and b are vectors (or multi-vectors). A sparse direct solver first factors A into triangular matrices L and U such that A = LU via Gaussian elimination and then solves LUx = b. Switching amongst solvers in Amesos requires a change to a single parameter. Yet, no solver needs to be linked in, unless it is used. All conversions between the matrices provided by the user and the format required by the underlying solver are performed by Amesos. As new sparse direct solvers are created, they will be incorporated into Amesos, allowing the user to simply link with the new solver, change a single parameter in the calling sequence, and use the new solver. Amesos allows users to specify whether the matrix has changed. Amesos can be used anywhere that any sparse direct solver is needed.
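
    The factor-once, solve-many workflow shared by all of these backends looks like this in a generic sparse setting (SciPy's interface to SuperLU, one of the same third-party solvers Amesos wraps):

      import numpy as np
      import scipy.sparse as sp
      from scipy.sparse.linalg import splu

      n = 2000
      A = sp.diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format='csc')
      b = np.ones(n)

      lu = splu(A)      # factor A = LU once (SuperLU under the hood)
      x = lu.solve(b)   # each new right-hand side costs only triangular solves
      print(np.linalg.norm(A @ x - b))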

  2. Amesos Solver Package

    2004-03-01

    Amesos is the Direct Sparse Solver Package in Trilinos. The goal of Amesos is to make Ax=b as easy as it sounds, at least for direct methods. Amesos provides interfaces to a number of third party sparse direct solvers, including SuperLU, SuperLU MPI, DSCPACK, UMFPACK and KLU. Amesos provides a common object oriented interface to the best sparse direct solvers in the world. A sparse direct solver solves for x in Ax = b, where A is a matrix and x and b are vectors (or multi-vectors). A sparse direct solver first factors A into triangular matrices L and U such that A = LU via Gaussian elimination and then solves LUx = b. Switching amongst solvers in Amesos requires a change to a single parameter. Yet, no solver needs to be linked in, unless it is used. All conversions between the matrices provided by the user and the format required by the underlying solver are performed by Amesos. As new sparse direct solvers are created, they will be incorporated into Amesos, allowing the user to simply link with the new solver, change a single parameter in the calling sequence, and use the new solver. Amesos allows users to specify whether the matrix has changed. Amesos can be used anywhere that any sparse direct solver is needed.

  3. Packaging Considerations for Biopreservation

    PubMed Central

    Woods, Erik J.; Thirumala, Sreedhar

    2011-01-01

    Summary The packaging system chosen for biopreservation is critical for many reasons. An ideal biopreservation container system must provide for closure integrity, sample stability and ready access to the preserved material. This means the system needs to be hermetically sealed to ensure integrity of the specimen is maintained throughout processing, storage and distribution; the system must remain stable over long periods of time as many biobanked samples may be stored indefinitely; and functionally closed access systems must be used to avoid contamination upon sample withdrawal. This study reviews the suitability of a new commercially available vial-configuration container utilizing blood-bag-style closure and access systems that can be hermetically sealed and remain stable through cryopreservation and biobanking procedures. This vial-based system allows for current good manufacturing/tissue practice (cGTP) requirements during processing of samples and may provide the benefit of ease of delivery by a caregiver. In this study, the CellSeal® closed-system cryovial was evaluated and compared to standard screw-cap vials. The CellSeal system was evaluated for durability, closure integrity through transportation, and maintenance of functional viability of a cryopreserved mesenchymal stem cell model. The results of this initial proof-of-concept study indicated that the CellSeal vials are highly suitable for biopreservation and biobanking, and provide a suitable container system for clinical and commercial cell therapy products frozen in small volumes. PMID:21566715

  4. Laser Welding in Electronic Packaging

    NASA Technical Reports Server (NTRS)

    2000-01-01

    The laser has proven its worth in numerous high reliability electronic packaging applications ranging from medical to missile electronics. In particular, the pulsed YAG laser is an extremely flexible and versatile tool capable of hermetically sealing microelectronics packages containing sensitive components without damaging them. This paper presents an overview of details that must be considered for successful use of laser welding when addressing electronic package sealing. These include metallurgical considerations such as alloy and plating selection, weld joint configuration, design of optics, use of protective gases, and control of thermal distortions. The primary limitations on the use of laser welding for electronic packaging applications are economic ones. The laser itself is a relatively costly device when compared to competing welding equipment. Further, the cost of consumables and repairs can be significant. These facts have relegated laser welding to use only where it presents distinct quality or reliability advantages over other techniques of electronic package sealing. Because of the unique noncontact and low-heat-input characteristics of laser welding, it is an ideal candidate for sealing electronic packages containing MEMS devices (microelectromechanical systems). This paper addresses how the unique advantages of the pulsed YAG laser can be used to simplify MEMS packaging and deliver a product of improved quality.

  5. Naval Waste Package Design Sensitivity

    SciTech Connect

    T. Schmitt

    2006-12-13

    The purpose of this calculation is to determine the sensitivity of the structural response of the Naval waste packages to varying inner cavity dimensions when subjected to a corner drop and tip-over from an elevated surface. This calculation will also determine the sensitivity of the structural response of the Naval waste packages to the upper bound of the naval canister masses. The scope of this document is limited to reporting the calculation results in terms of through-wall stress intensities in the outer corrosion barrier. This calculation is intended for use in support of the preliminary design activities for the license application design of the Naval waste package. It examines the effects of small changes between the naval canister and the inner vessel, and in these dimensions the Naval Long waste package and Naval Short waste package are similar. Therefore, only the Naval Long waste package is used in this calculation and is based on the proposed potential designs presented by the drawings and sketches in References 2.1.10 to 2.1.17 and 2.1.20. All conclusions are valid for both the Naval Long and Naval Short waste packages.

  6. Optimal segmentation and packaging process

    DOEpatents

    Kostelnik, K.M.; Meservey, R.H.; Landon, M.D.

    1999-08-10

    A process for improving packaging efficiency uses three-dimensional, computer-simulated models with various optimization algorithms to determine the optimal segmentation process and packaging configurations based on constraints including container limitations. The present invention is applied to a process for decontaminating, decommissioning (D&D), and remediating a nuclear facility involving the segmentation and packaging of contaminated items in waste containers in order to minimize the number of cuts, maximize packaging density, and reduce worker radiation exposure. A three-dimensional, computer-simulated facility model of the contaminated items is created. The contaminated items are differentiated. The optimal location, orientation and sequence of the segmentation and packaging of the contaminated items is determined using the simulated model, the algorithms, and various constraints including container limitations. The cut locations and orientations are transposed to the simulated model. The contaminated items are actually segmented and packaged. The segmentation and packaging may be simulated beforehand. In addition, the contaminated items may be cataloged and recorded. 3 figs.

  7. Safety evaluation for packaging (onsite) concrete-lined waste packaging

    SciTech Connect

    Romano, T.

    1997-09-25

    The Pacific Northwest National Laboratory developed a package to ship Type A, non-transuranic, fissile excepted quantities of liquid or solid radioactive material and radioactive mixed waste to the Central Waste Complex for storage on the Hanford Site.

  8. Packaging of solid state devices

    DOEpatents

    Glidden, Steven C.; Sanders, Howard D.

    2006-01-03

    A package for one or more solid state devices in a single module that allows for operation at high voltage, high current, or both high voltage and high current. Low thermal resistance between the solid state devices and an exterior of the package and matched coefficient of thermal expansion between the solid state devices and the materials used in packaging enables high power operation. The solid state devices are soldered between two layers of ceramic with metal traces that interconnect the devices and external contacts. This approach provides a simple method for assembling and encapsulating high power solid state devices.

  9. Microelectronics packaging research directions for aerospace applications

    NASA Technical Reports Server (NTRS)

    Galbraith, L.

    2003-01-01

    The Roadmap begins with an assessment of needs from the viewpoint of microelectronics for aerospace applications. The needs assessment is divided into materials, packaging components, and radiation characterization of packaging.

  10. Optimising a parallel conjugate gradient solver

    SciTech Connect

    Field, M.R.

    1996-12-31

    This work arises from the introduction of a parallel iterative solver to a large structural analysis finite element code. The code is called FEX and it was developed at Hitachi's Mechanical Engineering Laboratory. The FEX package can deal with a large range of structural analysis problems using a large number of finite element techniques. FEX can solve either stress or thermal analysis problems of a range of different types from plane stress to a full three-dimensional model. These problems can consist of a number of different materials which can be modelled by a range of material models. The structure being modelled can have the load applied at either a point or a surface, or by a pressure, a centrifugal force or just gravity. Alternatively a thermal load can be applied with a given initial temperature. The displacement of the structure can be constrained by having a fixed boundary or by prescribing the displacement at a boundary.
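
    The kernel of such a solver is the preconditioned conjugate gradient iteration itself; a serial reference version (NumPy, with Jacobi preconditioning as an illustrative choice) makes clear which pieces, the matrix-vector product and the dot products, a parallel implementation must distribute:

      import numpy as np
      import scipy.sparse as sp

      def pcg(A, b, M_inv_diag, tol=1e-10, maxit=500):
          # Preconditioned CG for SPD A; M_inv_diag holds the inverse
          # diagonal (Jacobi preconditioner).
          x = np.zeros_like(b)
          r = b - A @ x
          z = M_inv_diag * r
          p = z.copy()
          rz = r @ z
          for _ in range(maxit):
              Ap = A @ p                 # the matvec a parallel solver distributes
              alpha = rz / (p @ Ap)
              x += alpha * p
              r -= alpha * Ap
              if np.linalg.norm(r) < tol * np.linalg.norm(b):
                  break
              z = M_inv_diag * r
              rz_new = r @ z             # global dot products need communication
              p = z + (rz_new / rz) * p
              rz = rz_new
          return x

      n = 10000
      A = sp.diags([-1.0, 2.1, -1.0], [-1, 0, 1], shape=(n, n), format='csr')
      b = np.ones(n)
      x = pcg(A, b, M_inv_diag=1.0 / A.diagonal())
      print(np.linalg.norm(A @ x - b))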

  11. Trajectory optimization using parallel shooting method on parallel computer

    SciTech Connect

    Wirthman, D.J.; Park, S.Y.; Vadali, S.R.

    1995-03-01

    The efficiency of a parallel shooting method on a parallel computer for solving a variety of optimal control guidance problems is studied. Several examples are considered to demonstrate that a speedup of nearly 7 to 1 is achieved with the use of 16 processors. It is suggested that further improvements in performance can be achieved by parallelizing in the state domain. 10 refs.

  12. TOUGH2_MP: A parallel version of TOUGH2

    SciTech Connect

    Zhang, Keni; Wu, Yu-Shu; Ding, Chris; Pruess, Karsten

    2003-04-09

    TOUGH2_MP is a massively parallel version of TOUGH2. It was developed for running on distributed-memory parallel computers to simulate large simulation problems that may not be solved by the standard, single-CPU TOUGH2 code. The new code implements an efficient massively parallel scheme, while preserving the full capacity and flexibility of the original TOUGH2 code. The new software uses the METIS software package for grid partitioning and the AZTEC software package for linear-equation solving. The standard Message Passing Interface is adopted for communication among processors. Numerical performance of the current version code has been tested on CRAY-T3E and IBM RS/6000 SP platforms. In addition, the parallel code has been successfully applied to real field problems of multi-million-cell simulations for three-dimensional multiphase and multicomponent fluid and heat flow, as well as solute transport. In this paper, we will review the development of TOUGH2_MP, and discuss the basic features, modules, and their applications.

  13. Resistor Combinations for Parallel Circuits.

    ERIC Educational Resources Information Center

    McTernan, James P.

    1978-01-01

    To help simplify both teaching and learning of parallel circuits, a high school electricity/electronics teacher presents and illustrates the use of tables of values for parallel resistive circuits in which total resistances are whole numbers. (MF)
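
    A quick way to see where such whole-number tables come from (an illustrative Python sketch, not taken from the article): the parallel combination of two resistors is R = R1*R2/(R1 + R2), so we can simply enumerate the pairs whose total is an integer.

      # Enumerate two-resistor parallel pairs (values in ohms) whose
      # total resistance R1*R2/(R1 + R2) is a whole number, as in the
      # tables the article describes.
      for r1 in range(1, 25):
          for r2 in range(r1, 25):
              num, den = r1 * r2, r1 + r2
              if num % den == 0:
                  print(f"{r1:>2} || {r2:>2} = {num // den}")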

  14. The Galley Parallel File System

    NASA Technical Reports Server (NTRS)

    Nieuwejaar, Nils; Kotz, David

    1996-01-01

    As the I/O needs of parallel scientific applications increase, file systems for multiprocessors are being designed to provide applications with parallel access to multiple disks. Many parallel file systems present applications with a conventional Unix-like interface that allows the application to access multiple disks transparently. The interface conceals the parallelism within the file system, which increases the ease of programmability, but makes it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. Furthermore, most current parallel file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic parallel workloads. We discuss Galley's file structure and application interface, as well as an application that has been implemented using that interface.

  15. Asynchronous interpretation of parallel microprograms

    SciTech Connect

    Bandman, O.L.

    1984-03-01

    In this article, the authors demonstrate how to pass from a given synchronous interpretation of a parallel microprogram to an equivalent asynchronous interpretation, and investigate the cost associated with the rejection of external synchronization in parallel microprogram structures.

  16. New Packaging for Amplifier Slabs

    SciTech Connect

    Riley, M.; Thorsness, C.; Suratwala, T.; Steele, R.; Rogowski, G.

    2015-03-18

    The following memo provides a discussion and detailed procedure for a new finished amplifier slab shipping and storage container. The new package is designed to maintain an environment of <5% RH to minimize weathering.

  17. Spack: the Supercomputing Package Manager

    SciTech Connect

    Gamblin, T.

    2013-11-09

    The HPC software ecosystem is growing larger and more complex, but software distribution mechanisms have not kept up with this trend. Tools, libraries, and applications need to run on multiple platforms and build with multiple compilers. Increasingly, packages leverage common software components, and building any one component requires building all of its dependencies. In HPC environments, ABI-incompatible interfaces (like MPI), binary-incompatible compilers, and cross-compiled environments converge to make the build process a combinatoric nightmare. This obstacle deters many users from adopting useful tools, and others waste countless hours building and rebuilding tools. Many package managers exist to solve these problems for typical desktop environments, but none suits the unique needs of supercomputing facilities or users. To address these problems, we have built Spack, a package manager that eases the task of managing software for end-users, across multiple platforms, package versions, compilers, and ABI incompatibilities.

  18. High Frequency Electronic Packaging Technology

    NASA Technical Reports Server (NTRS)

    Herman, M.; Lowry, L.; Lee, K.; Kolawa, E.; Tulintseff, A.; Shalkhauser, K.; Whitaker, J.; Piket-May, M.

    1994-01-01

    Commercial and government communication, radar, and information systems face the challenge of cost and mass reduction via the application of advanced packaging technology. A majority of both government and industry support has been focused on low frequency digital electronics.

  20. Parallel Pascal - An extended Pascal for parallel computers

    NASA Technical Reports Server (NTRS)

    Reeves, A. P.

    1984-01-01

    Parallel Pascal is an extended version of the conventional serial Pascal programming language which includes a convenient syntax for specifying array operations. It is upward compatible with standard Pascal and involves only a small number of carefully chosen new features. Parallel Pascal was developed to reduce the semantic gap between standard Pascal and a large range of highly parallel computers. Two important design goals of Parallel Pascal were efficiency and portability. Portability is particularly difficult to achieve since different parallel computers frequently have very different capabilities.

  1. QCMPI: A parallel environment for quantum computing

    NASA Astrophysics Data System (ADS)

    Tabakin, Frank; Juliá-Díaz, Bruno

    2009-06-01

    QCMPI is a quantum computer (QC) simulation package written in Fortran 90 with parallel processing capabilities. It is an accessible research tool that permits rapid evaluation of quantum algorithms for a large number of qubits and for various "noise" scenarios. The prime motivation for developing QCMPI is to facilitate numerical examination not only of how QC algorithms work, but also to include noise, decoherence, and attenuation effects and to evaluate the efficacy of error correction schemes. The present work builds on an earlier Mathematica code QDENSITY, which is mainly a pedagogic tool. In that earlier work, although the density matrix formulation was featured, the description using state vectors was also provided. In QCMPI, the stress is on state vectors, in order to employ a large number of qubits. The parallel processing feature is implemented by using the Message-Passing Interface (MPI) protocol. A description of how to spread the wave function components over many processors is provided, along with how to efficiently describe the action of general one- and two-qubit operators on these state vectors. These operators include the standard Pauli, Hadamard, CNOT and CPHASE gates and also the quantum Fourier transformation. These operators make up the actions needed in QC. Codes for Grover's search and Shor's factoring algorithms are provided as examples. A major feature of this work is that concurrent versions of the algorithms can be evaluated with each version subject to alternate noise effects, which corresponds to the idea of solving a stochastic Schrödinger equation. The density matrix for the ensemble of such noise cases is constructed using parallel distribution methods to evaluate its eigenvalues and associated entropy. Potential applications of this powerful tool include studies of the stability and correction of QC processes using Hamiltonian-based dynamics. Program summary: Program title: QCMPI; Catalogue identifier: AECS_v1_0; Program summary URL
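
    As a serial illustration of the state-vector update that QCMPI distributes over MPI ranks (a numpy sketch written for this summary, not the package's Fortran 90 code): a one-qubit gate mixes pairs of amplitudes whose indices differ only in the target bit.

      import numpy as np

      def apply_one_qubit_gate(state, gate, target):
          # Mix amplitude pairs whose indices differ only in the
          # target bit (bit 0 = least significant). QCMPI spreads
          # these pairs over MPI ranks; this loop is the serial core.
          stride = 1 << target
          for base in range(0, state.size, 2 * stride):
              for offset in range(stride):
                  i0 = base + offset        # target bit clear
                  i1 = i0 + stride          # target bit set
                  a0, a1 = state[i0], state[i1]
                  state[i0] = gate[0, 0] * a0 + gate[0, 1] * a1
                  state[i1] = gate[1, 0] * a0 + gate[1, 1] * a1
          return state

      # Hadamard on qubit 0 of a 3-qubit register in |000>.
      H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
      psi = np.zeros(8, dtype=complex)
      psi[0] = 1.0
      print(apply_one_qubit_gate(psi, H, target=0))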

  2. Parallelized nested sampling

    NASA Astrophysics Data System (ADS)

    Henderson, R. Wesley; Goggans, Paul M.

    2014-12-01

    One of the important advantages of nested sampling as an MCMC technique is its ability to draw representative samples from multimodal distributions and distributions with other degeneracies. This coverage is accomplished by maintaining a number of so-called live samples within a likelihood constraint. In usual practice, at each step, only the sample with the least likelihood is discarded from this set of live samples and replaced. In [1], Skilling shows that for a given number of live samples, discarding only one sample yields the highest precision in estimation of the log-evidence. However, if we increase the number of live samples, more samples can be discarded at once while still maintaining the same precision. For computer code running only serially, this modification would considerably increase the wall clock time necessary to reach convergence. However, if we use a computer with parallel processing capabilities, and we write our code to take advantage of this parallelism to replace multiple samples concurrently, the performance penalty can be eliminated entirely and possibly reversed. In this case, we must use the more general equation in [1] for computing the expectation of the shrinkage distribution: $E[-\log t] = (N_r - r + 1)^{-1} + (N_r - r + 2)^{-1} + \cdots + N_r^{-1}$, for shrinkage $t$ with $N_r$ live samples and $r$ samples discarded at each iteration. The corresponding variance, $\operatorname{Var}(-\log t) = (N_r - r + 1)^{-2} + (N_r - r + 2)^{-2} + \cdots + N_r^{-2}$, is used to find the appropriate number of live samples $N_r$ to use with $r > 1$ to match the variance achieved with $N_1$ live samples and $r = 1$. In this paper, we show that by replacing multiple discarded samples in parallel, we are able to achieve a more thorough sampling of the constrained prior distribution, reduce runtime, and increase precision.
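
    A minimal sketch of how these formulas can be used to choose N_r (Python; it assumes, as a simplification, that the variance of the accumulated log-evidence grows in proportion to Var(-log t)/E[-log t], since the number of iterations needed scales as 1/E[-log t]):

      def shrinkage_moments(n_live, r):
          # E[-log t] and Var(-log t) from [1] for n_live live samples
          # with r discarded per iteration.
          ks = range(n_live - r + 1, n_live + 1)
          mean = sum(1.0 / k for k in ks)
          var = sum(1.0 / k**2 for k in ks)
          return mean, var

      def matching_live_samples(n1, r):
          # Smallest N_r whose accumulated variance (proportional to
          # Var/E per iteration, under the assumption above) is no
          # worse than the serial baseline of N1 live samples, r = 1.
          m1, v1 = shrinkage_moments(n1, 1)
          budget = v1 / m1
          nr = max(r, 2)
          while True:
              m, v = shrinkage_moments(nr, r)
              if v / m <= budget:
                  return nr
              nr += 1

      print(matching_live_samples(100, 4))  # only a little above 100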

  3. Packaging Review Guide for Reviewing Safety Analysis Reports for Packagings

    SciTech Connect

    DiSabatino, A; Biswas, D; DeMicco, M; Fisher, L E; Hafner, R; Haslam, J; Mok, G; Patel, C; Russell, E

    2007-04-12

    This Packaging Review Guide (PRG) provides guidance for Department of Energy (DOE) review and approval of packagings to transport fissile and Type B quantities of radioactive material. It fulfills, in part, the requirements of DOE Order 460.1B for the Headquarters Certifying Official to establish standards and to provide guidance for the preparation of Safety Analysis Reports for Packagings (SARPs). This PRG is intended for use by the Headquarters Certifying Official and his or her review staff, DOE Secretarial offices, operations/field offices, and applicants for DOE packaging approval. This PRG is generally organized at the section level in a format similar to that recommended in Regulatory Guide 7.9 (RG 7.9). One notable exception is the addition of Section 9 (Quality Assurance), which is not included as a separate chapter in RG 7.9. Within each section, this PRG addresses the technical and regulatory bases for the review, the manner in which the review is accomplished, and findings that are generally applicable for a package that meets the approval standards. The primary objectives of this PRG are to: (1) Summarize the regulatory requirements for package approval; (2) Describe the technical review procedures by which DOE determines that these requirements have been satisfied; (3) Establish and maintain the quality and uniformity of reviews; (4) Define the base from which to evaluate proposed changes in scope

  4. Parallel sphere rendering

    SciTech Connect

    Krogh, M.; Hansen, C.; Painter, J.; de Verdiere, G.C.

    1995-05-01

    Sphere rendering is an important method for visualizing molecular dynamics data. This paper presents a parallel divide-and-conquer algorithm that is almost 90 times faster than current graphics workstations. To render extremely large data sets and large images, the algorithm uses the MIMD features of the supercomputers to divide up the data, render independent partial images, and then finally composite the multiple partial images using an optimal method. The algorithm and performance results are presented for the CM-5 and the T3D.

  5. Parallel Eclipse Project Checkout

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas M.; Joswig, Joseph C.; Shams, Khawaja S.; Powell, Mark W.; Bachmann, Andrew G.

    2011-01-01

    Parallel Eclipse Project Checkout (PEPC) is a program written to leverage parallelism and to automate the checkout process of plug-ins created in Eclipse RCP (Rich Client Platform). Eclipse plug-ins can be aggregated in a feature project. This innovation digests a feature description (xml file) and automatically checks out all of the plug-ins listed in the feature. This resolves the issue of manually checking out each plug-in required to work on the project. To minimize the amount of time necessary to checkout the plug-ins, this program makes the plug-in checkouts parallel. After parsing the feature, a request to checkout for each plug-in in the feature has been inserted. These requests are handled by a thread pool with a configurable number of threads. By checking out the plug-ins in parallel, the checkout process is streamlined before getting started on the project. For instance, projects that took 30 minutes to checkout now take less than 5 minutes. The effect is especially clear on a Mac, which has a network monitor displaying the bandwidth use. When running the client from a developer's home, the checkout process now saturates the bandwidth in order to get all the plug-ins checked out as fast as possible. For comparison, a checkout process that ranged from 8-200 Kbps from a developer's home is now able to saturate a pipe of 1.3 Mbps, resulting in significantly faster checkouts. Eclipse IDE (integrated development environment) tries to build a project as soon as it is downloaded. As part of another optimization, this innovation programmatically tells Eclipse to stop building while checkouts are happening, which dramatically reduces lock contention and enables plug-ins to continue downloading until all of them finish. Furthermore, the software re-enables automatic building, and forces Eclipse to do a clean build once it finishes checking out all of the plug-ins. This software is fully generic and does not contain any NASA-specific code. It can be applied to any
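
    A minimal sketch of the same pattern (Python, with a hypothetical checkout_plugin helper and repository layout; PEPC itself targets Eclipse RCP): parse the feature file, then hand one checkout request per plug-in to a bounded thread pool.

      import subprocess
      import xml.etree.ElementTree as ET
      from concurrent.futures import ThreadPoolExecutor

      def plugins_in_feature(feature_xml):
          # Each <plugin id="..."/> element names one plug-in to fetch.
          root = ET.parse(feature_xml).getroot()
          return [p.get("id") for p in root.iter("plugin")]

      def checkout_plugin(plugin_id, repo_base):
          # Hypothetical: one SVN checkout per plug-in.
          subprocess.run(["svn", "checkout", f"{repo_base}/{plugin_id}"],
                        check=True)

      def parallel_checkout(feature_xml, repo_base, workers=8):
          # A bounded thread pool issues the checkouts concurrently,
          # saturating the network instead of fetching one by one.
          with ThreadPoolExecutor(max_workers=workers) as pool:
              futures = [pool.submit(checkout_plugin, pid, repo_base)
                         for pid in plugins_in_feature(feature_xml)]
              for f in futures:
                  f.result()  # re-raise any checkout failure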

  6. Highly parallel computation

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.; Tichy, Walter F.

    1990-01-01

    Highly parallel computing architectures are the only means to achieve the computation rates demanded by advanced scientific problems. A decade of research has demonstrated the feasibility of such machines, and current research focuses on which architectures are best suited. The two designations, multiple instruction multiple datastream (MIMD) and single instruction multiple datastream (SIMD), have both produced good results to date; neither shows a decisive advantage for most near-homogeneous scientific problems. For scientific problems with many dissimilar parts, more speculative architectures such as neural networks or data flow may be needed.

  7. Parallel Kinematic Machines (PKM)

    SciTech Connect

    Henry, R.S.

    2000-03-17

    The purpose of this 3-year cooperative research project was to develop a parallel kinematic machining (PKM) capability for complex parts that normally require expensive multiple setups on conventional orthogonal machine tools. This non-conventional, non-orthogonal machining approach is based on a 6-axis positioning system commonly referred to as a hexapod. Sandia National Laboratories/New Mexico (SNL/NM) was the lead site responsible for a multitude of projects that defined the machining parameters and detailed the metrology of the hexapod. The role of the Kansas City Plant (KCP) in this project was limited to evaluating the application of this unique technology to production applications.

  8. Using the Parallel Virtual Machine for Everyday Analysis

    NASA Astrophysics Data System (ADS)

    Noble, M. S.; Houck, J. C.; Davis, J. E.; Young, A.; Nowak, M.

    2006-07-01

    A review of the literature reveals that while parallel computing is sometimes employed by astronomers for custom, large-scale calculations, no package fosters the routine application of parallel methods to standard problems in astronomical data analysis. This paper describes our attempt to close that gap by wrapping the Parallel Virtual Machine (PVM) as a scriptable S-Lang module. Using PVM within ISIS, the Interactive Spectral Interpretation System, we've distributed a number of representative calculations over a network of 25+ CPUs to achieve dramatic reductions in execution times. We discuss how the approach applies to a wide class of modeling problems, outline our efforts to make it more transparent for common use, and note its growing importance in the context of the large, multi-wavelength datasets used in modern analysis.

  9. CSM parallel structural methods research

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf O.

    1989-01-01

    Parallel structural methods, research team activities, advanced architecture computers for parallel computational structural mechanics (CSM) research, the FLEX/32 multicomputer, a parallel structural analysis testbed, a blade-stiffened aluminum panel with a circular cutout, and the dynamic characteristics of a 60 meter, 54-bay, 3-longeron deployable truss beam are among the topics discussed.

  10. Roo: A parallel theorem prover

    SciTech Connect

    Lusk, E.L.; McCune, W.W.; Slaney, J.K.

    1991-11-01

    We describe a parallel theorem prover based on the Argonne theorem-proving system OTTER. The parallel system, called Roo, runs on shared-memory multiprocessors such as the Sequent Symmetry. We explain the parallel algorithm used and give performance results that demonstrate near-linear speedups on large problems.

  11. Watermarking spot colors in packaging

    NASA Astrophysics Data System (ADS)

    Reed, Alastair; Filler, Tomáš; Falkenstern, Kristyn; Bai, Yang

    2015-03-01

    In January 2014, Digimarc announced Digimarc® Barcode for the packaging industry to improve the check-out efficiency and customer experience for retailers. Digimarc Barcode is a machine readable code that carries the same information as a traditional Universal Product Code (UPC) and is introduced by adding a robust digital watermark to the package design. It is imperceptible to the human eye but can be read by a modern barcode scanner at the Point of Sale (POS) station. Compared to a traditional linear barcode, Digimarc Barcode covers the whole package with minimal impact on the graphic design. This significantly improves the Items per Minute (IPM) metric, which retailers use to track checkout efficiency since it closely relates to their profitability. Increasing IPM by a few percent could lead to potential savings of millions of dollars for retailers, giving them a strong incentive to add the Digimarc Barcode to their packages. Testing performed by Digimarc showed increases in IPM of at least 33% using the Digimarc Barcode, compared to using a traditional barcode. A method of watermarking print-ready image data used in the commercial packaging industry is described. A significant proportion of packages are printed using spot colors, so spot colors need to be supported by an embedder for Digimarc Barcode. Digimarc Barcode supports the PANTONE spot color system, which is commonly used in the packaging industry. The Digimarc Barcode embedder allows a user to insert the UPC code in an image while minimizing perceptibility to the Human Visual System (HVS). The Digimarc Barcode is inserted in the printing ink domain, using an Adobe Photoshop plug-in as the last step before printing. Since Photoshop is an industry standard widely used by pre-press shops in the packaging industry, a Digimarc Barcode can be easily inserted and proofed.

  12. TRU waste transportation package development

    SciTech Connect

    Eakes, R. G.; Lamoreaux, G. H.; Romesberg, L. E.; Sutherland, S. H.; Duffey, T. A.

    1980-01-01

    Inventories of the transuranic wastes buried or stored at various US DOE sites are tabulated. The leading conceptual design of Type-B packaging for contact-handled transuranic waste is the Transuranic Package Transporter (TRUPACT), a large metal container comprising inner and outer tubular steel frameworks which are separated by rigid polyurethane foam and sheathed with steel plate. Testing of TRUPACT is reported. The schedule for its development is given. 6 figures. (DLC)

  13. Parallelized direct execution simulation of message-passing parallel programs

    NASA Technical Reports Server (NTRS)

    Dickens, Phillip M.; Heidelberger, Philip; Nicol, David M.

    1994-01-01

    As massively parallel computers proliferate, there is growing interest in finding ways by which the performance of massively parallel codes can be efficiently predicted. This problem arises in diverse contexts such as parallelizing compilers, parallel performance monitoring, and parallel algorithm development. In this paper we describe one solution where one directly executes the application code, but uses a discrete-event simulator to model details of the presumed parallel machine, such as operating system and communication network behavior. Because this approach is computationally expensive, we are interested in its own parallelization, specifically the parallelization of the discrete-event simulator. We describe methods suitable for parallelized direct execution simulation of message-passing parallel programs, and report on the performance of such a system, the Large Application Parallel Simulation Environment (LAPSE), which we have built on the Intel Paragon. On all codes measured to date, LAPSE predicts performance well, typically within 10 percent relative error. Depending on the nature of the application code, we have observed low slowdowns (relative to natively executing code) and high relative speedups using up to 64 processors.

  14. Performance analysis of large scale parallel CFD computing based on Code_Saturne

    NASA Astrophysics Data System (ADS)

    Shang, Zhi

    2013-02-01

    In order to run computational fluid dynamics (CFD) codes at large scale, parallel computing has to be employed. At the Petascale, general parallel computing without any optimization is not enough, especially for complex industrial problems that employ large numbers of mesh cells to capture the details of the geometry. Distributing these mesh cells among the processors of Terascale and Petascale systems so as to obtain good parallel performance is a real challenge. Several mesh partitioning software packages, such as Metis, ParMetis, PT-Scotch and Zoltan, were chosen as candidates and ported into Code_Saturne to test whether they can carry Code_Saturne towards Petascale and Exascale parallel CFD computing. Through these studies, it was found that mesh partitioning optimization packages based on the graph partitioning method can help the CFD code obtain good mesh distributions for high performance computing (HPC).
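
    For a flavor of the graph-based partitioning being benchmarked here (a toy sketch using the pymetis binding to METIS, assumed installed; this is not Code_Saturne's internal interface): mesh cells become graph vertices, shared faces become edges, and METIS balances the parts while minimizing cut edges, i.e. inter-processor communication.

      import pymetis

      # Toy mesh: 6 cells in a 2x3 strip; adjacency[i] lists the
      # cells sharing a face with cell i.
      adjacency = [[1, 3], [0, 2, 4], [1, 5],
                   [0, 4], [1, 3, 5], [2, 4]]

      n_parts = 2
      n_cuts, membership = pymetis.part_graph(n_parts, adjacency=adjacency)
      # membership[i] is the processor that owns cell i; METIS balances
      # part sizes while minimizing the number of cut faces (n_cuts),
      # which corresponds to the halo exchanged between processors.
      print(n_cuts, membership)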

  15. Massively Parallel QCD

    SciTech Connect

    Soltz, R; Vranas, P; Blumrich, M; Chen, D; Gara, A; Giampapa, M; Heidelberger, P; Salapura, V; Sexton, J; Bhanot, G

    2007-04-11

    The theory of the strong nuclear force, Quantum Chromodynamics (QCD), can be numerically simulated from first principles on massively-parallel supercomputers using the method of Lattice Gauge Theory. We describe the special programming requirements of lattice QCD (LQCD) as well as the optimal supercomputer hardware architectures that it suggests. We demonstrate these methods on the BlueGene massively-parallel supercomputer and argue that LQCD and the BlueGene architecture are a natural match. This can be traced to the simple fact that LQCD is a regular lattice discretization of space into lattice sites while the BlueGene supercomputer is a discretization of space into compute nodes, and that both are constrained by requirements of locality. This simple relation is both technologically important and theoretically intriguing. The main result of this paper is the speedup of LQCD using up to 131,072 CPUs on the largest BlueGene/L supercomputer. The speedup is perfect with sustained performance of about 20% of peak. This corresponds to a maximum of 70.5 sustained TFlop/s. At these speeds LQCD and BlueGene are poised to produce the next generation of strong interaction physics theoretical results.

  16. Parallel ptychographic reconstruction

    PubMed Central

    Nashed, Youssef S. G.; Vine, David J.; Peterka, Tom; Deng, Junjing; Ross, Rob; Jacobsen, Chris

    2014-01-01

    Ptychography is an imaging method whereby a coherent beam is scanned across an object, and an image is obtained by iterative phasing of the set of diffraction patterns. It can image extended objects at a resolution limited by the scattering strength of the object and the detector geometry, rather than at an optics-imposed limit. As technical advances allow larger fields to be imaged, computational challenges arise for reconstructing the correspondingly larger data volumes, yet at the same time there is also a need to deliver reconstructed images immediately so that one can evaluate the next steps to take in an experiment. Here we present a parallel method for real-time ptychographic phase retrieval. It uses a hybrid parallel strategy to divide the computation between multiple graphics processing units (GPUs) and then employs novel techniques to merge sub-datasets into a single complex phase and amplitude image. Results are shown on a simulated specimen and a real dataset from an X-ray experiment conducted at a synchrotron light source. PMID:25607174

  17. Making parallel lines meet

    PubMed Central

    Baskin, Tobias I.; Gu, Ying

    2012-01-01

    The extracellular matrix is constructed beyond the plasma membrane, challenging mechanisms for its control by the cell. In plants, the cell wall is highly ordered, with cellulose microfibrils aligned coherently over a scale spanning hundreds of cells. To a considerable extent, deploying aligned microfibrils determines mechanical properties of the cell wall, including strength and compliance. Cellulose microfibrils have long been seen to be aligned in parallel with an array of microtubules in the cell cortex. How do these cortical microtubules affect the cellulose synthase complex? This question has stood for as many years as the parallelism between the elements has been observed, but now an answer is emerging. Here, we review recent work establishing that the link between microtubules and microfibrils is mediated by a protein named cellulose synthase-interacting protein 1 (CSI1). The protein binds both microtubules and components of the cellulose synthase complex. In the absence of CSI1, microfibrils are synthesized but their alignment becomes uncoupled from the microtubules, an effect that is phenocopied in the wild type by depolymerizing the microtubules. The characterization of CSI1 significantly enhances knowledge of how cellulose is aligned, a process that serves as a paradigmatic example of how cells dictate the construction of their extracellular environment. PMID:22902763

  18. Applied Parallel Metadata Indexing

    SciTech Connect

    Jacobi, Michael R

    2012-08-01

    The GPFS Archive is a parallel archive used by hundreds of users in the Turquoise collaboration network. It houses 4+ petabytes of data in more than 170 million files. Currently, users must navigate the file system to retrieve their data, requiring them to remember file paths and names. A better solution might allow users to tag data with meaningful labels and search the archive using standard and user-defined metadata, while maintaining security. Last summer, I developed the backend to a tool that adheres to these design goals. The backend works by importing GPFS metadata into a MongoDB cluster, which is then indexed on each attribute. This summer, the author implemented security and developed the user interface for the search tool. To meet security requirements, each database table is associated with a single user, only stores records that the user may read, and requires a set of credentials to access. The interface to the search tool is implemented using FUSE (Filesystem in USErspace). FUSE is an intermediate layer that intercepts file system calls and allows the developer to redefine how those calls behave. In the case of this tool, FUSE interfaces with MongoDB to issue queries and populate output. A FUSE implementation is desirable because it allows users to interact with the search tool using commands they are already familiar with. These security and interface additions are essential for a usable product.
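
    A hedged sketch of that backend design (Python with pymongo; the database, collection, and field names are illustrative assumptions, not the tool's actual schema):

      from pymongo import MongoClient

      client = MongoClient("mongodb://localhost:27017")
      # One per-user table, holding only records this user may read.
      files = client["gpfs_archive"]["files_alice"]

      # Index each searchable attribute (done once, at import time).
      for attr in ("name", "size", "mtime", "tags"):
          files.create_index(attr)

      # Find this user's files tagged 'simulation' larger than 1 GiB.
      hits = files.find({"tags": "simulation", "size": {"$gt": 2**30}})
      for doc in hits:
          print(doc["name"], doc["size"])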

  19. Rapid Active Sampling Package

    NASA Technical Reports Server (NTRS)

    Peters, Gregory

    2010-01-01

    A field-deployable, battery-powered Rapid Active Sampling Package (RASP), originally designed for sampling strong materials during lunar and planetary missions, shows strong utility for terrestrial geological use. The technology is proving to be simple and effective for sampling and processing materials of strength. Although it was originally intended for planetary and lunar applications, the RASP is very useful as a powered hand tool for geologists and the mining industry to quickly sample and process rocks in the field on Earth. The RASP allows geologists to surgically acquire samples of rock for later laboratory analysis. This tool, roughly the size of a wrench, allows the user to cut away swaths of weathering rinds, revealing pristine rock surfaces for observation and subsequent sampling with the same tool. RASPing deeper (~3.5 cm) exposes single rock strata in-situ. Where a geologist's hammer can only expose unweathered layers of rock, the RASP can do the same, and then has the added ability to capture and process samples into powder with particle sizes less than 150 microns, making them easier to analyze by XRD/XRF (x-ray diffraction/x-ray fluorescence). The tool uses a rotating rasp bit (or two counter-rotating bits) that resides inside or above the catch container. The container has an open slot to allow the bit to extend outside the container and to allow cuttings to enter and be caught. When the slot and rasp bit are in contact with a substrate, the bit is plunged into it in a matter of seconds to reach pristine rock. A user in the field may sample a rock multiple times at multiple depths in minutes, instead of having to cut out huge, heavy rock samples for transport back to a lab for analysis. Because of the speed and accuracy of the RASP, hundreds of samples can be taken in one day. RASP-acquired samples are small and easily carried. A user can characterize more area in less time than by using conventional methods. The field-deployable RASP used a Ni

  20. Parallel implementation of a unified approach to image focus and defocus analysis on the Parallel Virtual Machine

    NASA Astrophysics Data System (ADS)

    Liu, Yen-Fu; Lo, Nai-Wei; Subbarao, Murali; Carlson, Bradley S.

    1998-07-01

    A unified approach to image focus and defocus analysis (UFDA) was proposed recently for three-dimensional shape and focused image recovery of objects. One version of this approach which yields very accurate results is highly computationally intensive. In this paper we present a parallel implementation of this version of UFDA on the Parallel Virtual Machine (PVM). One of the most computationally intensive parts of the UFDA approach is the estimation of the image data that would be recorded by a camera for a given solution for 3D shape and focused image. This computational step has to be repeated once during each iteration of the optimization algorithm. Therefore this step has been sped up by using the Parallel Virtual Machine (PVM). PVM is a software package that allows a heterogeneous network of parallel and serial computers to appear as a single concurrent computational resource. In our experimental environment PVM is installed on four UNIX workstations communicating over Ethernet to exploit parallel processing capability. Experimental results show that the communication overhead in this case is relatively low. An average speedup of 1.92 is attained by the parallel UFDA algorithm running on 2 PVM-connected computers compared to the execution time of sequential processing. By applying the UFDA algorithm on 4 PVM-connected machines an average speedup of 3.44 is reached. This demonstrates a practical application of PVM to 3D machine vision.
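
    For context, the parallel efficiency implied by these reported speedups can be read off directly (efficiency = speedup / number of machines):

      # Parallel efficiency of the UFDA runs reported above.
      for machines, speedup in [(2, 1.92), (4, 3.44)]:
          print(f"{machines} machines: {speedup / machines:.0%}")
      # 2 machines: 96%, 4 machines: 86%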

  1. Efficient Parallel Kernel Solvers for Computational Fluid Dynamics Applications

    NASA Technical Reports Server (NTRS)

    Sun, Xian-He

    1997-01-01

    Distributed-memory parallel computers dominate today's parallel computing arena. These machines, such as the Intel Paragon, IBM SP2, and Cray Origin2000, have successfully delivered high performance computing power for solving some of the so-called "grand-challenge" problems. Despite initial success, parallel machines have not been widely accepted in production engineering environments due to the complexity of parallel programming. On a parallel computing system, a task has to be partitioned and distributed appropriately among processors to reduce communication cost and to attain load balance. More importantly, even with careful partitioning and mapping, the performance of an algorithm may still be unsatisfactory, since conventional sequential algorithms may be serial in nature and may not be implemented efficiently on parallel machines. In many cases, new algorithms have to be introduced to increase parallel performance. In order to achieve optimal performance, in addition to partitioning and mapping, a careful performance study should be conducted for a given application to find a good algorithm-machine combination. This process, however, is usually painful and elusive. The goal of this project is to design and develop efficient parallel algorithms for highly accurate Computational Fluid Dynamics (CFD) simulations and other engineering applications. The work plan is 1) to develop highly accurate parallel numerical algorithms, 2) to conduct preliminary testing to verify the effectiveness and potential of these algorithms, and 3) to incorporate newly developed algorithms into actual simulation packages. The work plan has been well achieved. Two highly accurate, efficient Poisson solvers have been developed and tested based on two different approaches: (1) adopting a mathematical geometry which has a better capacity to describe the fluid, (2) using a compact scheme to gain high-order accuracy in numerical discretization. The previously developed Parallel Diagonal Dominant (PDD) algorithm

  2. Prevention policies addressing packaging and packaging waste: Some emerging trends.

    PubMed

    Tencati, Antonio; Pogutz, Stefano; Moda, Beatrice; Brambilla, Matteo; Cacia, Claudia

    2016-10-01

    Packaging waste is a major issue in several countries. Representing in industrialized countries around 30-35% of municipal solid waste yearly generated, this waste stream has steadily grown over the years even if, especially in Europe, specific recycling and recovery targets have been fixed. Therefore, an increasing attention starts to be devoted to prevention measures and interventions. Filling a gap in the current literature, this explorative paper is a first attempt to map the increasingly important phenomenon of prevention policies in the packaging sector. Through a theoretical sampling, 11 countries/states (7 in and 4 outside Europe) have been selected and analyzed by gathering and studying primary and secondary data. Results show evidence of three specific trends in packaging waste prevention policies: fostering the adoption of measures directed at improving packaging design and production through an extensive use of the life cycle assessment; raising the awareness of final consumers by increasing the accountability of firms; promoting collaborative efforts along the packaging supply chains. PMID:27372152

  4. 49 CFR 173.29 - Empty packagings.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 2 2010-10-01 2010-10-01 false Empty packagings. 173.29 Section 173.29... SHIPMENTS AND PACKAGINGS Preparation of Hazardous Materials for Transportation § 173.29 Empty packagings. (a) General. Except as otherwise provided in this section, an empty packaging containing only the residue of...

  5. 40 CFR 157.27 - Unit packaging.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 23 2010-07-01 2010-07-01 false Unit packaging. 157.27 Section 157.27 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS PACKAGING REQUIREMENTS FOR PESTICIDES AND DEVICES Child-Resistant Packaging § 157.27 Unit packaging. Pesticide...

  6. 7 CFR 58.626 - Packaging equipment.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 3 2010-01-01 2010-01-01 false Packaging equipment. 58.626 Section 58.626 Agriculture....626 Packaging equipment. Packaging equipment designed to mechanically fill and close single service... Standards for Equipment for Packaging Frozen Desserts and Cottage Cheese. Quality Specifications for...

  7. 49 CFR 173.63 - Packaging exceptions.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ...) Detonators containing no more than 1 g explosive (excluding ignition and delay charges) that are electric... excepted from the packaging requirements of § 173.62: (1) No more than 50 detonators in one inner packaging... outer packaging; (3) No more than 1000 detonators in one outer packaging; and (4) No material may...

  8. 49 CFR 173.63 - Packaging exceptions.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... packagings. (c)-(e) (f) Detonators containing no more than 1 g explosive (excluding ignition and delay... which case they are excepted from the packaging requirements of § 173.62: (1) No more than 50 detonators... compartment is used as the outer packaging; (3) No more than 1000 detonators in one outer packaging; and...

  9. Think INSIDE the Box: Package Engineering

    ERIC Educational Resources Information Center

    Snyder, Mark; Painter, Donna

    2014-01-01

    Most products people purchase, keep in their homes, and often discard are typically packaged in some way. Packaging is so prevalent in daily life that many of us take it for granted. That is by design: the expectation of good packaging is that it exists for the sake of the product. The primary purposes of any package (to contain, inform, display,…

  10. 49 CFR 173.411 - Industrial packagings.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... record retention applicable to Industrial Packaging Type 1 (IP-1), Industrial Packaging Type 2 (IP-2), and Industrial Packaging Type 3 (IP-3). (b) Industrial packaging certification and tests. (1) Each IP-1 must meet the general design requirements prescribed in § 173.410. (2) Each IP-2 must meet...

  11. 27 CFR 19.276 - Package scales.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    .... Scales used to weigh packages designed to hold 10 wine gallons or less shall indicate weight in ounces or... 27 Alcohol, Tobacco Products and Firearms 1 2010-04-01 2010-04-01 false Package scales. 19.276... Package scales. Proprietors shall ensure the accuracy of scales used for weighing packages of...

  12. 10 CFR 71.35 - Package evaluation.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 10 Energy 2 2012-01-01 2012-01-01 false Package evaluation. 71.35 Section 71.35 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) PACKAGING AND TRANSPORTATION OF RADIOACTIVE MATERIAL Application for Package Approval § 71.35 Package evaluation. The application must include the following: (a)...

  13. 40 CFR 157.27 - Unit packaging.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 24 2011-07-01 2011-07-01 false Unit packaging. 157.27 Section 157.27 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS PACKAGING REQUIREMENTS FOR PESTICIDES AND DEVICES Child-Resistant Packaging § 157.27 Unit packaging. Pesticide...

  14. 49 CFR 173.29 - Empty packagings.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 49 Transportation 2 2011-10-01 2011-10-01 false Empty packagings. 173.29 Section 173.29... SHIPMENTS AND PACKAGINGS Preparation of Hazardous Materials for Transportation § 173.29 Empty packagings. (a) General. Except as otherwise provided in this section, an empty packaging containing only the residue of...

  15. 7 CFR 58.626 - Packaging equipment.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 3 2011-01-01 2011-01-01 false Packaging equipment. 58.626 Section 58.626 Agriculture....626 Packaging equipment. Packaging equipment designed to mechanically fill and close single service... Standards for Equipment for Packaging Frozen Desserts and Cottage Cheese. Quality Specifications for...

  16. 10 CFR 71.35 - Package evaluation.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 2 2010-01-01 2010-01-01 false Package evaluation. 71.35 Section 71.35 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) PACKAGING AND TRANSPORTATION OF RADIOACTIVE MATERIAL Application for Package Approval § 71.35 Package evaluation. The application must include the following: (a)...

  17. A systolic array parallelizing compiler

    SciTech Connect

    Tseng, P.S.

    1990-01-01

    This book presents a completely new approach to the problem of systolic array parallelizing compilers. It describes the AL parallelizing compiler for the Warp systolic array, the first working systolic array parallelizing compiler able to generate efficient parallel code for complete LINPACK routines. The book begins by analyzing the architectural strength of the Warp systolic array. It proposes a model for mapping programs onto the machine and introduces the notion of data relations for optimizing the program mapping. Also presented are successful applications of the AL compiler in matrix computation and image processing. A complete listing of the source program and compiler-generated parallel code is given to clarify the overall picture of the compiler. The book concludes that a systolic array parallelizing compiler can produce efficient parallel code, almost identical to what the user would have written by hand.

  18. Optical-I/O packaging technologies for chip- and board-level optical interconnects

    NASA Astrophysics Data System (ADS)

    Ishii, Yuzo

    2002-09-01

    As silicon LSIs become more densely integrated and the bit rates between them increase, traditional printed circuit boards present problems similar to electrical cables, which are already a bottleneck to cabinet interconnection. These problems include limited line length, poor flexibility in layout design, power dissipation, and crosstalk (EMI). One promising solution to these on-board electrical interconnection bottlenecks is to use optical interconnection technology at the chip level. Making chip-level optical interconnections practical requires revolutionary changes in optoelectronics packaging; the packaging must not only keep pace with silicon LSI capabilities but should also inherit the advantages of today's mature electronics packaging and manufacturability technologies. To meet these requirements, we developed novel optical-I/O packages for low-cost chip-to-chip optical interconnections. Unlike conventional optical packages, our packages are fully compatible with surface-mount technology (SMT) because they do not need optical connectors. The packages free users from all on-board connection work and fiber management. Using the developed optical-I/O packages, in which a VCSEL/PD array is mounted at the bottom, chip-to-chip parallel optical interconnection through a polymer waveguide array is demonstrated.

  19. Method of forming a package for mems-based fuel cell

    DOEpatents

    Morse, Jeffrey D.; Jankowski, Alan F.

    2004-11-23

    A MEMS-based fuel cell package and method thereof is disclosed. The fuel cell package comprises seven layers: (1) a sub-package fuel reservoir interface layer, (2) an anode manifold support layer, (3) a fuel/anode manifold and resistive heater layer, (4) a Thick Film Microporous Flow Host Structure layer containing a fuel cell, (5) an air manifold layer, (6) a cathode manifold support structure layer, and (7) a cap. Fuel cell packages with more than one fuel cell are formed by positioning stacks of these layers in series and/or parallel. The fuel cell package materials such as a molded plastic or a ceramic green tape material can be patterned, aligned and stacked to form three dimensional microfluidic channels that provide electrical feedthroughs from various layers which are bonded together and mechanically support a MEMS-based miniature fuel cell. The package incorporates resistive heating elements to control the temperature of the fuel cell stack. The package is fired to form a bond between the layers and one or more microporous flow host structures containing fuel cells are inserted within the Thick Film Microporous Flow Host Structure layer of the package.

  20. Method of forming a package for MEMS-based fuel cell

    SciTech Connect

    Morse, Jeffrey D; Jankowski, Alan F

    2013-05-21

    A MEMS-based fuel cell package and method thereof is disclosed. The fuel cell package comprises seven layers: (1) a sub-package fuel reservoir interface layer, (2) an anode manifold support layer, (3) a fuel/anode manifold and resistive heater layer, (4) a Thick Film Microporous Flow Host Structure layer containing a fuel cell, (5) an air manifold layer, (6) a cathode manifold support structure layer, and (7) a cap. Fuel cell packages with more than one fuel cell are formed by positioning stacks of these layers in series and/or parallel. The fuel cell package materials such as a molded plastic or a ceramic green tape material can be patterned, aligned and stacked to form three dimensional microfluidic channels that provide electrical feedthroughs from various layers which are bonded together and mechanically support a MEMS-based miniature fuel cell. The package incorporates resistive heating elements to control the temperature of the fuel cell stack. The package is fired to form a bond between the layers and one or more microporous flow host structures containing fuel cells are inserted within the Thick Film Microporous Flow Host Structure layer of the package.

  1. Green Packaging Management of Logistics Enterprises

    NASA Astrophysics Data System (ADS)

    Zhang, Guirong; Zhao, Zongjian

    Starting from the connotation of green logistics management, we discuss the principles of green packaging and, at the two levels of government and enterprises, put forward specific management strategies. Green packaging can be promoted both directly and indirectly by laws, regulations, taxation, institutional and other measures. The government can also direct new investment toward the development of green packaging materials and establish specialized institutions to certify new packaging materials; the standardization of packaging must likewise be accomplished through the power of the government. Large-scale enterprises can reduce the use of packaging materials through standardized, container-based packaging, and can develop and use green and easily recycled packaging materials.

  2. Material efficiency in Dutch packaging policy.

    PubMed

    Worrell, Ernst; van Sluisveld, Mariësse A E

    2013-03-13

    Packaging materials are one of the largest contributors to municipal solid waste generation. In this paper, we evaluate the material impacts of packaging policy in The Netherlands, focusing on the role of material efficiency (or waste prevention). Since 1991, five different policies have been implemented to reduce the environmental impact of packaging. The analysis shows that Dutch packaging policies helped to reduce the total packaging volume until 1999. After 2000, packaging consumption increased more rapidly than the baseline, suggesting that policy measures were not effective. Generally, we see limited attention to material efficiency to reduce packaging material use. For this purpose, we tried to gain more insight into recent activities on material efficiency, by building a database of packaging prevention initiatives. We identified 131 alterations to packaging implemented in the period 2005-2010, of which weight reduction was the predominant approach. More appropriate packaging policy is needed to increase the effectiveness of policies, with special attention to material efficiency. PMID:23359741

  3. Parallel Polarization State Generation

    PubMed Central

    She, Alan; Capasso, Federico

    2016-01-01

    The control of polarization, an essential property of light, is of wide scientific and technological interest. The general problem of generating arbitrary time-varying states of polarization (SOP) has always been mathematically formulated by a series of linear transformations, i.e. a product of matrices, imposing a serial architecture. Here we show a parallel architecture described by a sum of matrices. The theory is experimentally demonstrated by modulating spatially-separated polarization components of a laser using a digital micromirror device that are subsequently beam combined. This method greatly expands the parameter space for engineering devices that control polarization. Consequently, performance characteristics, such as speed, stability, and spectral range, are entirely dictated by the technologies of optical intensity modulation, including absorption, reflection, emission, and scattering. This opens up important prospects for polarization state generation (PSG) with unique performance characteristics with applications in spectroscopic ellipsometry, spectropolarimetry, communications, imaging, and security. PMID:27184813
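
    A toy numpy illustration of the sum-of-matrices idea (an assumption-laden sketch, not the authors' optical setup): the output state is a weighted coherent sum of fixed, spatially separated polarization components, with the weights standing in for the micromirror intensity modulation.

      import numpy as np

      # Jones vectors of the fixed, spatially separated components.
      H = np.array([1, 0], dtype=complex)                  # horizontal
      V = np.array([0, 1], dtype=complex)                  # vertical
      R = np.array([1, -1j], dtype=complex) / np.sqrt(2)   # right circular

      def generate_sop(weights, components=(H, V, R)):
          # Parallel architecture: a sum of weighted components,
          # rather than a product of transformation matrices.
          out = sum(w * c for w, c in zip(weights, components))
          return out / np.linalg.norm(out)

      print(generate_sop([1.0, 1.0, 0.0]))  # diagonal SOP from H + V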

  4. Parallel tridiagonal equation solvers

    NASA Technical Reports Server (NTRS)

    Stone, H. S.

    1974-01-01

    Three parallel algorithms were compared for the direct solution of tridiagonal linear systems of equations. The algorithms are suitable for computers such as ILLIAC 4 and CDC STAR. For array computers similar to ILLIAC 4, cyclic odd-even reduction has the least operation count for highly structured sets of equations, and recursive doubling has the least count for relatively unstructured sets of equations. Since the difference in operation counts for these two algorithms is not substantial, their relative running times may be more related to overhead operations, which are not measured in this paper. The third algorithm, based on Buneman's Poisson solver, has more arithmetic operations than the others, and appears to be the least favorable. For pipeline computers similar to CDC STAR, cyclic odd-even reduction appears to be the most preferable algorithm for all cases.
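
    A compact sketch of cyclic odd-even reduction (illustrative Python; the paper's target machines were ILLIAC 4 and CDC STAR, and this serial code only exhibits the data dependencies): every elimination within a level is independent of the others, which is where the parallelism comes from.

      import numpy as np

      def cyclic_reduction(a, b, c, d):
          # Solve a tridiagonal system by cyclic odd-even reduction.
          # a, b, c are the sub-, main and super-diagonals (with
          # a[0] = c[-1] = 0), d is the right-hand side, and the size
          # must be n = 2**k - 1.
          a, b, c, d = (np.asarray(v, dtype=float).copy()
                        for v in (a, b, c, d))
          n = len(b)
          k = int(np.log2(n + 1))
          assert n == 2**k - 1, "cyclic reduction needs n = 2**k - 1"

          # Forward phase: each level eliminates the odd-indexed
          # unknowns of the current subsystem, halving its size; all
          # eliminations in one level are mutually independent.
          s = 1
          for _ in range(k - 1):
              for i in range(2 * s - 1, n, 2 * s):
                  lo, hi = i - s, i + s
                  alpha, beta = -a[i] / b[lo], -c[i] / b[hi]
                  b[i] += alpha * c[lo] + beta * a[hi]
                  d[i] += alpha * d[lo] + beta * d[hi]
                  a[i], c[i] = alpha * a[lo], beta * c[hi]
              s *= 2

          # Back substitution, from the middle equation outwards.
          x = np.zeros(n)
          mid = (n - 1) // 2
          x[mid] = d[mid] / b[mid]
          s = 2 ** (k - 1)
          while s > 1:
              s //= 2
              for i in range(s - 1, n, 2 * s):
                  left = a[i] * x[i - s] if i - s >= 0 else 0.0
                  right = c[i] * x[i + s] if i + s < n else 0.0
                  x[i] = (d[i] - left - right) / b[i]
          return x

      # Example: 1-D Poisson stencil (-1, 2, -1) with unit load.
      n = 7
      x = cyclic_reduction([0] + [-1] * (n - 1), [2] * n,
                           [-1] * (n - 1) + [0], [1] * n)
      print(x)  # symmetric hump, x[3] = 8.0 at the midpoint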

  5. Parallel Polarization State Generation

    NASA Astrophysics Data System (ADS)

    She, Alan; Capasso, Federico

    2016-05-01

    The control of polarization, an essential property of light, is of wide scientific and technological interest. The general problem of generating arbitrary time-varying states of polarization (SOP) has always been mathematically formulated by a series of linear transformations, i.e. a product of matrices, imposing a serial architecture. Here we show a parallel architecture described by a sum of matrices. The theory is experimentally demonstrated by modulating spatially-separated polarization components of a laser using a digital micromirror device that are subsequently beam combined. This method greatly expands the parameter space for engineering devices that control polarization. Consequently, performance characteristics, such as speed, stability, and spectral range, are entirely dictated by the technologies of optical intensity modulation, including absorption, reflection, emission, and scattering. This opens up important prospects for polarization state generation (PSG) with unique performance characteristics with applications in spectroscopic ellipsometry, spectropolarimetry, communications, imaging, and security.

  6. Toward Parallel Document Clustering

    SciTech Connect

    Mogill, Jace A.; Haglin, David J.

    2011-09-01

    A key challenge to automated clustering of documents in large text corpora is the high cost of comparing documents in a multimillion-dimensional document space. The Anchors Hierarchy is a fast data structure and algorithm for localizing data based on a triangle-inequality-obeying distance metric; the algorithm strives to minimize the number of distance calculations needed to cluster the documents into “anchors” around reference documents called “pivots”. We extend the original algorithm to increase the amount of available parallelism and consider two implementations: a complex data structure which affords efficient searching, and a simple data structure which requires repeated sorting. The sorting implementation is integrated with a text corpora “Bag of Words” program, and initial performance results of an end-to-end document-processing workflow are reported.
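
    A small Python sketch of the triangle-inequality pruning at the heart of the approach (illustrative, not the authors' implementation): since d(q, x) >= d(q, p) - d(p, x) for any pivot p, a candidate whose lower bound already exceeds the best distance found so far can be skipped without computing d(q, x).

      import numpy as np

      def build_anchors(points, pivot_ids, dist):
          # Assign each point to its nearest pivot, keeping the
          # pivot-to-member distances for later pruning.
          anchors = {p: [] for p in pivot_ids}
          for i, x in enumerate(points):
              p = min(pivot_ids, key=lambda pid: dist(x, points[pid]))
              anchors[p].append((dist(x, points[p]), i))
          return anchors

      def nearest(points, anchors, q, dist):
          # Exact nearest neighbour with triangle-inequality pruning.
          best_d, best_i = float("inf"), -1
          for p, members in anchors.items():
              d_qp = dist(q, points[p])
              for d_px, i in members:
                  if d_qp - d_px >= best_d:
                      continue  # pruned: no distance computation
                  d_qx = dist(q, points[i])
                  if d_qx < best_d:
                      best_d, best_i = d_qx, i
          return best_i, best_d

      rng = np.random.default_rng(0)
      docs = rng.normal(size=(2000, 128))    # stand-in for documents
      d = lambda u, v: float(np.linalg.norm(u - v))
      anchors = build_anchors(docs, list(range(0, 2000, 200)), d)
      print(nearest(docs, anchors, rng.normal(size=128), d))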

  7. Unified Parallel Software

    SciTech Connect

    McKay, Mike

    2003-12-01

    UPS (Unified Parallel Software) is a collection of software tools (libraries, scripts, executables) that assist in parallel programming. It consists of: o libups.a C/Fortran callable routines for message passing (utilities written on top of MPI) and file IO (utilities written on top of HDF). o libuserd-HDF.so EnSight user-defined reader for visualizing data files written with UPS File IO. o ups_libuserd_query, ups_libuserd_prep.pl, ups_libuserd_script.pl Executables/scripts to get information from data files and to simplify the use of EnSight on those data files. o ups_io_rm/ups_io_cp Manipulate data files written with UPS File IO. These tools are portable to a wide variety of Unix platforms.

  9. Vibration of the Package of Rods Linked by Spacer Grids

    NASA Astrophysics Data System (ADS)

    Zeman, V.; Hlaváč, Z.

    This paper deals with modelling and vibration analysis of a large package of identical parallel rods which are linked by transverse springs (spacer grids) placed at several levels along the rods. The vibration of the rods is caused by the support-plate motion. The rod discretization by FEM is based on Rayleigh beam theory. Exploiting the cyclic and central symmetry of the rod package, the system is decomposed into identical revolved rod segments. The modal synthesis method with condensation of the rod segments is used for modelling and determination of the steady forced vibration of the whole system. The presented method is a first step toward modelling nuclear fuel assembly vibration caused by kinematic excitation from the motion of the support plates, which are part of the reactor core.

  10. Parallel Imaging Microfluidic Cytometer

    PubMed Central

    Ehrlich, Daniel J.; McKenna, Brian K.; Evans, James G.; Belkina, Anna C.; Denis, Gerald V.; Sherr, David; Cheung, Man Ching

    2011-01-01

    By adding an additional degree of freedom from multichannel flow, the parallel microfluidic cytometer (PMC) combines some of the best features of flow cytometry (FACS) and microscope-based high-content screening (HCS). The PMC (i) lends itself to fast processing of large numbers of samples, (ii) adds a 1-D imaging capability for intracellular localization assays (HCS), (iii) has a high rare-cell sensitivity, and (iv) has an unusual capability for time-synchronized sampling. An inability to practically handle large sample numbers has restricted applications of conventional flow cytometers and microscopes in combinatorial cell assays, network biology, and drug discovery. The PMC promises to relieve a bottleneck in these previously constrained applications. The PMC may also be a powerful tool for finding rare primary cells in the clinic. The multichannel architecture of current PMC prototypes allows 384 unique samples for a cell-based screen to be read out in approximately 6–10 minutes, about 30 times the speed of most current FACS systems. In 1-D intracellular imaging, the PMC can obtain protein localization using HCS marker strategies at many times the sample throughput of CCD-based microscopes or CCD-based single-channel flow cytometers. The PMC also permits the signal integration time to be varied over a larger range than is practical in conventional flow cytometers. The signal-to-noise advantages are useful, for example, in counting rare positive cells in the most difficult early stages of genome-wide screening. We review the status of parallel microfluidic cytometry and discuss some of the directions the new technology may take. PMID:21704835

  11. Reference waste package environment report

    SciTech Connect

    Glassley, W.E.

    1986-10-01

    One of three candidate repository sites for high-level radioactive waste packages is located at Yucca Mountain, Nevada, in rhyolitic tuff 700 to 1400 ft above the static water table. Calculations indicate that the package environment will experience a maximum temperature of ~230 °C at 9 years after emplacement. For the next 300 years the rock within 1 m of the waste packages will remain dehydrated. Preliminary results suggest that the waste package radiation field will have very little effect on the mechanical properties of the rock. Radiolysis products will have a negligible effect on the rock even after rehydration. Unfractured specimens of repository rock show no change in hydrologic characteristics during repeated dehydration-rehydration cycles. Fractured samples with initially high permeabilities show a striking permeability decrease during dehydration-rehydration cycling, which may be due to fracture healing via deposition of silica. Rock-water interaction studies demonstrate low and benign levels of anions and most cations. The development of sorptive secondary phases such as zeolites and clays suggests that anticipated rock-water interaction may produce beneficial changes in the package environment.

  12. Capillary-driven automatic packaging.

    PubMed

    Ding, Yuzhe; Hong, Lingfei; Nie, Baoqing; Lam, Kit S; Pan, Tingrui

    2011-04-21

    Packaging continues to be one of the most challenging steps in micro-nanofabrication, as many emerging techniques (e.g., soft lithography) are incompatible with the standard high-precision alignment and bonding equipment. In this paper, we present a simple-to-operate, easy-to-adapt packaging strategy, referred to as Capillary-driven Automatic Packaging (CAP), to achieve an automatic packaging process with the desired features of spontaneous alignment and bonding, wide applicability to various materials, potential scalability, and direct incorporation in the layout. Specifically, the self-alignment and self-engagement of the CAP process, induced by the interfacial capillary interactions between a liquid capillary bridge and the top and bottom substrates, have been experimentally characterized and theoretically analyzed with scalable implications. High-precision alignment (of less than 10 µm) and outstanding bonding performance (up to 300 kPa) have been reliably obtained. In addition, a 3D microfluidic network, aligned and bonded by the CAP technique, has been devised to demonstrate the applicability of this facile yet robust packaging technique for emerging microfluidic and bioengineering applications.

  13. The reduction of packaging waste

    SciTech Connect

    Raney, E.A.; Hogan, J.J.; McCollom, M.L.; Meyer, R.J.

    1994-04-01

    Nationwide, packaging waste comprises approximately one-third of the waste disposed in sanitary landfills. The US Department of Energy (DOE) generates close to 90,000 metric tons of sanitary waste. With roughly one-third of that being packaging waste, approximately 30,000 metric tons are generated per year. The purpose of the Reduction of Packaging Waste project was to investigate opportunities to reduce this packaging waste through source reduction and recycling. The project was divided into three areas: procurement, onsite packaging and distribution, and recycling. Waste minimization opportunities were identified and investigated within each area, several of which were chosen for further study and small-scale testing at the Hanford Site. Test results were compiled into five ``how-to`` recipes for implementation at other sites. The subjects of the recipes are as follows: (1) Vendor Participation Program; (2) Reusable Containers System; (3) Shrink-wrap System -- Plastic and Corrugated Cardboard Waste Reduction; (4) Cardboard Recycling; and (5) Wood Recycling.

  14. Parallelizing OVERFLOW: Experiences, Lessons, Results

    NASA Technical Reports Server (NTRS)

    Jespersen, Dennis C.

    1999-01-01

    The computer code OVERFLOW is widely used in the aerodynamic community for the numerical solution of the Navier-Stokes equations. Current trends in computer systems and architectures are toward multiple processors and parallelism, including distributed memory. This report describes work that has been carried out by the author and others at Ames Research Center with the goal of parallelizing OVERFLOW using a variety of parallel architectures and parallelization strategies. This paper begins with a brief description of the OVERFLOW code. This description includes the basic numerical algorithm and some software engineering considerations. Next comes a description of a parallel version of OVERFLOW, OVERFLOW/PVM, using PVM (Parallel Virtual Machine). This parallel version of OVERFLOW uses the manager/worker style and is part of the standard OVERFLOW distribution. Then comes a description of a parallel version of OVERFLOW, OVERFLOW/MPI, using MPI (Message Passing Interface). This parallel version of OVERFLOW uses the SPMD (Single Program Multiple Data) style. Finally comes a discussion of alternatives to explicit message-passing in the context of parallelizing OVERFLOW.

  15. Review: nanocomposites in food packaging.

    PubMed

    Arora, Amit; Padua, G W

    2010-01-01

    The development of nanocomposites is a new strategy to improve the physical properties of polymers, including mechanical strength, thermal stability, and gas barrier properties. The most promising nanoscale fillers are montmorillonite and kaolinite clays. Graphite nanoplates are currently under study. In food packaging, a major emphasis is on the development of high barrier properties against the migration of oxygen, carbon dioxide, flavor compounds, and water vapor. Decreasing water vapor permeability is a critical issue in the development of biopolymers as sustainable packaging materials. The nanoscale plate morphology of clays and other fillers promotes the development of gas barrier properties. Several examples are cited. Challenges remain in increasing the compatibility between clays and polymers and reaching complete dispersion of nanoplates. Nanocomposites may advance the utilization of biopolymers in food packaging. PMID:20492194

  16. Truss Performance and Packaging Metrics

    NASA Technical Reports Server (NTRS)

    Mikulas, Martin M.; Collins, Timothy J.; Doggett, William; Dorsey, John; Watson, Judith

    2006-01-01

    In the present paper a set of performance metrics is derived from first principles to assess the efficiency of competing space truss structural concepts in terms of mass, stiffness, and strength, for designs that are constrained by packaging. The use of these performance metrics provides unique insight into the primary drivers for lowering structural mass and packaging volume, as well as enabling quantitative concept performance evaluation and comparison. To demonstrate the use of these performance metrics, data for existing structural concepts are plotted and discussed. Structural performance data are presented for various mechanical deployable concepts, for erectable structures, and for rigidizable structures.

  17. GNS-12 Packaging design criteria

    SciTech Connect

    Clements, E.P., Westinghouse Hanford

    1996-07-24

    The purpose of this Packaging Design Criteria (PDC) document is to provide criteria for the Safety Analysis Report for Packaging (SARP) (Onsite). The SARP provides the evaluation to demonstrate that the onsite transportation safety criteria are met for the transport and storage of the 324 Building vitrified encapsulated material in the GNS-12 cask. In this application, the approved PDC provides a formal set of standards for the payload requirements, as well as guidance for the current cask transport configuration and for a revised storage-seal and primary-lid modification design.

  18. An Arbitrary Precision Computation Package

    2003-06-14

    This package permits a scientist to perform computations using an arbitrarily high level of numeric precision (the equivalent of hundreds or even thousands of digits), by making only minor changes to conventional C++ or Fortran-90 source code. This software takes advantage of certain properties of IEEE floating-point arithmetic, together with advanced numeric algorithms, custom data types and operator overloading. Also included in this package is the "Experimental Mathematician's Toolkit", which incorporates many of these facilities into an easy-to-use interactive program.

  19. 49 CFR 173.24a - Additional general requirements for non-bulk packagings and packages.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... PIPELINE AND HAZARDOUS MATERIALS SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION HAZARDOUS MATERIALS REGULATIONS SHIPPERS-GENERAL REQUIREMENTS FOR SHIPMENTS AND PACKAGINGS Preparation of Hazardous Materials for... packaging containing liquid hazardous materials must be packed so that closures on inner packagings...

  20. 49 CFR 173.24a - Additional general requirements for non-bulk packagings and packages.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... PIPELINE AND HAZARDOUS MATERIALS SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION HAZARDOUS MATERIALS REGULATIONS SHIPPERS-GENERAL REQUIREMENTS FOR SHIPMENTS AND PACKAGINGS Preparation of Hazardous Materials for... packaging containing liquid hazardous materials must be packed so that closures on inner packagings...

  1. Life Management Skills. Teacher's Guide [and Student Workbook]. Parallel Alternative Strategies for Students (PASS).

    ERIC Educational Resources Information Center

    Goldstein, Jeren; Walford, Sylvia

    This teacher's guide and student workbook are part of a series of supplementary curriculum packages presenting alternative methods and activities designed to meet the needs of Florida secondary students with mild disabilities or other special learning needs. The Life Management Skills PASS (Parallel Alternative Strategies for Students) teacher's…

  2. High Performance Parallel Processing Project: Industrial computing initiative. Progress reports for fiscal year 1995

    SciTech Connect

    Koniges, A.

    1996-02-09

    This project is a package of 11 individual CRADAs plus hardware. This innovative project established a three-year multi-party collaboration that is significantly accelerating the availability of commercial massively parallel processing computing software technology to U.S. government, academic, and industrial end-users. This report contains individual presentations from nine principal investigators along with overall program information.

  3. Introduction to Computers: Parallel Alternative Strategies for Students. Course No. 0200000.

    ERIC Educational Resources Information Center

    Chauvenne, Sherry; And Others

    Parallel Alternative Strategies for Students (PASS) is a content-centered package of alternative methods and materials designed to assist secondary teachers to meet the needs of mainstreamed learning-disabled and emotionally-handicapped students of various achievement levels in the basic education content courses. This supplementary text and…

  4. Efficient parallel simulation of CO2 geologic sequestration insaline aquifers

    SciTech Connect

    Zhang, Keni; Doughty, Christine; Wu, Yu-Shu; Pruess, Karsten

    2007-01-01

    An efficient parallel simulator for large-scale, long-term CO2 geologic sequestration in saline aquifers has been developed. The parallel simulator is a three-dimensional, fully implicit model that solves large, sparse linear systems arising from discretization of the partial differential equations for mass and energy balance in porous and fractured media. The simulator is based on the ECO2N module of the TOUGH2 code and inherits all the process capabilities of the single-CPU TOUGH2 code, including a comprehensive description of the thermodynamics and thermophysical properties of H2O-NaCl-CO2 mixtures, modeling single and/or two-phase isothermal or non-isothermal flow processes, two-phase mixtures, fluid phases appearing or disappearing, as well as salt precipitation or dissolution. The new parallel simulator uses MPI for parallel implementation, the METIS software package for simulation domain partitioning, and the iterative parallel linear solver package Aztec for solving linear equations by multiple processors. In addition, the parallel simulator has been implemented with an efficient communication scheme. Test examples show that a linear or super-linear speedup can be obtained on Linux clusters as well as on supercomputers. Because of the significant improvement in both simulation time and memory requirement, the new simulator provides a powerful tool for tackling larger-scale and more complex problems than can be solved by single-CPU codes. A high-resolution simulation example is presented that models buoyant convection, induced by a small increase in brine density caused by dissolution of CO2.
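
    The abstract names METIS for domain partitioning; the real partitioner works on the unstructured cell-connectivity graph, but the toy 1-D block partition below, a sketch under that simplification, shows the even division of cells among ranks that any such decomposition aims for.

      def block_partition(n_cells, n_procs):
          """Toy 1-D block partition: split n_cells as evenly as possible among
          n_procs ranks (the simulator itself uses METIS graph partitioning)."""
          base, extra = divmod(n_cells, n_procs)
          parts, start = [], 0
          for rank in range(n_procs):
              size = base + (1 if rank < extra else 0)
              parts.append(range(start, start + size))
              start += size
          return parts

      # block_partition(10, 3) -> [range(0, 4), range(4, 7), range(7, 10)]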

  5. Design and performance of a scalable, parallel statistics toolkit.

    SciTech Connect

    Thompson, David C.; Bennett, Janine Camille; Pebay, Philippe Pierre

    2010-11-01

    Most statistical software packages implement a broad range of techniques but do so in an ad hoc fashion, leaving users who do not have a broad knowledge of statistics at a disadvantage, since they may not understand all the implications of a given analysis or how to test the validity of results. These packages are also largely serial in nature, or target multicore architectures instead of distributed-memory systems, or provide only a small number of statistics in parallel. This paper surveys a collection of parallel implementations of statistics algorithms developed as part of a common framework over the last 3 years. The framework strategically groups modeling techniques with associated verification and validation techniques to make the underlying assumptions of the statistics clearer. Furthermore, it employs a design pattern specifically targeted for distributed-memory parallelism, where architectural advances in large-scale high-performance computing have been focused. Moment-based statistics (which include descriptive, correlative, and multicorrelative statistics, principal component analysis (PCA), and k-means statistics) scale nearly linearly with the data set size and number of processes. Entropy-based statistics (which include order and contingency statistics) do not scale well when the data in question is continuous or quasi-diffuse but do scale well when the data is discrete and compact. We confirm and extend our earlier results by now establishing near-optimal scalability with up to 10,000 processes.
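
    Moment-based statistics scale because each process can reduce its data to a small summary that merges exactly. A minimal sketch of that reduction for the mean and variance, using the standard pairwise-update formula rather than any code from the toolkit itself:

      import numpy as np

      def merge_moments(n_a, mean_a, M2_a, n_b, mean_b, M2_b):
          """Combine two partial results (count, mean, sum of squared deviations)
          into global ones; communicating only these small triples is what makes
          descriptive statistics embarrassingly parallel."""
          n = n_a + n_b
          delta = mean_b - mean_a
          mean = mean_a + delta * n_b / n
          M2 = M2_a + M2_b + delta * delta * n_a * n_b / n
          return n, mean, M2

      x = np.random.rand(1000)
      halves = [(len(h), h.mean(), ((h - h.mean()) ** 2).sum())
                for h in (x[:500], x[500:])]   # per-"process" partials
      n, mean, M2 = merge_moments(*halves[0], *halves[1])
      assert np.isclose(mean, x.mean()) and np.isclose(M2 / n, x.var())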

  6. Package for integrated optic circuit and method

    DOEpatents

    Kravitz, S.H.; Hadley, G.R.; Warren, M.E.; Carson, R.F.; Armendariz, M.G.

    1998-08-04

    A structure and method are disclosed for packaging an integrated optic circuit. The package comprises a first wall having a plurality of microlenses formed therein to establish channels of optical communication with an integrated optic circuit within the package. A first registration pattern is provided on an inside surface of one of the walls of the package for alignment and attachment of the integrated optic circuit. The package in one embodiment may further comprise a fiber holder for aligning and attaching a plurality of optical fibers to the package and extending the channels of optical communication to the fibers outside the package. In another embodiment, a fiber holder may be used to hold the fibers and align the fibers to the package. The fiber holder may be detachably connected to the package. 6 figs.

  7. Package for integrated optic circuit and method

    DOEpatents

    Kravitz, Stanley H.; Hadley, G. Ronald; Warren, Mial E.; Carson, Richard F.; Armendariz, Marcelino G.

    1998-01-01

    A structure and method for packaging an integrated optic circuit. The package comprises a first wall having a plurality of microlenses formed therein to establish channels of optical communication with an integrated optic circuit within the package. A first registration pattern is provided on an inside surface of one of the walls of the package for alignment and attachment of the integrated optic circuit. The package in one embodiment may further comprise a fiber holder for aligning and attaching a plurality of optical fibers to the package and extending the channels of optical communication to the fibers outside the package. In another embodiment, a fiber holder may be used to hold the fibers and align the fibers to the package. The fiber holder may be detachably connected to the package.

  8. Parallel computation with the force

    NASA Technical Reports Server (NTRS)

    Jordan, H. F.

    1985-01-01

    A methodology, called the force, supports the construction of programs to be executed in parallel by a force of processes. The number of processes in the force is unspecified, but potentially very large. The force idea is embodied in a set of macros which produce multiprocessor FORTRAN code, and it has been studied on two shared-memory multiprocessors of fairly different character. The method has simplified the writing of highly parallel programs within a limited class of parallel algorithms and is being extended to cover a broader class. The individual parallel constructs which comprise the force methodology are discussed. Of central concern are their semantics, implementation on different architectures, and performance implications.
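
    A minimal sketch of the force style, transplanted from macro-generated FORTRAN to Python threads: a fixed but arbitrary number of identical workers run the same code on strided slices and synchronize at a barrier. All names here are illustrative.

      import threading

      NPROCS = 4                      # size of the "force" is a tunable, not part of the algorithm
      barrier = threading.Barrier(NPROCS)

      def member(rank, data, partial):
          # Each member works on a strided slice, in the style of a prescheduled loop.
          partial[rank] = sum(data[rank::NPROCS])
          barrier.wait()              # force-wide synchronization point
          if rank == 0:
              print("total =", sum(partial))

      data = list(range(100))
      partial = [0] * NPROCS
      threads = [threading.Thread(target=member, args=(r, data, partial))
                 for r in range(NPROCS)]
      for t in threads: t.start()
      for t in threads: t.join()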

  9. Parallel processing and expert systems

    NASA Technical Reports Server (NTRS)

    Yan, Jerry C.; Lau, Sonie

    1991-01-01

    Whether it be monitoring the thermal subsystem of Space Station Freedom or controlling the navigation of the autonomous rover on Mars, NASA missions in the 90's cannot enjoy an increased level of autonomy without the efficient use of expert systems. Merely increasing the computational speed of uniprocessors may not be able to guarantee that real-time demands are met for large expert systems. Speed-up via parallel processing must be pursued alongside the optimization of sequential implementations. Prototypes of parallel expert systems have been built at universities and industrial labs in the U.S. and Japan. The state-of-the-art research in progress related to parallel execution of expert systems was surveyed. The survey is divided into three major sections: (1) multiprocessors for parallel expert systems; (2) parallel languages for symbolic computations; and (3) measurements of parallelism of expert systems. Results to date indicate that the parallelism achieved for these systems is small. In order to obtain greater speed-ups, data parallelism and application parallelism must be exploited.

  10. Parallel processing of atmospheric chemistry calculations: Preliminary considerations

    SciTech Connect

    Elliott, S.; Jones, P.

    1995-01-01

    Global climate calculations are already saturating the class of modern vector supercomputers with only a few central processing units. Increased resolution and inclusion of routines to deal with biogeochemical portions of the terrestrial climate system will soon demand massively parallel approaches. The atmospheric photochemistry ensemble is intimately linked to climate through the trace greenhouse gases ozone and methane, and modules for representing it are being attached to global three-dimensional transport and GCM frameworks. Atmospheric kinetics involve dozens of highly interactive tracers and so will accentuate the need for parallel processing of earth system simulations. In the present text we lay some of the groundwork for the addition of atmospheric kinetics packages to GCMs and global-scale atmospheric models on massively parallel computers. The discussion is tailored for consumption by the photochemical modelling community. After a review of numerical atmospheric chemistry methods, we examine how kinetics can be implemented on a parallel computer. We concentrate especially on data layout and flexibility and how these can be implemented in various programming models. We conclude that chemistry can be implemented rather easily within the existing frameworks of several parallel atmospheric models. However, memory limitations may preclude high-resolution studies of global chemistry.
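
    Because the kinetics in one grid cell couple species locally but not across cells, a cell-wise data layout parallelizes naturally. The sketch below uses Python's multiprocessing as a stand-in for the layouts discussed in the text, with a made-up two-species mechanism and placeholder rate constants.

      from multiprocessing import Pool

      def integrate_cell(c0, k1=1.0e-2, k2=5.0e-3, dt=1.0, nsteps=100):
          """Toy reversible two-species kinetics (A <-> B) integrated
          independently in one grid cell; chemistry couples species within a
          cell, not across cells, which is what makes cell-wise layouts work."""
          a, b = c0
          for _ in range(nsteps):
              ra, rb = k1 * a, k2 * b
              a += dt * (rb - ra)
              b += dt * (ra - rb)
          return a, b

      if __name__ == "__main__":
          cells = [(1.0, 0.0)] * 1000        # initial concentrations per grid cell
          with Pool(4) as pool:              # each worker owns a block of cells
              results = pool.map(integrate_cell, cells)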

  11. Retargeting of existing FORTRAN program and development of parallel compilers

    NASA Technical Reports Server (NTRS)

    Agrawal, Dharma P.

    1988-01-01

    The software models used in implementing the parallelizing compiler for the B-HIVE multiprocessor system are described. The various models and strategies used in the compiler development are: the flexible granularity model, which allows a compromise between two extreme granularity models; the communication model, which is capable of precisely describing interprocessor communication timings and patterns; the loop type detection strategy, which identifies different types of loops; the critical path with coloring scheme, which is a versatile scheduling strategy for any multicomputer with some associated communication costs; and the loop allocation strategy, which realizes optimum overlapped operations between computation and communication of the system. Using these models, several sample routines of the AIR3D package are examined and tested. The automatically generated codes are highly parallelized to provide the maximum degree of parallelism, obtaining speedup on systems of up to 28 to 32 processors. A comparison of parallel codes for both the existing and proposed communication models is performed, and the corresponding expected speedup factors are obtained. The experimentation shows that the B-HIVE compiler produces more efficient codes than existing techniques. Work is progressing well in completing the final phase of the compiler. Numerous enhancements are needed to improve the capabilities of the parallelizing compiler.

  12. Determination of activation energy of pyrolysis of carton packaging wastes and its pure components using thermogravimetry.

    PubMed

    Alvarenga, Larissa M; Xavier, Thiago P; Barrozo, Marcos Antonio S; Bacelos, Marcelo S; Lira, Taisa S

    2016-07-01

    Many processes have been used for recycling of carton packaging wastes. Pyrolysis stands out as a promising technology for recovering the aluminum from the polyethylene and generating products with high heating value. In this paper, a study of the pyrolysis reactions of carton packaging wastes and their pure components was performed in order to estimate the kinetic parameters of these reactions. For this, dynamic thermogravimetric analyses were carried out and two different kinds of kinetic models were used: isoconversional models and the Independent Parallel Reactions (IPR) model. Isoconversional models allowed calculation of the overall activation energy of the pyrolysis reaction as a function of conversion. The IPR model, in turn, allowed the calculation of kinetic parameters for each of the carton packaging and paperboard subcomponents. Carton packaging pyrolysis follows three separate stages of devolatilization. The first step is moisture loss. The second stage is perfectly correlated with the devolatilization of cardboard. The third step is correlated with the devolatilization of polyethylene.
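
    A minimal sketch of the Independent Parallel Reactions idea: residual mass is one minus a mass-fraction-weighted sum of independently integrated first-order Arrhenius conversions. The parameters below are illustrative placeholders, not the fitted values from the paper.

      import numpy as np

      def ipr_mass_loss(T, beta, components):
          """IPR model: dalpha_i/dT = (A_i/beta) * exp(-E_i/(R*T)) * (1-alpha_i)**n_i,
          integrated per component with explicit Euler; the overall mass loss is
          the mass-fraction-weighted sum of the component conversions."""
          R = 8.314  # J/(mol K)
          dT = np.diff(T, prepend=T[0])
          total = np.zeros_like(T)
          for frac, A, E, n in components:   # fraction, 1/s, J/mol, reaction order
              alpha, history = 0.0, np.empty_like(T)
              for j in range(len(T)):
                  alpha += dT[j] * (A / beta) * np.exp(-E / (R * T[j])) * (1 - alpha) ** n
                  alpha = min(alpha, 1.0)
                  history[j] = alpha
              total += frac * history
          return 1.0 - total                 # residual mass fraction

      # illustrative components: moisture, cardboard, polyethylene
      comps = [(0.05, 1e4, 60e3, 1), (0.55, 1e13, 180e3, 1), (0.40, 1e15, 240e3, 1)]
      T = np.linspace(300, 900, 600)                  # K
      residual = ipr_mass_loss(T, 10.0 / 60.0, comps)  # 10 K/min heating rate, in K/s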

  13. Small planar packaging system for high-throughput ATM switching systems

    NASA Astrophysics Data System (ADS)

    Kishimoto, T.; Yasuda, K.; Oka, H.; Kaneko, Y.; Kawauchi, M.

    1995-03-01

    A small planar packaging (SPP) system is described that can be combined with card-on-board (COB) packaging in ATM switching systems with throughputs of over 40 Gbit/s. Using a newly developed quasicoaxial zero-insertion-force connector, point-to-point 311 Mbit/s of 8 bit parallel signal transmission is achieved in an arbitrary location on the SPP system's shelf. Also 5400 I/O connections in the region of the planar packaging system are made, and thus the SPP system eliminates the I/O pin count limitation. Furthermore, the heat flux of the SPP system is five times higher than that of conventional COB packaging because of its air flow control structure.

  14. Parallel Programming in the Age of Ubiquitous Parallelism

    NASA Astrophysics Data System (ADS)

    Pingali, Keshav

    2014-04-01

    Multicore and manycore processors are now ubiquitous, but parallel programming remains as difficult as it was 30-40 years ago. During this time, our community has explored many promising approaches, including functional and dataflow languages, logic programming, and automatic parallelization using program analysis and restructuring, but none of these approaches has succeeded except in a few niche application areas. In this talk, I will argue that these problems arise largely from the computation-centric foundations and abstractions that we currently use to think about parallelism. In their place, I will propose a novel data-centric foundation for parallel programming called the operator formulation, in which algorithms are described in terms of actions on data. The operator formulation shows that a generalized form of data-parallelism called amorphous data-parallelism is ubiquitous even in complex, irregular graph applications such as mesh generation/refinement/partitioning and SAT solvers. Regular algorithms emerge as a special case of irregular ones, and many application-specific optimization techniques can be generalized to a broader context. The operator formulation also leads to a structural analysis of algorithms called TAO-analysis that provides implementation guidelines for exploiting parallelism efficiently. Finally, I will describe a system called Galois, based on these ideas, for exploiting amorphous data-parallelism on multicores and GPUs.
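
    A serial skeleton of the operator formulation described in the talk: the algorithm is expressed as an operator applied to active nodes, and applying it may activate further nodes; a runtime such as Galois can then process many active nodes speculatively in parallel. The worklist and relaxation operator below are illustrative only.

      from collections import deque

      def worklist_run(graph, active, operator):
          """Data-centric skeleton: repeatedly apply the operator to an active
          node; the operator returns any neighbors it newly activates."""
          work = deque(active)
          while work:
              node = work.popleft()
              work.extend(operator(graph, node))

      # Toy example: breadth-first labelling expressed as an operator.
      graph = {0: [1, 2], 1: [3], 2: [3], 3: []}
      dist = {0: 0}
      def relax(g, u):
          out = []
          for v in g[u]:
              if v not in dist or dist[v] > dist[u] + 1:
                  dist[v] = dist[u] + 1
                  out.append(v)
          return out

      worklist_run(graph, [0], relax)   # dist == {0: 0, 1: 1, 2: 1, 3: 2}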

  15. High Performance Parallel Architectures

    NASA Technical Reports Server (NTRS)

    El-Ghazawi, Tarek; Kaewpijit, Sinthop

    1998-01-01

    Traditional remote sensing instruments are multispectral, where observations are collected at a few different spectral bands. Recently, many hyperspectral instruments, which can collect observations at hundreds of bands, have become operational. Furthermore, there have been ongoing research efforts on ultraspectral instruments that can produce observations at thousands of spectral bands. While these remote sensing technology developments hold great promise for new findings in the area of Earth and space science, they present many challenges. These include the need for faster processing of such increased data volumes, and methods for data reduction. Dimension reduction is a spectral transformation aimed at concentrating the vital information and discarding redundant data. One such transformation, which is widely used in remote sensing, is the Principal Components Analysis (PCA). This report summarizes our progress on the development of a parallel PCA and its implementation on two Beowulf cluster configurations: one with a Fast Ethernet switch and the other with a Myrinet interconnection. Details of the implementation and performance results, for typical sets of multispectral and hyperspectral NASA remote sensing data, are presented and analyzed based on the algorithm requirements and the underlying machine configuration. It will be shown that the PCA application is quite challenging and hard to scale on Ethernet-based clusters. However, the measurements also show that a high-performance interconnection network, such as Myrinet, better matches the high communication demand of PCA and can lead to a more efficient PCA execution.
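
    The parallel PCA rests on a simple reduction: each node contributes only a count, a sum vector, and a matrix of summed outer products, from which the covariance and its eigenvectors are formed once. A minimal single-process sketch of that reduction (NumPy stands in for the cluster's message passing):

      import numpy as np

      def pca_from_partial_sums(chunks, k=2):
          """Data-parallel PCA: reduce each block of spectra to (count, sum,
          sum of outer products); only these small partials need communicating,
          then the covariance and its eigenvectors are computed once."""
          n = sum(len(c) for c in chunks)
          s = sum(c.sum(axis=0) for c in chunks)
          ss = sum(c.T @ c for c in chunks)
          mean = s / n
          cov = ss / n - np.outer(mean, mean)   # cov = E[x xT] - mu muT
          vals, vecs = np.linalg.eigh(cov)
          return vecs[:, ::-1][:, :k]           # top-k principal directions

      bands = 64
      data = np.random.rand(10_000, bands)
      components = pca_from_partial_sums(np.array_split(data, 4), k=3)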

  16. Trajectories in parallel optics.

    PubMed

    Klapp, Iftach; Sochen, Nir; Mendlovic, David

    2011-10-01

    In our previous work we showed the ability to improve an optical system's matrix condition by optical design, thereby improving its robustness to noise. It was shown that by using singular value decomposition, a target point-spread function (PSF) matrix can be defined for an auxiliary optical system, which works in parallel with the original system to achieve such an improvement. In this paper, after briefly introducing the all-optics implementation of the auxiliary system, we show a method to decompose the target PSF matrix. This is done through a series of shifted responses of the auxiliary optics (named trajectories), where a complicated hardware filter is replaced by postprocessing. This process manipulates the pixel-confined PSF response of simple auxiliary optics, which in turn creates an auxiliary system with the required PSF matrix. The method is simulated on two space-variant systems and reduces their system condition numbers from 18,598 to 197 and from 87,640 to 5.75, respectively. We perform a study of the latter result and show significant improvement in image restoration performance, in comparison to a system without auxiliary optics and to other previously suggested hybrid solutions. Image restoration results show that in a range of low signal-to-noise ratio values, the trajectories method gives a significant advantage over alternative approaches. A third, space-invariant case study is explored only briefly, and we present a significant improvement in the matrix condition number from 1.9160e+013 to 34,526.
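
    A toy numerical illustration, not the paper's optical construction: lifting the small singular values of an ill-conditioned matrix with an additive (parallel) correction built from its SVD reduces the condition number, which is the quantity the trajectories method improves.

      import numpy as np

      rng = np.random.default_rng(0)
      U, _ = np.linalg.qr(rng.standard_normal((64, 64)))
      V, _ = np.linalg.qr(rng.standard_normal((64, 64)))
      s = np.logspace(0, -6, 64)                 # badly spread singular values
      H = U @ np.diag(s) @ V.T
      print(np.linalg.cond(H))                   # ~1e6

      # An auxiliary response added in parallel can lift the small singular values:
      H_aux = U @ np.diag(np.maximum(s, 1e-2) - s) @ V.T
      print(np.linalg.cond(H + H_aux))           # ~1e2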

  17. Stochastic PArallel Rarefied-gas Time-accurate Analyzer

    SciTech Connect

    Michael Gallis, Steve Plimpton

    2014-01-24

    The SPARTA package is software for simulating low-density fluids via the Direct Simulation Monte Carlo (DSMC) method, which is a particle-based method for tracking particle trajectories and collisions as a model of a multi-species gas. The main component of SPARTA is a simulation code which allows the user to specify a simulation domain, populate it with particles, embed triangulated surfaces as boundary conditions for the flow, overlay a grid for finding pairs of collision partners, and evolve the system in time via explicit timestepping. The package also includes various pre- and post-processing tools, useful for setting up simulations and analyzing the results. The simulation code runs either in serial on a single processor or desktop machine, or can be run in parallel using the MPI message-passing library, to enable faster performance on large problems.
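
    A drastically simplified 1-D sketch of the DSMC loop SPARTA implements: free-flight advection, then stochastic collisions between randomly paired particles within each grid cell. The collision acceptance probability here is a placeholder, not SPARTA's collision model.

      import numpy as np

      rng = np.random.default_rng(1)
      n, ncell, L, dt = 10_000, 50, 1.0, 1e-3
      x = rng.uniform(0, L, n)                   # particle positions
      v = rng.normal(0, 1.0, n)                  # 1-D thermal velocities

      for _ in range(100):
          x = (x + v * dt) % L                   # free flight in a periodic box
          cells = (x / L * ncell).astype(int)    # grid only pairs collision partners
          for c in range(ncell):
              idx = np.flatnonzero(cells == c)
              rng.shuffle(idx)
              for i, j in zip(idx[0::2], idx[1::2]):   # random pairs within a cell
                  if rng.random() < 0.1:               # toy acceptance probability
                      v[i], v[j] = v[j], v[i]          # hard-sphere-like 1-D exchange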

  18. Domain decomposition methods for a parallel Monte Carlo transport code

    SciTech Connect

    Alme, H J; Rodrigue, G H; Zimmerman, G B

    1999-01-27

    Achieving parallelism in simulations that use Monte Carlo transport methods presents interesting challenges. For problems that require domain decomposition, load balance can be harder to achieve. The Monte Carlo transport package may have to operate with other packages that have different optimal domain decompositions for a given problem. To examine some of these issues, we have developed a code that simulates the interaction of a laser with biological tissue; it uses a Monte Carlo method to simulate the laser and a finite element model to simulate the conduction of the temperature field in the tissue. We will present speedup and load balance results obtained for a suite of problems decomposed using a few domain decomposition algorithms we have developed.

  19. Stochastic PArallel Rarefied-gas Time-accurate Analyzer

    2014-01-24

    The SPARTA package is software for simulating low-density fluids via the Direct Simulation Monte Carlo (DSMC) method, which is a particle-based method for tracking particle trajectories and collisions as a model of a multi-species gas. The main component of SPARTA is a simulation code which allows the user to specify a simulation domain, populate it with particles, embed triangulated surfaces as boundary conditions for the flow, overlay a grid for finding pairs of collision partners, and evolve the system in time via explicit timestepping. The package also includes various pre- and post-processing tools, useful for setting up simulations and analyzing the results. The simulation code runs either in serial on a single processor or desktop machine, or can be run in parallel using the MPI message-passing library, to enable faster performance on large problems.

  20. RAGG - R EPISODIC AGGREGATION PACKAGE

    EPA Science Inventory

    The RAGG package is an R implementation of the CMAQ episodic model aggregation method developed by Constella Group and the Environmental Protection Agency. RAGG is a tool to provide climatological seasonal and annual deposition of sulphur and nitrogen for multimedia management. ...

  1. Comparison of different LED Packages

    NASA Astrophysics Data System (ADS)

    Dieker, Henning; Miesner, Christian; Püttjer, Dirk; Bachl, Bernhard

    2007-09-01

    In this paper different technologies for LED packaging are compared, focusing on Chip-on-Board (COB) and SMD technology. The package technology which is used depends on the LED application. A critical issue in LED technology is thermal management, especially for high-brightness LED applications, because the thermal management determines the reliability, lifetime, and electro-optical performance of the LED module. To design reliable, long-life LED applications, knowledge of the heat flow from the LEDs through the complete application is required. Highly sophisticated FEM simulations are indispensable for the modern development of high-power LED applications. We compare simulations of various substrate materials and packaging technologies performed with the FLOTHERM software. Thereby different substrates such as standard FR4, ceramic, and metal-core printed circuit boards are considered. For the verification of the simulated results and the testing of manufactured modules, advanced measurement tools are required. We show different ways to experimentally characterize the thermal behavior of LED modules. The thermal path is determined by transient thermal analysis using the MicReD T3Ster analyzer and is then compared with the conventional method using thermocouples. The heat distribution over the module is investigated with an IR camera. We demonstrate and compare simulation and measurement results for Chip-on-Board (COB) and surface-mounted device (SMD) technology. The results reveal which packages are best suited to particular applications.
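
    At steady state, the thermal path the paper measures reduces to a series chain of thermal resistances; a minimal sketch of that arithmetic, with illustrative numbers rather than values from the paper:

      def junction_temperature(t_ambient, power_w, r_junction_board, r_board_ambient):
          """Steady-state series thermal-resistance chain:
          Tj = Ta + P * (Rth_jb + Rth_ba)."""
          return t_ambient + power_w * (r_junction_board + r_board_ambient)

      # e.g. a 1 W LED on a metal-core PCB (values illustrative only, K/W)
      print(junction_temperature(25.0, 1.0, 10.0, 30.0))  # 65.0 degrees C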

  2. The Macro - Games Course Package.

    ERIC Educational Resources Information Center

    Heriot-Watt Univ., Edinburgh (Scotland). Esmee Fairbairn Economics Research Centre.

    Part of an Economic Education Series, the course package is designed to teach basic concepts and fundamental principles of macroeconomics and how they can be applied to various world problems. For use with college students, learning is gained through lectures, discussion, simulation games, programmed learning, and text. Time allotment is a 15-week…

  3. Improved switch-resistor packaging

    NASA Technical Reports Server (NTRS)

    Redmerski, R. E.

    1980-01-01

    Packaging approach makes resistors more accessible and easily identified with specific switches. Failures are repaired more quickly because of improved accessibility. A typical board includes one resistor that acts as a circuit breaker, and others are positioned so that their values can be easily measured when the switch is operated. The approach saves weight by using less wire and saves valuable panel space.

  4. ULFEM time series analysis package

    USGS Publications Warehouse

    Karl, Susan M.; McPhee, Darcy K.; Glen, Jonathan M. G.; Klemperer, Simon L.

    2013-01-01

    This manual describes how to use the Ultra-Low-Frequency ElectroMagnetic (ULFEM) software package. Casual users can read the quick-start guide and will probably not need any more information than this. For users who may wish to modify the code, we provide further description of the routines.

  5. Electronic Spreadsheet Packages for Microcomputers.

    ERIC Educational Resources Information Center

    Gibson, Larry M.

    1984-01-01

    Describes capabilities and advantages of spreadsheet software, including its ability to perform "what-if" analysis quickly and easily. Also noted are additional advantages; applications, including use in the library environment; history and development; new-generation spreadsheets; and enhanced packaging in the near future. A spreadsheet software…

  6. DATACUBE: A datacube manipulation package

    NASA Astrophysics Data System (ADS)

    Allan, Alasdair; Currie, Malcolm J.

    2014-05-01

    DATACUBE is a command-line package for manipulating and visualizing data cubes. It was designed for integral field spectroscopy but has been extended to be a generic data cube tool, used in particular for sub-millimeter data cubes from the James Clerk Maxwell Telescope. It is part of the Starlink software collection (ascl:1110.012).

  7. COLDMON -- Cold File Analysis Package

    NASA Astrophysics Data System (ADS)

    Rawlinson, D. J.

    The COLDMON package has been written to allow system managers to identify those items of software that are not used (or used infrequently) on their systems. It consists of a few command procedures and a Fortran program to analyze the results. It makes use of the AUDIT facility and security ACLs in VMS.

  8. Parallel Adaptive Mesh Refinement

    SciTech Connect

    Diachin, L; Hornung, R; Plassmann, P; WIssink, A

    2005-03-04

    As large-scale, parallel computers have become more widely available and numerical models and algorithms have advanced, the range of physical phenomena that can be simulated has expanded dramatically. Many important science and engineering problems exhibit solutions with localized behavior where highly-detailed salient features or large gradients appear in certain regions which are separated by much larger regions where the solution is smooth. Examples include chemically-reacting flows with radiative heat transfer, high Reynolds number flows interacting with solid objects, and combustion problems where the flame front is essentially a two-dimensional sheet occupying a small part of a three-dimensional domain. Modeling such problems numerically requires approximating the governing partial differential equations on a discrete domain, or grid. Grid spacing is an important factor in determining the accuracy and cost of a computation. A fine grid may be needed to resolve key local features while a much coarser grid may suffice elsewhere. Employing a fine grid everywhere may be inefficient at best and, at worst, may make an adequately resolved simulation impractical. Moreover, the location and resolution of fine grid required for an accurate solution is a dynamic property of a problem's transient features and may not be known a priori. Adaptive mesh refinement (AMR) is a technique that can be used with both structured and unstructured meshes to adjust local grid spacing dynamically to capture solution features with an appropriate degree of resolution. Thus, computational resources can be focused where and when they are needed most to efficiently achieve an accurate solution without incurring the cost of a globally-fine grid. Figure 1.1 shows two example computations using AMR; on the left is a structured mesh calculation of an impulsively-sheared contact surface and on the right is the fuselage and volume discretization of an RAH-66 Comanche helicopter [35]. Note the
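
    A minimal sketch of the flagging step that drives AMR: cells where the solution gradient exceeds a threshold are tagged for coverage by a finer grid. The gradient criterion is illustrative; production frameworks use richer error estimators.

      import numpy as np

      def flag_for_refinement(u, dx, threshold):
          """Tag cells whose gradient magnitude exceeds a threshold; tagged
          regions would be covered by finer patches in an AMR hierarchy."""
          grad = np.gradient(u, dx)
          return np.abs(grad) > threshold

      x = np.linspace(0, 1, 200)
      u = np.tanh((x - 0.5) / 0.01)       # sharp front in a smooth background
      tags = flag_for_refinement(u, x[1] - x[0], threshold=10.0)
      print(f"{tags.sum()} of {tags.size} cells flagged")  # only the front region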

  9. FDCSUSYDecay: An MSSM decay package

    NASA Astrophysics Data System (ADS)

    Qi, Wei; Wang, Jian-xiong

    2007-08-01

    FDCSUSYDecay is a FORTRAN program package generated fully automatically by the FDC (Feynman Diagram Calculation) system. It is dedicated to calculating, at tree level, all the possible 2-body decays of SUSY and Higgs particles in the Minimal Supersymmetric extension of the Standard Model (MSSM). The format of its output files complies with the SUSY Les Houches Accord and can be easily imported by other packages. Program summary: Manuscript title: FDCSUSYDecay: An MSSM decay package. Authors: Wei Qi, Jian-xiong Wang. Program title: FDCSUSYDecay (Version 1.00). Catalogue identifier: ADYV_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADYV_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 22 008. No. of bytes in distributed program, including test data, etc.: 622 751. Distribution format: tar.gz. Programming language: FORTRAN 77. Operating system: Linux. Keywords: SUSY decay, MSSM, FDC. PACS: 02.70.-c, 12.60.Jv. Classification: 11.1, 11.6. External routines: CERNLIB 2003 (or up). Nature of problem: This package can calculate all the possible SUSY-particle and Higgs 2-body decay widths and branching ratios at tree level in the MSSM. Solution method: By running FDC, the Feynman rules for the MSSM are generated, all the decay widths are calculated analytically, and corresponding FORTRAN code is generated for the package. Running time: Less than 1 second for both high-scale and low-scale modes on a Pentium IV 2.4 GHz machine (512 MB memory).

  10. The Galley Parallel File System

    NASA Technical Reports Server (NTRS)

    Nieuwejaar, Nils; Kotz, David

    1996-01-01

    Most current multiprocessor file systems are designed to use multiple disks in parallel, using the high aggregate bandwidth to meet the growing I/O requirements of parallel scientific applications. Many multiprocessor file systems provide applications with a conventional Unix-like interface, allowing the application to access multiple disks transparently. This interface conceals the parallelism within the file system, increasing the ease of programmability, but making it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. In addition to providing an insufficient interface, most current multiprocessor file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic scientific multiprocessor workloads. We discuss Galley's file structure and application interface, as well as the performance advantages offered by that interface.

  11. Parallel contingency statistics with Titan.

    SciTech Connect

    Thompson, David C.; Pebay, Philippe Pierre

    2009-09-01

    This report summarizes the existing statistical engines in VTK/Titan and presents the recently parallelized contingency statistics engine. It is a sequel to [PT08] and [BPRT09], which studied the parallel descriptive, correlative, multi-correlative, and principal component analysis engines. The ease of use of this new parallel engine is illustrated by means of C++ code snippets. Furthermore, this report justifies the design of these engines with parallel scalability in mind; however, the very nature of contingency tables prevents this new engine from exhibiting optimal parallel speed-up as the aforementioned engines do. This report therefore discusses the design trade-offs we made and studies performance with up to 200 processors.
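
    The scaling limitation is visible even in a toy reduction: each process builds its local contingency table cheaply, but the merged table grows with the number of distinct category pairs, and merging it is the step that caps speed-up. A sketch in plain Python, not the Titan API:

      from collections import Counter

      def local_contingency(pairs):
          """Per-process contingency table: counts of (x, y) category pairs."""
          return Counter(pairs)

      # Merging local tables is a reduction whose cost scales with the size of
      # the merged table, not with the size of the raw data.
      t1 = local_contingency([("a", 0), ("a", 1), ("b", 0)])
      t2 = local_contingency([("a", 1), ("b", 1), ("b", 1)])
      merged = t1 + t2          # Counter addition is the table reduction
      n = sum(merged.values())
      probabilities = {k: c / n for k, c in merged.items()}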

  12. Molecular Dynamics Simulations from SNL's Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS)

    DOE Data Explorer

    Plimpton, Steve; Thompson, Aidan; Crozier, Paul

    LAMMPS (http://lammps.sandia.gov/index.html) stands for Large-scale Atomic/Molecular Massively Parallel Simulator and is a code that can be used to model atoms or, as the LAMMPS website says, as a parallel particle simulator at the atomic, meso, or continuum scale. This Sandia-based website provides a long list of animations from large simulations. These were created using different visualization packages to read LAMMPS output, and each one provides the name of the PI and a brief description of the work done or visualization package used. See also the static images produced from simulations at http://lammps.sandia.gov/pictures.html The foundation paper for LAMMPS is: S. Plimpton, Fast Parallel Algorithms for Short-Range Molecular Dynamics, J Comp Phys, 117, 1-19 (1995), but the website also lists other papers describing contributions to LAMMPS over the years.

  13. Parallelization of Rocket Engine Simulator Software (PRESS)

    NASA Technical Reports Server (NTRS)

    Cezzar, Ruknet

    1997-01-01

    The Parallelization of Rocket Engine Simulator Software (PRESS) project is part of a collaborative effort with Southern University at Baton Rouge (SUBR), University of West Florida (UWF), and Jackson State University (JSU). The second-year funding, which supports two graduate students enrolled in our new Master's program in Computer Science at Hampton University and the principal investigator, has been obtained for the period from October 19, 1996 through October 18, 1997. The key part of the interim report was new directions for the second-year funding. This came about from discussions during the Rocket Engine Numeric Simulator (RENS) project meeting in Pensacola on January 17-18, 1997. At that time, a software agreement between Hampton University and NASA Lewis Research Center had already been concluded. That agreement concerns off-NASA-site experimentation with the PUMPDES/TURBDES software. Before this agreement, during the first year of the project, another large-scale FORTRAN-based software package, Two-Dimensional Kinetics (TDK), was being used for translation to an object-oriented language and parallelization experiments. However, that package proved to be too complex and lacking sufficient documentation for an effective translation effort to object-oriented C++ source code. The focus, this time with the better documented and more manageable PUMPDES/TURBDES package, was still on translation to C++ with design improvements. At the RENS meeting, however, the new impetus for the RENS projects in general, and PRESS in particular, shifted in two important ways. One was closer alignment with the work on the Numerical Propulsion System Simulator (NPSS) through cooperation and collaboration with the LERC ACLU organization. The other was to see whether and how NASA's various rocket design software can be run over local and intra nets without any radical efforts for redesign and translation into object-oriented source code. There were also suggestions that the Fortran based code be

  14. A new parallel-vector finite element analysis software on distributed-memory computers

    NASA Technical Reports Server (NTRS)

    Qin, Jiangning; Nguyen, Duc T.

    1993-01-01

    A new parallel-vector finite element analysis software package MPFEA (Massively Parallel-vector Finite Element Analysis) is developed for large-scale structural analysis on massively parallel computers with distributed-memory. MPFEA is designed for parallel generation and assembly of the global finite element stiffness matrices as well as parallel solution of the simultaneous linear equations, since these are often the major time-consuming parts of a finite element analysis. Block-skyline storage scheme along with vector-unrolling techniques are used to enhance the vector performance. Communications among processors are carried out concurrently with arithmetic operations to reduce the total execution time. Numerical results on the Intel iPSC/860 computers (such as the Intel Gamma with 128 processors and the Intel Touchstone Delta with 512 processors) are presented, including an aircraft structure and some very large truss structures, to demonstrate the efficiency and accuracy of MPFEA.

  15. Parallel optics technology assessment for the versatile link project

    SciTech Connect

    Chramowicz, J.; Kwan, S.; Rivera, R.; Prosser, A.; /Fermilab

    2011-01-01

    This poster describes the assessment of commercially available and prototype parallel optics modules for possible use as back end components for the Versatile Link common project. The assessment covers SNAP12 transmitter and receiver modules as well as optical engine technologies in dense packaging options. Tests were performed using vendor evaluation boards (SNAP12) as well as custom evaluation boards (optical engines). The measurements obtained were used to compare the performance of these components with single channel SFP+ components operating at a transmission wavelength of 850 nm over multimode fibers.

  16. Radiation-hard/high-speed parallel optical links

    NASA Astrophysics Data System (ADS)

    Gan, K. K.; Buchholz, P.; Heidbrink, S.; Kagan, H. P.; Kass, R. D.; Moore, J.; Smith, D. S.; Vogt, M.; Ziolkowski, M.

    2016-09-01

    We have designed and fabricated a compact parallel optical engine for transmitting data at 5 Gb/s. The device consists of a 4-channel ASIC driving a VCSEL (Vertical Cavity Surface Emitting Laser) array in an optical package. The ASIC is designed using only core transistors in a 65 nm CMOS process to enhance the radiation hardness. The ASIC contains an 8-bit DAC to control the bias and modulation currents of the individual channels in the VCSEL array. The performance of the optical engine at 5 Gb/s is satisfactory.

  17. Introducing data parallelism into climate model post-processing through a parallel version of the NCAR Command Language (NCL)

    NASA Astrophysics Data System (ADS)

    Jacob, R. L.; Xu, X.; Krishna, J.; Tautges, T.

    2011-12-01

    The relationship between the needs of post-processing climate model output and the capability of the available tools has reached a crisis point. The large volume of data currently produced by climate models is overwhelming the current, decades-old analysis workflow. The tools used to implement that workflow are now a bottleneck in the climate science discovery process. This crisis will only worsen as ultra-high-resolution global climate models with horizontal scales of 4 km or smaller, running on leadership computing facilities, begin to produce tens to hundreds of terabytes for a single, hundred-year climate simulation. While climate models have used parallelism for several years, the post-processing tools are still mostly single-threaded applications. We have created a Parallel Climate Analysis Library (ParCAL) which implements many common climate analysis operations in a data-parallel fashion using the Message Passing Interface. ParCAL has in turn been built on sophisticated packages for describing grids in parallel (the Mesh-Oriented database, MOAB) and for performing vector operations on arbitrary grids (Intrepid). ParCAL also uses parallel I/O through the PnetCDF library. ParCAL has been used to implement a parallel version of the NCAR Command Language (NCL). ParNCL/ParCAL not only speeds up the analysis of large datasets but also allows operations to be performed on native grids, eliminating the need to transform everything to latitude-longitude grids. In most cases, users' NCL scripts can run unaltered in parallel using ParNCL.

  18. Reference datasets for bioequivalence trials in a two-group parallel design.

    PubMed

    Fuglsang, Anders; Schütz, Helmut; Labes, Detlew

    2015-03-01

    In order to help companies qualify and validate the software used to evaluate bioequivalence trials with two parallel treatment groups, this work aims to define datasets with known results. This paper puts a total of 11 datasets into the public domain, along with a proposed consensus obtained via evaluations from six different software packages (R, SAS, WinNonlin, OpenOffice Calc, Kinetica, EquivTest). Insofar as possible, datasets were evaluated with and without the assumption of equal variances for the construction of a 90% confidence interval. Not all software packages provide functionality for the assumption of unequal variances (EquivTest, Kinetica), and not all packages can handle datasets with more than 1000 subjects per group (WinNonlin). Where results could be obtained across all packages, one showed questionable results when datasets contained unequal group sizes (Kinetica). A proposal is made for the results that should be used as validation targets.
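
    The consensus evaluation rests on a standard computation: a 90% confidence interval for the ratio of geometric means from a t-interval on log-transformed data, with or without the equal-variance assumption. A minimal sketch with SciPy, illustrative rather than taken from any of the six packages tested:

      import numpy as np
      from scipy import stats

      def ratio_ci_90(test, ref, equal_var=False):
          """90% CI for the ratio of geometric means in a two-group parallel
          design: Welch (or pooled) t-interval on log-transformed data,
          back-transformed with exp()."""
          lt, lr = np.log(test), np.log(ref)
          diff = lt.mean() - lr.mean()
          n1, n2 = len(lt), len(lr)
          if equal_var:
              sp2 = ((n1 - 1) * lt.var(ddof=1) + (n2 - 1) * lr.var(ddof=1)) / (n1 + n2 - 2)
              se, df = np.sqrt(sp2 * (1 / n1 + 1 / n2)), n1 + n2 - 2
          else:
              v1, v2 = lt.var(ddof=1) / n1, lr.var(ddof=1) / n2
              se = np.sqrt(v1 + v2)
              df = (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))
          t = stats.t.ppf(0.95, df)   # two-sided 90% interval
          return np.exp(diff - t * se), np.exp(diff + t * se)

      rng = np.random.default_rng(7)
      test = rng.lognormal(0.00, 0.25, 24)
      ref = rng.lognormal(0.05, 0.25, 24)
      print(ratio_ci_90(test, ref))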

  19. Sensory impacts of food-packaging interactions.

    PubMed

    Duncan, Susan E; Webster, Janet B

    2009-01-01

    Sensory changes in food products result from intentional or unintentional interactions with packaging materials and from failure of materials to protect product integrity or quality. Resolving sensory issues related to plastic food packaging involves knowledge provided by sensory scientists, materials scientists, packaging manufacturers, food processors, and consumers. Effective communication among scientists and engineers from different disciplines and industries can help scientists understand package-product interactions. Very limited published literature describes sensory perceptions associated with food-package interactions. This article discusses sensory impacts, with emphasis on oxidation reactions, associated with the interaction of food and materials, including taints, scalping, changes in food quality as a function of packaging, and examples of material innovations for smart packaging that can improve sensory quality of foods and beverages. Sensory evaluation is an important tool for improved package selection and development of new materials. PMID:19389606

  20. Amesos2 Templated Direct Sparse Solver Package

    2011-05-24

    Amesos2 is a templated direct sparse solver package. Amesos2 provides interfaces to direct sparse solvers, rather than providing native solver capabilities. Amesos2 is a derivative work of the Trilinos package Amesos.

  1. Flat-package DIP handling tool

    NASA Technical Reports Server (NTRS)

    Angelou, E.; Fraser, R.

    1977-01-01

    Device, using magnetic attraction, can facilitate handling of integrated-circuit flat packages and prevent contamination and bent leads. Tool lifts packages by their cases and releases them by operation of manual plunger.

  2. Packaging and the environment: a regulation update.

    PubMed

    Fielding, P

    2000-04-01

    This article reports on the progress that is being made on standards and associated documents to support the Packaging and Packaging Waste Directive. Potential revisions are also discussed. PMID:10947328

  3. POLO: a gigabyte/s parallel optical link

    NASA Astrophysics Data System (ADS)

    Hahn, Kenneth H.; Dolfi, David W.

    1996-01-01

    The Parallel Optical Link Organization (POLO) is an ARPA sponsored industry consortium consisting of four companies and one university. The members are Hewlett-Packard, AMP, Du Pont, SDL, and the University of Southern California. The consortium's goal is to develop a high speed (1 Gbyte/s) parallel optical interconnect module for applications in central office switching environments and clustered computing. Previous reports have described the general layout of the POLO interconnect module and reported preliminary results. In this paper, we discuss further progress to date on the POLO module and show results for a 10 channel module operating at 622 Mb/s per channel. In addition, we discuss the current performance limitations of the module, packaging issues associated with assembly, a testbed which utilizes the POLO interconnect for the transmission of high resolution images between workstations, and plans for the second generation POLO module.

  4. 7 CFR 65.130 - Consumer package.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 3 2011-01-01 2011-01-01 false Consumer package. 65.130 Section 65.130 Agriculture... OF BEEF, PORK, LAMB, CHICKEN, GOAT MEAT, PERISHABLE AGRICULTURAL COMMODITIES, MACADAMIA NUTS, PECANS, PEANUTS, AND GINSENG General Provisions Definitions § 65.130 Consumer package. Consumer package means...

  5. 7 CFR 65.130 - Consumer package.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 3 2010-01-01 2010-01-01 false Consumer package. 65.130 Section 65.130 Agriculture... OF BEEF, PORK, LAMB, CHICKEN, GOAT MEAT, PERISHABLE AGRICULTURAL COMMODITIES, MACADAMIA NUTS, PECANS, PEANUTS, AND GINSENG General Provisions Definitions § 65.130 Consumer package. Consumer package means...

  6. 27 CFR 6.93 - Combination packaging.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ..., DEPARTMENT OF THE TREASURY LIQUORS "TIED-HOUSE" Exceptions § 6.93 Combination packaging. The act by an industry member of packaging and distributing distilled spirits, wine, or malt beverages in combination... 27 Alcohol, Tobacco Products and Firearms 1 2011-04-01 2011-04-01 false Combination packaging....

  7. 7 CFR 33.6 - Package.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Package. 33.6 Section 33.6 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing... ISSUED UNDER AUTHORITY OF THE EXPORT APPLE ACT Definitions § 33.6 Package. Package means any container...

  8. 27 CFR 6.93 - Combination packaging.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2010-04-01 2010-04-01 false Combination packaging. 6..., DEPARTMENT OF THE TREASURY LIQUORS "TIED-HOUSE" Exceptions § 6.93 Combination packaging. The act by an industry member of packaging and distributing distilled spirits, wine, or malt beverages in...

  9. 9 CFR 354.72 - Packaging.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 9 Animals and Animal Products 2 2010-01-01 2010-01-01 false Packaging. 354.72 Section 354.72... CERTIFICATION VOLUNTARY INSPECTION OF RABBITS AND EDIBLE PRODUCTS THEREOF Supervision of Marking and Packaging § 354.72 Packaging. No container which bears or may bear any official identification or any...

  10. 76 FR 30551 - Specifications for Packagings

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-26

    ... Pipeline and Hazardous Materials Safety Administration 49 CFR Part 178 Specifications for Packagings CFR... on a packaging, a test report must be prepared. The test report must be maintained at each location where the packaging is manufactured and each location where the design qualification tests are...

  11. EDExpress Packaging Training, 2001-2002.

    ERIC Educational Resources Information Center

    Office of Student Financial Assistance (ED), Washington, DC.

    Packaging is the process of finding the best combination of aid to meet a student's financial need for college, given limited resources and the institutional constraints that vary from school to school. This guide to packaging under the EDExpress software system outlines three steps to packaging. The first is determining the student's need for…

  12. 7 CFR 58.640 - Packaging.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 3 2010-01-01 2010-01-01 false Packaging. 58.640 Section 58.640 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards... Procedures § 58.640 Packaging. The packaging of the semifrozen product shall be done by means which will...

  13. 27 CFR 19.186 - Package scales.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2012-04-01 2012-04-01 false Package scales. 19.186... Package Scale and Pipeline Requirements § 19.186 Package scales. Proprietors must ensure that scales used.... However, if a scale is not used during a 6-month period, it is only necessary to test the scale prior...

  14. 27 CFR 19.186 - Package scales.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2014-04-01 2014-04-01 false Package scales. 19.186... Package Scale and Pipeline Requirements § 19.186 Package scales. Proprietors must ensure that scales used.... However, if a scale is not used during a 6-month period, it is only necessary to test the scale prior...

  15. 27 CFR 19.186 - Package scales.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2011-04-01 2011-04-01 false Package scales. 19.186... Package Scale and Pipeline Requirements § 19.186 Package scales. Proprietors must ensure that scales used.... However, if a scale is not used during a 6-month period, it is only necessary to test the scale prior...

  16. 27 CFR 19.186 - Package scales.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2013-04-01 2013-04-01 false Package scales. 19.186... Package Scale and Pipeline Requirements § 19.186 Package scales. Proprietors must ensure that scales used.... However, if a scale is not used during a 6-month period, it is only necessary to test the scale prior...

  17. 27 CFR 6.93 - Combination packaging.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2012-04-01 2012-04-01 false Combination packaging. 6.93 Section 6.93 Alcohol, Tobacco Products and Firearms ALCOHOL AND TOBACCO TAX AND TRADE BUREAU, DEPARTMENT OF THE TREASURY LIQUORS "TIED-HOUSE" Exceptions § 6.93 Combination packaging. The act by an industry member of packaging and...

  18. 27 CFR 6.93 - Combination packaging.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2014-04-01 2014-04-01 false Combination packaging. 6.93 Section 6.93 Alcohol, Tobacco Products and Firearms ALCOHOL AND TOBACCO TAX AND TRADE BUREAU, DEPARTMENT OF THE TREASURY ALCOHOL "TIED-HOUSE" Exceptions § 6.93 Combination packaging. The act by an industry member of packaging and...

  19. 27 CFR 6.93 - Combination packaging.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2013-04-01 2013-04-01 false Combination packaging. 6.93 Section 6.93 Alcohol, Tobacco Products and Firearms ALCOHOL AND TOBACCO TAX AND TRADE BUREAU, DEPARTMENT OF THE TREASURY ALCOHOL "TIED-HOUSE" Exceptions § 6.93 Combination packaging. The act by an industry member of packaging and...

  20. 49 CFR 173.63 - Packaging exceptions.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    .... (c)-(e) (f) Detonators containing no more than 1 g explosive (excluding ignition and delay charges... which case they are excepted from the packaging requirements of § 173.62: (1) No more than 50 detonators... compartment is used as the outer packaging; (3) No more than 1000 detonators in one outer packaging; and...

  1. 21 CFR 820.130 - Device packaging.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Device packaging. 820.130 Section 820.130 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES QUALITY SYSTEM REGULATION Labeling and Packaging Control § 820.130 Device packaging. Each...

  2. 16 CFR 1702.12 - Packaging specifications.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 16 Commercial Practices 2 2013-01-01 2013-01-01 false Packaging specifications. 1702.12 Section 1702.12 Commercial Practices CONSUMER PRODUCT SAFETY COMMISSION POISON PREVENTION PACKAGING ACT OF 1970 REGULATIONS PETITIONS FOR EXEMPTIONS FROM POISON PREVENTION PACKAGING ACT REQUIREMENTS; PETITION...

  3. 16 CFR 1702.12 - Packaging specifications.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 16 Commercial Practices 2 2011-01-01 2011-01-01 false Packaging specifications. 1702.12 Section 1702.12 Commercial Practices CONSUMER PRODUCT SAFETY COMMISSION POISON PREVENTION PACKAGING ACT OF 1970 REGULATIONS PETITIONS FOR EXEMPTIONS FROM POISON PREVENTION PACKAGING ACT REQUIREMENTS; PETITION...

  4. 16 CFR 1702.12 - Packaging specifications.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 16 Commercial Practices 2 2012-01-01 2012-01-01 false Packaging specifications. 1702.12 Section 1702.12 Commercial Practices CONSUMER PRODUCT SAFETY COMMISSION POISON PREVENTION PACKAGING ACT OF 1970 REGULATIONS PETITIONS FOR EXEMPTIONS FROM POISON PREVENTION PACKAGING ACT REQUIREMENTS; PETITION...

  5. 16 CFR 1702.12 - Packaging specifications.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 16 Commercial Practices 2 2010-01-01 2010-01-01 false Packaging specifications. 1702.12 Section 1702.12 Commercial Practices CONSUMER PRODUCT SAFETY COMMISSION POISON PREVENTION PACKAGING ACT OF 1970 REGULATIONS PETITIONS FOR EXEMPTIONS FROM POISON PREVENTION PACKAGING ACT REQUIREMENTS; PETITION...

  6. 16 CFR 1702.12 - Packaging specifications.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 16 Commercial Practices 2 2014-01-01 2014-01-01 false Packaging specifications. 1702.12 Section 1702.12 Commercial Practices CONSUMER PRODUCT SAFETY COMMISSION POISON PREVENTION PACKAGING ACT OF 1970 REGULATIONS PETITIONS FOR EXEMPTIONS FROM POISON PREVENTION PACKAGING ACT REQUIREMENTS; PETITION...

  7. 9 CFR 381.144 - Packaging materials.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... consistent with the Food and Drug Administration's regulations regarding such guaranties (21 CFR 7.12 and 7... 9 Animals and Animal Products 2 2010-01-01 2010-01-01 false Packaging materials. 381.144 Section... Packaging materials. (a) Edible products may not be packaged in a container which is composed in whole or...

  8. 7 CFR 932.9 - Packaged olives.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 8 2013-01-01 2013-01-01 false Packaged olives. 932.9 Section 932.9 Agriculture... AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE OLIVES GROWN IN CALIFORNIA Order Regulating Handling Definitions § 932.9 Packaged olives. Packaged olives means (a) processed olives...

  9. 7 CFR 932.9 - Packaged olives.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Packaged olives. 932.9 Section 932.9 Agriculture... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE OLIVES GROWN IN CALIFORNIA Order Regulating Handling Definitions § 932.9 Packaged olives. Packaged olives means (a) processed olives...

  10. 7 CFR 932.9 - Packaged olives.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 8 2012-01-01 2012-01-01 false Packaged olives. 932.9 Section 932.9 Agriculture... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE OLIVES GROWN IN CALIFORNIA Order Regulating Handling Definitions § 932.9 Packaged olives. Packaged olives means (a) processed olives...

  11. 7 CFR 932.9 - Packaged olives.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 8 2014-01-01 2014-01-01 false Packaged olives. 932.9 Section 932.9 Agriculture... AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE OLIVES GROWN IN CALIFORNIA Order Regulating Handling Definitions § 932.9 Packaged olives. Packaged olives means (a) processed olives...

  12. 7 CFR 932.9 - Packaged olives.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 8 2011-01-01 2011-01-01 false Packaged olives. 932.9 Section 932.9 Agriculture... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE OLIVES GROWN IN CALIFORNIA Order Regulating Handling Definitions § 932.9 Packaged olives. Packaged olives means (a) processed olives...

  13. 9 CFR 381.144 - Packaging materials.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... consistent with the Food and Drug Administration's regulations regarding such guaranties (21 CFR 7.12 and 7... 9 Animals and Animal Products 2 2011-01-01 2011-01-01 false Packaging materials. 381.144 Section... Packaging materials. (a) Edible products may not be packaged in a container which is composed in whole or...

  14. 9 CFR 381.144 - Packaging materials.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... consistent with the Food and Drug Administration's regulations regarding such guaranties (21 CFR 7.12 and 7... 9 Animals and Animal Products 2 2012-01-01 2012-01-01 false Packaging materials. 381.144 Section... Packaging materials. (a) Edible products may not be packaged in a container which is composed in whole or...

  15. 9 CFR 381.144 - Packaging materials.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... consistent with the Food and Drug Administration's regulations regarding such guaranties (21 CFR 7.12 and 7... 9 Animals and Animal Products 2 2014-01-01 2014-01-01 false Packaging materials. 381.144 Section... Packaging materials. (a) Edible products may not be packaged in a container which is composed in whole or...

  16. 7 CFR 58.640 - Packaging.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 3 2011-01-01 2011-01-01 false Packaging. 58.640 Section 58.640 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards... Procedures § 58.640 Packaging. The packaging of the semifrozen product shall be done by means which will...

  17. 49 CFR 172.514 - Bulk packagings.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 49 Transportation 2 2011-10-01 2011-10-01 false Bulk packagings. 172.514 Section 172.514... SECURITY PLANS Placarding § 172.514 Bulk packagings. (a) Except as provided in paragraph (c) of this section, each person who offers for transportation a bulk packaging which contains a hazardous...

  18. Xyce Parallel Electronic Simulator - User's Guide, Version 1.0

    SciTech Connect

    HUTCHINSON, SCOTT A; KEITER, ERIC R.; HOEKSTRA, ROBERT J.; WATERS, LON J.; RUSSO, THOMAS V.; RANKIN, ERIC LAMONT; WIX, STEVEN D.

    2002-11-01

    This manual describes the use of the Xyce Parallel Electronic Simulator code for simulating electrical circuits at a variety of abstraction levels. The Xyce Parallel Electronic Simulator has been written to support, in a rigorous manner, the simulation needs of the Sandia National Laboratories electrical designers. As such, the development has focused on improving the capability over the current state-of-the-art in the following areas: (1) Capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors). Note that this includes support for most popular parallel and serial computers. (2) Improved performance for all numerical kernels (e.g., time integrator, nonlinear and linear solvers) through state-of-the-art algorithms and novel techniques. (3) A client-server or multi-tiered operating model wherein the numerical kernel can operate independently of the graphical user interface (GUI). (4) Object-oriented code design and implementation using modern coding practices that ensure that the Xyce Parallel Electronic Simulator will be maintainable and extensible far into the future. The code is a parallel code in the most general sense of the phrase--a message passing parallel implementation--which allows it to run efficiently on the widest possible number of computing platforms. These include serial, shared-memory and distributed-memory parallel as well as heterogeneous platforms. Furthermore, careful attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved even as the number of processors grows. Another feature required by designers is the ability to add device models, many specific to the needs of Sandia, to the code. To this end, the device package in the Xyce Parallel Electronic Simulator is designed to support a variety of device model inputs. These input formats include standard analytical models, behavioral models

  19. Development of a Package for Modeling Stress in the Lithosphere

    NASA Astrophysics Data System (ADS)

    Williams, C. A.

    2006-05-01

    One of the primary difficulties when modeling stresses in the Earth's lithosphere is finding a numerical code with the necessary capabilities. The lithosphere represents a unique challenge due to structural complexity, the presence of faults, complex materials with large spatial variations in their properties, a wide range of pertinent spatial and temporal scales, and interactions with other processes (such as mantle convection), leading to complex boundary conditions. To address such problems, a modeling package should have a number of features that are not generally found in combination. The code should be able to use a number of different element types, allowing the geometry to be represented using any desired meshing package. The code should be able to accurately represent fault behavior, allowing both kinematic specification of fault slip as well as fault behavior defined by a constitutive relationship. The code should include a number of different material models (various combinations of elastic, viscous, and plastic behavior) and should also provide an easy mechanism for adding new material models. The code should be parallel and scalable, allowing the simulation of problems over a wide range of spatial scales and resolutions. The code should also be able to easily interact with other modeling codes, which could address some of the issues related to representing multiple time scales, as well as aiding in the determination of appropriate boundary conditions. Finally, the code should be easy to use, modular, and easily adaptable to different needs. We describe the current status, development plans and example usage of a finite element code with the above features as design goals. The current quasi-static finite element code (LithoMop) is being merged with the EqSim dynamic rupture code to form a new modeling package to be named PyLith. The code makes use of the Pyre simulation framework, with top-level code written in Python. This provides a number of useful

  20. Parallel processing and expert systems

    NASA Technical Reports Server (NTRS)

    Lau, Sonie; Yan, Jerry C.

    1991-01-01

    Whether it be monitoring the thermal subsystem of Space Station Freedom, or controlling the navigation of the autonomous rover on Mars, NASA missions in the 1990s cannot enjoy an increased level of autonomy without the efficient implementation of expert systems. Merely increasing the computational speed of uniprocessors may not be able to guarantee that real-time demands are met for larger systems. Speedup via parallel processing must be pursued alongside the optimization of sequential implementations. Prototypes of parallel expert systems have been built at universities and industrial laboratories in the U.S. and Japan. The state-of-the-art research in progress related to parallel execution of expert systems is surveyed. The survey discusses multiprocessors for expert systems, parallel languages for symbolic computations, and mapping expert systems to multiprocessors. Results to date indicate that the parallelism achieved for these systems is small. The main reasons are (1) the body of knowledge applicable in any given situation and the amount of computation executed by each rule firing are small, (2) dividing the problem solving process into relatively independent partitions is difficult, and (3) implementation decisions that enable expert systems to be incrementally refined hamper compile-time optimization. In order to obtain greater speedups, data parallelism and application parallelism must be exploited.

  1. Parallel incremental compilation. Doctoral thesis

    SciTech Connect

    Gafter, N.M.

    1990-06-01

    The time it takes to compile a large program has been a bottleneck in the software development process. When an interactive programming environment with an incremental compiler is used, compilation speed becomes even more important, but existing incremental compilers are very slow for some types of program changes. We describe a set of techniques that enable incremental compilation to exploit fine-grained concurrency in a shared-memory multi-processor and achieve asymptotic improvement over sequential algorithms. Because parallel non-incremental compilation is a special case of parallel incremental compilation, the design of a parallel compiler is a corollary of our result. Instead of running the individual phases concurrently, our design specifies compiler phases that are mutually sequential. However, each phase is designed to exploit fine-grained parallelism. By allowing each phase to present its output as a complete structure rather than as a stream of data, we can apply techniques such as parallel prefix and parallel divide-and-conquer, and we can construct applicative data structures to achieve sublinear execution time. Parallel algorithms for each phase of a compiler are presented to demonstrate that a complete incremental compiler can achieve execution time that is asymptotically less than sequential algorithms.

  2. Template based parallel checkpointing in a massively parallel computer system

    DOEpatents

    Archer, Charles Jens; Inglett, Todd Alan

    2009-01-13

    A method and apparatus for a template-based parallel checkpoint save for a massively parallel supercomputer system using a parallel variation of the rsync protocol and network broadcast. In preferred embodiments, the checkpoint data for each node is compared to a template checkpoint file that resides in storage and that was previously produced. Embodiments herein greatly decrease the amount of data that must be transmitted and stored, for faster checkpointing and increased efficiency of the computer system. Embodiments are directed to a parallel computer system with nodes arranged in a cluster with a high-speed interconnect that can perform broadcast communication. The checkpoint contains a set of actual small data blocks with their corresponding checksums from all nodes in the system. The data blocks may be compressed using conventional non-lossy data compression algorithms to further reduce the overall checkpoint size.
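
    A minimal sketch of the rsync-like block comparison at the heart of this approach, under the assumption of fixed-size blocks and MD5 checksums (the patent does not prescribe these specific choices): only blocks whose checksums differ from the template are kept for the node's checkpoint.

      # Sketch of template-based delta checkpointing: compare fixed-size
      # blocks of a node's state against a stored template by checksum and
      # keep only the blocks that changed. Block size and hash are assumed.
      import hashlib

      BLOCK = 4096

      def block_checksums(data: bytes):
          return [hashlib.md5(data[i:i + BLOCK]).digest()
                  for i in range(0, len(data), BLOCK)]

      def delta_checkpoint(template: bytes, current: bytes):
          """Return {block_index: block_bytes} for blocks that differ."""
          t_sums = block_checksums(template)
          delta = {}
          for i in range(0, len(current), BLOCK):
              idx = i // BLOCK
              block = current[i:i + BLOCK]
              if idx >= len(t_sums) or hashlib.md5(block).digest() != t_sums[idx]:
                  delta[idx] = block
          return delta

      template = bytes(16 * BLOCK)            # previously broadcast template
      current = bytearray(template)
      current[5 * BLOCK] = 0xFF               # one block diverges on this node
      print(sorted(delta_checkpoint(template, bytes(current))))  # -> [5]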

  3. EFFICIENT SCHEDULING OF PARALLEL JOBS ON MASSIVELY PARALLEL SYSTEMS

    SciTech Connect

    F. PETRINI; W. FENG

    1999-09-01

    We present buffered coscheduling, a new methodology to multitask parallel jobs in a message-passing environment and to develop parallel programs that can pave the way to the efficient implementation of a distributed operating system. Buffered coscheduling is based on three innovative techniques: communication buffering, strobing, and non-blocking communication. By leveraging these techniques, we can perform effective optimizations based on the global status of the parallel machine rather than on the limited knowledge available locally to each processor. The advantages of buffered coscheduling include higher resource utilization, reduced communication overhead, efficient implementation of low-control strategies and fault-tolerant protocols, accurate performance modeling, and a simplified yet still expressive parallel programming model. Preliminary experimental results show that buffered coscheduling is very effective in increasing the overall performance in the presence of load imbalance and communication-intensive workloads.

  4. The `TTIME' Package: Performance Evaluation in a Cluster Computing Environment

    NASA Astrophysics Data System (ADS)

    Howe, Marico; Berleant, Daniel; Everett, Albert

    2011-06-01

    The objective of translating developmental event time across mammalian species is to gain an understanding of the timing of human developmental events based on the known timing of those events in animals. The potential benefits include improvements to diagnostic and intervention capabilities. The CRAN `ttime' package provides the functionality to infer unknown event timings and to investigate phylogenetic proximity using hierarchical clustering of both known and predicted event timings. The original generic mammalian model included nine eutherian mammals: Felis domestica (cat), Mustela putorius furo (ferret), Mesocricetus auratus (hamster), Macaca mulatta (monkey), Homo sapiens (human), Mus musculus (mouse), Oryctolagus cuniculus (rabbit), Rattus norvegicus (rat), and Acomys cahirinus (spiny mouse). However, the data for this model are expected to grow as more developmental events are identified and incorporated into the analysis. Evaluating the performance of the `ttime' package in a cluster computing environment against a comparative analysis in a serial computing environment provides an important computational performance assessment. A theoretical analysis is the first stage of a process whose second stage, if justified by the theoretical analysis, is to investigate an actual implementation of the `ttime' package in a cluster computing environment and to understand the parallelization process that underlies implementation.
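
    The `ttime' package itself is written in R; as a language-neutral illustration of the clustering step it performs, the Python sketch below hierarchically clusters species by event-timing vectors. The timing scores are invented for the example and do not come from the package's data:

      # Illustration only: hierarchical clustering of species by
      # (hypothetical) normalized event-timing scores.
      import numpy as np
      from scipy.cluster.hierarchy import linkage, fcluster

      species = ["mouse", "rat", "hamster", "rabbit", "monkey", "human"]
      timings = np.array([
          [0.10, 0.22, 0.35],
          [0.12, 0.24, 0.37],
          [0.11, 0.23, 0.36],
          [0.20, 0.33, 0.48],
          [0.40, 0.55, 0.72],
          [0.45, 0.60, 0.80],
      ])

      Z = linkage(timings, method="average", metric="euclidean")
      clusters = fcluster(Z, t=3, criterion="maxclust")
      for name, c in zip(species, clusters):
          print(name, "-> cluster", c)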

  5. Experimental Parallel-Processing Computer

    NASA Technical Reports Server (NTRS)

    Mcgregor, J. W.; Salama, M. A.

    1986-01-01

    Master processor supervises slave processors, each with its own memory. Computer with parallel processing serves as inexpensive tool for experimentation with parallel mathematical algorithms. Speed enhancement obtained depends on both nature of problem and structure of algorithm used. In parallel-processing architecture, "bank select" and control signals determine which one, if any, of N slave processor memories accessible to master processor at any given moment. When so selected, slave memory operates as part of master computer memory. When not selected, slave memory operates independently of main memory. Slave processors communicate with each other via input/output bus.

  6. Chip Scale Package Implementation Challenges

    NASA Technical Reports Server (NTRS)

    Ghaffarian, Reza

    1998-01-01

    The JPL-led MicrotypeBGA Consortium of enterprises representing government agencies and private companies have joined together to pool in-kind resources for developing the quality and reliability of chip scale packages (CSPs) for a variety of projects. In the process of building the Consortium CSP test vehicles, many challenges were identified regarding various aspects of technology implementation. This paper will present our experience in the areas of technology implementation challenges, including designing and building both standard and microvia boards, and assembling two types of test vehicles. We also discuss the most current package isothermal aging to 2,000 hours at 100 C and 125 C and thermal cycling test results to 1,700 cycles in the range of -30 to 100 C.

  7. Microwave thawing package and method

    DOEpatents

    Fathi, Zakaryae; Lauf, Robert J.

    2004-03-16

    A package for containing frozen liquids during an electromagnetic thawing process includes: a first section adapted for containing a frozen material and exposing the frozen material to electromagnetic energy; a second section adapted for receiving thawed liquid material and shielding the thawed liquid material from further exposure to electromagnetic energy; and a fluid communication means for allowing fluid flow between the first section and the second section.

  8. Transportation and packaging resource guide

    SciTech Connect

    Arendt, J.W.; Gove, R.M.; Welch, M.J.

    1994-12-01

    The purpose of this resource guide is to provide a convenient reference document of information that may be useful to the U.S. Department of Energy (DOE) and DOE contractor personnel involved in packaging and transportation activities. An attempt has been made to present the terminology of DOE community usage as it currently exists. DOE's mission is changing with emphasis on environmental cleanup. The terminology or nomenclature that has resulted from this expanded mission is included for the packaging and transportation user for reference purposes. Older terms still in use during the transition have been maintained. The Packaging and Transportation Resource Guide consists of four sections: Sect. 1, Introduction; Sect. 2, Abbreviations and Acronyms; Sect. 3, Definitions; and Sect. 4, References for packaging and transportation of hazardous materials and related activities, plus Appendices A and B. Information has been collected from DOE Orders and DOE documents; U.S. Department of Transportation (DOT), U.S. Environmental Protection Agency (EPA), and U.S. Nuclear Regulatory Commission (NRC) regulations; and International Atomic Energy Agency (IAEA) standards and other international documents. The definitions included in this guide may not always be a regulatory definition but are the more common DOE usage. In addition, the definitions vary among regulatory agencies. It is, therefore, suggested that if a definition is to be used in a regulatory or a legal compliance issue, the definition should be verified with the appropriate regulation. To assist in locating definitions in the regulations, a listing of all definition sections in the regulations is included in Appendix B. In many instances, the appropriate regulatory reference is indicated in the right-hand margin.

  9. Radioactive material package seal tests

    SciTech Connect

    Madsen, M.M.; Humphreys, D.L.; Edwards, K.R.

    1990-01-01

    General design or test performance requirements for radioactive materials (RAM) packages are specified in Title 10 of the US Code of Federal Regulations Part 71 (US Nuclear Regulatory Commission, 1983). The requirements for Type B packages provide a broad range of environments under which the system must contain the RAM without posing a threat to health or property. Seals that provide the containment system interface between the packaging body and the closure must function in both high- and low-temperature environments under dynamic and static conditions. A seal technology program, jointly funded by the US Department of Energy Office of Environmental Restoration and Waste Management (EM) and the Office of Civilian Radioactive Waste Management (OCRWM), was initiated at Sandia National Laboratories. Experiments were performed in this program to characterize the behavior of several static seal materials at low temperatures. Helium leak tests on face seals were used to compare the materials. Materials tested include butyl, neoprene, ethylene propylene, fluorosilicone, silicone, Eypel, Kalrez, Teflon, fluorocarbon, and Teflon/silicone composites. Because most elastomer O-ring applications are for hydraulic systems, manufacturer low-temperature ratings are based on methods that simulate this use. The seal materials tested in this program with a fixture similar to a RAM cask closure, with the exception of silicone S613-60, are not leak tight (1.0 {times} 10{sup {minus}7} std cm{sup 3}/s) at manufacturer low-temperature ratings. 8 refs., 3 figs., 1 tab.

  10. Auxiliary propulsion system flight package

    NASA Technical Reports Server (NTRS)

    Collett, C. R.

    1987-01-01

    Hughes Aircraft Company developed, qualified, and integrated for flight a flight-test Ion Auxiliary Propulsion System (IAPS) on an Air Force technology satellite. The IAPS Flight Package consists of two identical Thruster Subsystems and a Diagnostic Subsystem. Each thruster subsystem (TSS) is comprised of an 8-cm ion Thruster-Gimbal-Beam Shield Unit (TGBSU); Power Electronics Unit; Digital Controller and Interface Unit (DCIU); and Propellant Tank, Valve and Feed Unit (PTVFU) plus the requisite cables. The Diagnostic Subsystem (DSS) includes four types of sensors for measuring the effect of the ion thrusters on the spacecraft and the surrounding plasma. Flight qualification of IAPS, prior to installation on the spacecraft, consisted of performance, vibration and thermal-vacuum testing at the unit level, and thermal-vacuum testing at the subsystem level. Mutual compatibility between IAPS and the host spacecraft was demonstrated during a series of performance and environmental tests after the IAPS Flight Package was installed on the spacecraft. After a spacecraft acoustic test, performance of the ion thrusters was reverified by removing the TGBSUs for a thorough performance test at Hughes Research Laboratories (HRL). The TGBSUs were then reinstalled on the spacecraft. The IAPS Flight Package is ready for flight testing when Shuttle flights are resumed.

  11. Active packaging with antifungal activities.

    PubMed

    Nguyen Van Long, N; Joly, Catherine; Dantigny, Philippe

    2016-03-01

    There have been many reviews concerned with antimicrobial food packaging, and with the use of antifungal compounds, but none provided an exhaustive picture of the applications of active packaging to control fungal spoilage. Very recently, many studies have been done in these fields, therefore it is timely to review this topic. This article examines the effects of essential oils, preservatives, natural products, chemical fungicides, nanoparticles coated onto different films, and chitosan in vitro on the growth of moulds, but also in vivo on the mould-free shelf-life of bread, cheese, and fresh fruits and vegetables. A short section is also dedicated to yeasts. All the applications are described from a microbiological point of view, and these were sorted depending on the name of the species. Methods and results obtained are discussed. Essential oils and preservatives were ranked by increased efficacy on mould growth. For all the tested molecules, Penicillium species were shown to be more sensitive than Aspergillus species. However, comparison between the results was difficult because it appeared that the efficiency of active packaging depended greatly on the environmental factors of food such as water activity, pH, temperature, NaCl concentration, the nature, the size, and the mode of application of the films, in addition to the fact that the amount of released antifungal compounds was not constant with time. PMID:26803804

  12. Waste Package Design Methodology Report

    SciTech Connect

    D.A. Brownson

    2001-09-28

    The objective of this report is to describe the analytical methods and processes used by the Waste Package Design Section to establish the integrity of the various waste package designs, the emplacement pallet, and the drip shield. The scope of this report shall be the methodology used in criticality, risk-informed, shielding, source term, structural, and thermal analyses. The basic features and appropriateness of the methods are illustrated, and the processes are defined whereby input values and assumptions flow through the application of those methods to obtain designs that ensure defense-in-depth as well as satisfy requirements on system performance. Such requirements include those imposed by federal regulation, from both the U.S. Department of Energy (DOE) and U.S. Nuclear Regulatory Commission (NRC), and those imposed by the Yucca Mountain Project to meet repository performance goals. The report is to be used, in part, to describe the waste package design methods and techniques to be used for producing input to the License Application Report.

  13. "Programmed packaging" for gene delivery.

    PubMed

    Hyodo, M; Sakurai, Y; Akita, H; Harashima, H

    2014-11-10

    We report on the development of a multifunctional envelope-type nano device (MEND) based on our packaging concept, "Programmed packaging," to control not only intracellular trafficking but also the biodistribution of encapsulated compounds such as nucleic acids, proteins, and peptides. Our strategy for achieving this is based on molecular mechanisms of cell biology such as endocytosis and vesicular trafficking. In this review, we summarize the concept of programmed packaging and discuss some of our recent successful examples of using MENDs. Systematic evolution of ligands by exponential enrichment (SELEX) was applied as a new methodology for identifying new ligands toward cells or mitochondria. The delivery of siRNA to tumors and the tumor vasculature was achieved using a pH-sensitive lipid (YSK05), which was newly designed and optimized under in vivo conditions. The efficient delivery of pDNA to immune cells such as dendritic cells has also been developed using the KALA ligand, which can be a breakthrough technology for DNA vaccines. Finally, an SS-cleavable and pH-activated lipid-like surfactant (ssPalm) is also introduced as a proof of our concept. PMID:24780263

  15. The Model 9977 Radioactive Material Packaging Primer

    SciTech Connect

    Abramczyk, G.

    2015-10-09

    The Model 9977 Packaging is a single-containment, drum-style radioactive material (RAM) shipping container designed, tested, and analyzed to meet the performance requirements of Title 10 of the Code of Federal Regulations, Part 71. A radioactive material shipping package, in combination with its contents, must perform three functions (note that the performance criteria specified in the Code of Federal Regulations have alternate limits for normal operations and for after-accident conditions): containment, the package must "contain" the radioactive material within it; shielding, the packaging must limit its users and the public to radiation doses within specified limits; and subcriticality, the package must maintain its radioactive material as subcritical.

  16. RECLAMATION OF RADIOACTIVE MATERIAL PACKAGING COMPONENTS

    SciTech Connect

    Abramczyk, G.; Nathan, S.; Loftin, B.; Bellamy, S.

    2011-06-06

    Radioactive material packages are withdrawn from use for various reasons; loss of mission, decertification, damage, replacement, etc. While the packages themselves may be decertified, various components may still be able to perform to their required standards and find useful service. The Packaging Technology and Pressurized Systems group of the Savannah River National Laboratory has been reducing the cost of producing new Type B Packagings by reclaiming, refurbishing, and returning to service the containment vessels from older decertified packagings. The program and its benefits are presented.

  17. UWV (Unmanned Water Vehicle) - Umbra Package v. 1.0

    SciTech Connect

    Fred Oppel, SNL 06134

    2012-09-13

    This package contains modules that model the mobility of systems moving in the water. This package currently models first-order physics, basically a velocity integrator. This package depends on interface classes (typically base classes) that reside in the Mobility package.

  18. 49 CFR 178.915 - General Large Packaging standards.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... Large Packaging. (d) A Large Packaging consisting of packagings within a framework must be so constructed that the packaging is not damaged by the framework and is retained within the framework at...

  19. Parallel algorithms for matrix computations

    SciTech Connect

    Plemmons, R.J.

    1990-01-01

    The present conference on parallel algorithms for matrix computations encompasses both shared-memory systems and distributed-memory systems, as well as combinations of the two, to provide an overall perspective on parallel algorithms for both dense and sparse matrix computations in solving systems of linear equations, dense or structured problems related to least-squares computations, eigenvalue computations, singular-value computations, and rapid elliptic solvers. Specific issues addressed include the influence of parallel and vector architectures on algorithm design, computations for distributed-memory architectures such as hypercubes, solutions for sparse symmetric positive definite linear systems, symbolic and numeric factorizations, and triangular solutions. Also addressed are reference sources for parallel and vector numerical algorithms, sources for machine architectures, and sources for programming languages.

  20. Parallel architectures and neural networks

    SciTech Connect

    Calianiello, E.R.

    1989-01-01

    This book covers parallel computer architectures and neural networks. Topics include: neural modeling, use of ADA to simulate neural networks, VLSI technology, implementation of Boltzmann machines, and analysis of neural nets.

  1. "Feeling" Series and Parallel Resistances.

    ERIC Educational Resources Information Center

    Morse, Robert A.

    1993-01-01

    Equipped with drinking straws and stirring straws, a teacher can help students understand how resistances in electric circuits combine in series and in parallel. Follow-up suggestions are provided. (ZWH)

  2. Demonstrating Forces between Parallel Wires.

    ERIC Educational Resources Information Center

    Baker, Blane

    2000-01-01

    Describes a physics demonstration that dramatically illustrates the mutual repulsion (attraction) between parallel conductors using insulated copper wire, wooden dowels, a high direct current power supply, electrical tape, and an overhead projector. (WRM)
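
    The demonstration illustrates the standard result that two long parallel wires carrying currents I1 and I2 a distance d apart experience a force per unit length F/L = mu0*I1*I2/(2*pi*d), attractive for parallel currents and repulsive for antiparallel ones. A quick calculation with illustrative values:

      # Force per unit length between two long parallel wires.
      # Currents and separation below are illustrative demo values.
      import math

      MU0 = 4e-7 * math.pi  # permeability of free space, T*m/A

      def force_per_length(i1, i2, d):
          """Magnitude in N/m; sign/direction set by the current sense."""
          return MU0 * i1 * i2 / (2 * math.pi * d)

      # A high-current demo: 10 A in each wire, 5 mm apart.
      print(force_per_length(10, 10, 0.005))  # ~4.0e-3 N/m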

  3. Metal structures with parallel pores

    NASA Technical Reports Server (NTRS)

    Sherfey, J. M.

    1976-01-01

    Four methods of fabricating metal plates having uniformly sized parallel pores are studied: elongate bundle, wind and sinter, extrude and sinter, and corrugate stack. Such plates are suitable for electrodes for electrochemical and fuel cells.

  4. Parallel computation using limited resources

    SciTech Connect

    Sugla, B.

    1985-01-01

    This thesis addresses itself to the task of designing and analyzing parallel algorithms when the resources of processors, communication, and time are limited. The two parts of this thesis deal with multiprocessor systems and VLSI - the two important parallel processing environments that are prevalent today. In the first part a time-processor-communication tradeoff analysis is conducted for two kinds of problems - N input, 1 output, and N input, N output computations. In the class of problems of the second kind, the problem of prefix computation, an important problem due to the number of naturally occurring computations it can model, is studied. Finally, a general methodology is given for design of parallel algorithms that can be used to optimize a given design to a wide set of architectural variations. The second part of the thesis considers the design of parallel algorithms for the VLSI model of computation when the resource of time is severely restricted.

  5. Parallel algorithms for message decomposition

    SciTech Connect

    Teng, S.H.; Wang, B.

    1987-06-01

    The authors consider the deterministic and random parallel complexity (time and processor) of message decoding: an essential problem in communications systems and translation systems. They present an optimal parallel algorithm to decompose prefix-coded messages and uniquely decipherable-coded messages in O(n/P) time, using O(P) processors (for all P with 1 ≤ P ≤ n/log n), deterministically as well as randomly, on the weakest version of parallel random access machines, in which concurrent read and concurrent write to a cell in the common memory are not allowed. This is done by reducing decoding to parallel finite-state automata simulation and the prefix sums.
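
    A toy sketch of the reduction the authors describe, with a made-up prefix code (A=0, B=10, C=11): each input bit induces a map on decoder states, the maps compose associatively, and a prefix scan over the compositions locates codeword boundaries. The scan below runs sequentially for clarity; associativity is what would let it run as a parallel prefix.

      # States are positions in the code trie: 0 = root, 1 = after '1'.
      # Returning to the root marks the end of a codeword.
      STEP = {0: {"0": 0, "1": 1},   # from root: '0' completes A, '1' descends
              1: {"0": 0, "1": 0}}   # from state 1: either bit completes B or C

      def compose(f, g):
          """g after f, as dense state maps (associative, so scan-friendly)."""
          return {s: g[f[s]] for s in f}

      def decode_boundaries(bits):
          maps = [{s: STEP[s][b] for s in STEP} for b in bits]
          # Prefix scan over map composition; replaceable by a parallel scan.
          prefix, acc = [], {s: s for s in STEP}
          for m in maps:
              acc = compose(acc, m)
              prefix.append(acc[0])  # decoder state after this bit, from root
          return [i for i, s in enumerate(prefix) if s == 0]

      print(decode_boundaries("010110"))  # A|B|C|A -> boundaries [0, 2, 4, 5]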

  6. Turbomachinery CFD on parallel computers

    NASA Technical Reports Server (NTRS)

    Blech, Richard A.; Milner, Edward J.; Quealy, Angela; Townsend, Scott E.

    1992-01-01

    The role of multistage turbomachinery simulation in the development of propulsion system models is discussed. Particularly, the need for simulations with higher fidelity and faster turnaround time is highlighted. It is shown how such fast simulations can be used in engineering-oriented environments. The use of parallel processing to achieve the required turnaround times is discussed. Current work by several researchers in this area is summarized. Parallel turbomachinery CFD research at the NASA Lewis Research Center is then highlighted. These efforts are focused on implementing the average-passage turbomachinery model on MIMD, distributed memory parallel computers. Performance results are given for inviscid, single blade row and viscous, multistage applications on several parallel computers, including networked workstations.

  7. Portable, parallel, reusable Krylov space codes

    SciTech Connect

    Smith, B.; Gropp, W.

    1994-12-31

    Krylov space accelerators are an important component of many algorithms for the iterative solution of linear systems. Each Krylov space method has its own particular advantages and disadvantages, therefore it is desirable to have a variety of them available, all with an identical, easy-to-use interface. A common complaint application programmers have with available software libraries for the iterative solution of linear systems is that they require the programmer to use the data structures provided by the library; the library is not able to work with the data structures of the application code. Hence, application programmers find themselves constantly recoding the Krylov space algorithms. The Krylov space package (KSP) is a data-structure-neutral implementation of a variety of Krylov space methods including preconditioned conjugate gradient, GMRES, BiCG-Stab, transpose-free QMR and CGS. Unlike all other software libraries for linear systems that the authors are aware of, KSP will work with any application code's data structures, in Fortran or C. Due to its data-structure-neutral design, KSP runs unchanged on both sequential and parallel machines. KSP has been tested on workstations, the Intel i860 and Paragon, Thinking Machines CM-5 and the IBM SP1.
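
    An illustration of the data-structure-neutral idea using SciPy rather than the KSP interface itself: the conjugate gradient solver sees only a matvec callback, so the application never exposes its matrix storage format. The tridiagonal operator below is illustrative.

      # The solver only needs y = A @ x as a callback; the application keeps
      # its own data layout. This uses SciPy's CG, not the KSP library.
      import numpy as np
      from scipy.sparse.linalg import LinearOperator, cg

      n = 100

      def apply_tridiag(x):
          """Matvec for the 1-D Laplacian, stored implicitly (no matrix)."""
          y = 2.0 * x
          y[:-1] -= x[1:]
          y[1:] -= x[:-1]
          return y

      A = LinearOperator((n, n), matvec=apply_tridiag)
      b = np.ones(n)
      x, info = cg(A, b)
      print("converged" if info == 0 else "failed",
            np.linalg.norm(apply_tridiag(x) - b))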

  8. Multi-petascale highly efficient parallel supercomputer

    DOEpatents

    Asaad, Sameh; Bellofatto, Ralph E.; Blocksome, Michael A.; Blumrich, Matthias A.; Boyle, Peter; Brunheroto, Jose R.; Chen, Dong; Cher, Chen -Yong; Chiu, George L.; Christ, Norman; Coteus, Paul W.; Davis, Kristan D.; Dozsa, Gabor J.; Eichenberger, Alexandre E.; Eisley, Noel A.; Ellavsky, Matthew R.; Evans, Kahn C.; Fleischer, Bruce M.; Fox, Thomas W.; Gara, Alan; Giampapa, Mark E.; Gooding, Thomas M.; Gschwind, Michael K.; Gunnels, John A.; Hall, Shawn A.; Haring, Rudolf A.; Heidelberger, Philip; Inglett, Todd A.; Knudson, Brant L.; Kopcsay, Gerard V.; Kumar, Sameer; Mamidala, Amith R.; Marcella, James A.; Megerian, Mark G.; Miller, Douglas R.; Miller, Samuel J.; Muff, Adam J.; Mundy, Michael B.; O'Brien, John K.; O'Brien, Kathryn M.; Ohmacht, Martin; Parker, Jeffrey J.; Poole, Ruth J.; Ratterman, Joseph D.; Salapura, Valentina; Satterfield, David L.; Senger, Robert M.; Smith, Brian; Steinmacher-Burow, Burkhard; Stockdell, William M.; Stunkel, Craig B.; Sugavanam, Krishnan; Sugawara, Yutaka; Takken, Todd E.; Trager, Barry M.; Van Oosten, James L.; Wait, Charles D.; Walkup, Robert E.; Watson, Alfred T.; Wisniewski, Robert W.; Wu, Peng

    2015-07-14

    A Multi-Petascale Highly Efficient Parallel Supercomputer of 100 petaOPS-scale computing, at decreased cost, power and footprint, and that allows for a maximum packaging density of processing nodes from an interconnect point of view. The Supercomputer exploits technological advances in VLSI that enable a computing model where many processors can be integrated into a single Application Specific Integrated Circuit (ASIC). Each ASIC computing node comprises a system-on-chip ASIC utilizing four or more processors integrated into one die, each having full access to all system resources, enabling adaptive partitioning of the processors to functions such as compute or messaging I/O on an application-by-application basis and, preferably, adaptive partitioning of functions in accordance with various algorithmic phases within an application; if I/O or other processors are underutilized, they can participate in computation or communication. Nodes are interconnected by a five-dimensional torus network with DMA that optimally maximizes the throughput of packet communications between nodes and minimizes latency.
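
    A small sketch of node addressing on a five-dimensional torus like the interconnect described above; the dimension sizes are illustrative, not those of the patented machine. Each node has two links per dimension, with wraparound:

      # Neighbors of a node on a 5-D torus: one step in each direction along
      # each dimension, with wraparound links closing the torus.
      def torus_neighbors(coord, dims):
          """All 2*len(dims) neighbors of `coord` on a torus of shape `dims`."""
          nbrs = []
          for d in range(len(dims)):
              for step in (-1, 1):
                  n = list(coord)
                  n[d] = (n[d] + step) % dims[d]  # wraparound
                  nbrs.append(tuple(n))
          return nbrs

      dims = (4, 4, 4, 4, 4)                       # a 1024-node 5-D torus
      print(torus_neighbors((0, 3, 2, 0, 1), dims))  # 10 neighbors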

  9. Graphics applications utilizing parallel processing

    NASA Technical Reports Server (NTRS)

    Rice, John R.

    1990-01-01

    The results of research conducted to develop a parallel graphics application algorithm to depict the numerical solution of the 1-D wave equation, the vibrating string, are presented. The research was conducted on a Flexible Flex/32 multiprocessor and a Sequent Balance 21000 multiprocessor. The wave equation is implemented using the finite difference method. The synchronization issues that arose from the parallel implementation and the strategies used to alleviate the effects of the synchronization overhead are discussed.
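
    For reference, the explicit finite-difference update for the 1-D wave equation that the application depicts: each interior point advances from its two neighbors, which is precisely the locality that makes domain decomposition across processors natural. A serial sketch with illustrative parameters:

      # Explicit finite-difference scheme for the vibrating string,
      # u_tt = c^2 u_xx, with fixed ends. Parameters are illustrative.
      import numpy as np

      nx, nt = 101, 500
      c, dx, dt = 1.0, 0.01, 0.005          # CFL number c*dt/dx = 0.5 <= 1
      r2 = (c * dt / dx) ** 2

      x = np.linspace(0.0, 1.0, nx)
      u_prev = np.sin(np.pi * x)            # initial pluck
      u = u_prev.copy()                     # zero initial velocity

      for _ in range(nt):
          u_next = np.empty_like(u)
          u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                          + r2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
          u_next[0] = u_next[-1] = 0.0      # fixed string ends
          u_prev, u = u, u_next

      print("max displacement after %d steps: %.3f" % (nt, np.abs(u).max()))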

  10. HEATR project: ATR algorithm parallelization

    NASA Astrophysics Data System (ADS)

    Deardorf, Catherine E.

    1998-09-01

    High Performance Computing (HPC) Embedded Application for Target Recognition (HEATR) is a project funded by the High Performance Computing Modernization Office through the Common HPC Software Support Initiative (CHSSI). The goal of CHSSI is to produce portable, parallel, multi-purpose, freely distributable support software to exploit emerging parallel computing technologies and enable application of scalable HPCs for various critical DoD applications. Specifically, the CHSSI goal for HEATR is to provide portable, parallel versions of several existing ATR detection and classification algorithms to the ATR-user community to achieve near real-time capability. The HEATR project will create parallel versions of existing automatic target recognition (ATR) detection and classification algorithms and generate reusable code that will support the porting and software development process for ATR HPC software. The HEATR Team has selected detection/classification algorithms from both the model-based and training-based (template-based) arenas in order to consider the parallelization requirements for detection/classification algorithms across ATR technology. This allows the Team to assess the impact that parallelization would have on detection/classification performance across ATR technology. A field demo is included in this project. Finally, any parallel tools produced to support the project will be refined and returned to the ATR user community along with the parallel ATR algorithms. This paper will review: (1) HPCMP structure as it relates to HEATR, (2) overall structure of the HEATR project, (3) preliminary results for the first algorithm Alpha Test, (4) CHSSI requirements for HEATR, and (5) project management issues and lessons learned.

  11. EXAMINATION OF SHIPPING PACKAGE 9975-05050

    SciTech Connect

    Daugherty, W.

    2014-11-06

    Shipping package 9975-05050 was examined in K-Area following its identification as a high wattage package. Elevated temperature and fiberboard moisture content are key parameters that impact the degradation rate of fiberboard within 9975 packages in a storage environment. The high wattage of this package contributes significantly to component temperatures. After examination in K-Area, the package was provided to SRNL for further examination of the fiberboard assembly. The moisture content of the fiberboard was relatively low (compared to packages examined previously), but the moisture gradient (between fiberboard ID and OD surfaces) was relatively high, as would be expected for the high heat load. The cane fiberboard appeared intact and displayed no apparent change in integrity relative to a new package.

  12. Study of the characteristics of the grains in the coma background and in the jets in comet 67P/C-G, as observed by VIRTIS-M onboard of the Rosetta mission

    NASA Astrophysics Data System (ADS)

    Tozzi, Gian-Paolo; Rinaldi, G.; Fink, U.; Doose, L.; Capaccioni, F.; Filacchione, G.; Bockelée-Morvan, D.; Erard, S.; Leyrat, C.; Arnold, G.; Blecka, M.; Capria, M. T.; Ciarniello, M.; Combi, M.; Faggi, S.; Irwin, P.; Migliorini, A.; Paolomba, E.; Piccioni, G.; Tosi, F.

    2015-11-01

    We report observations of the coma of comet 67P/C-G performed in the near-IR by VIRTIS-M during the escort phase in April 2015. We selected observations performed when the spacecraft was at about 150 km from the nucleus, in order to cover the greatest part of the coma. We have chosen observations: a) with a diffuse coma without any evident strong jets and b) with strong jets originating from the “neck” region of the nucleus. We analyzed the changes in intensity and spectral behavior of the coma along the projected nucleocentric distance, for both the diffuse coma and the jets. The results show that: - The emission of the grains in the diffuse coma goes as 1/rho within the FoV of VIRTIS (about 2 km), suggesting the absence of grain fragmentation or sublimation. In the region close to the surface, within about 400 m, there is an increase of the emission, which is probably due to instrumental scattered light from the nucleus that can hide the effects of grain acceleration. - For the grains in the jets there is likewise no evidence of fragmentation or sublimation in the spectral region where scattering of the solar radiation is the emission mechanism. In the thermal region, by contrast, there are strong variations between the regions close to the nucleus and the farther ones. The authors would like to thank ASI (I), CNES (F), DLR (D), NASA (USA) for supporting this research. VIRTIS was built by a consortium formed by Italy, France and Germany, under the scientific responsibility of the “Istituto di Astrofisica e Planetologia Spaziale” of INAF (I), which also guides the scientific operations. The consortium also includes the “Laboratoire d'études spatiales et d'instrumentation en Astrophysique” of the Observatoire de Paris (F), and the “Institut für Planetenforschung” of DLR (D). The authors wish to thank the Rosetta Science Ground Segment and the Rosetta Mission Operations Centre of ESA for their continuous support.

  13. Efficiency of parallel direct optimization

    NASA Technical Reports Server (NTRS)

    Janies, D. A.; Wheeler, W. C.

    2001-01-01

    Tremendous progress has been made at the level of sequential computation in phylogenetics. However, little attention has been paid to parallel computation. Parallel computing is particularly suited to phylogenetics because of the many ways large computational problems can be broken into parts that can be analyzed concurrently. In this paper, we investigate the scaling factors and efficiency of random addition and tree refinement strategies using the direct optimization software, POY, on a small (10 slave processors) and a large (256 slave processors) cluster of networked PCs running LINUX. These algorithms were tested on several data sets composed of DNA and morphology ranging from 40 to 500 taxa. Various algorithms in POY show fundamentally different properties within and between clusters. All algorithms are efficient on the small cluster for the 40-taxon data set. On the large cluster, multibuilding exhibits excellent parallel efficiency, whereas parallel building is inefficient. These results are independent of data set size. Branch swapping in parallel shows excellent speed-up for 16 slave processors on the large cluster. However, there is no appreciable speed-up for branch swapping with the further addition of slave processors (>16). This result is independent of data set size. Ratcheting in parallel is efficient with the addition of up to 32 processors in the large cluster. This result is independent of data set size. ©2001 The Willi Hennig Society.
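
    The speed-up and efficiency figures discussed above follow the conventional strong-scaling definitions; below is a minimal sketch in Python, with hypothetical timing values rather than numbers from the POY study:

      # Conventional strong-scaling metrics: speed-up S(p) = T(1)/T(p)
      # and efficiency E(p) = S(p)/p. All timings below are made up.

      def speedup(t_serial: float, t_parallel: float) -> float:
          return t_serial / t_parallel

      def efficiency(t_serial: float, t_parallel: float, n_procs: int) -> float:
          # 1.0 means ideal linear scaling on n_procs processors.
          return speedup(t_serial, t_parallel) / n_procs

      # Hypothetical example: 3200 s serially, 250 s on 16 slave processors.
      s = speedup(3200.0, 250.0)         # 12.8
      e = efficiency(3200.0, 250.0, 16)  # 0.8, i.e. 80% parallel efficiency
      print(f"speed-up = {s:.1f}, efficiency = {e:.0%}")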

  14. Xyce Parallel Electronic Simulator: users' guide, version 2.0.

    SciTech Connect

    Hoekstra, Robert John; Waters, Lon J.; Rankin, Eric Lamont; Fixel, Deborah A.; Russo, Thomas V.; Keiter, Eric Richard; Hutchinson, Scott Alan; Pawlowski, Roger Patrick; Wix, Steven D.

    2004-06-01

    This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator capable of simulating electrical circuits at a variety of abstraction levels. Primarily, Xyce has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state-of-the-art in the following areas:
    - Capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors). Note that this includes support for most popular parallel and serial computers.
    - Improved performance for all numerical kernels (e.g., time integrator, nonlinear and linear solvers) through state-of-the-art algorithms and novel techniques.
    - Device models which are specifically tailored to meet Sandia's needs, including many radiation-aware devices.
    - A client-server or multi-tiered operating model wherein the numerical kernel can operate independently of the graphical user interface (GUI).
    - Object-oriented code design and implementation using modern coding practices that ensure that the Xyce Parallel Electronic Simulator will be maintainable and extensible far into the future.
    Xyce is a parallel code in the most general sense of the phrase - a message-passing parallel implementation - which allows it to run efficiently on the widest possible number of computing platforms, including serial, shared-memory, and distributed-memory parallel as well as heterogeneous platforms. Careful attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows. One feature required by designers is the ability to add device models, many specific to the needs of Sandia, to the code. To this end, the device package in the Xyce
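
    The message-passing parallelism described above can be illustrated generically. The sketch below is not Xyce code (Xyce itself is written in C++) and uses no Xyce API; it is a minimal mpi4py illustration, with a made-up workload, of the pattern in which independent pieces of work (here, stand-ins for device evaluations) are divided among ranks and the partial results combined:

      # Generic message-passing sketch (mpi4py), NOT Xyce's actual API.
      # Each rank evaluates its share of independent "devices", then the
      # partial contributions are summed on rank 0.
      # Run with e.g.: mpirun -np 4 python sketch.py
      from mpi4py import MPI
      import numpy as np

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      n_devices = 1000                      # hypothetical problem size
      mine = range(rank, n_devices, size)   # round-robin ownership of devices

      # Stand-in for stamping each device's contribution into a residual vector.
      local_residual = np.zeros(8)
      for d in mine:
          local_residual[d % 8] += 1.0 / (1 + d)

      total = np.zeros_like(local_residual)
      comm.Reduce(local_residual, total, op=MPI.SUM, root=0)
      if rank == 0:
          print("assembled residual:", total)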

  15. 49 CFR 178.905 - Large Packaging identification codes.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 49 Transportation 3 2011-10-01 2011-10-01 false Large Packaging identification codes. 178.905... PACKAGINGS Large Packagings Standards § 178.905 Large Packaging identification codes. Large packaging code... letter(s) specified in paragraph (b) of this section. (a) Large packaging code number designations are...

  16. 49 CFR 178.935 - Standards for wooden Large Packagings.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 49 Transportation 3 2011-10-01 2011-10-01 false Standards for wooden Large Packagings. 178.935... PACKAGINGS Large Packagings Standards § 178.935 Standards for wooden Large Packagings. (a) The provisions in this section apply to wooden Large Packagings intended to contain solids. Wooden Large Packaging...

  17. Endpoint-based parallel data processing in a parallel active messaging interface of a parallel computer

    DOEpatents

    Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.

    2014-08-12

    Endpoint-based parallel data processing in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing a data communications geometry, the geometry specifying, for tasks representing processes of execution of the parallel application, a set of endpoints that are used in collective operations of the PAMI including a plurality of endpoints for one of the tasks; receiving in endpoints of the geometry an instruction for a collective operation; and executing the instruction for a collective operation through the endpoints in dependence upon the geometry, including dividing data communications operations among the plurality of endpoints for one of the tasks.
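
    The claim's central idea (one task owning a plurality of endpoints, with a collective operation's data divided among them) can be modeled as a toy in a few lines. All names below are hypothetical; this is an illustration of the concept, not the patented PAMI implementation:

      # Toy model: a task owns several endpoints (one per context/thread),
      # and a collective operation's data is divided among those endpoints.
      # Hypothetical names throughout; not the PAMI API.
      from dataclasses import dataclass

      @dataclass
      class Endpoint:
          client: str   # data communications client
          context: int  # context: a threading point of contact
          task: int     # task: a process of execution of the application

      def divide_collective(data, endpoints):
          """Split `data` into per-endpoint chunks (ceiling-divided)."""
          n = len(endpoints)
          chunk = (len(data) + n - 1) // n
          return {ep.context: data[i * chunk:(i + 1) * chunk]
                  for i, ep in enumerate(endpoints)}

      # One task (task 0) with a plurality of endpoints:
      eps = [Endpoint("app", ctx, task=0) for ctx in range(4)]
      print(divide_collective(list(range(10)), eps))
      # {0: [0, 1, 2], 1: [3, 4, 5], 2: [6, 7, 8], 3: [9]}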

  18. Endpoint-based parallel data processing in a parallel active messaging interface of a parallel computer

    DOEpatents

    Archer, Charles J; Blocksome, Michael E; Ratterman, Joseph D; Smith, Brian E

    2014-02-11

    Endpoint-based parallel data processing in a parallel active messaging interface ('PAMI') of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing a data communications geometry, the geometry specifying, for tasks representing processes of execution of the parallel application, a set of endpoints that are used in collective operations of the PAMI including a plurality of endpoints for one of the tasks; receiving in endpoints of the geometry an instruction for a collective operation; and executing the instruction for a collective operation through the endpoints in dependence upon the geometry, including dividing data communications operations among the plurality of endpoints for one of the tasks.

  19. Time-Dependent, Parallel Neutral Particle Transport Code System.

    2009-09-10

    Version 00 PARTISN (PARallel, TIme-Dependent SN) is the evolutionary successor to CCC-547/DANTSYS. The PARTISN code package is a modular computer program package designed to solve the time-independent or time-dependent multigroup discrete ordinates form of the Boltzmann transport equation in several different geometries. The modular construction of the package separates the input processing, the transport equation solving, and the post-processing (or edit) functions into distinct code modules: the Input Module, the Solver Module, and the Edit Module, respectively. The Input and Edit Modules in PARTISN are very similar to those in DANTSYS. However, unlike DANTSYS, the Solver Module in PARTISN contains one-, two-, and three-dimensional solvers in a single module. In addition to the diamond-differencing method, the Solver Module also has Adaptive Weighted Diamond-Differencing (AWDD), Linear Discontinuous (LD), and Exponential Discontinuous (ED) spatial differencing methods. The spatial mesh may consist of either a standard orthogonal mesh or a block-adaptive orthogonal mesh. The Solver Module may be run in parallel for two- and three-dimensional problems. One can now run 1-D problems in parallel using Energy Domain Decomposition (EDD, triggered by the Block 5 input keyword npeg>0). EDD can also be used in 2-D/3-D with or without our standard Spatial Domain Decomposition. Both the static (fixed source or eigenvalue) and time-dependent forms of the transport equation are solved in forward or adjoint mode. In addition, PARTISN now has a probabilistic mode for Probability of Initiation (static) and Probability of Survival (dynamic) calculations. Vacuum, reflective, periodic, white, or inhomogeneous boundary conditions are solved. General anisotropic scattering and inhomogeneous sources are permitted. PARTISN solves the transport equation on orthogonal (single-level or block-structured AMR) grids in 1-D
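
    As a concrete illustration of the diamond-differencing method named above, here is a minimal sketch (one-group, 1-D slab, fixed source, vacuum boundaries); the mesh and cross sections are invented for the example, and PARTISN itself is vastly more general:

      # Minimal 1-D, one-group SN sweep with diamond differencing and
      # source iteration on the scattering term. Illustrative only:
      # all numbers are made up; this is not PARTISN.
      import numpy as np

      nx, width = 100, 10.0                # cells, slab width (cm)
      dx = width / nx
      sig_t, sig_s, q_ext = 1.0, 0.5, 1.0  # total, scattering, external source
      mu, w = np.polynomial.legendre.leggauss(8)  # S8 Gauss-Legendre quadrature

      phi = np.zeros(nx)                   # scalar flux
      for _ in range(500):                 # source iteration
          q = 0.5 * (sig_s * phi + q_ext)  # isotropic emission density
          phi_new = np.zeros(nx)
          for m, wm in zip(mu, w):
              cells = range(nx) if m > 0 else range(nx - 1, -1, -1)
              psi_in = 0.0                 # vacuum boundary on the incoming face
              a = abs(m) / dx
              for i in cells:
                  # Diamond difference: the cell-average flux is the
                  # average of the incoming and outgoing face fluxes.
                  psi_out = (q[i] + (a - 0.5 * sig_t) * psi_in) / (a + 0.5 * sig_t)
                  phi_new[i] += wm * 0.5 * (psi_in + psi_out)
                  psi_in = psi_out
          done = np.max(np.abs(phi_new - phi)) < 1e-8
          phi = phi_new
          if done:
              break
      print("midplane scalar flux:", phi[nx // 2])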

  20. ParCAT: A Parallel Climate Analysis Toolkit

    NASA Astrophysics Data System (ADS)

    Haugen, B.; Smith, B.; Steed, C.; Ricciuto, D. M.; Thornton, P. E.; Shipman, G.

    2012-12-01

    Climate science has employed increasingly complex models and simulations to analyze the past and predict the future of our climate. The size and dimensionality of climate simulation data have been growing with the complexity of the models. This growth in data is creating a widening gap between the data being produced and the tools necessary to analyze large, high-dimensional data sets. With single-run data sets growing into the tens, hundreds, and even thousands of gigabytes, parallel computing tools are becoming a necessity for analyzing and comparing climate simulation data. The Parallel Climate Analysis Toolkit (ParCAT) provides basic tools that use parallel computing techniques efficiently to narrow the gap between data set size and analysis tools. ParCAT was created as a collaborative effort between climate scientists and computer scientists in order to provide efficient parallel implementations of the computing tools that are of use to climate scientists. Some of the basic functionalities included in the toolkit are the ability to compute spatio-temporal means and variances, differences between two runs, and histograms of the values in a data set. ParCAT is designed to facilitate the "heavy lifting" that is required for large, multidimensional data sets. The toolkit does not focus on performing the final visualizations and presentation of results but rather on reducing large data sets to smaller, more manageable summaries. The output from ParCAT is provided in commonly used file formats (NetCDF, CSV, ASCII) to allow for simple integration with other tools. The toolkit is currently implemented as a command-line utility, but will likely also provide a C library for developers interested in tighter software integration. Elements of the toolkit are already being incorporated into projects such as UV-CDAT and CMDX. There is also an effort underway to implement portions of the CCSM Land Model Diagnostics package using ParCAT in conjunction with Python and gnuplot. Par
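
    As a sketch of the kind of "heavy lifting" described above, a spatio-temporal mean can be computed in parallel by dividing time slices among MPI ranks and reducing the partial sums. The file and variable names below are hypothetical, and this is not ParCAT's actual code:

      # Parallel spatio-temporal mean over a NetCDF variable (mpi4py +
      # netCDF4). Hypothetical file/variable names; not ParCAT code.
      # Run with e.g.: mpirun -np 8 python mean.py
      from mpi4py import MPI
      from netCDF4 import Dataset
      import numpy as np

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      ds = Dataset("simulation_output.nc")  # hypothetical single-run file
      var = ds.variables["TSA"]             # hypothetical (time, lat, lon) field

      # Each rank reads and sums only its stride of time slices.
      local_sum, local_count = 0.0, 0
      for t in range(rank, var.shape[0], size):
          sl = np.asarray(var[t, :, :])
          local_sum += float(sl.sum())
          local_count += sl.size

      total_sum = comm.allreduce(local_sum, op=MPI.SUM)
      total_count = comm.allreduce(local_count, op=MPI.SUM)
      if rank == 0:
          print("spatio-temporal mean:", total_sum / total_count)
      ds.close()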