Science.gov

Sample records for parallel pcg package

  1. Improving matrix-vector product performance and multi-level preconditioning for the parallel PCG package

    SciTech Connect

    McLay, R.T.; Carey, G.F.

    1996-12-31

    In this study we consider the parallel solution of sparse linear systems arising from discretized PDEs. As part of our continuing work on our parallel PCG solver package, we have made improvements in two areas. The first is improved performance of the matrix-vector product: on regular finite-difference grids, we are able to use cache memory more efficiently for smaller domains or where there are multiple degrees of freedom. The second problem of interest in the present work is the construction of preconditioners for the parallel PCG solver we are developing. Here the problem is partitioned over a set of processor subdomains, and the matrix-vector product for PCG is carried out in parallel over overlapping grid subblocks. For scaled-speedup problems, the rate of convergence of the unpreconditioned system deteriorates as the mesh is refined; multigrid and subdomain strategies provide a logical approach to resolving this problem. We consider the parallel trade-offs between communication and computation and provide a complexity analysis of a representative algorithm. Preliminary calculations using the parallel package, comparisons with other preconditioners, and parallel performance results are provided.

  2. PCG: A software package for the iterative solution of linear systems on scalar, vector and parallel computers

    SciTech Connect

    Joubert, W.; Carey, G.F.

    1994-12-31

    A great need exists for high performance numerical software libraries transportable across parallel machines. This talk concerns the PCG package, which solves systems of linear equations by iterative methods on parallel computers. The features of the package are discussed, as well as the techniques used to obtain high performance and transportability across architectures. Representative numerical results are presented for several machines including the Connection Machine CM-5, Intel Paragon and Cray T3D parallel computers.

  3. PCG reference manual: A package for the iterative solution of large sparse linear systems on parallel computers. Version 1.0

    SciTech Connect

    Joubert, W.D.; Carey, G.F.; Kohli, H.; Lorber, A.; McLay, R.T.; Shen, Y.; Berner, N.A.; Kalhan, A.

    1995-01-01

    PCG (Preconditioned Conjugate Gradient package) is a system for solving linear equations of the form Au = b, for A a given matrix and b and u vectors. PCG, employing various gradient-type iterative methods coupled with preconditioners, is designed for general linear systems, with emphasis on sparse systems such as those arising from the discretization of partial differential equations in physical applications. It can be used to solve linear equations efficiently on parallel computer architectures. Much of the code is reusable across architectures and the package is portable across different systems; the machines currently supported are listed. This manual is intended to be the general-purpose reference describing all features of the package accessible to the user; suggestions are also given regarding which methods to use for a given problem.
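The gradient-type iteration at the core of such a package can be sketched in a few lines. The following is a minimal, illustrative Jacobi-preconditioned conjugate gradient solver in Python; it is not the PCG package's actual implementation, and dense Python lists stand in for its sparse storage formats:

```python
# Minimal Jacobi-preconditioned conjugate gradient (PCG) for A u = b.
# Illustrative sketch only: assumes A is symmetric positive definite.

def pcg(A, b, tol=1e-10, max_iter=200):
    n = len(b)
    matvec = lambda v: [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    M_inv = [1.0 / A[i][i] for i in range(n)]          # Jacobi preconditioner
    u = [0.0] * n
    r = b[:]                                           # residual r = b - A*0
    z = [M_inv[i] * r[i] for i in range(n)]            # preconditioned residual
    p = z[:]
    rz = sum(r[i] * z[i] for i in range(n))
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rz / sum(p[i] * Ap[i] for i in range(n))
        u = [u[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:      # converged
            break
        z = [M_inv[i] * r[i] for i in range(n)]
        rz_new = sum(r[i] * z[i] for i in range(n))
        p = [z[i] + (rz_new / rz) * p[i] for i in range(n)]
        rz = rz_new
    return u
```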

  4. A parallel PCG solver for MODFLOW.

    PubMed

    Dong, Yanhui; Li, Guomin

    2009-01-01

    In order to simulate large-scale ground water flow problems more efficiently with MODFLOW, the OpenMP programming paradigm was used in this study to parallelize the preconditioned conjugate-gradient (PCG) solver. Incremental parallelization, the significant advantage supported by OpenMP on a shared-memory computer, let the solver transition to a parallel program smoothly, one block of code at a time. The parallel PCG solver, suitable for both MODFLOW-2000 and MODFLOW-2005, was verified on an 8-processor computer. Both the impact of compilers and different model domain sizes were considered in the numerical experiments. Based on the timing results, execution times using the parallel PCG solver are typically about 1.40 to 5.31 times faster than those using the serial one. In addition, the simulation results are identical to those of the original PCG solver, because the majority of the serial code was not changed. It is worth noting that this parallelizing approach reduces software maintenance cost, because only a single source PCG solver code needs to be maintained in the MODFLOW source tree.
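The incremental approach described above parallelizes one loop at a time while leaving the rest of the code untouched, which is why the results stay identical. A hedged Python analog (the actual solver is Fortran with OpenMP) that splits the row loop of a matrix-vector product across a thread pool:

```python
# Illustrative sketch, not MODFLOW code: the row loop of a mat-vec product
# is split into contiguous row blocks evaluated by a thread pool. The
# parallel version computes exactly the same numbers as the serial loop.
from concurrent.futures import ThreadPoolExecutor

def matvec_serial(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def matvec_parallel(A, x, n_threads=4):
    n = len(A)
    step = (n + n_threads - 1) // n_threads            # rows per block
    def block(lo):                                     # one block of rows
        return [sum(a * xi for a, xi in zip(A[i], x))
                for i in range(lo, min(lo + step, n))]
    with ThreadPoolExecutor(n_threads) as pool:
        parts = pool.map(block, range(0, n, step))
    return [y for part in parts for y in part]         # reassemble in order
```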

  5. HOPSPACK: Hybrid Optimization Parallel Search Package.

    SciTech Connect

    Gray, Genetha Anne.; Kolda, Tamara G.; Griffin, Joshua; Taddy, Matt; Martinez-Canales, Monica L.

    2008-12-01

    In this paper, we describe the technical details of HOPSPACK (Hybrid Optimization Parallel Search Package), a new software platform which facilitates combining multiple optimization routines into a single, tightly-coupled, hybrid algorithm that supports parallel function evaluations. The framework is designed such that existing optimization source code can be easily incorporated with minimal code modification. By maintaining the integrity of each individual solver, the strengths and code sophistication of the original optimization package are retained and exploited.

  6. Hybrid Optimization Parallel Search PACKage

    SciTech Connect

    2009-11-10

    HOPSPACK is open source software for solving optimization problems without derivatives. Application problems may have a fully nonlinear objective function, bound constraints, and linear and nonlinear constraints. Problem variables may be continuous, integer-valued, or a mixture of both. The software provides a framework that supports any derivative-free type of solver algorithm. Through the framework, solvers request parallel function evaluation, which may use MPI (multiple machines) or multithreading (multiple processors/cores on one machine). The framework provides a Cache and Pending Cache of saved evaluations that reduces execution time and facilitates restarts. Solvers can dynamically create other algorithms to solve subproblems, a useful technique for handling multiple start points and integer-valued variables. HOPSPACK ships with the Generating Set Search (GSS) algorithm, developed at Sandia as part of the APPSPACK open source software project.
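The Generating Set Search algorithm mentioned above can be sketched compactly: trial points along coordinate directions are evaluated in parallel, and the step shrinks when no trial improves. This is illustrative Python, not HOPSPACK's implementation, and the thread pool stands in for the MPI or multithreaded workers:

```python
# Minimal Generating Set Search (GSS) sketch for unconstrained problems:
# evaluate x +/- step * e_i for every coordinate i in parallel, move to the
# best improving trial, otherwise contract the step until it is below tol.
from concurrent.futures import ThreadPoolExecutor

def gss_minimize(f, x0, step=1.0, tol=1e-6, max_iter=500):
    x, fx = list(x0), f(x0)
    n = len(x0)
    with ThreadPoolExecutor() as pool:
        for _ in range(max_iter):
            trials = []
            for i in range(n):
                for s in (step, -step):
                    t = x[:]
                    t[i] += s
                    trials.append(t)
            vals = list(pool.map(f, trials))        # parallel evaluations
            best = min(range(len(vals)), key=vals.__getitem__)
            if vals[best] < fx:                     # improving point found
                x, fx = trials[best], vals[best]
            else:
                step *= 0.5                         # contract the pattern
                if step < tol:
                    break
    return x, fx
```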

  7. Parallel Climate Data Assimilation PSAS Package

    NASA Technical Reports Server (NTRS)

    Ding, Hong Q.; Chan, Clara; Gennery, Donald B.; Ferraro, Robert D.

    1996-01-01

    We have designed and implemented a set of highly efficient and highly scalable algorithms for an unstructured computational package, the PSAS data assimilation package, as demonstrated by detailed performance analysis of systematic runs on up to a 512-node Intel Paragon. The equation solver achieves a sustained 18 Gflops performance. As a result, we achieved an unprecedented 100-fold solution-time reduction on the Intel Paragon parallel platform over the Cray C90. This not only meets and exceeds the DAO time requirements, but also significantly enlarges the window of exploration in climate data assimilation.

  9. High density packaging and interconnect of massively parallel image processors

    NASA Technical Reports Server (NTRS)

    Carson, John C.; Indin, Ronald J.

    1991-01-01

    This paper presents conceptual designs for high density packaging of parallel processing systems. The systems fall into two categories: global memory systems where many processors are packaged into a stack, and distributed memory systems where a single processor and many memory chips are packaged into a stack. Thermal behavior and performance are discussed.

  10. AZTEC: A parallel iterative package for solving linear systems

    SciTech Connect

    Hutchinson, S.A.; Shadid, J.N.; Tuminaro, R.S.

    1996-12-31

    We describe a parallel linear system package, AZTEC. The package incorporates a number of parallel iterative methods (e.g. GMRES, BiCGSTAB, CGS, TFQMR) and preconditioners (e.g. Jacobi, Gauss-Seidel, polynomial, domain decomposition with LU or ILU within subdomains). Additionally, AZTEC allows for the reuse of previous preconditioning factorizations within Newton schemes for nonlinear methods. A number of different users are currently using this package to solve a variety of PDE applications.

  11. On the performance of a simple parallel implementation of the ILU-PCG for the Poisson equation on irregular domains

    NASA Astrophysics Data System (ADS)

    Gibou, Frédéric; Min, Chohong

    2012-05-01

    We report on the performance of a parallel algorithm for solving the Poisson equation on irregular domains. We use the spatial discretization of Gibou et al. (2002) [6] for the Poisson equation with Dirichlet boundary conditions, and a finite volume discretization for imposing Neumann boundary conditions (Ng et al., 2009; Purvis and Burkhalter, 1979) [8,10]. The parallelization algorithm is based on the Cuthill-McKee ordering. Its implementation is straightforward, especially in the case of shared-memory machines, and produces significant speedup: about three times on a standard quad-core desktop computer and about seven times on an octa-core shared-memory cluster. The implementation code is posted on the authors' web pages for reference.
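The Cuthill-McKee ordering on which the parallelization is based is a breadth-first traversal that visits unvisited neighbors in order of increasing degree, producing a banded reordering of the matrix. A small illustrative Python version (the paper's implementation details may differ):

```python
# Cuthill-McKee ordering sketch. adj maps each node to the set of its
# neighbors; the result is a node ordering (reverse it for RCM). BFS is
# seeded from lowest-degree nodes so disconnected components are covered.
from collections import deque

def cuthill_mckee(adj):
    order, seen = [], set()
    for start in sorted(adj, key=lambda v: len(adj[v])):   # low degree first
        if start in seen:
            continue
        seen.add(start)
        queue = deque([start])
        while queue:
            v = queue.popleft()
            order.append(v)
            # enqueue unvisited neighbors, lowest degree first
            for w in sorted(adj[v] - seen, key=lambda u: len(adj[u])):
                seen.add(w)
                queue.append(w)
    return order
```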

  12. Shift: A Massively Parallel Monte Carlo Radiation Transport Package

    SciTech Connect

    Pandya, Tara M; Johnson, Seth R; Davidson, Gregory G; Evans, Thomas M; Hamilton, Steven P

    2015-01-01

    This paper discusses the massively-parallel Monte Carlo radiation transport package, Shift, developed at Oak Ridge National Laboratory. It reviews the capabilities, implementation, and parallel performance of this code package. Scaling results demonstrate very good strong and weak scaling behavior of the implemented algorithms. Benchmark results from various reactor problems show that Shift results compare well to other contemporary Monte Carlo codes and experimental results.

  13. CASIS PCG 6

    NASA Image and Video Library

    2017-06-06

    iss052e000508 (June 6, 2017) --- View of astronaut Jack Fischer working with the Neutron Crystallographic Studies of Human Acetylcholinesterase for the Design of Accelerated Reactivators (CASIS PCG 6) experiment in the Japanese Experiment Module

  14. P-SPARSLIB: A parallel sparse iterative solution package

    SciTech Connect

    Saad, Y.

    1994-12-31

    Iterative methods are gaining popularity in engineering and the sciences at a time when the computational environment is changing rapidly. P-SPARSLIB is a project to build a software library for sparse matrix computations on parallel computers. The emphasis is on iterative methods and the use of distributed sparse matrices, an extension of the domain decomposition approach to general sparse matrices. One of the goals of this project is to develop a software package geared towards specific applications. For example, the author will test the performance and usefulness of P-SPARSLIB modules on linear systems arising from CFD applications. Equally important is the goal of portability: in the long run, the author wishes to ensure that this package is portable to a variety of platforms, including SIMD and shared-memory environments.

  15. RCrawler: An R package for parallel web crawling and scraping

    NASA Astrophysics Data System (ADS)

    Khalil, Salim; Fakir, Mohamed

    RCrawler is a contributed R package for domain-based web crawling and content scraping. As the first implementation of a parallel web crawler in the R environment, RCrawler can crawl, parse, and store pages, extract their contents, and produce data that can be directly employed for web content mining applications. However, it is also flexible and could be adapted to other applications. The main features of RCrawler are multi-threaded crawling, content extraction, and duplicate content detection. In addition, it includes functionalities such as URL and content-type filtering, depth-level control, and a robots.txt parser. Our crawler is highly optimized and can download a large number of pages per second while being robust against certain crashes and spider traps. In this paper, we describe the design and functionality of RCrawler, and report on our experience of implementing it in an R environment, including the different optimizations that handle the limitations of R. Finally, we discuss our experimental results.
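The multi-threaded, depth-limited crawl with duplicate filtering that such a crawler performs can be sketched as follows. This is an illustrative Python skeleton, not RCrawler's R code; `fetch_links` is a hypothetical stand-in for real HTTP fetching and link extraction:

```python
# Breadth-first crawl sketch: each depth level's frontier is fetched in
# parallel by a thread pool, already-visited URLs are filtered out, and
# duplicates within a level are removed while preserving order.
from concurrent.futures import ThreadPoolExecutor

def crawl(start, fetch_links, max_depth=2, n_threads=4):
    """fetch_links(url) -> list of outgoing links; returns visited URLs."""
    visited, frontier = {start}, [start]
    with ThreadPoolExecutor(n_threads) as pool:
        for _ in range(max_depth):
            results = pool.map(fetch_links, frontier)   # parallel fetches
            frontier = [u for links in results for u in links
                        if u not in visited]
            frontier = list(dict.fromkeys(frontier))    # de-duplicate
            visited.update(frontier)
    return visited
```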

  16. Massively Parallel Post-Packaging for Microelectromechanical Systems (MEMS)

    DTIC Science & Technology

    2003-03-01

    Keywords: MEMS, Microelectromechanical Systems, Vacuum Packaging, Localized Heating, Localized Bonding, Packaging, Trimming, Resonator, Encapsulation. [Table-of-contents excerpt: Part II, Selective Encapsulation for MEMS Post-Packaging; 4.2.1 Vacuum Packaging Technology Using Localized Aluminum …; 4.2.5 Vacuum Packaging Using Localized CVD Deposition.]

  17. (PCG) Protein Crystal Growth Canavalin

    NASA Technical Reports Server (NTRS)

    1989-01-01

    (PCG) Protein Crystal Growth Canavalin. The major storage protein of leguminous plants and a major source of dietary protein for humans and domestic animals. It is studied in efforts to enhance the nutritional value of proteins through protein engineering. It is isolated from the Jack Bean because of its potential as a nutritional substance. Principal Investigator on STS-26 was Alex McPherson.

  18. A Parallel Teaching Package for Special Education/Industrial Arts.

    ERIC Educational Resources Information Center

    Lenti, Donna M., Comp.; And Others

    This teaching package presents information and materials for use by special and industrial arts educators in teaching learning-disabled students. It may also be of use to guidance counselors and administrators for student counseling and placement. The package is comprised of two primary units. Unit 1 overviews the field of learning disabilities to…

  19. A C++ Thread Package for Concurrent and Parallel Programming

    SciTech Connect

    Jie Chen; William Watson

    1999-11-01

    Recently, thread libraries have become a common entity on various operating systems such as Unix, Windows NT and VxWorks. These thread libraries offer significant performance enhancement by allowing applications to use multiple threads running either concurrently or in parallel on multiprocessors. However, the incompatibilities between native libraries introduce challenges for those who wish to develop portable applications.

  20. VisIt: a component based parallel visualization package

    SciTech Connect

    Ahern, S; Bonnell, K; Brugger, E; Childs, H; Meredith, J; Whitlock, B

    2000-12-18

    We are currently developing a component-based, parallel visualization and graphical analysis tool for visualizing and analyzing data on two- and three-dimensional (2D, 3D) meshes. The tool consists of three primary components: a graphical user interface (GUI), a viewer, and a parallel compute engine. The components are designed to be operated in a distributed fashion with the GUI and viewer typically running on a high performance visualization server and the compute engine running on a large parallel platform. The viewer and compute engine are both based on the Visualization Toolkit (VTK), an open source object oriented data manipulation and visualization library. The compute engine will make use of parallel extensions to VTK, based on MPI, developed by Los Alamos National Laboratory in collaboration with the originators of VTK. The compute engine will make use of meta-data so that it only operates on the portions of the data necessary to generate the image. The meta-data can either be created as the post-processing data is generated or as a pre-processing step to using VisIt. VisIt will be integrated with the VIEWS Tera-Scale Browser, which will provide a high performance visual data browsing capability based on multi-resolution techniques.

  1. penORNL: a parallel Monte Carlo photon and electron transport package using PENELOPE

    SciTech Connect

    Bekar, Kursat B.; Miller, Thomas Martin; Patton, Bruce W.; Weber, Charles F.

    2015-01-01

    The parallel Monte Carlo photon and electron transport code package penORNL was developed at Oak Ridge National Laboratory to enable advanced scanning electron microscope (SEM) simulations on high-performance computing systems. This paper discusses the implementations, capabilities and parallel performance of the new code package. penORNL uses PENELOPE for its physics calculations and provides all available PENELOPE features to the users, as well as some new features including source definitions specifically developed for SEM simulations, a pulse-height tally capability for detailed simulations of gamma and x-ray detectors, and a modified interaction forcing mechanism to enable accurate energy deposition calculations. The parallel performance of penORNL was extensively tested with several model problems, and very good linear parallel scaling was observed with up to 512 processors. penORNL, along with its new features, will be available for SEM simulations upon completion of the new pulse-height tally implementation.

  2. DL_POLY_2.0: a general-purpose parallel molecular dynamics simulation package.

    PubMed

    Smith, W; Forester, T R

    1996-06-01

    DL_POLY_2.0 is a general-purpose parallel molecular dynamics simulation package developed at Daresbury Laboratory under the auspices of the Council for the Central Laboratory of the Research Councils. Written to support academic research, it has a wide range of applications and is designed to run on a wide range of computers: from single processor workstations to parallel supercomputers. Its structure, functionality, performance, and availability are described.

  3. Cleanup Verification Package for the 100-F-20, Pacific Northwest Laboratory Parallel Pits

    SciTech Connect

    M. J. Appel

    2007-01-22

    This cleanup verification package documents completion of remedial action for the 100-F-20, Pacific Northwest Laboratory Parallel Pits waste site. This waste site consisted of two earthen trenches thought to have received both radioactive and nonradioactive material related to the 100-F Experimental Animal Farm.

  4. The R package "sperrorest" : Parallelized spatial error estimation and variable importance assessment for geospatial machine learning

    NASA Astrophysics Data System (ADS)

    Schratz, Patrick; Herrmann, Tobias; Brenning, Alexander

    2017-04-01

    Computational and statistical prediction methods such as the support vector machine have gained popularity in remote-sensing applications in recent years and are often compared to more traditional approaches like maximum-likelihood classification. However, the accuracy assessment of such predictive models in a spatial context needs to account for the presence of spatial autocorrelation in geospatial data by using spatial cross-validation and bootstrap strategies instead of their more widely used non-spatial equivalents. The R package sperrorest by A. Brenning [IEEE International Geoscience and Remote Sensing Symposium, 1, 374 (2012)] provides a generic interface for performing (spatial) cross-validation of any statistical or machine-learning technique available in R. Since spatial statistical models as well as flexible machine-learning algorithms can be computationally expensive, parallel computing strategies are required to perform cross-validation efficiently. The most recent major release of sperrorest therefore comes with two new features (aside from improved documentation). The first is the parallelized version of sperrorest(), parsperrorest(). This function features two parallel modes to greatly speed up cross-validation runs; both modes are platform independent and provide progress information. par.mode = 1 relies on the pbapply package and, depending on the platform, calls parallel::mclapply() or parallel::parApply() in the background: forking is used on Unix systems, while Windows systems use a cluster approach for parallel execution. par.mode = 2 uses the foreach package to perform parallelization, using a different way of cluster parallelization than the parallel package does. In summary, the robustness of parsperrorest() is increased with the implementation of two independent parallel modes. A new way of partitioning the data in sperrorest is provided by partition.factor.cv(). This function gives the user the
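The idea of spatially partitioned, parallel cross-validation can be illustrated independently of R. In this hypothetical Python sketch, folds are spatial quadrants and per-fold errors are computed by a worker pool; `fit` and `error` are user-supplied callables, loosely mirroring the package's generic interface:

```python
# Spatial cross-validation sketch: samples are partitioned into spatial
# blocks (here, quadrants of the plane), and each fold's held-out error
# is estimated by a worker in a thread pool.
from concurrent.futures import ThreadPoolExecutor

def spatial_folds(points):
    """Partition (x, y, label) samples into the four quadrants."""
    folds = {0: [], 1: [], 2: [], 3: []}
    for x, y, label in points:
        folds[(x >= 0) * 2 + (y >= 0)].append((x, y, label))
    return [f for f in folds.values() if f]            # drop empty folds

def cross_validate(points, fit, error, n_workers=4):
    folds = spatial_folds(points)
    def one_fold(k):                                   # train on all but fold k
        train = [p for j, f in enumerate(folds) if j != k for p in f]
        model = fit(train)
        return error(model, folds[k])                  # test on fold k
    with ThreadPoolExecutor(n_workers) as pool:
        return list(pool.map(one_fold, range(len(folds))))
```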

  5. (PCG) Protein Crystal Growth Porcine Elastase

    NASA Technical Reports Server (NTRS)

    1989-01-01

    (PCG) Protein Crystal Growth Porcine Elastase. This enzyme is associated with the degradation of lung tissue in people suffering from emphysema. It is useful in studying causes of this disease. Principal Investigator on STS-26 was Charles Bugg.

  6. JTpack90: A parallel, object-based, Fortran 90 linear algebra package

    SciTech Connect

    Turner, J.A.; Kothe, D.B.; Ferrell, R.C.

    1997-03-01

    The authors have developed an object-based linear algebra package, currently with emphasis on sparse Krylov methods, driven primarily by needs of the Los Alamos National Laboratory parallel unstructured-mesh casting simulation tool Telluride. Support for a number of sparse storage formats, methods, and preconditioners have been implemented, driven primarily by application needs. They describe the object-based Fortran 90 approach, which enhances maintainability, performance, and extensibility, the parallelization approach using a new portable gather/scatter library (PGSLib), current capabilities and future plans, and present preliminary performance results on a variety of platforms.

  7. parallelMCMCcombine: An R Package for Bayesian Methods for Big Data and Analytics

    PubMed Central

    Miroshnikov, Alexey; Conlon, Erin M.

    2014-01-01

    Recent advances in big data and analytics research have provided a wealth of large data sets that are too big to be analyzed in their entirety, due to restrictions on computer memory or storage size. New Bayesian methods have been developed for data sets that are large only due to large sample sizes. These methods partition big data sets into subsets and perform independent Bayesian Markov chain Monte Carlo analyses on the subsets. The methods then combine the independent subset posterior samples to estimate a posterior density given the full data set. These approaches were shown to be effective for Bayesian models including logistic regression models, Gaussian mixture models and hierarchical models. Here, we introduce the R package parallelMCMCcombine which carries out four of these techniques for combining independent subset posterior samples. We illustrate each of the methods using a Bayesian logistic regression model for simulation data and a Bayesian Gamma model for real data; we also demonstrate features and capabilities of the R package. The package assumes the user has carried out the Bayesian analysis and has produced the independent subposterior samples outside of the package. The methods are primarily suited to models with unknown parameters of fixed dimension that exist in continuous parameter spaces. We envision this tool will allow researchers to explore the various methods for their specific applications and will assist future progress in this rapidly developing field. PMID:25259608
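One simple instance of the subset-posterior combination idea (a consensus-style, precision-weighted average of draws; the package's four methods differ in detail) can be sketched as follows, for a scalar parameter:

```python
# Consensus-style combination sketch: draw i of the combined chain is a
# precision-weighted average of draw i from each subset's chain, where a
# chain's weight is the reciprocal of its sample variance.
from statistics import variance

def combine_subposteriors(chains):
    """chains: list of equal-length sample lists, one per data subset."""
    weights = [1.0 / variance(c) for c in chains]      # precision weights
    total = sum(weights)
    n = len(chains[0])
    return [sum(w * c[i] for w, c in zip(weights, chains)) / total
            for i in range(n)]
```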

  8. parallelMCMCcombine: an R package for bayesian methods for big data and analytics.

    PubMed

    Miroshnikov, Alexey; Conlon, Erin M

    2014-01-01

    Recent advances in big data and analytics research have provided a wealth of large data sets that are too big to be analyzed in their entirety, due to restrictions on computer memory or storage size. New Bayesian methods have been developed for data sets that are large only due to large sample sizes. These methods partition big data sets into subsets and perform independent Bayesian Markov chain Monte Carlo analyses on the subsets. The methods then combine the independent subset posterior samples to estimate a posterior density given the full data set. These approaches were shown to be effective for Bayesian models including logistic regression models, Gaussian mixture models and hierarchical models. Here, we introduce the R package parallelMCMCcombine which carries out four of these techniques for combining independent subset posterior samples. We illustrate each of the methods using a Bayesian logistic regression model for simulation data and a Bayesian Gamma model for real data; we also demonstrate features and capabilities of the R package. The package assumes the user has carried out the Bayesian analysis and has produced the independent subposterior samples outside of the package. The methods are primarily suited to models with unknown parameters of fixed dimension that exist in continuous parameter spaces. We envision this tool will allow researchers to explore the various methods for their specific applications and will assist future progress in this rapidly developing field.

  9. parallelnewhybrid: an R package for the parallelization of hybrid detection using newhybrids.

    PubMed

    Wringe, Brendan F; Stanley, Ryan R E; Jeffery, Nicholas W; Anderson, Eric C; Bradbury, Ian R

    2017-01-01

    Hybridization among populations and species is a central theme in many areas of biology, and the study of hybridization has direct applicability to testing hypotheses about evolution, speciation and genetic recombination, as well as having conservation, legal and regulatory implications. Yet, despite being a topic of considerable interest, the identification of hybrid individuals, and quantification of the (un)certainty surrounding the identifications, remains difficult. Unlike other programs that exist to identify hybrids based on genotypic information, newhybrids is able to assign individuals to specific hybrid classes (e.g. F1, F2) because it makes use of patterns of gene inheritance within each locus, rather than just the proportions of gene inheritance within each individual. For each comparison and set of markers, multiple independent runs of each data set should be used to develop an estimate of the hybrid class assignment accuracy. The necessity of analysing multiple simulated data sets, constructed from large genomewide data sets, presents significant computational challenges. To address these challenges, we present parallelnewhybrid, an r package designed to decrease user burden when undertaking multiple newhybrids analyses. parallelnewhybrid does so by taking advantage of the parallel computational capabilities inherent in modern computers to efficiently and automatically execute separate newhybrids runs in parallel. We show that parallelization of analyses using this package affords users several-fold reductions in time over a traditional serial analysis. parallelnewhybrid consists of an example data set, a readme and three operating system-specific functions to execute parallel newhybrids analyses on each of a computer's c cores. parallelnewhybrid is freely available on the long-term software hosting site github (www.github.com/bwringe/parallelnewhybrid).

  10. (PCG) Protein Crystal Growth Isocitrate Lyase

    NASA Technical Reports Server (NTRS)

    1989-01-01

    (PCG) Protein Crystal Growth Isocitrate Lyase. Target enzyme for fungicides. A better understanding of this enzyme should lead to the discovery of more potent fungicides to treat serious crop diseases such as rice blast. It regulates the flow of metabolic intermediates required for cell growth. Principal Investigator for STS-26 was Charles Bugg.

  11. (PCG) Protein Crystal Growth Isocitrate Lyase

    NASA Technical Reports Server (NTRS)

    1989-01-01

    (PCG) Protein Crystal Growth Isocitrate Lyase. Target enzyme for fungicides. A better understanding of this enzyme should lead to the discovery of more potent fungicides to treat serious crop diseases such as rice blast. It regulates the flow of metabolic intermediates required for cell growth. Principal Investigator on STS-26 was Charles Bugg.

  12. 3-D readout-electronics packaging for high-bandwidth massively paralleled imager

    DOEpatents

    Kwiatkowski, Kris; Lyke, James

    2007-12-18

    Dense, massively parallel signal processing electronics are co-packaged behind associated sensor pixels. Microchips containing a linear or bilinear arrangement of photo-sensors, together with associated complex electronics, are integrated into a simple 3-D structure (a "mirror cube"). An array of photo-sensitive cells is disposed on a stacked CMOS chip's surface at a 45° angle from light-reflecting mirror surfaces formed on a neighboring CMOS chip surface. Image processing electronics are held within the stacked CMOS chip layers. Electrical connections couple each of the stacked CMOS chip layers and a distribution grid, the connections distributing power and signals to components associated with each stacked CMOS chip layer.

  13. (PCG) Protein Crystal Growth Gamma-Interferon

    NASA Technical Reports Server (NTRS)

    1989-01-01

    (PCG) Protein Crystal Growth Gamma-Interferon. Stimulates the body's immune system and is used clinically in the treatment of cancer. It has potential as an anti-tumor agent against solid tumors as well as leukemias and lymphomas, and additional utility as an anti-infective agent, with antiviral, anti-bacterial, and anti-parasitic activities. Principal Investigator on STS-26 was Charles Bugg.

  15. (PCG) Protein Crystal Growth Human Serum Albumin

    NASA Technical Reports Server (NTRS)

    1989-01-01

    (PCG) Protein Crystal Growth Human Serum Albumin. Contributes to many transport and regulatory processes and has multifunctional binding properties ranging from various metals to fatty acids, hormones, and a wide spectrum of therapeutic drugs. It is the most abundant protein of the circulatory system, binding and transporting an incredible variety of biological and pharmaceutical ligands throughout the bloodstream. Principal Investigator on STS-26 was Larry DeLucas.

  16. Parallel distributed free-space optoelectronic computer engine using flat plug-on-top optics package

    NASA Astrophysics Data System (ADS)

    Berger, Christoph; Ekman, Jeremy T.; Wang, Xiaoqing; Marchand, Philippe J.; Spaanenburg, Henk; Kiamilev, Fouad E.; Esener, Sadik C.

    2000-05-01

    We report on ongoing work on a free-space optical interconnect system, which will demonstrate a Fast Fourier Transform calculation distributed among six processor chips. Logically, the processors are arranged in two linear chains, where each element communicates optically with its nearest neighbors. Physically, the setup consists of a large motherboard, several multi-chip carrier modules, which hold the processor/driver chips and the optoelectronic chips (arrays of lasers and detectors), and several plug-on-top optics modules, which provide the optical links between the chip carrier modules. The system design tries to satisfy numerous constraints, such as compact size, potential for mass production, suitability for large arrays (up to 1024 parallel channels), compatibility with standard electronics fabrication and packaging technology, potential for active misalignment compensation by integrating MEMS technology, and suitability for testing different imaging topologies. We present the system architecture together with details of key components and modules, and report on first experiences with prototype modules of the setup.

  17. A package of Linux scripts for the parallelization of Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Badal, Andreu; Sempau, Josep

    2006-09-01

    also provided. Typical running time: The execution time of each script depends largely on the number of computers used, the actions to be performed and, to a lesser extent, on the network connection bandwidth. Unusual features of the program: Any computer on the Internet with a Secure Shell client/server program installed can be used as a node of a virtual computer cluster for parallel calculations with the sequential source code. The simplicity of the parallelization scheme makes using this package straightforward and requires no additional libraries.
    Program summary 2
    Title of program: seedsMLCG
    Catalogue identifier: ADYE_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADYE_v1_0
    Program obtainable from: CPC Program Library, Queen's University of Belfast, Northern Ireland
    Computer for which the program is designed and others on which it is operable: Any computer with a FORTRAN compiler
    Operating systems under which the program has been tested: Linux (RedHat 8.0, SuSe 8.1, Debian Woody 3.1), MS Windows (2000, XP)
    Compilers: GNU FORTRAN g77 (Linux and Windows); g95 (Linux); Intel Fortran Compiler 7.1 (Linux); Compaq Visual Fortran 6.1 (Windows)
    Programming language used: FORTRAN 77
    No. of bits in a word: 32
    Memory required to execute with typical data: 500 kilobytes
    No. of lines in distributed program, including test data, etc.: 492
    No. of bytes in distributed program, including test data, etc.: 5582
    Distribution format: tar.gz
    Nature of the physical problem: Statistically independent results from different runs of a Monte Carlo code can be obtained using uncorrelated sequences of random numbers on each execution. Multiplicative linear congruential generators (MLCG), or other generators that are based on them such as RANECU, can be adapted to produce these sequences.
    Method of solution: For a given MLCG, the presented program calculates initialization values that produce disjoint, consecutive sequences of pseudo
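    The seed-spacing idea behind a program like seedsMLCG can be sketched in a few lines: for an MLCG x_{n+1} = a·x_n mod m, jumping ahead k steps is a single modular exponentiation, so per-process seeds spaced a fixed stride apart yield disjoint subsequences. The constants below are those of one RANECU component generator; the stride and starting seed are illustrative choices, not values from the paper.

```python
# Disjoint seed generation for a multiplicative linear congruential
# generator (MLCG) x_{n+1} = a * x_n mod m.  Jumping ahead k steps is
# one modular exponentiation, so seeds spaced `stride` apart produce
# non-overlapping subsequences for independent parallel runs.
A, M = 40014, 2147483563   # one RANECU component generator

def mlcg_next(x, a=A, m=M):
    """Advance the MLCG by one step."""
    return (a * x) % m

def spaced_seeds(seed0, n_nodes, stride):
    """Return n_nodes seeds, each `stride` MLCG steps apart."""
    return [(pow(A, i * stride, M) * seed0) % M for i in range(n_nodes)]

seeds = spaced_seeds(12345, n_nodes=4, stride=10**6)
```

    Each node then iterates its own seed with `mlcg_next`, and no two nodes ever touch the same stretch of the generator's period (as long as each run uses fewer than `stride` numbers).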

  18. ParallelStructure: A R Package to Distribute Parallel Runs of the Population Genetics Program STRUCTURE on Multi-Core Computers

    PubMed Central

    Besnier, Francois; Glover, Kevin A.

    2013-01-01

    This software package provides an R-based framework for making use of multi-core computers when running analyses in the population genetics program STRUCTURE. It is especially addressed to those users of STRUCTURE dealing with numerous and repeated data analyses, who could take advantage of an efficient script to automatically distribute STRUCTURE jobs among multiple processors. It also includes functions to divide analyses among combinations of populations within a single data set without the need to manually produce multiple projects, as is currently the case in STRUCTURE. The package consists of two main functions, MPI_structure() and parallel_structure(), as well as an example data file. We compared the performance in computing time for this example data on two computer architectures and showed that the use of the present functions can result in several-fold improvements in terms of computation time. ParallelStructure is freely available at https://r-forge.r-project.org/projects/parallstructure/. PMID:23923012
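    The job-distribution pattern described above can be sketched as a simple job farm. `run_structure` below is a hypothetical stand-in for invoking the actual STRUCTURE binary (the real package drives STRUCTURE from R, not Python); the point is that independent (K, replicate) runs map cleanly onto a worker pool.

```python
# Job-farming sketch: enumerate independent STRUCTURE runs (combinations
# of K and replicate number) and distribute them over local workers.
# Threads suffice here because real work would happen in external
# processes launched via subprocess.
from concurrent.futures import ThreadPoolExecutor
from itertools import product

def run_structure(job):
    k, rep = job
    # A real driver would call e.g. subprocess.run(["structure", "-K", str(k), ...])
    return (k, rep, f"results_K{k}_rep{rep}")

def distribute(k_values, n_reps, max_workers=4):
    """Run every (K, replicate) combination, preserving job order."""
    jobs = list(product(k_values, range(1, n_reps + 1)))
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(run_structure, jobs))

results = distribute(k_values=[1, 2, 3], n_reps=2)
```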

  19. ParallelStructure: a R package to distribute parallel runs of the population genetics program STRUCTURE on multi-core computers.

    PubMed

    Besnier, Francois; Glover, Kevin A

    2013-01-01

    This software package provides an R-based framework for making use of multi-core computers when running analyses in the population genetics program STRUCTURE. It is especially addressed to those users of STRUCTURE dealing with numerous and repeated data analyses, who could take advantage of an efficient script to automatically distribute STRUCTURE jobs among multiple processors. It also includes functions to divide analyses among combinations of populations within a single data set without the need to manually produce multiple projects, as is currently the case in STRUCTURE. The package consists of two main functions, MPI_structure() and parallel_structure(), as well as an example data file. We compared the performance in computing time for this example data on two computer architectures and showed that the use of the present functions can result in several-fold improvements in terms of computation time. ParallelStructure is freely available at https://r-forge.r-project.org/projects/parallstructure/.

  20. A Portable 3D FFT Package for Distributed-Memory Parallel Architectures

    NASA Technical Reports Server (NTRS)

    Ding, H. Q.; Ferraro, R. D.; Gennery, D. B.

    1995-01-01

    A parallel algorithm for 3D FFTs is implemented as a series of local 1D FFTs combined with data transposes. This allows the use of vendor supplied (often fully optimized) sequential 1D FFTs. The FFTs are carried out in-place by using an in-place data transpose across the processors.
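    The decomposition described above, computing a 3D FFT as successive rounds of 1D FFTs, can be checked on a single node with numpy, where the `axis` argument plays the role of the inter-processor data transpose:

```python
# A 3D FFT computed as three rounds of 1D FFTs, one along each axis.
# On a distributed-memory machine, the axis that is local to a processor
# changes via data transposes; here numpy's axis argument stands in for
# that redistribution.
import numpy as np

def fft3d_by_1d(a):
    out = np.fft.fft(a, axis=0)    # 1D FFTs along x
    out = np.fft.fft(out, axis=1)  # 1D FFTs along y (after a "transpose")
    out = np.fft.fft(out, axis=2)  # 1D FFTs along z
    return out

rng = np.random.default_rng(0)
data = rng.standard_normal((8, 8, 8))
```

    The result agrees with `np.fft.fftn(data)`, which is exactly why vendor-optimized sequential 1D FFTs can be reused inside the parallel algorithm.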

  1. Improving the performance of cardiac abnormality detection from PCG signal

    NASA Astrophysics Data System (ADS)

    Sujit, N. R.; Kumar, C. Santhosh; Rajesh, C. B.

    2016-03-01

    The phonocardiogram (PCG) signal contains important information about the condition of the heart, and PCG signal analysis can support early detection of heart disease. In this work, we developed a biomedical system for detecting cardiac abnormality and present methods to enhance its performance using the SMOTE and AdaBoost techniques. Time- and frequency-domain features extracted from the PCG signal are input to the system. The back-end classifier is a decision tree built with CART (Classification and Regression Trees), with an overall classification accuracy of 78.33% and sensitivity (alarm accuracy) of 40%. Here sensitivity denotes the accuracy obtained in classifying abnormal heart sounds, an essential parameter for such a system. We further improve the baseline system using the SMOTE and AdaBoost algorithms. The proposed approach outperforms the baseline by absolute improvements of 5% in overall accuracy and 44.92% in sensitivity.
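    As background, the SMOTE rebalancing step mentioned above generates synthetic minority-class samples by interpolating between a minority point and one of its nearest minority neighbours. A minimal pure-numpy sketch (not the reference implementation; the data are illustrative):

```python
# Minimal SMOTE sketch: each synthetic sample lies on the line segment
# between a randomly chosen minority point and one of its k nearest
# minority neighbours.
import numpy as np

def smote(X_min, n_new, k=3, rng=None):
    rng = rng or np.random.default_rng(0)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        nbrs = np.argsort(d)[1:k + 1]      # k nearest, excluding self
        j = rng.choice(nbrs)
        lam = rng.random()                 # position along the segment
        out.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(out)

X_minority = np.random.default_rng(1).random((10, 4))   # toy minority class
X_synth = smote(X_minority, n_new=20)
```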

  2. Chromosomal Distribution of PcG Proteins during Drosophila Development

    PubMed Central

    Nègre, Nicolas; Hennetin, Jérôme; Sun, Ling V; Lavrov, Sergey; Bellis, Michel; White, Kevin P

    2006-01-01

    Polycomb group (PcG) proteins are able to maintain the memory of silent transcriptional states of homeotic genes throughout development. In Drosophila, they form multimeric complexes that bind to specific DNA regulatory elements named PcG response elements (PREs). To date, few PREs have been identified and the chromosomal distribution of PcG proteins during development is unknown. We used chromatin immunoprecipitation (ChIP) with genomic tiling path microarrays to analyze the binding profile of the PcG proteins Polycomb (PC) and Polyhomeotic (PH) across 10 Mb of euchromatin. We also analyzed the distribution of GAGA factor (GAF), a sequence-specific DNA binding protein that is found at most previously identified PREs. Our data show that PC and PH often bind to clustered regions within large loci that encode transcription factors which play multiple roles in developmental patterning and in the regulation of cell proliferation. GAF co-localizes with PC and PH to a limited extent, suggesting that GAF is not a necessary component of chromatin at PREs. Finally, the chromosome-association profile of PC and PH changes during development, suggesting that the function of these proteins in the regulation of some of their target genes might be more dynamic than previously anticipated. PMID:16613483

  3. PCG-STES, Rominger de-activates middeck experiment

    NASA Image and Video Library

    1997-08-27

    STS085-324-007 (7 - 19 August 1997) --- Astronaut Kent V. Rominger, pilot, uses a tool to deactivate the Protein Crystal Growth (PCG) experiment on the mid-deck of the Space Shuttle Discovery near the end of the 12-day STS-85 flight.

  4. Non-conformal and parallel discontinuous Galerkin time domain method for Maxwell's equations: EM analysis of IC packages

    NASA Astrophysics Data System (ADS)

    Dosopoulos, Stylianos; Zhao, Bo; Lee, Jin-Fa

    2013-04-01

    In this article, we present an Interior Penalty Discontinuous Galerkin Time Domain (IPDGTD) method on non-conformal meshes. The motivation for a non-conformal IPDGTD comes from the fact that some applications involve very complicated geometries (for example, IC packages) for which a conformal mesh may be very difficult to obtain, so the ability to handle non-conformal meshes is valuable. In the proposed approach, we first decompose the computational domain into non-overlapping subdomains. Each subdomain is then meshed independently, resulting in non-conformal domain interfaces but providing great flexibility in the meshing process. The non-conformal triangulations at subdomain interfaces are naturally supported within the IPDGTD framework. Moreover, an MPI parallelization together with a local time-stepping strategy is applied to significantly increase the efficiency of the method. Furthermore, a general balancing strategy is described. Through a practical example with multi-scale features, it is shown that the proposed balancing strategy leads to better use of the available computational resources and substantially reduces the total simulation time. Finally, numerical results are included to validate the accuracy and demonstrate the flexibility of the proposed non-conformal IPDGTD.

  5. Polycomb Group (PcG) Proteins and Human Cancers: Multifaceted Functions and Therapeutic Implications

    PubMed Central

    Wang, Wei; Qin, Jiang-Jiang; Voruganti, Sukesh; Nag, Subhasree; Zhou, Jianwei; Zhang, Ruiwen

    2016-01-01

    Polycomb group (PcG) proteins are transcriptional repressors that regulate several crucial developmental and physiological processes in the cell. More recently, they have been found to play important roles in human carcinogenesis and cancer development and progression. The deregulation and dysfunction of PcG proteins often lead to blocking or inappropriate activation of developmental pathways, enhancing cellular proliferation, inhibiting apoptosis, and increasing the cancer stem cell population. Genetic and molecular investigations of PcG proteins have long been focused on their PcG functions. However, PcG proteins have recently been shown to exert non-polycomb functions, contributing to the regulation of diverse cellular functions. We and others have demonstrated that PcG proteins regulate the expression and function of several oncogenes and tumor suppressor genes in a PcG-independent manner, and PcG proteins are associated with the survival of patients with cancer. In this review, we summarize the recent advances in the research on PcG proteins, including both the polycomb-repressive and non-polycomb functions. We specifically focus on the mechanisms by which PcG proteins play roles in cancer initiation, development, and progression. Finally, we discuss the potential value of PcG proteins as molecular biomarkers for the diagnosis and prognosis of cancer, and as molecular targets for cancer therapy. PMID:26227500

  6. Protein Crystal Growth (PCG) experiment aboard mission STS-66

    NASA Technical Reports Server (NTRS)

    2000-01-01

    On the Space Shuttle Orbiter Atlantis' middeck, Astronaut Joseph R. Tanner, mission specialist, works at an area amidst several lockers which support the Protein Crystal Growth (PCG) experiment during the STS-66 mission. This particular section is called the Crystal Observation System, housed in the Thermal Enclosure System (COS/TES). Together with the Vapor Diffusion Apparatus (VDA), housed in a Single Locker Thermal Enclosure (SLTES), the COS/TES represents the continuing research into the structure of proteins and other macromolecules such as viruses.

  7. Astronaut Joseph R. Tanner works with PCG experiment on middeck

    NASA Image and Video Library

    1994-11-14

    On the Space Shuttle Atlantis' mid-deck, astronaut Joseph R. Tanner, mission specialist, works at an area amidst several lockers onboard the Shuttle which support the Protein Crystal Growth (PCG) experiment. This particular section is called the Crystal Observation System, housed in the Thermal Enclosure System (COS/TES). Together with the Vapor Diffusion Apparatus (VDA), housed in a Single Locker Thermal Enclosure (SLTES) which is out of frame, the COS/TES represents the continuing research into the structures of proteins and other macromolecules such as viruses.

  9. Lamin A/C sustains PcG protein architecture, maintaining transcriptional repression at target genes

    PubMed Central

    Cesarini, Elisa; Mozzetta, Chiara; Marullo, Fabrizia; Gregoretti, Francesco; Gargiulo, Annagiusi; Columbaro, Marta; Cortesi, Alice; Antonelli, Laura; Di Pelino, Simona; Squarzoni, Stefano; Palacios, Daniela; Zippo, Alessio; Bodega, Beatrice; Oliva, Gennaro

    2015-01-01

    Beyond its role in providing structure to the nuclear envelope, lamin A/C is involved in transcriptional regulation. However, its cross talk with epigenetic factors—and how this cross talk influences physiological processes—is still unexplored. Key epigenetic regulators of development and differentiation are the Polycomb group (PcG) of proteins, organized in the nucleus as microscopically visible foci. Here, we show that lamin A/C is evolutionarily required for correct PcG protein nuclear compartmentalization. Confocal microscopy supported by new algorithms for image analysis reveals that lamin A/C knock-down leads to PcG protein foci disassembly and PcG protein dispersion. This causes detachment from chromatin and defects in PcG protein–mediated higher-order structures, thereby leading to impaired PcG protein repressive functions. Using myogenic differentiation as a model, we found that reduced levels of lamin A/C at the onset of differentiation led to an anticipation of the myogenic program because of an alteration of PcG protein–mediated transcriptional repression. Collectively, our results indicate that lamin A/C can modulate transcription through the regulation of PcG protein epigenetic factors. PMID:26553927

  10. Lamin A/C sustains PcG protein architecture, maintaining transcriptional repression at target genes.

    PubMed

    Cesarini, Elisa; Mozzetta, Chiara; Marullo, Fabrizia; Gregoretti, Francesco; Gargiulo, Annagiusi; Columbaro, Marta; Cortesi, Alice; Antonelli, Laura; Di Pelino, Simona; Squarzoni, Stefano; Palacios, Daniela; Zippo, Alessio; Bodega, Beatrice; Oliva, Gennaro; Lanzuolo, Chiara

    2015-11-09

    Beyond its role in providing structure to the nuclear envelope, lamin A/C is involved in transcriptional regulation. However, its cross talk with epigenetic factors, and how this cross talk influences physiological processes, is still unexplored. Key epigenetic regulators of development and differentiation are the Polycomb group (PcG) of proteins, organized in the nucleus as microscopically visible foci. Here, we show that lamin A/C is evolutionarily required for correct PcG protein nuclear compartmentalization. Confocal microscopy supported by new algorithms for image analysis reveals that lamin A/C knock-down leads to PcG protein foci disassembly and PcG protein dispersion. This causes detachment from chromatin and defects in PcG protein-mediated higher-order structures, thereby leading to impaired PcG protein repressive functions. Using myogenic differentiation as a model, we found that reduced levels of lamin A/C at the onset of differentiation led to an anticipation of the myogenic program because of an alteration of PcG protein-mediated transcriptional repression. Collectively, our results indicate that lamin A/C can modulate transcription through the regulation of PcG protein epigenetic factors.

  11. Astronaut Scott Parazynski works with PCG experiment on middeck

    NASA Image and Video Library

    1994-11-14

    STS066-13-029 (3-14 Nov 1994) --- On the Space Shuttle Atlantis' mid-deck, astronaut Scott E. Parazynski, mission specialist, works at one of two areas onboard the Shuttle which support the Protein Crystal Growth (PCG) experiment. This particular section is called the Vapor Diffusion Apparatus (VDA), housed in a Single Locker Thermal Enclosure (STES). Together with the Crystal Observation System, housed in the Thermal Enclosure System (COS/TES) the VDA represents the continuing research into the structures of proteins and other macromolecules such as viruses. In addition to using the microgravity of space to grow high-quality protein crystals for structural analyses, the experiments are expected to help develop technologies and methods to improve the protein crystallization process on Earth as well as in space.

  12. Exclusion of primary congenital glaucoma (PCG) from two candidate regions of chromosomes 1 and 6

    SciTech Connect

    Sarfarazi, M.; Akarsu, A.N.; Barsoum-Homsy, M.

    1994-09-01

    PCG is a genetically heterogeneous condition in which a significant proportion of families show autosomal recessive inheritance. Although association of PCG with chromosomal abnormalities has been repeatedly reported in the literature, the chromosomal location of this condition is still unknown. Therefore, this study was designed to identify the chromosomal location of the PCG locus by positional mapping. We have identified 80 PCG families with a total of 261 potentially informative meioses. A group of 19 pedigrees, each with a minimum of 2 affected children and with consanguinity in most of the parental generation, was selected as our initial screening panel. This panel consists of a total of 44 affected and 93 unaffected individuals, giving a total of 99 informative meioses, including 5 phase-known. We used the polymerase chain reaction (PCR), denaturing polyacrylamide gels and silver staining to genotype our families. We first screened for markers on 1q21-q31, the reported location for juvenile primary open-angle glaucoma, and excluded a region of 30 cM as the likely site for the PCG locus. Association of PCG with both ring chromosome 6 and HLA-B8 has also been reported. Therefore, we genotyped our PCG panel with PCR-applicable markers from 6p21. Significant negative lod scores were obtained for D6S105 (Z = -18.70) and D6S306 (Z = -5.99) at theta = 0.001. The HLA class I region also contains one of the tubulin genes (TUBB), which is an obvious candidate for PCG. Study of this gene revealed a significant negative lod score with PCG (Z = -16.74, theta = 0.001). A multipoint linkage analysis of markers in this and other regions containing the candidate genes will be presented.
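    For readers unfamiliar with the lod scores quoted above, the two-point statistic in its simplest form is Z(theta) = log10[theta^r (1-theta)^(n-r) / 0.5^n] for r recombinants in n phase-known informative meioses; strongly negative Z at small theta argues against linkage. The values in the abstract come from full pedigree-likelihood calculations; this sketch only illustrates the idea.

```python
# Two-point lod score for a fully informative, phase-known setting:
# likelihood of the data at recombination fraction theta, relative to
# the unlinked case theta = 0.5, on a log10 scale.
import math

def lod(recombinants, meioses, theta):
    likelihood = theta ** recombinants * (1 - theta) ** (meioses - recombinants)
    return math.log10(likelihood / 0.5 ** meioses)
```

    With no recombinants in 10 meioses, Z at theta = 0.001 is about +3 (evidence for linkage); with half the meioses recombinant, Z at small theta is strongly negative (exclusion), which is the pattern reported for the 6p21 markers above.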

  13. Plots, Calculations and Graphics Tools (PCG2). Software Transfer Request Presentation

    NASA Technical Reports Server (NTRS)

    Richardson, Marilou R.

    2010-01-01

    This slide presentation reviews the development of the Plots, Calculations and Graphics Tools (PCG2) system. PCG2 is an easy to use tool that provides a single user interface to view data in a pictorial, tabular or graphical format. It allows the user to view the same display and data in the Control Room, engineering office area, or remote sites. PCG2 supports extensive and regular engineering needs that are both planned and unplanned and it supports the ability to compare, contrast and perform ad hoc data mining over the entire domain of a program's test data.

  14. pulver: an R package for parallel ultra-rapid p-value computation for linear regression interaction terms.

    PubMed

    Molnos, Sophie; Baumbach, Clemens; Wahl, Simone; Müller-Nurasyid, Martina; Strauch, Konstantin; Wang-Sattler, Rui; Waldenberger, Melanie; Meitinger, Thomas; Adamski, Jerzy; Kastenmüller, Gabi; Suhre, Karsten; Peters, Annette; Grallert, Harald; Theis, Fabian J; Gieger, Christian

    2017-09-29

    Genome-wide association studies allow us to understand the genetics of complex diseases. Because human metabolism provides information about disease-causing mechanisms, it is common to investigate the associations between genetic variants and metabolite levels. However, considering only genetic variants and their effects on one trait ignores the possible interplay between different "omics" layers. Existing tools consider only single-nucleotide polymorphism (SNP)-SNP interactions, and no practical tool is available for large-scale investigations of the interactions between pairs of arbitrary quantitative variables. We developed an R package called pulver to compute p-values for the interaction term in a very large number of linear regression models. Comparisons based on simulated data showed that pulver is much faster than the existing tools. This speed is achieved by using the correlation coefficient to test the null hypothesis, which avoids the costly computation of matrix inversions, by rearranging the order of iteration through the different "omics" layers, and by implementing the algorithm in the fast programming language C++. Furthermore, we applied our algorithm to data from the German KORA study to investigate a real-world problem involving the interplay among DNA methylation, genetic variants, and metabolite levels. The pulver package is a convenient and rapid tool for screening huge numbers of linear regression models for significant interaction terms in arbitrary pairs of quantitative variables. pulver is written in R and C++, and can be downloaded freely from CRAN at https://cran.r-project.org/web/packages/pulver/.
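    The model pulver screens is y ~ x + s + x·s, with a p-value on the interaction coefficient. A plain numpy version of that test is sketched below, using a large-sample normal approximation for the t-statistic rather than pulver's optimized correlation-based shortcut; the simulated data are illustrative.

```python
# P-value for the interaction term in y ~ x + s + x*s, via ordinary
# least squares and a normal approximation to the t-distribution
# (reasonable here with n >> number of parameters).
import math
import numpy as np

def interaction_pvalue(y, x, s):
    n = len(y)
    X = np.column_stack([np.ones(n), x, s, x * s])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (n - X.shape[1])
    cov = sigma2 * np.linalg.inv(X.T @ X)
    t = beta[3] / math.sqrt(cov[3, 3])          # interaction coefficient
    return math.erfc(abs(t) / math.sqrt(2.0))   # two-sided normal p-value

rng = np.random.default_rng(2)
x = rng.standard_normal(500)                    # e.g. methylation level
s = rng.integers(0, 3, 500).astype(float)       # SNP-like genotype 0/1/2
y = 0.5 * x * s + rng.standard_normal(500)      # simulated true interaction
p = interaction_pvalue(y, x, s)
```

    pulver's contribution is doing this screen across billions of (x, s, y) triples without solving each model explicitly.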

  15. ParaHaplo: A program package for haplotype-based whole-genome association study using parallel computing.

    PubMed

    Misawa, Kazuharu; Kamatani, Naoyuki

    2009-10-21

    Since more than a million single-nucleotide polymorphisms (SNPs) are analyzed in any given genome-wide association study (GWAS), performing multiple comparisons can be problematic. To cope with multiple-comparison problems in GWAS, haplotype-based algorithms were developed to correct for multiple comparisons at multiple SNP loci in linkage disequilibrium. A permutation test can also control problems inherent in multiple testing; however, both the calculation of exact probabilities and the execution of permutation tests are time-consuming, so faster methods are required. We developed a set of computer programs, ParaHaplo, for the parallel computation of accurate P-values in haplotype-based GWAS. ParaHaplo is intended for workstation clusters using the Intel Message Passing Interface (MPI). We compared the performance of our algorithm to that of the regular permutation test on the JPT and CHB panels of HapMap. ParaHaplo can detect smaller differences between 2 populations than SNP-based GWAS, and parallel-computing techniques made ParaHaplo 100-fold faster than a non-parallel version of the program. ParaHaplo is a useful tool for conducting haplotype-based GWAS. Since the data sizes of such projects continue to increase, fast computation with parallel computing, such as that used in ParaHaplo, will become increasingly important. The executable binaries and program sources of ParaHaplo are available at the following address: http://sourceforge.jp/projects/parallelgwas/?_sl=1.
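    The permutation test that ParaHaplo parallelizes can be illustrated for a single marker: shuffle the case/control labels repeatedly and count permutations whose statistic is at least as extreme as the observed one. Because permutations are independent, they distribute trivially across MPI ranks; this single-process sketch uses illustrative data and a simple difference-of-means statistic.

```python
# Single-marker permutation test: the p-value is the fraction of label
# shufflings whose association statistic meets or exceeds the observed
# one (with the +1 correction so the estimate is never exactly zero).
import numpy as np

def permutation_pvalue(geno, pheno, n_perm=2000, rng=None):
    rng = rng or np.random.default_rng(0)
    def stat(labels):
        return abs(geno[labels == 1].mean() - geno[labels == 0].mean())
    observed = stat(pheno)
    hits = sum(stat(rng.permutation(pheno)) >= observed for _ in range(n_perm))
    return (hits + 1) / (n_perm + 1)

pheno = np.repeat([0, 1], 50)   # 50 controls, 50 cases
geno = pheno + 0.1 * np.random.default_rng(1).standard_normal(100)
p = permutation_pvalue(geno, pheno)
```

    In a parallel version, each worker runs its own block of permutations with an independent random stream and the hit counts are summed at the end.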

  16. CtBP Levels Control Intergenic Transcripts, PHO/YY1 DNA Binding, and PcG Recruitment to DNA

    PubMed Central

    Basu, Arindam; Atchison, Michael L.

    2013-01-01

    Carboxy-terminal binding protein (CtBP) is a well-known corepressor of several DNA binding transcription factors in Drosophila as well as in mammals. CtBP is implicated in Polycomb Group (PcG) complex-mediated transcriptional repression because it can bind to some PcG proteins, and mutation of the ctbp gene in flies results in lost PcG protein recruitment to Polycomb Response Elements (PREs) and lost PcG repression. However, the mechanism of reduced PcG DNA binding in CtBP mutant backgrounds is unknown. We show here that in a Drosophila CtBP mutant background, intergenic transcripts are induced across several PRE sequences and this corresponds to reduced DNA binding by PcG proteins Pleiohomeotic (PHO) and Polycomb (Pc), and reduced trimethylation of histone H3 on lysine 27, a hallmark of PcG repression. Restoration of CtBP levels by expression of a CtBP transgene results in repression of intergenic transcripts, restored PcG binding, and elevated trimethylation of H3 on lysine 27. Our results support a model in which CtBP regulates expression of intergenic transcripts that controls DNA binding by PcG proteins and subsequent histone modifications and transcriptional activity. PMID:20082324

  17. BerkeleyGW: A massively parallel computer package for the calculation of the quasiparticle and optical properties of materials and nanostructures

    NASA Astrophysics Data System (ADS)

    Deslippe, Jack; Samsonidze, Georgy; Strubbe, David A.; Jain, Manish; Cohen, Marvin L.; Louie, Steven G.

    2012-06-01

    BerkeleyGW is a massively parallel computational package for electron excited-state properties that is based on the many-body perturbation theory employing the ab initio GW and GW plus Bethe-Salpeter equation methodology. It can be used in conjunction with many density-functional theory codes for ground-state properties, including PARATEC, PARSEC, Quantum ESPRESSO, SIESTA, and Octopus. The package can be used to compute the electronic and optical properties of a wide variety of material systems from bulk semiconductors and metals to nanostructured materials and molecules. The package scales to 10 000s of CPUs and can be used to study systems containing up to 100s of atoms.
    Program summary
    Program title: BerkeleyGW
    Catalogue identifier: AELG_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AELG_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Open source BSD License. See code for licensing details.
    No. of lines in distributed program, including test data, etc.: 576 540
    No. of bytes in distributed program, including test data, etc.: 110 608 809
    Distribution format: tar.gz
    Programming language: Fortran 90, C, C++, Python, Perl, BASH
    Computer: Linux/UNIX workstations or clusters
    Operating system: Tested on a variety of Linux distributions in parallel and serial as well as AIX and Mac OSX
    RAM: (50-2000) MB per CPU (highly dependent on system size)
    Classification: 7.2, 7.3, 16.2, 18
    External routines: BLAS, LAPACK, FFTW, ScaLAPACK (optional), MPI (optional). All available under open-source licenses.
    Nature of problem: The excited-state properties of materials involve the addition or subtraction of electrons as well as the optical excitations of electron-hole pairs. The excited particles interact strongly with other electrons in a material system. This interaction affects the electronic energies, wavefunctions and lifetimes. It is well known that ground-state theories, such as standard methods

  18. Iterative methods for the WLS state estimation on RISC, vector, and parallel computers

    SciTech Connect

    Nieplocha, J.; Carroll, C.C.

    1993-10-01

    We investigate the suitability and effectiveness of iterative methods for solving the weighted-least-squares (WLS) state estimation problem on RISC, vector, and parallel processors. Several of the most popular iterative methods are tested and evaluated. The best-performing method, the preconditioned conjugate gradient (PCG), is very well suited to vector and parallel processing, as is demonstrated for the WLS state estimation of the IEEE standard test systems. A new sparse matrix format for the gain matrix improves the vector performance of the PCG algorithm and makes it competitive with the direct solver. Internal parallelism in the RISC processors used in current multiprocessor systems can also be exploited in an implementation of this algorithm.
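    For reference, the PCG iteration discussed throughout these abstracts is shown below with the simplest (Jacobi, i.e. diagonal) preconditioner and a small SPD test system; production solvers differ mainly in the choice of preconditioner and in the parallel, sparse matrix-vector product.

```python
# Textbook preconditioned conjugate gradient with a Jacobi preconditioner,
# applied to a symmetric positive-definite 1D Laplacian system.
import numpy as np

def pcg(A, b, tol=1e-10, max_iter=1000):
    M_inv = 1.0 / np.diag(A)       # Jacobi preconditioner M = diag(A)
    x = np.zeros_like(b)
    r = b - A @ x                  # residual
    z = M_inv * r                  # preconditioned residual
    p = z.copy()                   # search direction
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1D Laplacian
b = np.ones(n)
x = pcg(A, b)
```

    The matrix-vector product `A @ p` is the step the first abstract in this listing optimizes (cache reuse, overlapping subblocks), and `M_inv` is where multigrid or incomplete-Cholesky preconditioners would be substituted.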

  19. PRECONDITIONED CONJUGATE-GRADIENT 2 (PCG2), a computer program for solving ground-water flow equations

    USGS Publications Warehouse

    Hill, Mary C.

    1990-01-01

    This report documents PCG2, a numerical code to be used with the U.S. Geological Survey modular three-dimensional finite-difference ground-water flow model. PCG2 uses the preconditioned conjugate-gradient method to solve the equations produced by the model for hydraulic head. Linear or nonlinear flow conditions may be simulated. PCG2 includes two preconditioning options: modified incomplete Cholesky preconditioning, which is efficient on scalar computers; and polynomial preconditioning, which requires less computer storage and, with modifications that depend on the computer used, is most efficient on vector computers. Convergence of the solver is determined using both head-change and residual criteria. Nonlinear problems are solved using Picard iterations. This documentation provides a description of the preconditioned conjugate-gradient method and the two preconditioners, detailed instructions for linking PCG2 to the modular model, sample data inputs, a brief description of PCG2, and a FORTRAN listing.

  20. FDSTools: A software package for analysis of massively parallel sequencing data with the ability to recognise and correct STR stutter and other PCR or sequencing noise.

    PubMed

    Hoogenboom, Jerry; van der Gaag, Kristiaan J; de Leeuw, Rick H; Sijen, Titia; de Knijff, Peter; Laros, Jeroen F J

    2017-03-01

    Massively parallel sequencing (MPS) is on the verge of broad-scale application in forensic research and casework. The improved capability to analyse evidentiary traces representing unbalanced mixtures is often mentioned as one of the major advantages of this technique. However, most of the available software packages that analyse forensic short tandem repeat (STR) sequencing data are not well suited for high-throughput analysis of such mixed traces. The largest challenge is the presence of stutter artefacts in STR amplifications, which are not readily discerned from minor contributions. FDSTools is an open-source software solution developed for this purpose. The level of stutter formation is influenced by various aspects of the sequence, such as the length of the longest uninterrupted stretch occurring in an STR. When MPS is used, STRs are evaluated as sequence variants that each have particular stutter characteristics which can be precisely determined. FDSTools uses a database of reference samples to determine stutter and other systemic PCR or sequencing artefacts for each individual allele. In addition, stutter models are created for each repeating element in order to predict stutter artefacts for alleles that are not included in the reference set. This information is subsequently used to recognise and compensate for the noise in a sequence profile. The result is a better representation of the true composition of a sample. Using Promega Powerseq™ Auto System data from 450 reference samples and 31 two-person mixtures, we show that the FDSTools correction module decreases stutter ratios above 20% to below 3%. Consequently, much lower levels of contribution are detected in the mixed traces. FDSTools contains modules to visualise the data in an interactive format, allowing users to filter data with their own preferred thresholds. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
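    The correction idea can be caricatured in a few lines: if reference samples show that an allele produces back-stutter one repeat unit down at a known ratio, the predicted stutter reads are subtracted from the observed counts at that position. This toy, with invented numbers, is not FDSTools' actual model, which fits per-allele and per-repeat-element stutter statistics from a reference database.

```python
# Toy stutter correction: subtract each parent allele's predicted
# back-stutter contribution (reads * ratio) from the allele one repeat
# unit shorter, flooring at zero.
def correct_stutter(counts, stutter_ratio):
    """counts: {allele_length: reads}; stutter_ratio: {allele_length: ratio}."""
    corrected = dict(counts)
    for length, reads in counts.items():
        predicted = reads * stutter_ratio.get(length, 0.0)
        if length - 1 in corrected:
            corrected[length - 1] = max(0.0, corrected[length - 1] - predicted)
    return corrected

obs = {12: 150.0, 11: 40.0}                 # 11 could be stutter or a minor donor
fixed = correct_stutter(obs, {12: 0.10})    # 10% stutter ratio measured in refs
```

    After correction, the residual reads at length 11 (here 25.0) are a better estimate of a genuine minor contribution, which is the point of the mixture analyses described above.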

  1. Infrared detection of exposed Carbon Dioxide ice on 67P/CG nucleus surface by Rosetta-VIRTIS

    NASA Astrophysics Data System (ADS)

    Filacchione, Gianrico; Raponi, Andrea; Capaccioni, Fabrizio; Barucci, Maria Antonietta; De Sanctis, Maria Cristina; Fornasier, Sonia; Ciarniello, Mauro; Migliorini, Alessandra; Erard, Stephane; Bockelee-Morvan, Dominique; Leyrat, Cedric; Tosi, Federico; Piccioni, Giuseppe; Palomba, Ernesto; Capria, Maria Teresa; Arnold, Gabriele; Schmitt, Bernard; Quirico, Eric; Taylor, Fred W.; Kappel, David

    2016-04-01

In the period August 2014 - early May 2015 the heliocentric distance of the nucleus of 67P/CG decreased from 3.62 to 1.71 AU and the subsolar point moved towards the southern hemisphere. We investigated the IR spectra obtained by the Rosetta/VIRTIS instrument close to the newly illuminated regions, where colder conditions were present and, consequently, the chances of observing ices more volatile than water were higher. We report the discovery of CO2 ice in a region of the nucleus that had recently passed through the terminator. The quantitative abundance has been determined by means of spectral modeling of H2O-CO2 icy grains mixed with dark terrain, as done in Filacchione et al., Nature, 10.1038/nature16190. The CO2 ice has been identified in an area in Anhur with an abundance reaching up to 1.6% mixed with dark terrain. It is interesting to note that CO2 ice has been observed only for a short transient period of time, possibly demonstrating the seasonal nature of the presence of CO2 at the surface. A parallel study on the water and carbon dioxide gaseous emissions in the coma above this volatile-rich area is reported by Migliorini et al., this conference.

  2. The global surface composition of 67P/CG nucleus by Rosetta/VIRTIS. (I) Prelanding mission phase

    NASA Astrophysics Data System (ADS)

    Filacchione, Gianrico; Capaccioni, Fabrizio; Ciarniello, Mauro; Raponi, Andrea; Tosi, Federico; De Sanctis, Maria Cristina; Erard, Stéphane; Morvan, Dominique Bockelée; Leyrat, Cedric; Arnold, Gabriele; Schmitt, Bernard; Quirico, Eric; Piccioni, Giuseppe; Migliorini, Alessandra; Capria, Maria Teresa; Palomba, Ernesto; Cerroni, Priscilla; Longobardo, Andrea; Barucci, Antonella; Fornasier, Sonia; Carlson, Robert W.; Jaumann, Ralf; Stephan, Katrin; Moroz, Lyuba V.; Kappel, David; Rousseau, Batiste; Fonti, Sergio; Mancarella, Francesca; Despan, Daniela; Faure, Mathilde

    2016-08-01

The parallel coordinates method (Inselberg [1985] Vis. Comput., 1, 69-91) has been used to identify associations between average values of the spectral indicators and the properties of the geomorphological units as defined by Thomas et al. ([2015] Science, 347, 6220) and El-Maarry et al. ([2015] Astron. Astrophys., 583, A26). Three classes have been identified (smooth/active areas, dust-covered areas and depressions), which can be clustered on the basis of the 3.2 μm organic material's band depth, while consolidated terrains show high variability in their spectral properties and are distributed across all three classes. These results show that the spectral variability of the nucleus surface is richer than the morphological classification and that 67P/CG surface properties are dynamical, changing with heliocentric distance and with activity processes.

  3. The internal density distribution of comet 67P/C-G based on 3D models

    NASA Astrophysics Data System (ADS)

    Jorda, Laurent; Faurschou Hviid, Stubbe; Capanna, Claire; Gaskell, Robert W.; Gutiérrez, Pedro; Preusker, Frank; Scholten, Frank; Rodionov, Sergey; OSIRIS Team

    2016-10-01

The OSIRIS camera aboard the Rosetta spacecraft has observed the nucleus of comet 67P/C-G from the mapping phase in summer 2014 until now. The images have allowed the three-dimensional reconstruction of the nucleus surface with stereophotogrammetry (Preusker et al., Astron. Astrophys.) and stereophotoclinometry (Jorda et al., Icarus) techniques. We use the reconstructed models to constrain the internal density distribution based on: (i) the measurement of the offset between the center of mass and the center of figure of the object, and (ii) the assumption that flat areas observed at the surface of the comet correspond to iso-gravity surfaces. The results of our analysis will be presented, and the consequences for the internal structure and formation of the nucleus of comet 67P/C-G will be discussed.
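Constraint (i) can be illustrated with a toy model: two homogeneous spheres in contact, standing in for the two lobes. All numbers below are hypothetical and not taken from the OSIRIS shape model; the point is only that unequal lobe densities shift the centre of mass away from the centre of figure.

```python
import math

# Toy two-lobe contact binary (hypothetical numbers, not the OSIRIS model):
# two homogeneous spheres in contact along the x axis. If the lobes have
# different bulk densities, the centre of mass shifts away from the centre
# of figure (the volume centroid); measuring that offset is constraint (i).
def lobe(radius_m, density_kg_m3, x_m):
    volume = 4.0 / 3.0 * math.pi * radius_m ** 3
    return {"x": x_m, "V": volume, "m": volume * density_kg_m3}

big = lobe(2000.0, 500.0, 0.0)       # large lobe centred at the origin
small = lobe(1250.0, 450.0, 3250.0)  # smaller, less dense lobe in contact

x_cof = (big["V"] * big["x"] + small["V"] * small["x"]) / (big["V"] + small["V"])
x_com = (big["m"] * big["x"] + small["m"] * small["x"]) / (big["m"] + small["m"])
print(f"centre of figure {x_cof:.0f} m, centre of mass {x_com:.0f} m, "
      f"offset {x_cof - x_com:.0f} m")
```

With these numbers the offset comes out to a few tens of metres, directed toward the denser lobe; inverting the measured offset constrains the density contrast.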

  4. The internal density distribution of comet 67P/C-G based on 3D models

    NASA Astrophysics Data System (ADS)

    Jorda, Laurent; Hviid, Stubbe; Capanna, Claire; Gaskell, Robert; Gutierrez, Pedro; Preusker, Frank; Rodionov, Sergey; Scholten, Frank

    2016-04-01

The OSIRIS camera aboard the Rosetta spacecraft has observed the nucleus of comet 67P/C-G from the mapping phase in summer 2014 until now. The images have allowed the three-dimensional reconstruction of the nucleus surface with stereophotogrammetry (Preusker et al., Astron. Astrophys.) and stereophotoclinometry (Jorda et al., submitted to Icarus) techniques. We use the reconstructed models to constrain the internal density distribution based on: (i) the measurement of the offset between the center of mass and the center of figure of the object, and (ii) the assumption that flat areas observed at the surface of the comet correspond to iso-gravity surfaces. The results of our analysis will be presented, and the consequences for the internal structure and formation of the nucleus of comet 67P/C-G will be discussed.

  5. Cosmochemical implications of CONSERT permittivity characterization of 67P/CG

    NASA Astrophysics Data System (ADS)

    Herique, A.; Kofman, W.; Beck, P.; Bonal, L.; Buttarazzi, I.; Heggy, E.; Lasue, J.; Levasseur-Regourd, A. C.; Quirico, E.; Zine, S.

    2016-11-01

Analysis of the propagation of the Comet Nucleus Sounding Experiment by Radiowave Transmission (CONSERT) signal throughout the small lobe of the 67P/CG nucleus has permitted us to deduce the real part of the permittivity, at a value of 1.27 ± 0.05. The first interpretation of this value, using the dielectric properties of mixtures of ices (H2O, CO2), refractories (i.e. dust) and porosity, led to the conclusion that the comet porosity lies in the range 75-85 per cent. In addition, the dust-to-ice ratio was found to range between 0.4 and 2.6 and the permittivity of dust (including 30 per cent porosity) was determined to be lower than 2.9. This last value corresponds to a permittivity lower than 4 for a material without any porosity. This article is intended to refine the dust permittivity estimate by taking into account updated values of the nucleus densities and dust/ice ratio and to provide further insights into the nature of the constituents of comet 67P/CG. We adopted a systematic approach: determination of the dust permittivity as a function of the volume fractions of ice, dust and vacuum (i.e. porosity) and comparison with the permittivity of meteoritic, mineral and organic materials from the literature and laboratory measurements. Then different composition models of the nucleus corresponding to cosmochemical end members of 67P/CG dust are tested. For each of these models, the location in the ice/dust/vacuum ternary diagram is calculated based on available dielectric measurements and compared with the locus of 67P/CG. The number of compliant models is small, and the cosmochemical implications of each are discussed in order to arrive at a preferred model.
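The ternary ice/dust/vacuum approach can be illustrated with a dielectric mixing rule. The choice of the Looyenga rule, the water-ice permittivity value and the particular volume fractions below are assumptions of this sketch, not values from the study, which surveys a range of materials and models.

```python
# Illustrative use of a dielectric mixing rule (Looyenga) to back out a
# dust permittivity from a bulk value like the one measured by CONSERT.
# The mixing rule and the volume fractions are assumptions of this sketch.
EPS_VACUUM = 1.0
EPS_WATER_ICE = 3.1   # typical value near 100 MHz (assumed)

def dust_permittivity(eps_bulk, f_ice, f_dust, f_vacuum):
    """Invert the Looyenga rule: eps^(1/3) = sum(f_i * eps_i^(1/3))."""
    assert abs(f_ice + f_dust + f_vacuum - 1.0) < 1e-9
    cube_root = (eps_bulk ** (1 / 3)
                 - f_ice * EPS_WATER_ICE ** (1 / 3)
                 - f_vacuum * EPS_VACUUM ** (1 / 3)) / f_dust
    return cube_root ** 3

# Bulk permittivity 1.27 with 75% porosity, 10% ice, 15% dust -- one
# hypothetical split within the ranges quoted above:
print(round(dust_permittivity(1.27, 0.10, 0.15, 0.75), 2))  # → 1.94
```

The result is comfortably below the quoted upper bound of 2.9, showing how each assumed composition maps to a point that can be tested against the ternary-diagram locus.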

  6. Diamagnetic region(s): structure of the unmagnetized plasma around Comet 67P/CG

    NASA Astrophysics Data System (ADS)

    Henri, P.; Vallières, X.; Hajra, R.; Goetz, C.; Richter, I.; Glassmeier, K.-H.; Galand, M.; Rubin, M.; Eriksson, A. I.; Nemeth, Z.; Vigren, E.; Beth, A.; Burch, J. L.; Carr, C.; Nilsson, H.; Tsurutani, B.; Wattieaux, G.

    2017-07-01

ESA's comet chaser Rosetta has monitored the evolution of the ionized atmosphere of comet 67P/Churyumov-Gerasimenko (67P/CG) and its interaction with the solar wind for more than 2 yr. Around perihelion, while the cometary outgassing rate was highest, Rosetta crossed hundreds of unmagnetized regions, but did not seem to have crossed a large-scale diamagnetic cavity as anticipated. Using in situ Rosetta observations, we characterize the structure of the unmagnetized plasma found around comet 67P/CG. Plasma density measurements from RPC-MIP are analysed in the unmagnetized regions identified with RPC-MAG. The plasma observations are discussed in the context of the cometary escaping neutral atmosphere, observed by ROSINA/COPS. The plasma density in the different diamagnetic regions crossed by Rosetta ranges from ∼100 to ∼1500 cm-3. These regions exhibit a remarkably systematic behaviour that essentially depends on the comet activity and the cometary ionosphere expansion. An effective total ionization frequency is obtained from in situ observations during the high outgassing activity phase of comet 67P/CG. Although several diamagnetic regions have been crossed over a large range of distances to the comet nucleus (from 50 to 400 km) and to the Sun (1.25-2.4 au), in situ observations give strong evidence for a single diamagnetic region, located close to the electron exobase. Moreover, the observations are consistent with an unstable contact surface that can locally extend up to about 10 times the electron exobase.

  7. Concerted epigenetic signatures inheritance at PcG targets through replication.

    PubMed

    Lanzuolo, Chiara; Lo Sardo, Federica; Orlando, Valerio

    2012-04-01

Polycomb group (PcG) proteins, by controlling gene-silencing transcriptional programs through the cell cycle, lock in cell identity and memory. Recent chromatin genome-wide studies indicate that PcG target sites are bivalent domains with overlapping repressive H3K27me3 and active H3K4me3 mark domains. During S phase, the stability of epigenetic signatures is challenged by the replication fork passage. Hence, specific mechanisms of epigenetic inheritance might be provided to preserve epigenome structures. Recently, we identified a critical time window before replication, during which high levels of PcG binding and histone marks on BX-C PRE target sites set the stage for subsequent dilution of epigenomic components, allowing proper transmission of epigenetic signatures to the next generation. Here, we extended this analysis to promoter elements, showing the same mechanism of inheritance. Furthermore, to gain insight into the inheritance of PRE bivalent marks, we analyzed the dynamics of H3K4me3 deposition, a mark that correlates with transcriptionally active chromatin. Likewise, we found an early S-phase enrichment of the H3K4me3 mark preceding the replication-dependent dilution. This evidence suggests that all epigenetic marks are inherited simultaneously to ensure their correct propagation through replication and to protect the "bivalency" of PREs.

  8. Concerted epigenetic signatures inheritance at PcG targets through replication

    PubMed Central

    Lanzuolo, Chiara; Lo Sardo, Federica; Orlando, Valerio

    2012-01-01

Polycomb group (PcG) proteins, by controlling gene-silencing transcriptional programs through the cell cycle, lock in cell identity and memory. Recent chromatin genome-wide studies indicate that PcG target sites are bivalent domains with overlapping repressive H3K27me3 and active H3K4me3 mark domains. During S phase, the stability of epigenetic signatures is challenged by the replication fork passage. Hence, specific mechanisms of epigenetic inheritance might be provided to preserve epigenome structures. Recently, we identified a critical time window before replication, during which high levels of PcG binding and histone marks on BX-C PRE target sites set the stage for subsequent dilution of epigenomic components, allowing proper transmission of epigenetic signatures to the next generation. Here, we extended this analysis to promoter elements, showing the same mechanism of inheritance. Furthermore, to gain insight into the inheritance of PRE bivalent marks, we analyzed the dynamics of H3K4me3 deposition, a mark that correlates with transcriptionally active chromatin. Likewise, we found an early S-phase enrichment of the H3K4me3 mark preceding the replication-dependent dilution. This evidence suggests that all epigenetic marks are inherited simultaneously to ensure their correct propagation through replication and to protect the “bivalency” of PREs. PMID:22421150

  9. Cosmochemical implications of CONSERT permittivity characterization of 67P/C-G

    NASA Astrophysics Data System (ADS)

    Levasseur-Regourd, A.; Hérique, Alain; Kofman, Wlodek; Beck, Pierre; Bonal, Lydie; Buttarazzi, Ilaria; Heggy, Essam; Lasue, Jeremie; Quirico, Eric; Zine, Sonia

    2016-10-01

Unique information about the internal structure of the nucleus of comet 67P/C-G was provided by the CONSERT bistatic radar on board Rosetta and Philae [1]. Analysis of the propagation of its signal throughout the small lobe indicated that the real part of the permittivity at 90 MHz is 1.27 ± 0.05. The first interpretation of this value, using dielectric properties of mixtures of dust and ices (H2O, CO2), led to the conclusion that the comet porosity ranges between 75 and 85%. In addition, the dust/ice ratio was found to range between 0.4 and 2.6, and the permittivity of dust (including 30% porosity) was determined to be lower than 2.9. The dust permittivity estimate is now refined by taking into account the updated values of the nucleus density and of the dust/ice ratio, in order to provide further insights into the nature of the constituents of comet 67P/C-G [2]. We adopt a systematic approach: i) determination of the dust permittivity as a function of the ice (I), dust (D) and vacuum (V) volume fractions; ii) comparison with the permittivity of meteoritic, mineral and organic materials from the literature and laboratory measurements; iii) tests of several composition models of the nucleus, corresponding to cosmochemical end members of 67P/C-G. For each of these models the location in the ternary I/D/V diagram is calculated based on available dielectric measurements, and compared with the locus of 67P/C-G. The number of compliant models is small and the cosmochemical implications of each are discussed [2]. An important fraction of carbonaceous material is required in the dust in order to match the CONSERT permittivity observations, establishing that comets represent a massive carbon reservoir.
Support from the Centre National d'Études Spatiales (CNES, France) for this work, based on observations with CONSERT on board Rosetta, is acknowledged. The CONSERT instrument was designed, built and operated by IPAG, LATMOS and MPS and was financially supported by CNES, CNRS, UJF/UGA, DLR and MPS.

  10. PcG Proteins, DNA Methylation, and Gene Repression by Chromatin Looping

    PubMed Central

    Tiwari, Vijay K; McGarvey, Kelly M; Licchesi, Julien D.F; Ohm, Joyce E; Herman, James G; Schübeler, Dirk; Baylin, Stephen B

    2008-01-01

Many DNA hypermethylated and epigenetically silenced genes in adult cancers are Polycomb group (PcG) marked in embryonic stem (ES) cells. We show that a large region upstream (∼30 kb) of and extending ∼60 kb around one such gene, GATA-4, is organized—in Tera-2 undifferentiated embryonic carcinoma (EC) cells—in a topologically complex multi-loop conformation that is formed by multiple internal long-range contact regions near areas enriched for EZH2, other PcG proteins, and the signature PcG histone mark, H3K27me3. Small interfering RNA (siRNA)–mediated depletion of EZH2 in undifferentiated Tera-2 cells leads to a significant reduction in the frequency of long-range associations at the GATA-4 locus, seemingly dependent on affecting the H3K27me3 enrichments around those chromatin regions, accompanied by a modest increase in GATA-4 transcription. The chromatin loops completely dissolve, accompanied by loss of PcG proteins and H3K27me3 marks, when Tera-2 cells receive differentiation signals which induce a ∼60-fold increase in GATA-4 expression. In colon cancer cells, however, the frequency of the long-range interactions is increased in a setting where GATA-4 has no basal transcription, the loops encompass multiple, abnormally DNA hypermethylated CpG islands, and the methyl-cytosine binding protein MBD2 is localized to these CpG islands, including ones near the gene promoter. Removing DNA methylation through genetic disruption of DNA methyltransferases (DKO cells) leads to loss of MBD2 occupancy and to a decrease in the frequency of long-range contacts, such that these now more resemble those in undifferentiated Tera-2 cells. Our findings reveal unexpected similarities in higher order chromatin conformation between stem/precursor cells and adult cancers. We also provide novel insight that PcG-occupied and H3K27me3-enriched regions can form chromatin loops and physically interact in cis around a single gene in mammalian cells. The loops associate with a

  11. Structure and dynamics of the unmagnetized plasma around comet 67P/CG

    NASA Astrophysics Data System (ADS)

    Henri, P.; Vallières, X.; Gilet, N.; Hajra, R.; Moré, J.; Goetz, C.; Richter, I.; Glassmeier, K. H.; Galand, M. F.; Heritier, K. L.; Eriksson, A. I.; Nemeth, Z.; Tsurutani, B.; Rubin, M.; Altwegg, K.

    2016-12-01

At distances close enough to the Sun, when comets are characterised by significant outgassing, the cometary neutral density may become large enough for the cometary plasma and the cometary gas to be coupled through ion-neutral and electron-neutral collisions. This coupling enables the formation of an unmagnetised expanding cometary ionosphere around the comet nucleus, also called a diamagnetic cavity, within which the solar wind magnetic field cannot penetrate. The instruments of the Rosetta Plasma Consortium (RPC), onboard the Rosetta Orbiter, enable us to better constrain the structure, dynamics and stability of the plasma around comet 67P/CG. Recently, magnetic field measurements (RPC-MAG) have shown the existence of such a diamagnetic region around comet 67P/CG [Götz et al., 2016]. Contrary to a single, large-scale diamagnetic cavity such as was observed around comet Halley, Rosetta has crossed several diamagnetic structures along its trajectory around comet 67P/CG. Using electron density measurements from the Mutual Impedance Probe (RPC-MIP) during the different diamagnetic cavity crossings, identified by the fluxgate magnetometer (RPC-MAG), we map the unmagnetised plasma density around comet 67P/CG. Our aim is to better constrain the structure, dynamics and stability of this inner cometary plasma layer characterised by cold electrons (as witnessed by the Langmuir probes, RPC-LAP). The ionisation ratio in these unmagnetised region(s) is computed from the measured electron (RPC-MIP) and neutral gas (ROSINA/COPS) densities. In order to assess the importance of solar EUV radiation as a source of ionisation, the observed electron density will be compared to the density expected from an ionospheric model taking into account solar radiation absorption. The crossings of diamagnetic region(s) by Rosetta show that the unmagnetised cometary plasma is particularly homogeneous, compared to the highly dynamical magnetised plasma observed in adjacent

  12. Identification and Characterization of γ-Aminobutyric Acid Uptake System GabPCg (NCgl0464) in Corynebacterium glutamicum

    PubMed Central

    Zhao, Zhi; Ma, Wen-hua; Zhou, Ning-Yi

    2012-01-01

    Corynebacterium glutamicum is widely used for industrial production of various amino acids and vitamins, and there is growing interest in engineering this bacterium for more commercial bioproducts such as γ-aminobutyric acid (GABA). In this study, a C. glutamicum GABA-specific transporter (GabPCg) encoded by ncgl0464 was identified and characterized. GabPCg plays a major role in GABA uptake and is essential to C. glutamicum growing on GABA. GABA uptake by GabPCg was weakly competed by l-Asn and l-Gln and stimulated by sodium ion (Na+). The Km and Vmax values were determined to be 41.1 ± 4.5 μM and 36.8 ± 2.6 nmol min−1 (mg dry weight [DW])−1, respectively, at pH 6.5 and 34.2 ± 1.1 μM and 67.3 ± 1.0 nmol min−1 (mg DW)−1, respectively, at pH 7.5. GabPCg has 29% amino acid sequence identity to a previously and functionally identified aromatic amino acid transporter (TyrP) of Escherichia coli but low identities to the currently known GABA transporters (17% and 15% to E. coli GabP and Bacillus subtilis GabP, respectively). The mutant RES167 Δncgl0464/pGXKZ9 with the GabPCg deletion showed 12.5% higher productivity of GABA than RES167/pGXKZ9. It is concluded that GabPCg represents a new type of GABA transporter and is potentially important for engineering GABA-producing C. glutamicum strains. PMID:22307305
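The reported kinetic constants plug directly into the Michaelis-Menten rate law; as a quick check (using the pH 7.5 values from the abstract), the uptake rate at a substrate concentration equal to Km is half of Vmax by definition:

```python
# Michaelis-Menten uptake rate for GabPCg, using the constants reported
# above (Km in uM; Vmax in nmol min^-1 (mg dry weight)^-1).
def uptake_rate(s_uM, km_uM, vmax):
    return vmax * s_uM / (km_uM + s_uM)

# At pH 7.5: Km = 34.2 uM, Vmax = 67.3. At S = Km the rate is Vmax / 2.
print(round(uptake_rate(34.2, 34.2, 67.3), 2))  # → 33.65
```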

  13. Electronic Packaging Techniques

    NASA Technical Reports Server (NTRS)

    1979-01-01

    A characteristic of aerospace system design is that equipment size and weight must always be kept to a minimum, even in small components such as electronic packages. The dictates of spacecraft design have spawned a number of high-density packaging techniques, among them methods of connecting circuits in printed wiring boards by processes called stitchbond welding and parallel gap welding. These processes help designers compress more components into less space; they also afford weight savings and lower production costs.

  14. Compositional maps of 67P/CG nucleus surface after perihelion passage by Rosetta/VIRTIS

    NASA Astrophysics Data System (ADS)

    Filacchione, Gianrico; Ciarniello, M.; Capaccioni, F.; Raponi, A.; De Sanctis, M. C.; Tosi, F.; Migliorini, Alessandra; Piccioni, G.; Cerroni, P.; Capria, M. T.; Erard, S.; Bockelee-Morvan, D.; Leyrat, C.; Arnold, G.; Barucci, M. A.; Schmitt, B.; Quirico, E.

    2016-11-01

Moving away from perihelion passage (August 13th, 2015), VIRTIS-M, the 0.25-5.0 μm imaging spectrometer on board Rosetta, has again mapped the northern and equatorial regions of 67P/CG's nucleus in order to trace the color and composition evolution of the surface. With the loss of the IR channel due to the failure of the active cryogenic cooler in May 2015, VIRTIS-M has observed only with the VIS channel in the 0.25-1.0 μm spectral range. Despite this limitation, the returned data are valuable for comparing surface properties between pre- and post-perihelion times. Approaching perihelion passage, 67P/CG's nucleus experienced a general brightening due to the removal of the surficial dust layer caused by the more intense gaseous activity, with the consequent exposure of a larger fraction of water ice. Coma observations by VIRTIS during pre-perihelion have shown a correlation between the areas of the nucleus where gaseous activity by water ice sublimation is more intense and the surface brightening caused by dust removal. After applying data calibration and photometric correction, VIRTIS data are projected on the irregularly shaped digital model of 67P/CG with the aim of deriving visible albedo and color maps rendered with a spatial resolution of 0.5×0.5 deg in latitude-longitude, corresponding to a sampling of about 15 m/pixel. Dedicated mapping sequences executed at different heliocentric distances are employed to follow the dynamical evolution of the surface. Direct comparison between compositional maps obtained at the same heliocentric distances along inbound and outbound orbits makes it possible to identify the changes that occurred in the same areas of the surface. In this context, the first VIRTIS-M maps, obtained in August 2014 at a heliocentric distance of 3.4 AU along the inbound orbit with a solar phase angle of about 30-45°, are compared with the last ones, taken in June 2016 at 3.2 AU from the Sun on the outbound trajectory at solar phases of about
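As a quick sanity check on the quoted sampling, a 0.5° map bin corresponds to roughly 15 m on the surface if one assumes an effective nucleus radius of about 1.7 km (the radius is an assumption of this sketch, not a figure from the abstract):

```python
import math

# Arc length spanned by a 0.5 deg latitude-longitude bin on a body with an
# assumed ~1.7 km effective radius (hypothetical value for 67P/CG).
mean_radius_m = 1700.0
bin_m = mean_radius_m * math.radians(0.5)
print(round(bin_m, 1))  # → 14.8
```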

  15. Drosophila melanogaster dHCF interacts with both PcG and TrxG epigenetic regulators.

    PubMed

    Rodriguez-Jato, Sara; Busturia, Ana; Herr, Winship

    2011-01-01

    Repression and activation of gene transcription involves multiprotein complexes that modify chromatin structure. The integration of these complexes at regulatory sites can be assisted by co-factors that link them to DNA-bound transcriptional regulators. In humans, one such co-factor is the herpes simplex virus host-cell factor 1 (HCF-1), which is implicated in both activation and repression of transcription. We show here that disruption of the gene encoding the Drosophila melanogaster homolog of HCF-1, dHCF, leads to a pleiotropic phenotype involving lethality, sterility, small size, apoptosis, and morphological defects. In Drosophila, repressed and activated transcriptional states of cell fate-determining genes are maintained throughout development by Polycomb Group (PcG) and Trithorax Group (TrxG) genes, respectively. dHCF mutant flies display morphological phenotypes typical of TrxG mutants and dHCF interacts genetically with both PcG and TrxG genes. Thus, dHCF inactivation enhances the mutant phenotypes of the Pc PcG as well as brm and mor TrxG genes, suggesting that dHCF possesses Enhancer of TrxG and PcG (ETP) properties. Additionally, dHCF interacts with the previously established ETP gene skd. These pleiotropic phenotypes are consistent with broad roles for dHCF in both activation and repression of transcription during fly development.

  16. PCG: A prototype incremental compilation facility for the SAGA environment, appendix F

    NASA Technical Reports Server (NTRS)

    Kimball, Joseph John

    1985-01-01

    A programming environment supports the activity of developing and maintaining software. New environments provide language-oriented tools such as syntax-directed editors, whose usefulness is enhanced because they embody language-specific knowledge. When syntactic and semantic analysis occur early in the cycle of program production, that is, during editing, the use of a standard compiler is inefficient, for it must re-analyze the program before generating code. Likewise, it is inefficient to recompile an entire file, when the editor can determine that only portions of it need updating. The pcg, or Pascal code generation, facility described here generates code directly from the syntax trees produced by the SAGA syntax directed Pascal editor. By preserving the intermediate code used in the previous compilation, it can limit recompilation to the routines actually modified by editing.
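The idea of limiting recompilation to the routines actually modified can be sketched as follows. The fingerprinting scheme and all names are hypothetical; the real pcg facility works directly on the SAGA editor's syntax trees rather than on source text.

```python
import hashlib

# Minimal sketch of incremental recompilation: fingerprint each routine and
# regenerate code only for routines whose fingerprint changed since the last
# build. Names are hypothetical, not the actual pcg interfaces.
def fingerprint(routine_source):
    return hashlib.sha256(routine_source.encode()).hexdigest()

def routines_to_recompile(old_fps, routines):
    """routines: {name: source}; old_fps: {name: fingerprint from last build}."""
    return [name for name, src in routines.items()
            if old_fps.get(name) != fingerprint(src)]

old = {"f": fingerprint("begin x := 1 end"), "g": fingerprint("begin y := 2 end")}
edited = {"f": "begin x := 1 end", "g": "begin y := 3 end"}  # only g was edited
print(routines_to_recompile(old, edited))  # → ['g']
```

The editor-based setting improves on this sketch: because the editor already knows which subtrees were modified, no fingerprint comparison is needed at all.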

  17. A comparison of different feature extraction methods for diagnosis of valvular heart diseases using PCG signals.

    PubMed

    Rouhani, M; Abdoli, R

    2012-01-01

This article presents a novel method for diagnosis of valvular heart disease (VHD) based on phonocardiography (PCG) signals. The application of pattern classification and of feature selection and reduction methods to the analysis of normal and pathological heart sounds was investigated. After signal preprocessing using independent component analysis (ICA), 32 features are extracted, including carefully selected linear and nonlinear time-domain, wavelet and entropy features. By examining different feature selection and feature reduction methods such as principal component analysis (PCA), genetic algorithms (GA), genetic programming (GP) and generalized discriminant analysis (GDA), the four most informative features are extracted. Furthermore, support vector machine (SVM) and neural network classifiers are compared for diagnosis of pathological heart sounds. Three valvular heart diseases are considered: aortic stenosis (AS), mitral stenosis (MS) and mitral regurgitation (MR). An overall accuracy of 99.47% was achieved by the proposed algorithm.
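The feature-extraction stage can be illustrated with two of the simplest time-domain descriptors of the kind mentioned above, computed on a synthetic tone standing in for a PCG recording. This is a toy sketch only: the full method adds wavelet and entropy features, feature reduction (PCA, GA, GP or GDA) and an SVM or neural-network classifier.

```python
import math

# Toy time-domain feature extraction for a 1-D signal: mean power (energy)
# and zero-crossing rate. Not the paper's feature set, just an illustration.
def time_domain_features(signal):
    n = len(signal)
    energy = sum(x * x for x in signal) / n                       # mean power
    crossings = sum(1 for a, b in zip(signal, signal[1:]) if a * b < 0)
    return {"energy": energy, "zcr": crossings / (n - 1)}

# One second of a 100 Hz tone sampled at 2 kHz; the half-sample phase
# offset keeps samples away from exact zeros.
fs, f0 = 2000, 100
sig = [math.sin(2 * math.pi * f0 * (t + 0.5) / fs) for t in range(fs)]
feats = time_domain_features(sig)
print(round(feats["energy"], 2), round(feats["zcr"], 2))  # → 0.5 0.1
```

A pure tone has mean power 0.5 and crosses zero twice per cycle, so both numbers act as a sanity check before feeding real heart-sound features into a classifier.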

  18. Polycomb (PcG) Proteins, BMI1 and SUZ12, Regulate Arsenic-induced Cell Transformation*

    PubMed Central

    Kim, Hong-Gyum; Kim, Dong Joon; Li, Shengqing; Lee, Kun Yeong; Li, Xiang; Bode, Ann M.; Dong, Zigang

    2012-01-01

Inorganic arsenic is a well-documented human carcinogen associated with cancers of the skin, lung, liver, and bladder. However, the underlying mechanisms explaining the tumorigenic role of arsenic are not well understood. The present study explored a potential mechanism of cell transformation induced by arsenic exposure. Exposure to a low dose (0.5 μM) of arsenic trioxide (As2O3) caused transformation of BALB/c 3T3 cells. In addition, in a xenograft mouse model, tumor growth of the arsenic-induced transformed cells was dramatically increased. In arsenic-induced transformed cells, polycomb group (PcG) proteins, including BMI1 and SUZ12, were activated, resulting in enhanced histone H3K27 tri-methylation levels. On the other hand, tumor suppressor p16INK4a and p19ARF mRNA and protein expression were dramatically suppressed. Introduction of small hairpin (sh) RNA-BMI1 or -SUZ12 into BALB/c 3T3 cells resulted in suppression of arsenic-induced transformation. Histone H3K27 tri-methylation returned to normal in BMI1- or SUZ12-knockdown BALB/c 3T3 cells compared with BMI1- or SUZ12-wildtype cells after arsenic exposure. As a consequence, the expression of p16INK4a and p19ARF was recovered in arsenic-treated BMI1- or SUZ12-knockdown cells. Thus, arsenic-induced cell transformation was blocked by inhibition of PcG function. Taken together, these results strongly suggest that the polycomb proteins BMI1 and SUZ12 are required for cell transformation induced by inorganic arsenic exposure. PMID:22843710

  19. Jpetra Kernel Package

    SciTech Connect

    Heroux, Michael A.

    2004-03-01

    A package of classes for constructing and using distributed sparse and dense matrices, vectors and graphs, written in Java. Jpetra is intended to provide the foundation for basic matrix and vector operations for Java developers. Jpetra provides distributed memory operations via an abstract parallel machine interface. The most common implementation of this interface will be Java sockets.
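The row-partitioned matrix-vector product at the heart of such a package can be sketched as follows, with the parallel processes simulated in a single address space. In a real Jpetra run, each process would own only its row block and exchange the vector entries it needs through the abstract parallel machine interface (e.g. Java sockets).

```python
# Sketch of a row-partitioned distributed matrix-vector product, with the
# "processes" simulated sequentially in one address space.
def partition(n, nprocs):
    """Split row indices 0..n-1 into nprocs contiguous blocks."""
    base, extra = divmod(n, nprocs)
    blocks, start = [], 0
    for p in range(nprocs):
        size = base + (1 if p < extra else 0)
        blocks.append(range(start, start + size))
        start += size
    return blocks

def distributed_matvec(rows_of, matrix, x):
    y = [0.0] * len(matrix)
    for block in rows_of:               # one iteration per simulated process
        for i in block:                 # each process computes its own rows
            y[i] = sum(a * b for a, b in zip(matrix[i], x))
    return y

A = [[2.0, 0.0], [1.0, 3.0]]
print(distributed_matvec(partition(2, 2), A, [1.0, 1.0]))  # → [2.0, 4.0]
```

Because each process writes only its own rows of y, the only communication a distributed implementation needs is gathering the remote entries of x referenced by its local columns.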

  20. Disruptive collisions as the origin of 67P/C-G and small bilobate comets

    NASA Astrophysics Data System (ADS)

    Michel, Patrick; Schwartz, Stephen R.; Jutzi, Martin; Marchi, Simone; Richardson, Derek C.; Zhang, Yun

    2016-10-01

Images of comets sent by spacecraft have shown us that bilobate shapes seem to be common in the cometary population. This has been most recently evidenced by the images of comet 67P/C-G obtained by the ESA Rosetta mission, which show a low-density elongated body interpreted as a contact binary. The origin of such bilobate comets has been thought to be primordial because it requires the slow accretion of two bodies that become the two main components of the final object. However, slow accretion does not only occur during the primordial phase of the Solar System, but also later, during the reaccumulation processes immediately following collisional disruptions of larger bodies. We perform numerical simulations of disruptions of large bodies. We demonstrate that during the ensuing gravitational phase, in which the generated fragments interact under their mutual gravity, aggregates with bilobed or elongated shapes form by reaccumulation at speeds that are at or below the range of those assumed in primordial accretion scenarios [1]. The same scenario has been demonstrated to occur in the asteroid belt to explain the origin of asteroid families [2] and has provided insight into the shapes of thus-far observed asteroids such as 25143 Itokawa [3]. Here we show that it is also a more general outcome that applies to disruption events in the outer Solar System. Moreover, we show that high-temperature regions are very localized during the impact process, which solves the problem of the survival of organics and volatiles in the collisional process. The advantage of this scenario for the formation of small bilobate shapes, including 67P/C-G, is that it does not necessitate a primordial origin, as such disruptions can occur at later stages of the Solar System. This demonstrates how such comets can be relatively young, consistent with other studies that show that these shapes are unlikely to be formed early on and survive the entire history of the Solar System [4

  1. Growing protein crystals in microgravity - The NASA Microgravity Science and Applications Division (MSAD) Protein Crystal Growth (PCG) program

    NASA Technical Reports Server (NTRS)

    Herren, B.

    1992-01-01

    In collaboration with a medical researcher at the University of Alabama at Birmingham, NASA's Marshall Space Flight Center in Huntsville, Alabama, under the sponsorship of the Microgravity Science and Applications Division (MSAD) at NASA Headquarters, is continuing a series of space experiments in protein crystal growth which could lead to innovative new drugs as well as basic science data on protein molecular structures. From 1985 through 1992, Protein Crystal Growth (PCG) experiments will have been flown on the Space Shuttle a total of 14 times. The first four hand-held experiments were used to test hardware concepts; later flights incorporated these concepts for vapor diffusion protein crystal growth with temperature control. This article provides an overview of the PCG program: its evolution, objectives, and plans for future experiments on NASA's Space Shuttle and Space Station Freedom.

  3. The regulatory role of c-MYC on HDAC2 and PcG expression in human multipotent stem cells.

    PubMed

    Bhandari, Dilli Ram; Seo, Kwang-Won; Jung, Ji-Won; Kim, Hyung-Sik; Yang, Se-Ran; Kang, Kyung-Sun

    2011-07-01

    Myelocytomatosis oncogene (c-MYC) is a well-known nuclear oncoprotein having multiple functions in cell proliferation, apoptosis and cellular transformation. Chromosomal modification is also important to the differentiation and growth of stem cells. Histone deacetylase (HDAC) and polycomb group (PcG) family genes are well-known chromosomal modification genes. The aim of this study was to elucidate the role of c-MYC in the expression of chromosomal modification via the HDAC family genes in human mesenchymal stem cells (hMSCs). To achieve this goal, c-MYC expression was modified by gene knockdown and overexpression via a lentiviral vector. Using the modified c-MYC expression, our study focused on cell proliferation, differentiation and the cell cycle. Furthermore, the relationship of c-MYC with HDAC2 and PcG genes was also examined. Cell proliferation and differentiation were dramatically decreased in c-MYC knocked-down human umbilical cord blood-derived MSCs, whereas they were increased in c-MYC-overexpressing cells. Similarly, RT-PCR and Western blotting results revealed that HDAC2 expression was decreased in c-MYC knocked-down and increased in c-MYC-overexpressing hMSCs. Database analysis indicates the presence of a c-MYC binding motif in the HDAC2 promoter region, which was confirmed by chromatin immunoprecipitation assay. The influence of c-MYC and HDAC2 on PcG expression was confirmed. This might indicate a regulatory role of c-MYC over HDAC2 and PcG genes. c-MYC's regulatory role over HDAC2 was also confirmed in human adipose tissue-derived MSCs and bone marrow-derived MSCs. From these findings, it can be concluded that c-MYC plays a vital role in cell proliferation and differentiation via chromosomal modification.

  5. Evidence for a precession of the nucleus of comet 67P/C-G from ROSETTA/OSIRIS images

    NASA Astrophysics Data System (ADS)

    Jorda, Laurent; Gutierrez, Pedro; Davidsson, Bjoern; Gaskell, Robert; Hviid, Stubbe; Keller, Horst Uwe; Maquet, Lucie; Mottola, Stefano; Preusker, Frank; Scholten, Frank

    2015-11-01

    The retrieval of the rotational parameters of comet 67P/C-G is part of the shape reconstruction process conducted from data collected by the OSIRIS imaging system aboard ROSETTA. Among other parameters, this includes the reconstruction of the (RA, Dec) direction of the Z axis of the body-fixed frame and that of the angular momentum vector. The stereophotogrammetric solution (Preusker et al., A&A 2015, in press) obtained in Aug-Sep 2014 already showed evidence for a complex rotation of comet 67P/C-G. A subsequent analysis of the rotational data obtained using the stereophotoclinometry method (Gaskell et al., MP&S 43, 1049, 2008) up to April 2015 also revealed a precession with a likelihood greater than 99.99%. The amplitude and period of the (RA, Dec) variations measured with both methods are fully compatible. We propose an interpretation of the measured period as a combination of torque-free motions: a rotation combined with a precession of small amplitude. The modeling of this motion has implications for the values of the moments of inertia, from which it is possible to constrain the internal density distribution of comet 67P/C-G.

  6. Identification and Characterization of the Conjugal Transfer Region of the pCg1 plasmid from Naphthalene-Degrading Pseudomonas putida Cg1

    PubMed Central

    Park, Woojun; Jeon, Che Ok; Hohnstock-Ashe, Amy M.; Winans, Stephen C.; Zylstra, Gerben J.; Madsen, Eugene L.

    2003-01-01

    Hybridization and restriction fragment length polymorphism data (K. G. Stuart-Keil, A. M. Hohnstock, K. P. Drees, J. B. Herrick, and E. L. Madsen, Appl. Environ. Microbiol. 64:3633-3640, 1998) have shown that pCg1, a naphthalene catabolic plasmid carried by Pseudomonas putida Cg1, is homologous to the archetypal naphthalene catabolic plasmid, pDTG1, in P. putida NCIB 9816-4. Sequencing of the latter plasmid allowed PCR primers to be designed for amplifying and sequencing the conjugal transfer region in pCg1. The mating pair formation (mpf) gene mpfA, encoding the putative precursor of the conjugative pilin subunit from pCg1, was identified along with other trb-like mpf genes. Sequence comparison revealed that the 10 mpf genes in pCg1 and pDTG1 are closely related (61 to 84% identity) in sequence and operon structure to the putative mpf genes of the catabolic plasmid pWW0 (the TOL plasmid of P. putida) and pM3 (an antibiotic resistance plasmid of Pseudomonas spp.). A polar mutation caused by insertional inactivation in mpfA of pCg1 and reverse transcriptase PCR analysis of mRNA showed that this mpf region was involved in conjugation and was transcribed from a promoter located upstream of an open reading frame adjacent to mpfA. lacZ transcriptional fusions revealed that mpf genes of pCg1 were expressed constitutively both in liquid and on solid media. This expression did not respond to host exposure to naphthalene. Conjugation frequency on semisolid media was consistently 10- to 100-fold higher than that in liquid media. Thus, conjugation of pCg1 in P. putida Cg1 was enhanced by expression of genes in the mpf region and by surfaces where conditions fostering stable, high-density cell-to-cell contact are manifest. PMID:12788725

  7. Long-range repression by multiple polycomb group (PcG) proteins targeted by fusion to a defined DNA-binding domain in Drosophila.

    PubMed Central

    Roseman, R R; Morgan, K; Mallin, D R; Roberson, R; Parnell, T J; Bornemann, D J; Simon, J A; Geyer, P K

    2001-01-01

    A tethering assay was developed to study the effects of Polycomb group (PcG) proteins on gene expression in vivo. This system employed the Su(Hw) DNA-binding domain (ZnF) to direct PcG proteins to transposons that carried the white and yellow reporter genes. These reporters constituted naive sensors of PcG effects, as bona fide PcG response elements (PREs) were absent from the constructs. To assess the effects of different genomic environments, reporter transposons integrated at nearly 40 chromosomal sites were analyzed. Three PcG fusion proteins, ZnF-PC, ZnF-SCM, and ZnF-ESC, were studied, since biochemical analyses place these PcG proteins in distinct complexes. Tethered ZnF-PcG proteins repressed white and yellow expression at the majority of sites tested, with each fusion protein displaying a characteristic degree of silencing. Repression by ZnF-PC was stronger than ZnF-SCM, which was stronger than ZnF-ESC, as judged by the percentage of insertion lines affected and the magnitude of the conferred repression. ZnF-PcG repression was more effective at centric and telomeric reporter insertion sites, as compared to euchromatic sites. ZnF-PcG proteins tethered as far as 3.0 kb away from the target promoter produced silencing, indicating that these effects were long range. Repression by ZnF-SCM required a protein interaction domain, the SPM domain, which suggests that this domain is not primarily used to direct SCM to chromosomal loci. This targeting system is useful for studying protein domains and mechanisms involved in PcG repression in vivo. PMID:11333237

  8. Scoring Package

    National Institute of Standards and Technology Data Gateway

    NIST Scoring Package (PC database for purchase)   The NIST Scoring Package (Special Database 1) is a reference implementation of the draft Standard Method for Evaluating the Performance of Systems Intended to Recognize Hand-printed Characters from Image Data Scanned from Forms.

  9. Block-bordered diagonalization and parallel iterative solvers

    SciTech Connect

    Alvarado, F.; Dag, H.; Bruggencate, M. ten

    1994-12-31

    One of the most common techniques for enhancing parallelism in direct sparse matrix methods is the reorganization of a matrix into a blocked-bordered structure. Incomplete LDU factorization is a very good preconditioner for PCG in serial environments. However, the inherently sequential nature of the preconditioning step makes it less desirable in parallel environments. This paper explores the use of BBD (Blocked-Bordered Diagonalization) in connection with ILU preconditioners. The paper shows that BBD-based ILU preconditioners are quite amenable to parallel processing. Neglecting entries from the entire border results in a block-diagonal matrix. The result is a great increase in parallelism at the expense of additional iterations. Experiments on the Sequent Symmetry shared-memory machine using (mostly) power system matrices indicate that the method is generally better than conventional ILU preconditioners, and in many cases even better than partitioned-inverse preconditioners, without the initial setup disadvantages of the latter.
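    The iteration underlying the abstract above can be pictured with a short sketch. Below is a minimal preconditioned conjugate gradient (PCG) loop in Python, with a block-Jacobi preconditioner standing in for the BBD-based ILU: dropping the border coupling leaves independent diagonal blocks, each of which could be solved on its own processor. The `pcg`/`block_jacobi` names and the 1D Laplacian test matrix are illustrative assumptions, not code from the paper.

```python
import numpy as np

def pcg(A, b, apply_Minv, tol=1e-10, maxiter=200):
    """Preconditioned conjugate gradients for a symmetric positive-definite A."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = apply_Minv(r)
    p = z.copy()
    rz = r @ z
    for it in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, it + 1
        z = apply_Minv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxiter

def block_jacobi(A, nblocks):
    """Block-Jacobi preconditioner: invert independent diagonal blocks,
    neglecting the border coupling (each block maps to one processor)."""
    n = A.shape[0]
    idx = np.array_split(np.arange(n), nblocks)
    invs = [np.linalg.inv(A[np.ix_(i, i)]) for i in idx]
    def apply_Minv(r):
        z = np.empty_like(r)
        for i, Binv in zip(idx, invs):
            z[i] = Binv @ r[i]   # each block solve is independent
        return z
    return apply_Minv

# 1D Laplacian test problem (a simple discretized PDE)
n = 64
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x, iters = pcg(A, b, block_jacobi(A, 4))
print(iters, np.linalg.norm(A @ x - b))
```

    As the abstract notes, neglecting the border trades extra iterations for parallelism: more blocks means more independent work per iteration but a weaker preconditioner.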

  10. Post Rendez-vous Dielectric 3D Modelling of Comet 67P/CG Using the Rosetta's CONSERT and VIRTIS Instruments Observations

    NASA Astrophysics Data System (ADS)

    Heggy, E.; Palmer, E. M.; Kofman, W. W.; Capria, M. T.; Tosi, F.; Scabbia, G.

    2016-12-01

    In Heggy et al. (2012), prior to Rosetta's rendezvous with Comet 67P/CG, we established two hypothetical dielectric models, representing two geophysical hypotheses and built on the Lamy et al. (1998) shape model, in order to support the interpretation of radar data from the Comet Nucleus Sounding Experiment by Radiowave Transmission (CONSERT) in terms of the potential three-dimensional distribution of porosity, dust-to-ice ratio and temperature in the cometary nucleus. These dielectric models were developed using a combination of dielectric laboratory measurements of cometary analog material (chondritic meteorites) under different porosities, temperatures and dust-to-ice ratios, and incorporated observations of other comets such as Tempel 1 to constrain the geophysical parameters upon which the dielectric constant depends. Now, in the post-rendezvous phase, we present updated dielectric models of Comet 67P using the latest shape model from Rosetta's observations of the nucleus, and incorporate measurements of the comet's dielectric properties from radar observations by Arecibo and CONSERT of the upper regolith and the bulk nucleus "head", respectively. We also update our dielectric measurements with carbonaceous meteorites and carbon-rich porous materials that simulate the actual composition of 67P as derived from Rosetta. In parallel, we constrain the variability in surface dielectric properties using the density and temperature values derived from the thermal observations of the Visible and Infrared Thermal Imaging Spectrometer (VIRTIS) aboard Rosetta. Our preliminary results suggest that the dielectric properties of the upper regolith of 67P are higher than those of the inner icy material constituting the comet's head. In particular, we observe greater variability in surface dielectric properties, with real-part values ranging from 1.8 to 3. We also quantify the variability of these values as a function of the solar illumination angle and distance

  11. How primordial is the structure of comet 67P/C-G (and of comets in general)?

    NASA Astrophysics Data System (ADS)

    Morbidelli, Alessandro; Jutzi, Martin; Benz, Willy; Toliou, Anastasia; Rickman, Hans; Bottke, William; Brasser, Ramon

    2016-10-01

    Several properties of comet 67P/C-G suggest that it is a primordial planetesimal. On the other hand, the size-frequency distribution (SFD) of the craters detected by the New Horizons mission on the surfaces of Pluto and Charon reveals that the SFD of trans-Neptunian objects smaller than 100 km in diameter is very similar to that of the asteroid belt. Because the asteroid belt SFD is at collisional equilibrium, this observation suggests that the SFD of the trans-Neptunian population is at collisional equilibrium as well, implying that comet-size bodies should be the product of collisional fragmentation and not primordial objects. To test whether comet 67P/C-G could be a (possibly lucky) survivor of the original population, we conducted a series of numerical impact experiments, in which an object with the shape and density of 67P/C-G, and material strength varying from 10 to 1,000 Pa, is hit on the "head" by a 100 m projectile at different speeds. From these experiments we derive the impact energy required to disrupt the body catastrophically, or to destroy its bi-lobed shape, as a function of impact speed. Next, we consider a dynamical model in which the original trans-Neptunian disk is dispersed during a phase of temporary dynamical instability of the giant planets, which successfully reproduces the scattered disk and Oort cloud populations inferred from the current fluxes of Jupiter-family and long-period comets. We find that, if the dynamical dispersal of the disk occurs late, as in the Late Heavy Bombardment hypothesis, a 67P/C-G-like body has a negligible probability of avoiding all catastrophic collisions. During this phase, however, the collisional-equilibrium SFD measured by the New Horizons mission can be established. Instead, if the dispersal of the disk occurred as soon as the gas was removed, a 67P/C-G-like body has about a 20% chance of avoiding catastrophic collisions. Nevertheless, it would still undergo tens of reshaping collisions. We estimate that, statistically, the

  12. Monitoring Comet 67P/C-G Micrometer Dust Flux: GIADA onboard Rosetta.

    NASA Astrophysics Data System (ADS)

    Della Corte, Vincenzo; Rotundi, Alessandra; Ivanovski, Stavro; Accolla, Mario; Ferrari, Marco; Sordini, Roberto; Lucarelli, Francesca; Zakharov, Vladimir; Fulle, Marco; Mazzotta Epifani, Elena; López-Moreno, José J.; Rodríguez, Julio; Colangeli, Luigi; Palumbo, Pasquale; Bussoletti, Ezio; Crifo, Jean-Francois; Esposito, Francesca; Green, Simon F.; Grün, Eberhard; Lamy, Philippe L.

    2015-04-01

    The MicroBalance System (MBS) is one of the three measurement subsystems of GIADA, the Grain Impact Analyzer and Dust Accumulator on board the Rosetta/ESA spacecraft (S/C). It consists of five Quartz Crystal Microbalances (QCMs) oriented in roughly orthogonal directions, providing the cumulative dust flux of grains smaller than 10 microns. The MBS has been continuously monitoring comet 67P/CG since the beginning of May 2014. During the first four months of measurements, before the insertion of the S/C into the bound-orbit phase, there was no evidence of dust accumulation on the QCMs. Starting from the beginning of October, three out of five QCMs measured an increase of the deposited dust. The measured fluxes show, as expected, a strong anisotropy. In particular, the dust flux appears to be much higher from the Sun direction than from the comet direction. Acknowledgment: GIADA was built by a consortium led by the Univ. Napoli "Parthenope" & INAF-Oss. Astr. Capodimonte, in collaboration with the Inst. de Astrofisica de Andalucia, Selex-ES, FI and SENER. GIADA is presently managed & operated by Ist. di Astrofisica e Planetologia Spaziali-INAF, IT. GIADA was funded and managed by the Agenzia Spaziale Italiana, IT, with the support of the Spanish Ministry of Education and Science MEC, ES. GIADA was developed from a PI proposal from the University of Kent; sci. & tech. contributions were provided by CISAS, IT, Lab. d'Astr. Spat., FR, and institutions from UK, IT, FR, DE and USA. We thank the RSGS/ESAC, RMOC/ESOC & Rosetta Project/ESTEC for their outstanding work. Science support was provided by NASA through the US Rosetta Project managed by the Jet Propulsion Laboratory/California Institute of Technology. GIADA calibrated data will be available through ESA's PSA web site (www.rssd.esa.int/index.php?project=PSA&page=index). We would like to thank Angioletta

  13. Parallel time integration software

    SciTech Connect

    2014-07-01

    This package implements an optimal-scaling multigrid solver for the (non)linear systems that arise from the discretization of problems with evolutionary behavior. Typically, solution algorithms for evolution equations are based on a time-marching approach, solving sequentially for one time step after the other. Parallelism in these traditional time-integration techniques is limited to spatial parallelism. However, current trends in computer architectures are leading towards systems with more, but not faster, processors. Therefore, faster compute speeds must come from greater parallelism. One approach to achieving parallelism in time is with multigrid, but extending classical multigrid methods for elliptic operators to this setting is a significant achievement. In this software, we implement a non-intrusive, optimal-scaling time-parallel method based on multigrid reduction techniques. The examples in the package demonstrate optimality of our multigrid-reduction-in-time algorithm (MGRIT) for solving a variety of parabolic equations in two and three spatial dimensions. These examples can also be used to show that MGRIT can achieve significant speedup in comparison to sequential time marching on modern architectures.
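    The parallel-in-time idea can be illustrated with a toy two-level iteration. The sketch below uses a parareal-style correction (a related time-parallel scheme, not the package's actual MGRIT implementation) on the scalar ODE y' = λy; all parameters and function names here are made up for illustration. The expensive fine sweeps over the intervals are independent and could run in parallel, while only the cheap coarse sweep remains sequential.

```python
import numpy as np

# Toy parareal iteration for y' = lam * y. MGRIT generalizes this
# two-level structure to a full multigrid hierarchy in time.
lam = -1.0
T, N = 4.0, 32            # time horizon, number of coarse intervals
dT = T / N

def coarse(y, dt):
    """One backward-Euler step: the cheap, sequential propagator."""
    return y / (1 - lam * dt)

def fine(y, dt, m=20):
    """m backward-Euler substeps: the accurate, parallelizable propagator."""
    for _ in range(m):
        y = y / (1 - lam * dt / m)
    return y

# Initial coarse sweep (sequential, cheap)
U = np.zeros(N + 1)
U[0] = 1.0
for n in range(N):
    U[n + 1] = coarse(U[n], dT)

for k in range(8):        # parareal correction iterations
    # Fine and coarse propagations from the previous iterate:
    # each interval is independent, hence parallel in time.
    F = np.array([fine(U[n], dT) for n in range(N)])
    G_old = np.array([coarse(U[n], dT) for n in range(N)])
    # Sequential coarse update with the correction term
    for n in range(N):
        U[n + 1] = coarse(U[n], dT) + F[n] - G_old[n]

print(abs(U[-1] - np.exp(lam * T)))   # error vs. the exact solution
```

    After a few iterations the corrected solution matches the accuracy of a fully sequential fine integration, which is the sense in which such methods trade extra (parallel) work for wall-clock speedup.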

  14. Enhanced Growth of Endothelial Precursor Cells on PCG-Matrix Facilitates Accelerated, Fibrosis-Free, Wound Healing: A Diabetic Mouse Model

    PubMed Central

    Kanitkar, Meghana; Jaiswal, Amit; Deshpande, Rucha; Bellare, Jayesh; Kale, Vaijayanti P.

    2013-01-01

    Diabetes mellitus (DM)-induced endothelial progenitor cell (EPC) dysfunction causes impaired wound healing, which can be rescued by delivery of large numbers of ‘normal’ EPCs onto such wounds. The principal challenges herein are (a) the high number of EPCs required and (b) their sustained delivery onto the wounds. Most of the currently available scaffolds either serve as passive devices for cellular delivery or allow adherence and proliferation, but not both. This clearly indicates that matrices possessing both attributes are ‘the need of the day’ for efficient healing of diabetic wounds. Therefore, we developed a system that not only allows selective enrichment and expansion of EPCs, but also efficiently delivers them onto the wounds. Murine bone marrow-derived mononuclear cells (MNCs) were seeded onto a PolyCaprolactone-Gelatin (PCG) nano-fiber matrix, which offers the combined advantages of strength, biocompatibility and wettability, and cultured in EGM2 to allow EPC growth. The efficacy of the PCG matrix in supporting EPC growth and delivery was assessed by various in vitro parameters. Its efficacy in diabetic wound healing was assessed by topical application of the PCG-EPCs onto diabetic wounds. The PCG matrix promoted a high level of attachment of EPCs and enhanced their growth, colony formation, and proliferation without compromising their viability, as compared to Poly-L-lactic acid (PLLA) and Vitronectin (VN), the matrix and non-matrix controls, respectively. The PCG matrix also allowed a sustained chemotactic migration of EPCs in vitro. The matrix-effected sustained delivery of EPCs onto the diabetic wounds resulted in enhanced, fibrosis-free wound healing as compared to the controls. Our data thus highlight the novel therapeutic potential of PCG-EPCs as a combined ‘growth and delivery system’ to achieve accelerated, fibrosis-free healing of dermal lesions, including diabetic wounds. PMID:23922871

  16. Parallel computers

    SciTech Connect

Treleaven, P.

    1989-01-01

    This book presents an introduction to object-oriented, functional, and logic parallel computing on which the fifth generation of computer systems will be based. Coverage includes concepts for parallel computing languages, a parallel object-oriented system (DOOM) and its language (POOL), an object-oriented multilevel VLSI simulator using POOL, and implementation of lazy functional languages on parallel architectures.

  17. The impact of Polycomb group (PcG) and Trithorax group (TrxG) epigenetic factors in plant plasticity.

    PubMed

    de la Paz Sanchez, Maria; Aceves-García, Pamela; Petrone, Emilio; Steckenborn, Stefan; Vega-León, Rosario; Álvarez-Buylla, Elena R; Garay-Arroyo, Adriana; García-Ponce, Berenice

    2015-11-01

    Current advances indicate that epigenetic mechanisms play important roles in the regulatory networks involved in plant developmental responses to environmental conditions. Hence, understanding the role of such components becomes crucial to understanding the mechanisms underlying the plasticity and variability of plant traits, and thus the ecology and evolution of plant development. We now know that important components of phenotypic variation may result from heritable and reversible epigenetic mechanisms without genetic alterations. The epigenetic factors Polycomb group (PcG) and Trithorax group (TrxG) are involved in developmental processes that respond to environmental signals, playing important roles in plant plasticity. In this review, we discuss current knowledge of TrxG and PcG functions in different developmental processes in response to internal and environmental cues and we also integrate the emerging evidence concerning their function in plant plasticity. Many such plastic responses rely on meristematic cell behavior, including stem cell niche maintenance, cellular reprogramming, flowering and dormancy as well as stress memory. This information will help to determine how to integrate the role of epigenetic regulation into models of gene regulatory networks, which have mostly included transcriptional interactions underlying various aspects of plant development and its plastic response to environmental conditions.

  18. Parallel Adaptive Mesh Refinement Library

    NASA Technical Reports Server (NTRS)

MacNeice, Peter; Olson, Kevin

    2005-01-01

    Parallel Adaptive Mesh Refinement Library (PARAMESH) is a package of Fortran 90 subroutines designed to provide a computer programmer with an easy route to extension of (1) a previously written serial code that uses a logically Cartesian structured mesh into (2) a parallel code with adaptive mesh refinement (AMR). Alternatively, in its simplest use, and with minimal effort, PARAMESH can operate as a domain-decomposition tool for users who want to parallelize their serial codes but who do not wish to utilize adaptivity. The package builds a hierarchy of sub-grids to cover the computational domain of a given application program, with spatial resolution varying to satisfy the demands of the application. The sub-grid blocks form the nodes of a tree data structure (a quad-tree in two or an oct-tree in three dimensions). Each grid block has a logically Cartesian mesh. The package supports one-, two- and three-dimensional models.
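    The block hierarchy described above can be pictured with a small stand-alone sketch. The quad-tree below is purely illustrative (Python rather than the package's Fortran 90 subroutines, with invented `Block`/`refine_where` names, not the PARAMESH API), showing how sub-grid blocks form a tree whose leaves cover the domain at resolutions chosen by a refinement criterion:

```python
from dataclasses import dataclass, field

@dataclass
class Block:
    """One logically Cartesian sub-grid block; children form the quad-tree."""
    x0: float
    y0: float
    size: float
    level: int
    children: list = field(default_factory=list)

    def refine(self):
        """Split this block into four equal child blocks (2D quad-tree)."""
        h = self.size / 2
        self.children = [
            Block(self.x0 + i * h, self.y0 + j * h, h, self.level + 1)
            for j in (0, 1) for i in (0, 1)
        ]

    def leaves(self):
        """Yield the leaf blocks, i.e. the active mesh covering the domain."""
        if not self.children:
            yield self
        else:
            for c in self.children:
                yield from c.leaves()

def refine_where(block, needs_refinement, max_level=3):
    """Recursively refine blocks flagged by the application's criterion."""
    if block.level < max_level and needs_refinement(block):
        block.refine()
        for c in block.children:
            refine_where(c, needs_refinement, max_level)

# Example: refine toward a feature near the origin of the unit square
root = Block(0.0, 0.0, 1.0, 0)
refine_where(root, lambda b: (b.x0**2 + b.y0**2) ** 0.5 < b.size)
print(sum(1 for _ in root.leaves()))
```

    In a real AMR library each leaf would carry its own Cartesian mesh and guard cells, and the tree would be distributed across processors; the sketch shows only the hierarchy-building step.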

  19. The Kull IMC package

    SciTech Connect

Gentile, N A; Keen, N; Rathkopf, J

    1998-10-01

    We describe the Kull IMC package, an Implicit Monte Carlo program written for use in A and X Division radiation-hydro codes. The Kull IMC has been extensively tested. Written in C++ and using genericity via the template feature to allow easy integration into different codes, the Kull IMC currently runs coupled radiation-hydrodynamic problems in two different 3D codes. A stand-alone version also exists, which has been parallelized with mesh replication. This version has been run on up to 384 processors on ASCI Blue Pacific.

  20. Water and Carbon Dioxide Ices-Rich Areas on Comet 67P/CG Nucleus Surface

    NASA Astrophysics Data System (ADS)

    Filacchione, G.; Capaccioni, F.; Raponi, A.; De Sanctis, M. C.; Ciarniello, M.; Barucci, M. A.; Tosi, F.; Migliorini, A.; Capria, M. T.; Erard, S.; Bockelée-Morvan, D.; Leyrat, C.; Arnold, G.; Kappel, D.; McCord, T. B.

    2017-01-01

    fields ice grains [3]; 3) different combinations of water ice and dark terrain, in intimate mixing with small grains (tens of microns) or in areal mixing with large grains (mm-sized), are seen on the eight bright areas discussed in [4]; 4) the CO2 ice in the Anhur region appears grouped in areal patches made of 50 μm sized grains [5]. While the spectroscopic identification of water and carbon dioxide ices is made by means of diagnostic infrared absorption features, their presence causes significant effects also at visible wavelengths, including an increase of the albedo and a reduction of the spectral slope, which results in a bluer color [9,10]. In summary, the thermodynamic conditions prevailing on the 67P/CG nucleus surface allow the presence of only H2O and CO2 ices. Similar properties are probably common among other Jupiter-family comets.

  1. GIADA on-board Rosetta: comet 67P/C-G dust coma characterization

    NASA Astrophysics Data System (ADS)

    Rotundi, Alessandra; Della Corte, Vincenzo; Fulle, Marco; Sordini, Roberto; Ivanovski, Stavro; Accolla, Mario; Ferrari, Marco; Lucarelli, Francesca; Zakharov, Vladimir; Mazzotta Epifani, Elena; López-Moreno, José J.; Rodríguez, Julio; Colangeli, Luigi; Palumbo, Pasquale; Bussoletti, Ezio; Crifo, Jean-Francois; Esposito, Francesca; Green, Simon F.; Grün, Eberhard; Lamy, Philippe L.

    2015-04-01

    GIADA consists of three subsystems: 1) the Grain Detection System (GDS), which detects dust grains as they pass through a laser curtain; 2) the Impact Sensor (IS), which measures grain momentum derived from the impact on a plate connected to five piezoelectric sensors; and 3) the MicroBalances System (MBS), five quartz crystal microbalances oriented in roughly orthogonal directions providing the cumulative dust flux of grains smaller than 10 microns. The GDS provides data on grain speed and optical cross section. The IS grain momentum measurement, when combined with the GDS detection time, provides a direct measurement of grain speed and mass. These combined measurements characterize single-grain dust dynamics in the coma of 67P/CG. No prior in situ dust dynamical measurements at these close distances from the nucleus, and starting from such high heliocentric distances, are available to date. We present here the results obtained by GIADA, which began operating in continuous mode on 18 July 2014, when the comet was at a heliocentric distance of 3.7 AU. The first grain detection occurred when the spacecraft was 814 km from the nucleus, on 1 August 2014. From 1 August to 11 December, GIADA detected more than 800 grains, for which the 3D spatial distribution was determined. About 700 of the 800 are GDS-only detections: "dust clouds", i.e. slow dust grains (≈ 0.5 m/s) crossing the laser curtain very close in time (e.g. 129 grains in 11 s), probably fluffy grains. IS-only detections number about 70, i.e. ≈ 1/10 of the GDS-only ones. This ratio is quite different from what we obtained for the early detections (August-September), when the ratio was ≈ 3, suggesting the presence of different types of particles (bigger, brighter, less dense). The combined GDS+IS detections, i.e. those measured by both the GDS and IS detectors, number about 70 and allowed us to extract the

  2. Where do we expect activity on 67P/CG ? - A thermo-physical point of view

    NASA Astrophysics Data System (ADS)

    Höfner, Sebastian; Vincent, Jean-Baptiste; Sierks, Holger; Jorda, Laurent; Blum, Jürgen

    2015-04-01

    With the OSIRIS Team. OSIRIS, Rosetta's scientific camera system, has been mapping the nucleus surface of 67P/CG for several months now. Images clearly show faint structures in the inner coma that can be linked to dust being lifted off the surface. These jet-like structures appear in most cases to be associated with local areas on the nucleus and show diurnal variations and confined dimensions. The main driver of this activity is sublimation of ices due to absorbed solar irradiation. The complex nucleus surface structure of 67P/CG, in combination with its orbital eccentricity and rotational-axis obliquity, creates a wide range of illumination conditions. The bumpy, fissured and partly fractured morphology leads to a significant influence of shadowing and indirect heating through infrared radiation. In interaction with the variations of fine and dusty material of the cometary crust, the thermal behavior of the local nucleus surface shows unique temporal patterns. 3D shape models at different surface resolution scales allow quasi-3D thermo-physical modeling. These thermal models combine orbital and rotational elements with shape and morphologic information and a discrete resolution of the cometary nucleus in depth. We can therefore distinguish thermal waves on diurnal to orbital scales, and effects that are due to evolution of the cometary crust. We look at the temperature pattern at several regions of 67P/CG in low and high spatial resolution. We are especially interested in identifying areas that reach temperatures sufficiently high to trigger considerable sublimation. In our analysis, we show correlations between active regions in the recent months of observation and our model, as well as perspectives for the approaching perihelion passage. We suggest an answer to the question of which areas are expected to have high potential for activity. By comparing simulated areas to active regions on the nucleus, we contribute to a better understanding of the ruling
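    The sublimation-triggering temperatures discussed in this abstract come from a surface energy balance. A much simplified, hedged illustration follows: instantaneous radiative equilibrium only, ignoring the conduction, shadowing, self-heating and sublimation terms of the authors' model, with assumed albedo and emissivity values.

```python
import math

STEFAN_BOLTZMANN = 5.670374419e-8  # W m^-2 K^-4
SOLAR_CONSTANT = 1361.0            # W m^-2 at 1 AU

def equilibrium_temperature(r_au, albedo=0.06, emissivity=0.95, incidence_deg=0.0):
    """Instantaneous radiative equilibrium of a surface element:
    (1 - A) * S / r^2 * cos(i) = eps * sigma * T^4  (no conduction/sublimation)."""
    absorbed = (1.0 - albedo) * SOLAR_CONSTANT / r_au**2 \
               * math.cos(math.radians(incidence_deg))
    return (absorbed / (emissivity * STEFAN_BOLTZMANN)) ** 0.25

# Subsolar temperature at 3.7 AU (illustrative parameter values)
t_37 = equilibrium_temperature(3.7)
```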

  3. Structural basis of DNA recognition by PCG2 reveals a novel DNA binding mode for winged helix-turn-helix domains

    PubMed Central

    Liu, Junfeng; Huang, Jinguang; Zhao, Yanxiang; Liu, Huaian; Wang, Dawei; Yang, Jun; Zhao, Wensheng; Taylor, Ian A.; Peng, You-Liang

    2015-01-01

    The MBP1 family proteins are the DNA binding subunits of MBF cell-cycle transcription factor complexes and contain an N-terminal winged helix-turn-helix (wHTH) DNA binding domain (DBD). Although the DNA binding mechanism of MBP1 from Saccharomyces cerevisiae has been extensively studied, the structural framework and the DNA binding mode of other MBP1 family proteins remain to be determined. Here, we determined the crystal structure of the DBD of PCG2, the Magnaporthe oryzae orthologue of MBP1, bound to MCB–DNA. The structure revealed that the wing, the 20-loop, helix A and helix B in PCG2–DBD are important elements for DNA binding. Unlike previously characterized wHTH proteins, PCG2–DBD utilizes the wing and helix B to bind the minor groove and the major groove of the MCB–DNA, whilst the 20-loop and helix A interact non-specifically with DNA. Notably, two glutamines, Q89 and Q82, within the wing were found to recognize the MCB core CGCG sequence through hydrogen-bond interactions. Further in vitro assays confirmed the essential roles of Q89 and Q82 in DNA binding. These data together indicate that the MBP1 homologue PCG2 employs an unusual mode of binding to its target DNA and demonstrate the versatility of wHTH domains. PMID:25550425

  4. TrxG and PcG proteins but not methylated histones remain associated with DNA through replication

    PubMed Central

    Petruk, Svetlana; Sedkov, Yurii; Johnston, Danika M.; Hodgson, Jacob W.; Black, Kathryn L.; Kovermann, Sina K.; Beck, Samantha; Canaani, Eli; Brock, Hugh W.; Mazo, Alexander

    2012-01-01

    Propagation of gene expression patterns through the cell cycle requires the existence of an epigenetic mark that re-establishes the chromatin architecture of the parental cell in the daughter cells. We devised assays to determine which potential epigenetic marks associate with epigenetic maintenance elements during DNA replication in Drosophila embryos. Histone H3 trimethylated at lysine 4 or 27 is present during transcription but, surprisingly, is replaced by non-methylated H3 following DNA replication. Methylated H3 is detected on DNA only in nuclei not in S phase. In contrast, the TrxG and PcG proteins Trithorax and Enhancer-of-Zeste, which are the H3K4 and H3K27 methylases, and Polycomb continuously associate with their response elements on the newly replicated DNA. We suggest that histone modification enzymes may re-establish the histone code on newly assembled unmethylated histones and thus may act as epigenetic marks. PMID:22921915

  5. CH Packaging Operations Manual

    SciTech Connect

    None, None

    2009-05-27

    This document provides the user with instructions for assembling a payload. All the steps in Subsections 1.2, Preparing 55-Gallon Drum Payload Assembly; 1.3, Preparing "Short" 85-Gallon Drum Payload Assembly (TRUPACT-II and HalfPACT); 1.4, Preparing "Tall" 85-Gallon Drum Payload Assembly (HalfPACT only); 1.5, Preparing 100-Gallon Drum Payload Assembly; 1.6, Preparing Shielded Container Payload Assembly; 1.7, Preparing SWB Payload Assembly; and 1.8, Preparing TDOP Payload Assembly, must be completed, but may be performed in any order as long as radiological control steps are not bypassed. Transport trailer operations, package loading and unloading from transport trailers, hoisting and rigging activities such as ACGLF operations, equipment checkout and shutdown, and component inspection activities must be performed, but may be performed in any order and in parallel with other activities as long as radiological control steps are not bypassed. Steps involving OCA/ICV lid removal/installation and payload removal/loading may be performed in parallel if there are multiple operators working on the same packaging. Steps involving removal/installation of OCV/ICV upper and lower main O-rings must be performed in sequence, except as noted.

  6. CH Packaging Operations Manual

    SciTech Connect

    None, None

    2008-09-11

    This document provides the user with instructions for assembling a payload. All the steps in Subsections 1.2, Preparing 55-Gallon Drum Payload Assembly; 1.3, Preparing "Short" 85-Gallon Drum Payload Assembly (TRUPACT-II and HalfPACT); 1.4, Preparing "Tall" 85-gallon Drum Payload Assembly (HalfPACT only); 1.5, Preparing 100-Gallon Drum Payload Assembly; 1.6, Preparing SWB Payload Assembly; and 1.7, Preparing TDOP Payload Assembly, must be completed, but may be performed in any order as long as radiological control steps are not bypassed. Transport trailer operations, package loading and unloading from transport trailers, hoisting and rigging activities such as ACGLF operations, equipment checkout and shutdown, and component inspection activities must be performed, but may be performed in any order and in parallel with other activities as long as radiological control steps are not bypassed. Steps involving OCA/ICV lid removal/installation and payload removal/loading may be performed in parallel if there are multiple operators working on the same packaging. Steps involving removal/installation of OCV/ICV upper and lower main O-rings must be performed in sequence.

  7. Seafood Packaging

    NASA Technical Reports Server (NTRS)

    1996-01-01

    NASA's Technology Transfer Office at Stennis Space Center worked with a New Orleans seafood packaging company to develop a container to improve the shipping longevity of seafood, primarily frozen and fresh fish, while preserving the taste. A NASA engineer developed metalized heat resistant polybags with thermal foam liners using an enhanced version of the metalized mylar commonly known as 'space blanket material,' which was produced during the Apollo era.

  9. Packaged Food

    NASA Technical Reports Server (NTRS)

    1976-01-01

    After studies found that many elderly persons don't eat adequately because they can't afford to, they have limited mobility, or they just don't bother, Innovated Foods, Inc. and JSC developed shelf-stable foods processed and packaged for home preparation with minimum effort. Various food-processing techniques and delivery systems are under study and freeze dried foods originally used for space flight are being marketed. (See 77N76140)

  10. Parallel rendering

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas W.

    1995-01-01

    This article provides a broad introduction to the subject of parallel rendering, encompassing both hardware and software systems. The focus is on the underlying concepts and the issues which arise in the design of parallel rendering algorithms and systems. We examine the different types of parallelism and how they can be applied in rendering applications. Concepts from parallel computing, such as data decomposition, task granularity, scalability, and load balancing, are considered in relation to the rendering problem. We also explore concepts from computer graphics, such as coherence and projection, which have a significant impact on the structure of parallel rendering algorithms. Our survey covers a number of practical considerations as well, including the choice of architectural platform, communication and memory requirements, and the problem of image assembly and display. We illustrate the discussion with numerous examples from the parallel rendering literature, representing most of the principal rendering methods currently used in computer graphics.
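    Several of the concepts this survey names (data decomposition, load balancing, coherence) can be made concrete with a toy image-space partition. The scheme below, interleaved scanline assignment, is one common textbook approach and is not drawn from the article itself:

```python
def interleaved_rows(height, n_procs):
    """Assign scan-line i to processor i % n_procs.

    Interleaving is a static load-balancing device: image coherence means
    expensive pixels cluster, so spreading adjacent rows across processors
    evens out per-processor work at the cost of some lost coherence.
    """
    return {p: list(range(p, height, n_procs)) for p in range(n_procs)}

assignment = interleaved_rows(1080, 4)  # row lists for each of 4 renderers
```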

  11. Parallel computation

    NASA Astrophysics Data System (ADS)

    Huberman, Bernardo A.

    1989-11-01

    This paper reviews three different aspects of parallel computation which are useful for physics. The first part deals with special architectures for parallel computing (SIMD and MIMD machines) and their differences, with examples of their uses. The second section discusses the speedup that can be achieved in parallel computation and the constraints generated by the issues of communication and synchrony. The third part describes computation by distributed networks of powerful workstations without global controls and the issues involved in understanding their behavior.
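    The speedup constraints discussed in the second part are commonly quantified with Amdahl's law; the paper itself does not fix a specific formula, so this standard model is offered only as a sketch:

```python
def amdahl_speedup(serial_fraction, n_processors):
    """Amdahl's law: speedup is capped by the inherently serial fraction f,
    S(n) = 1 / (f + (1 - f) / n); as n grows, S approaches 1 / f."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_processors)

# Even 10% serial work limits 10 processors to roughly 5.3x
speedup = amdahl_speedup(0.1, 10)
```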

  12. Reflective Packaging

    NASA Technical Reports Server (NTRS)

    1994-01-01

    The aluminized polymer film used in spacecraft as a radiation barrier to protect both astronauts and delicate instruments has led to a number of spinoff applications. Among them are aluminized shipping bags, food cart covers and medical bags. Radiant Technologies purchases component materials and assembles a barrier made of layers of aluminized foil. The packaging reflects outside heat away from the product inside the container. The company is developing new aluminized lines, express mailers, large shipping bags, gel packs and insulated panels for the building industry.

  13. Challenges in the Packaging of MEMS

    SciTech Connect

    Malshe, A.P.; Singh, S.B.; Eaton, W.P.; O'Neal, C.; Brown, W.D.; Miller, W.M.

    1999-03-26

    The packaging of Micro-Electro-Mechanical Systems (MEMS) is a field of great importance to anyone using or manufacturing sensors, consumer products, or military applications. Much work has been done to date on the design and fabrication of MEMS devices, but little research and few publications have been completed on the packaging of these devices. This is despite the fact that packaging accounts for a very large percentage of the total cost of MEMS devices. The main difference between IC packaging and MEMS packaging is that MEMS packaging is almost always application specific and greatly affected by its environment and by packaging techniques such as die handling, die attach processes, and lid sealing. Many of these aspects are directly related to the materials used in the packaging processes. MEMS devices that are functional in wafer form can be rendered inoperable after packaging. MEMS dies must be handled only by the chip sides so that features on the top surface are not damaged; this eliminates most current die pick-and-place fixtures. Die attach materials are key to MEMS packaging. Hard die attach solders can create high stresses in MEMS devices, which can greatly affect their operation. Low-stress epoxies can be high-outgassing, which can also affect device performance. A low-modulus die attach can also allow the die to move during ultrasonic wirebonding, resulting in low wirebond strength. Another source of residual stress is the lid sealing process. Most MEMS-based sensors and devices require a hermetically sealed package. This can be done by parallel seam welding the package lid, but at the cost of further induced stress on the die. Another issue of MEMS packaging is the media compatibility of the packaged device. MEMS devices, unlike ICs, often interface with their environment, which may be high-pressure or corrosive. The main conclusion we can draw about MEMS packaging is that the package affects the performance and reliability of the MEMS devices. There is a

  14. Dust Impact Monitor DIM Onboard Philae: Measurements at Comet 67P/C-G

    NASA Astrophysics Data System (ADS)

    Krüger, Harald; Albin, Thomas; Apathy, Istvan; Arnold, Walter; Flandes, Alberto; Fischer, Hans-Herbert; Hirn, Attila; Loose, Alexander; Peter, Attila; Seidensticker, Klaus J.; Sperl, Matthias

    2015-04-01

    The Rosetta lander Philae landed successfully on the nucleus surface of comet 67P/Churyumov-Gerasimenko on 12 November 2014. Philae is equipped with the Dust Impact Monitor (DIM), which is part of the SESAME experiment package onboard. DIM employs piezoelectric PZT sensors to detect impacts by sub-millimetre and millimetre-sized ice and dust particles that are emitted from the nucleus and transported into the cometary coma. DIM was operated during Philae's descent to its nominal landing site at four different altitudes above the comet surface, and at Philae's final landing site. During descent to the nominal landing site, DIM measured the impact of one rather large particle, probably a few millimetres in size. No impacts were detected at the final landing site, which may be due to low cometary activity, to shadowing from obstacles close to Philae, or both. We will present the results from our measurements at the comet and compare them with laboratory calibration experiments with ice/dust particles performed with a DIM flight spare sensor.

  15. Monitoring 67P/C-G coma dust environment from 3.6 AU in-bound to the Sun to 2 AU out-bound

    NASA Astrophysics Data System (ADS)

    Della Corte, Vincenzo; Rotundi, Alessandra; Fulle, Marco

    2016-04-01

    GIADA, on board the Rosetta/ESA space mission, is an instrument devoted to monitoring the dynamical and physical properties of the dust particles emitted by comet 67P/Churyumov-Gerasimenko (hereafter 67P/C-G) along its orbit, from 3.6 AU in-bound to the Sun to 2 AU out-bound. Since 17 July 2014 GIADA has been fully operational and able to measure the speed and mass of individual dust particles. GIADA's capability of detecting dust particles with a high time resolution, and the accurate characterization of the physical properties of each detected particle, allowed the identification of two different families of dust particles emitted by the 67P/C-G nucleus: compact particles, with densities varying from about 100 kg/m3 to 3000 kg/m3, and fluffy particles, with densities down to 1 kg/m3. GIADA's continuous monitoring of the coma dust environment of comet 67P/C-G along its orbit, accounting for the different observation geometries along the Rosetta trajectories, enabled us to: 1) investigate how the dust flux of each particle family evolves with heliocentric distance; 2) identify the nucleus/coma regions with high dust emission/density; 3) observe the changes that these regions undergo along the comet orbit; 4) measure and monitor the dust production rate; and 5) evaluate the 67P/C-G dust-to-gas ratio by coupling GIADA measurements with the results of the Rosetta instruments devoted to gas measurements (MIRO and ROSINA).

  16. Packaging Your Training Materials

    ERIC Educational Resources Information Center

    Espeland, Pamela

    1977-01-01

    The types of packaging and packaging materials to use for training materials should be determined during the planning of the training programs, according to the packaging market. Five steps to follow in shopping for packaging are presented, along with a list of packaging manufacturers. (MF)

  18. Parallel machines: Parallel machine languages

    SciTech Connect

    Iannucci, R.A.

    1990-01-01

    This book presents a framework for understanding the tradeoffs between the conventional view and the dataflow view, with the objective of discovering the critical hardware structures that must be present in any scalable, general-purpose parallel computer to effectively tolerate latency and synchronization costs. The author presents an approach to scalable general-purpose parallel computation. Linguistic concerns, compiling issues, intermediate-language issues, and hardware/technological constraints are presented as a combined approach to architectural development. This book presents the notion of a parallel machine language.

  19. Application of Russian Thermo-Electric Devices (TEDS) for the US Microgravity Program Protein Crystal Growth (PCG) Project

    NASA Technical Reports Server (NTRS)

    Aksamentov, Valery

    1996-01-01

    Changes in the former Soviet Union have opened the gate for the exchange of new technology. Interest in this work has been particularly related to Thermal Electric Cooling Devices (TEDs), which have an application in the Thermal Enclosure System (TES) developed by NASA. Preliminary information received by NASA/MSFC indicates that Russian TEDs have higher efficiency. Based on that assumption, NASA/MSFC awarded a contract to the University of Alabama in Huntsville (UAH) to study Russian TED technology. Fulfilling this required a few steps: (1) define potential specifications and configurations for the use of TEDs in Protein Crystal Growth (PCG) thermal control hardware; and (2) work closely with the identified Russian source to define and identify potential Russian TEDs that exceed the performance of available domestic TEDs. Based on the data from Russia, it is possible to plan further steps such as buying and testing high-performance TEDs. To accomplish this goal, two subcontracts have been released: one to the Automated Sciences Group (ASG), located in Huntsville, AL, and one to the International Center for Advanced Studies 'Cosmos', located in Moscow, Russia.

  20. Science packages

    NASA Astrophysics Data System (ADS)

    1997-01-01

    Primary science teachers in Scotland have a new updating method at their disposal with the launch of a package of CD-i (Compact Disc Interactive) materials developed by the BBC and the Scottish Office. These were a response to the claim that many primary teachers felt they had been inadequately trained in science and lacked the confidence to teach it properly. Consequently they felt the need for more in-service training to equip them with the personal understanding required. The pack contains five disks and a printed user's guide, divided up as follows: disk 1, Investigations; disk 2, Developing understanding; disks 3-5, Primary Science staff development videos. It was produced by the Scottish Interactive Technology Centre (Moray House Institute) and is available from BBC Education at £149.99 including VAT. Free Internet distribution of science education materials has also begun as part of the Global Schoolhouse (GSH) scheme. The US National Science Teachers' Association (NSTA) and Microsoft Corporation are making available field-tested comprehensive curriculum material, including 'Micro-units' on more than 80 topics in biology, chemistry, earth and space science and physics. The latter are the work of the Scope, Sequence and Coordination of High School Science project, which can be found at http://www.gsh.org/NSTA_SSandC/. More information on NSTA can be obtained from its Web site at http://www.nsta.org.

  1. Microelectronic packaging

    NASA Astrophysics Data System (ADS)

    Blodgett, A. J., Jr.

    1983-07-01

    Microelectronic packaging design problems for high-speed digital computers are discussed. The dense packing requirements of the task necessitate taking into account proper cooling of the chips, minimization of signal distortion, and efficient placement of the chips and terminals. The increase in the number of circuits on a chip has permitted the manufacture of multichip boards and the elimination of some previously needed cards in the mainframe hierarchy. Electrical signals travel at about 15 cm/nsec through conductors on a board, a speed affected by the inductance and capacitance of the line as well as its geometry. Signals in one line need to be prevented from jumping into another line passing close by, and any discontinuities can cause signal reflection, i.e. noise. Array formatting reduces the space necessary for chip connections and mounting. Interconnections between vertically stacked boards (vias) can be made into the grids. Air and water cooling systems are used to keep the boards at temperatures which allow continued high-speed operation.
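    The 15 cm/nsec figure quoted in this abstract sets a hard floor on interconnect delay. A tiny illustrative helper; the speed value is the one cited, everything else is assumed:

```python
def trace_delay_ns(length_cm, speed_cm_per_ns=15.0):
    """Propagation delay along a board conductor.

    The default speed is the ~15 cm/ns figure cited for signals on a board;
    the actual speed depends on the line's inductance, capacitance and geometry.
    """
    return length_cm / speed_cm_per_ns

# A 30 cm path across a large board costs about 2 ns each way
delay = trace_delay_ns(30.0)
```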

  2. Parallel pipelining

    SciTech Connect

    Joseph, D.D.; Bai, R.; Liao, T.Y.; Huang, A.; Hu, H.H.

    1995-09-01

    In this paper the authors introduce the idea of parallel pipelining for water-lubricated transportation of oil (or other viscous material). A parallel system can have major advantages over a single pipe with respect to the cost of maintenance and continuous operation of the system, to the pressure gradients required to restart a stopped system, and to the reduction and even elimination of the fouling of pipe walls in continuous operation. The authors show that the action of capillarity in small pipes is more favorable for restart than in large pipes. In a parallel pipeline system, they estimate the number of small pipes needed to deliver the same oil flux as one larger pipe as N = (R/r)^α, where r and R are the radii of the small and large pipes, respectively, and α = 4 or 19/7 when the lubricating water flow is laminar or turbulent.
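    The authors' pipe-count scaling N = (R/r)^α can be evaluated directly; a small sketch, for illustration only:

```python
def n_small_pipes(R, r, turbulent=False):
    """Number of small pipes of radius r delivering the same oil flux as one
    large pipe of radius R: N = (R/r)**alpha, with alpha = 4 for laminar
    lubricating water flow and alpha = 19/7 for turbulent flow."""
    alpha = 19.0 / 7.0 if turbulent else 4.0
    return (R / r) ** alpha

# Halving the radius: 16 small pipes (laminar), about 6.6 (turbulent)
n_lam = n_small_pipes(2.0, 1.0)
n_turb = n_small_pipes(2.0, 1.0, turbulent=True)
```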

  3. Global and Spatially Resolved Photometric Properties of the Nucleus of Comet 67P/C-G from OSIRIS Images

    NASA Astrophysics Data System (ADS)

    Lamy, P.

    2014-04-01

    Following the successful wake-up of the ROSETTA spacecraft on 20 January 2014, the OSIRIS imaging system was fully re-commissioned at the end of March 2014, confirming its initial excellent performance. The OSIRIS instrument includes two cameras: the Narrow Angle Camera (NAC) and the Wide Angle Camera (WAC), with respective fields of view of 2.2° and 12°, both equipped with 2K by 2K CCD detectors and dual filter wheels. The NAC filters allow a spectral coverage of 270 to 990 nm, tailored to the investigation of the mineralogical composition of the nucleus of comet 67P/Churyumov-Gerasimenko, whereas those of the WAC (245-632 nm) aim at characterizing its coma [1]. The NAC has already secured a set of four complete light curves of the nucleus of 67P/C-G between 3 March and 24 April 2014, with the primary purpose of characterizing its rotational state. A preliminary spin period of 12.4 hours has been obtained, similar to its very first determination from a light curve obtained in 2003 with the Hubble Space Telescope [2]. The NAC and WAC will be recalibrated in the forthcoming weeks using the same stellar calibrators, Vega and the solar analog 16 Cyg B, as for past in-flight calibration campaigns in support of the flybys of asteroids Steins and Lutetia. This will allow comparing the pre- and post-hibernation performances of the cameras and correcting the quantum efficiency response of the two CCDs and the throughput for all channels (i.e., filters) if required. The accurate photometric analysis of the images requires utmost care due to several instrumental problems, the most severe and complex to handle being the presence of optical ghosts, which result from multiple reflections on the two filters inserted in the optical beam and on the thick window which protects the CCD detector from cosmic ray impacts. These ghosts prominently appear either as slightly defocused images offset from the primary images or as large round or elliptical halos. We will first present results on the global

  4. Rosetta/VIRTIS-M spectral data: Comet 67P/CG compared to other primitive small bodies.

    NASA Astrophysics Data System (ADS)

    De Sanctis, M. C.; Capaccioni, F.; Filacchione, G.; Erard, S.; Tosi, F.; Ciarniello, M.; Raponi, A.; Piccioni, G.; Leyrat, C.; Bockelée-Morvan, D.; Drossart, P.; Fornasier, S.

    2014-12-01

    VIRTIS-M, the Visible InfraRed Thermal Imaging Spectrometer onboard the Rosetta mission orbiter (Coradini et al., 2007), acquired data of comet 67P/Churyumov-Gerasimenko in the 0.25-5.1 µm spectral range. The initial data, obtained during the first mission phases at the comet, allow us to derive the albedo and global spectral properties of the comet nucleus as well as spectra of different areas on the nucleus. The characterization of cometary nucleus surfaces and their comparison with those of related populations such as extinct comet candidates, Centaurs, near-Earth asteroids (NEAs), trans-Neptunian objects (TNOs), and primitive asteroids is critical to understanding the origin and evolution of small solar system bodies. The acquired VIRTIS data are used to compare the global spectral properties of comet 67P/CG to published spectra of other cometary nuclei observed from the ground or visited by space missions. Moreover, the spectra of 67P/Churyumov-Gerasimenko are also compared to those of primitive asteroids and Centaurs. The comparison can give us clues to a possible common formation and evolutionary environment for primitive asteroids, Centaurs and Jupiter-family comets. The authors acknowledge funding from the Italian and French Space Agencies. References: Coradini, A., Capaccioni, F., Drossart, P., Arnold, G., Ammannito, E., Angrilli, F., Barucci, A., Bellucci, G., Benkhoff, J., Bianchini, G., Bibring, J. P., Blecka, M., Bockelee-Morvan, D., Capria, M. T., Carlson, R., Carsenty, U., Cerroni, P., Colangeli, L., Combes, M., Combi, M., Crovisier, J., De Sanctis, M. C., Encrenaz, E. T., Erard, S., Federico, C., Filacchione, G., Fink, U., Fonti, S., Formisano, V., Ip, W. H., Jaumann, R., Kuehrt, E., Langevin, Y., Magni, G., McCord, T., Mennella, V., Mottola, S., Neukum, G., Palumbo, P., Piccioni, G., Rauer, H., Saggin, B., Schmitt, B., Tiphene, D., Tozzi, G., Space Science Reviews, Volume 128, Issue 1-4, 529-559, 2007.

  5. Parallel computations and control of adaptive structures

    NASA Technical Reports Server (NTRS)

    Park, K. C.; Alvin, Kenneth F.; Belvin, W. Keith; Chong, K. P. (Editor); Liu, S. C. (Editor); Li, J. C. (Editor)

    1991-01-01

    The equations of motion for structures with adaptive elements for vibration control are presented for parallel computation, to be used as a software package for real-time control of flexible space structures. A brief introduction to the state-of-the-art parallel computational capability is also presented. Time-marching strategies are developed for effective use of massively parallel mapping, partitioning, and the necessary arithmetic operations. An example is offered for the simulation of control-structure interaction on a parallel computer, and the impact of the presented approach on applications in disciplines other than the aerospace industry is assessed.

  6. A parallel implementation of an EBE solver for the finite element method

    SciTech Connect

    Silva, R.P.; Las Casas, E.B.; Carvalho, M.L.B.

    1994-12-31

    A parallel implementation, using PVM on a cluster of workstations, of an Element-By-Element (EBE) solver using the Preconditioned Conjugate Gradient (PCG) method is described, along with an application to the solution of the linear systems generated from finite element analysis of a problem in three-dimensional linear elasticity. The PVM (Parallel Virtual Machine) system, developed at Oak Ridge National Laboratory, allows the construction of a parallel MIMD machine by connecting heterogeneous computers linked through a network. In this implementation, version 3.1 of PVM is used, and 11 Sun SLC workstations and a Sun SPARC-2 are connected through Ethernet. The finite element program is based on SDP, System for Finite Element Based Software Development, developed at the Brazilian National Laboratory for Scientific Computing (LNCC). SDP provides the basic routines for a finite element application program, as well as a standard for programming and documentation, intended to allow exchanges between research groups in different centers.
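    For readers unfamiliar with the solver at the core of this abstract, a serial PCG iteration with a simple Jacobi (diagonal) preconditioner can be sketched as follows. This is a generic textbook version, not the paper's EBE/PVM implementation, which never assembles A and distributes the element loop across workstations:

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=200):
    """Preconditioned Conjugate Gradient for symmetric positive definite A,
    with a Jacobi preconditioner supplied as the vector 1 / diag(A)."""
    x = np.zeros_like(b)
    r = b - A @ x                    # initial residual
    z = M_inv_diag * r               # preconditioned residual
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)        # step length along search direction
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p    # conjugate search direction update
        rz = rz_new
    return x

# Small SPD test system: a 1-D Laplacian, standing in for an assembled
# finite element stiffness matrix (illustrative only)
n = 50
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = pcg(A, b, 1.0 / np.diag(A))
```

    In the EBE variant, the product A @ p is instead accumulated element by element without forming A, which is what makes the method attractive for distribution over a network of workstations.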

  7. Reflectance spectroscopy of natural organic solids, iron sulfides and their mixtures as refractory analogues for Rosetta/VIRTIS' surface composition analysis of 67P/CG

    NASA Astrophysics Data System (ADS)

    Moroz, Lyuba V.; Markus, Kathrin; Arnold, Gabriele; Henckel, Daniela; Kappel, David; Schade, Ulrich; Rousseau, Batiste; Quirico, Eric; Schmitt, Bernard; Capaccioni, Fabrizio; Bockelee-Morvan, Dominique; Filacchione, Gianrico; Érard, Stéphane; Leyrat, Cedric; VIRTIS Team

    2016-10-01

    Analysis of 0.25-5 µm reflectance spectra provided by the Visible and InfraRed Thermal Imaging Spectrometer (VIRTIS) onboard the Rosetta orbiter revealed that the surface of 67P/CG is dark from the near-UV to the IR and is enriched in refractory phases such as organic and opaque components. The broadness and complexity of the ubiquitous absorption feature around 3.2 µm suggest a variety of cometary organic constituents. For example, complex hydrocarbons (aliphatic and polycyclic aromatic) can contribute to the feature between 3.3 and 3.5 µm and to the low reflectance of the surface in the visible. Here we present the 0.25-5 µm reflectance spectra of well-characterized terrestrial hydrocarbon materials (solid oil bitumens, coals) and discuss their relevance as spectral analogues for the hydrocarbon part of 67P/CG's complex organics. However, the expected low degree of thermal processing of cometary hydrocarbons (high (H+O+N+S)/C ratios and low carbon aromaticities) suggests high IR reflectance, intense 3.3-3.5 µm absorption bands and steep red IR slopes that are not observed in the VIRTIS spectra. Fine-grained opaque refractory phases (e.g., iron sulfides, Fe-Ni alloys) intimately mixed with other surface components are likely responsible for the low IR reflectance and low intensities of absorption bands in the VIRTIS spectra of the 67P/CG surface. In particular, iron sulfides are common constituents of cometary dust and of "cometary" chondritic IDPs, and efficient darkening agents in primitive carbonaceous chondrites. Their effect on the reflectance spectra of an intimate mixture is strongly grain-size dependent. We report and discuss the 0.25-5 µm reflectance spectra of iron sulfides (meteoritic troilite and several terrestrial pyrrhotites) ground and sieved to various particle sizes. In addition, we present reflectance spectra of several intimate mixtures of powdered iron sulfides and solid oil bitumens. Based on the reported laboratory data, we discuss the ability of

  8. Packaging of MEMS microphones

    NASA Astrophysics Data System (ADS)

    Feiertag, Gregor; Winter, Matthias; Leidl, Anton

    2009-05-01

    To miniaturize MEMS microphones we have developed a microphone package using flip-chip technology instead of chip-and-wire bonding. In this new packaging technology the MEMS die and the ASIC are flip-chip bonded on a ceramic substrate. The package is sealed by a laminated polymer foil and a metal layer. The sound port is on the bottom side, in the ceramic substrate. In this paper the packaging technology is explained in detail and results of electro-acoustic characterization and reliability testing are presented. We also describe the path that led us from the packaging of Surface Acoustic Wave (SAW) components to the packaging of MEMS microphones.

  9. Packaging for Food Service

    NASA Technical Reports Server (NTRS)

    Stilwell, E. J.

    1985-01-01

    Most of the key areas of concern in packaging the three principal food forms for the space station were covered. It can generally be concluded that there are no significant voids in packaging materials availability or in current packaging technology. However, it must also be concluded that the process by which packaging decisions are made for the space station feeding program will be highly synergistic. Packaging selection will depend heavily on the preparation mechanics, the preferred presentation and the achievable disposal systems. It will be important that packaging be considered as an integral part of each decision as these systems are developed.

  11. Insights gained from Data Measured by the CONSERT Instrument during Philae's Descent onto 67P/C-G's surface

    NASA Astrophysics Data System (ADS)

    Plettemeier, Dirk; Statz, Christoph; Abraham, Jens; Ciarletti, Valerie; Hahnel, Ronny; Hegler, Sebastian; Herique, Alain; Pasquero, Pierre; Rogez, Yves; Zine, Sonia; Kofman, Wlodek

    2015-04-01

    The scientific objective of the Comet Nucleus Sounding Experiment by Radiowave Transmission (CONSERT) aboard the ESA spacecraft Rosetta is to perform a dielectric characterization of comet 67P/Churyumov-Gerasimenko's nucleus. This is done by means of a bi-static sounding between the lander Philae, delivered onto the comet's surface, and the Rosetta orbiter. For the sounding, the CONSERT unit aboard the lander receives and processes the radio signal emitted by the orbiter counterpart of the instrument, then retransmits a signal back to the orbiter to be received by CONSERT; this exchange happens on a millisecond time scale. During the descent of the lander Philae onto the comet's surface, CONSERT was operated as a bi-static radar. A single measurement is composed of the dominant signal from the direct line-of-sight propagation path between lander and orbiter, together with paths from the lander's signal reflected by the comet's surface. From peak power measurements of the dominant direct path during the descent, knowledge of the orbiter and lander positions, and simulations of CONSERT's orbiter and lander antenna characteristics and polarization properties, we were able to reconstruct the lander's attitude and estimate its spin rate along the descent trajectory. Additionally, certain operations and manoeuvres of the orbiter and lander, e.g. the deployment of the lander legs and CONSERT antennas or the orbiter's change of attitude to orient towards the assumed lander position, are also visible in the data. The information gained on the lander's attitude is used in the reconstruction of the dielectric properties of 67P/C-G's surface and near subsurface (metric to decametric scale) and should help support the data interpretation of other instruments.
In the CONSERT measurements, the comet's surface is visible during roughly the last third of the descent enabling a mean permittivity estimation of

  12. Search for regional variations of thermal and electrical properties of comet 67P/CG probed by MIRO/Rosetta

    NASA Astrophysics Data System (ADS)

    Leyrat, Cedric; Blain, Doriann; Lellouch, Emmanuel; von Allmen, Paul; Keihm, Stephen; Choukroun, Matthieu; Schloerb, Pete; Biver, Nicolas; Gulkis, Samuel; Hofstadter, Mark

    2015-11-01

    Since June 2014, the MIRO (Microwave Instrument for the Rosetta Orbiter) on board the Rosetta (ESA) spacecraft has observed comet 67P/CG along its heliocentric orbit from 3.25 AU to 1.24 AU. MIRO operates at millimeter and submillimeter wavelengths, at 190 GHz (1.56 mm) and 562 GHz (0.5 mm) respectively. While the submillimeter channel is coupled to a Chirp Transform Spectrometer (CTS) for spectroscopic analysis of the coma, both bands provide a broad-band continuum channel for sensing the thermal emission of the nucleus itself. Continuum measurements of the nucleus probe the subsurface thermal emission from two different depths. The first analysis (Schloerb et al., 2015) of data obtained mostly over the northern hemisphere revealed large temperature variations with latitude, as well as distinct diurnal curves, most prominent in the 0.5 mm channel, indicating that the electrical penetration depth for this channel is comparable to the diurnal thermal skin depth. Initial modelling of these data indicated a low surface thermal inertia, in the range 10-30 J K-1 m-2 s-1/2, and probed depths of order 1-4 cm. We investigate here potential spatial variations of thermal and electrical properties by analysing separately the geomorphological regions described by Thomas et al. (2015). For each region, we select measurements corresponding to those areas, obtained at different local times and effective latitudes. We model the thermal profiles with depth and the outgoing mm and submm radiation for different values of the thermal inertia and of the ratio of the electrical to the thermal skin depth. We will present the best estimates of thermal inertia and electrical-to-thermal depth ratios for each region selected. Additional information on subsurface temperature gradients may be inferred by using observations at varying emergence angles. The thermal emission from southern regions has been analysed by Choukroun et al. (2015) during the polar night.
Now that the comet has reached

  13. 67P/CG morphological units and VIS-IR spectral classes: a Rosetta/VIRTIS-M perspective

    NASA Astrophysics Data System (ADS)

    Filacchione, Gianrico; Capaccioni, Fabrizio; Ciarniello, Mauro; Raponi, Andrea; De Sanctis, Maria Cristina; Tosi, Federico; Piccioni, Giuseppe; Cerroni, Priscilla; Capria, Maria Teresa; Palomba, Ernesto; Longobardo, Andrea; Migliorini, Alessandra; Erard, Stephane; Arnold, Gabriele; Bockelee-Morvan, Dominique; Leyrat, Cedric; Schmitt, Bernard; Quirico, Eric; Barucci, Antonella; McCord, Thomas B.; Stephan, Katrin; Kappel, David

    2015-11-01

    VIRTIS-M, the 0.25-5.1 µm imaging spectrometer on Rosetta (Coradini et al., 2007), has mapped the surface of the 67P/CG nucleus since July 2014 from a wide range of distances. Spectral analysis of global-scale data indicates that the nucleus presents different terrains uniformly covered by a very dark (Ciarniello et al., 2015) and dehydrated organic-rich material (Capaccioni et al., 2015). The morphological units identified so far (Thomas et al., 2015; El-Maarry et al., 2015) include dust-covered brittle-material regions (like Ash, Ma'at), exposed-material regions (Seth), large-scale depressions (like Hatmehit, Aten, Nut), smooth-terrain units (like Hapi, Anubis, Imhotep) and consolidated surfaces (like Hathor, Anuket, Aker, Apis, Khepry, Bastet, Maftet). For each of these regions average VIRTIS-M spectra were derived with the aim of exploring possible connections between morphology and spectral properties. Photometric correction (Ciarniello et al., 2015), thermal emission removal in the 3.5-5 micron range and georeferencing have been applied to I/F data in order to derive spectral indicators, e.g. VIS-IR spectral slopes, their crossing wavelength (CW) and the 3.2 µm organic material band's depth (BD), suitable to identify and map compositional variations. Our analysis shows that smooth terrains have the lowest slopes in the VIS (<1.7E-3 1/µm) and IR (0.4E-3 1/µm), with CW=0.75 µm and BD=8-12%. Intermediate VIS slopes of 1.7-1.9E-3 1/µm, and higher BD=10-12.8%, are typical of consolidated surfaces, some dust-covered regions and Seth, where the maximum BD=13% has been observed. Large-scale depressions and Imhotep are redder, with a VIS slope of 1.9-2.1E-3 1/µm, CW at 0.85-0.9 µm and BD=8-11%. The minimum VIS-IR slopes are observed above Hapi, in agreement with the water ice sublimation and recondensation processes observed by VIRTIS in this region (De Sanctis et al., 2015). Authors acknowledge ASI, CNES, DLR and NASA financial support. References: Coradini et al

  14. GIADA On-Board Rosetta: Early Dust Grain Detections and Dust Coma Characterization of Comet 67P/C-G

    NASA Astrophysics Data System (ADS)

    Rotundi, A.; Della Corte, V.; Accolla, M.; Ferrari, M.; Ivanovski, S.; Lucarelli, F.; Mazzotta Epifani, E.; Sordini, R.; Palumbo, P.; Colangeli, L.; Lopez-Moreno, J. J.; Rodriguez, J.; Fulle, M.; Bussoletti, E.; Crifo, J. F.; Esposito, F.; Green, S.; Grün, E.; Lamy, P. L.; McDonnell, T.; Mennella, V.; Molina, A.; Moreno, F.; Ortiz, J. L.; Palomba, E.; Perrin, J. M.; Rodrigo, R.; Weissman, P. R.; Zakharov, V.; Zarnecki, J.

    2014-12-01

    GIADA (Grain Impact Analyzer and Dust Accumulator), flying on board Rosetta, is devoted to studying the cometary dust environment of 67P/Churyumov-Gerasimenko. GIADA is composed of 3 sub-systems: the GDS (Grain Detection System), based on grain detection through light scattering; an IS (Impact Sensor), which measures momentum by detecting impacts on a sensed plate connected to 5 piezoelectric sensors; and the MBS (MicroBalances System), consisting of 5 Quartz Crystal Microbalances (QCMs), which give the cumulative deposited dust mass by measuring the variations of the sensors' frequency. The combination of the measurements performed by these 3 subsystems provides the number, mass, momentum and velocity distribution of dust grains emitted from the cometary nucleus. No prior in situ dust dynamical measurements at such close distances from the nucleus, starting from such large heliocentric distances, had been available to date. We present here the first results obtained from the beginning of the Rosetta scientific phase. We report the early detection of dust grains at about 800 km from the nucleus in August 2014 and the following measurements that allowed us to characterize the 67P/C-G dust environment at distances of less than 100 km from the nucleus, as well as the dynamical properties of single grains. Acknowledgements. GIADA was built by a consortium led by the Univ. Napoli "Parthenope" & INAF-Oss. Astr. Capodimonte, IT, in collaboration with the Inst. de Astrofisica de Andalucia, ES, Selex-ES s.p.a. and SENER. GIADA is presently managed & operated by Ist. di Astrofisica e Planetologia Spaziali-INAF, IT. GIADA was funded and managed by the Agenzia Spaziale Italiana, IT, with the support of the Spanish Ministry of Education and Science MEC, ES. GIADA was developed from a PI proposal supported by the University of Kent, with sci. & tech. contributions from CISAS, IT, Lab. d'Astr. Spat., FR, and institutions from UK, IT, FR, DE and USA.
We thank the RSGS/ESAC, RMOC/ESOC & Rosetta Project

  15. A parallel Lanczos method for symmetric generalized eigenvalue problems

    SciTech Connect

    Wu, K.; Simon, H.D.

    1997-12-01

    The Lanczos algorithm is a very effective method for finding extreme eigenvalues of symmetric matrices. It requires fewer arithmetic operations than similar algorithms, such as the Arnoldi method. In this paper, the authors present their parallel version of the Lanczos method for symmetric generalized eigenvalue problems, PLANSO. PLANSO is based on a sequential package called LANSO, which implements the Lanczos algorithm with partial re-orthogonalization. It is portable to all parallel machines that support MPI and easy to interface with most parallel computing packages. Through numerical experiments, they demonstrate that it achieves parallel efficiency similar to that of PARPACK, but uses considerably less time.
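
    The core iteration described above can be illustrated with a minimal sequential sketch of the plain Lanczos tridiagonalization that LANSO and PLANSO build on (illustrative only — PLANSO itself is an MPI-parallel package with partial re-orthogonalization, which this sketch omits; all names and sizes here are hypothetical):

    ```python
    import numpy as np

    def lanczos(A, v0, m):
        """Plain m-step Lanczos: reduce symmetric A to tridiagonal form.

        Returns the diagonal (alpha), off-diagonal (beta) of the small
        tridiagonal matrix T, and the Lanczos basis V. No re-orthogonalization
        is performed (PLANSO adds partial re-orthogonalization).
        """
        n = len(v0)
        V = np.zeros((n, m))
        alpha = np.zeros(m)
        beta = np.zeros(m - 1)
        v = v0 / np.linalg.norm(v0)
        V[:, 0] = v
        w = A @ v
        alpha[0] = v @ w
        w = w - alpha[0] * v
        for j in range(1, m):
            beta[j - 1] = np.linalg.norm(w)
            v = w / beta[j - 1]
            V[:, j] = v
            # Three-term recurrence: only the two latest basis vectors are needed.
            w = A @ v - beta[j - 1] * V[:, j - 1]
            alpha[j] = v @ w
            w = w - alpha[j] * v
        return alpha, beta, V

    # Example: a 100x100 symmetric matrix with known spectrum 1..100.
    rng = np.random.default_rng(0)
    n = 100
    Q = np.linalg.qr(rng.standard_normal((n, n)))[0]
    A = Q @ np.diag(np.linspace(1.0, 100.0, n)) @ Q.T
    alpha, beta, _ = lanczos(A, rng.standard_normal(n), 30)
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    ritz = np.linalg.eigvalsh(T)
    ```

    After only 30 steps the extreme Ritz values of the small tridiagonal matrix T already approximate the extreme eigenvalues 1 and 100 of A well — the property that makes the method effective for extreme eigenvalue problems.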

  16. Improving Between-Shot Fusion Data Analysis with Parallel Structures

    SciTech Connect

    CHET NIETER

    2005-07-27

    In the Phase I project we concentrated on three technical objectives to demonstrate the feasibility of the Phase II project: (1) the development of a parallel MDSplus data handler, (2) the parallelization of existing fusion data analysis packages, and (3) the development of techniques to automatically generate parallelized code using pre-compiler directives. We summarize the results of the Phase I research for each of these objectives below. We also describe below additional accomplishments related to the development of the TaskDL and mpiDL parallelization packages.

  17. Tolerant (parallel) Programming

    NASA Technical Reports Server (NTRS)

    DiNucci, David C.; Bailey, David H. (Technical Monitor)

    1997-01-01

    In order to be truly portable, a program must be tolerant of a wide range of development and execution environments, and a parallel program is just one which must be tolerant of a very wide range. This paper first defines the term "tolerant programming", then describes many layers of tools to accomplish it. The primary focus is on F-Nets, a formal model for expressing computation as a folded partial-ordering of operations, thereby providing an architecture-independent expression of tolerant parallel algorithms. For implementing F-Nets, Cooperative Data Sharing (CDS) is a subroutine package for implementing communication efficiently in a large number of environments (e.g. shared memory and message passing). Software Cabling (SC), a very-high-level graphical programming language for building large F-Nets, possesses many of the features normally expected from today's computer languages (e.g. data abstraction, array operations). Finally, L2³ is a CASE tool which facilitates the construction, compilation, execution, and debugging of SC programs.

  18. CH Packaging Operations Manual

    SciTech Connect

    Washington TRU Solutions LLC

    2005-06-13

    This procedure provides instructions for assembling the CH Packaging Drum payload assembly and the Standard Waste Box (SWB) assembly, for handling abnormal operations, and for performing ICV and OCV preshipment leakage rate tests on the packaging seals using a nondestructive helium (He) leak test.

  19. Creative Thinking Package

    ERIC Educational Resources Information Center

    Jones, Clive

    1972-01-01

    A look at the latest package from a British management training organization, which explains and demonstrates creative thinking techniques, including brainstorming. The package, designed for groups of twelve or more, consists of tapes, visuals, and associated exercises. (Editor/JB)

  20. Comparative Packaging Study

    NASA Technical Reports Server (NTRS)

    Perchonok, Michele; Antonini, David

    2008-01-01

    This viewgraph presentation describes a comparative packaging study for use on long duration space missions. The topics include: 1) Purpose; 2) Deliverables; 3) Food Sample Selection; 4) Experimental Design Matrix; 5) Permeation Rate Comparison; and 6) Packaging Material Information.

  1. ADVANCED ELECTRONIC PACKAGING TECHNIQUES

    DTIC Science & Technology

    MICROMINIATURIZATION (ELECTRONICS), *PACKAGED CIRCUITS, CIRCUITS, EXPERIMENTAL DATA, MANUFACTURING, NONDESTRUCTIVE TESTING, RESISTANCE (ELECTRICAL), SEMICONDUCTORS, TESTS, THIN FILMS (STORAGE DEVICES), WELDING.

  2. Trends in Food Packaging.

    ERIC Educational Resources Information Center

    Ott, Dana B.

    1988-01-01

    This article discusses developments in food packaging, processing, and preservation techniques in terms of packaging materials, technologies, consumer benefits, and current and potential food product applications. Covers implications due to consumer life-style changes, cost-effectiveness of packaging materials, and the ecological impact of…

  3. Extended precision software packages

    NASA Technical Reports Server (NTRS)

    Phillips, E. J.

    1972-01-01

    A description of three extended precision packages is presented, along with three small conversion subroutines which can be used in conjunction with them. These are software packages written in FORTRAN 4. They provide normalized or unnormalized floating point arithmetic with symmetric rounding and arbitrary mantissa lengths, and normalized floating point interval arithmetic with appropriate rounding. The purpose of an extended precision package is to enable the user to represent and manipulate numbers with many significant digits, where precision beyond double precision is required.
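
    The idea of an arbitrary mantissa length with controlled rounding can be demonstrated in a few lines (illustrative only — the packages described above are FORTRAN 4 libraries; this sketch uses Python's standard decimal module purely to show arithmetic beyond double precision):

    ```python
    from decimal import Decimal, getcontext

    # Work with a 50-digit mantissa -- far beyond IEEE double's ~16 digits.
    getcontext().prec = 50

    third = Decimal(1) / Decimal(3)
    # 1/3 is rounded symmetrically to 50 digits, so (1/3)*3 misses 1 by
    # exactly one unit in the last (50th) place -- a residual near 1e-50,
    # versus roughly 1e-16 territory for double precision.
    residual = abs(third * 3 - 1)
    ```

    Interval arithmetic, the third facility mentioned in the abstract, can be mimicked in the same module by evaluating each operation twice under the ROUND_FLOOR and ROUND_CEILING rounding modes to bracket the exact result.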

  4. Linked-View Parallel Coordinate Plot Renderer

    SciTech Connect

    2011-06-28

    This software allows multiple linked views for interactive querying via map-based data selection, bar chart analytic overlays, and high dynamic range (HDR) line renderings. The major component of the visualization package is a parallel coordinate renderer with binning, curved layouts, shader-based rendering, and other techniques to allow interactive visualization of multidimensional data.
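
    The binning technique mentioned above can be sketched as follows (a hypothetical minimal aggregation step, not the renderer's actual code): each axis is discretized into k bins, and polyline segments between adjacent axes are counted per (bin, bin) cell, so rendering cost is bounded by k² line bundles per axis pair instead of one polyline per data row:

    ```python
    import numpy as np

    def binned_parallel_coords(data, k=8):
        """Aggregate rows into per-axis-pair segment counts.

        data: (n_rows, n_dims) array. Each axis is normalized to [0, 1] and
        cut into k bins; for every pair of adjacent axes we count how many
        row polylines pass through each (left-bin, right-bin) cell.
        """
        n, d = data.shape
        lo = data.min(axis=0)
        span = data.max(axis=0) - lo
        span[span == 0] = 1.0                      # guard constant columns
        bins = np.minimum(((data - lo) / span * k).astype(int), k - 1)
        counts = []
        for j in range(d - 1):
            c = np.zeros((k, k), dtype=int)
            np.add.at(c, (bins[:, j], bins[:, j + 1]), 1)
            counts.append(c)
        return counts

    rng = np.random.default_rng(1)
    data = rng.random((10_000, 4))                 # 10k rows, 4 axes
    counts = binned_parallel_coords(data, k=8)     # 3 adjacent-axis tables
    ```

    A renderer would then draw one (possibly curved, shader-weighted) bundle per nonzero cell, with opacity or HDR intensity proportional to the count.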

  5. Advancements in meat packaging.

    PubMed

    McMillin, Kenneth W

    2017-10-01

    Packaging of meat provides the same or similar benefits for raw chilled and processed meats as other types of food packaging. Although air-permeable packaging is most prevalent for raw chilled red meat, vacuum and modified atmosphere packaging offer longer shelf life. The major advancements in meat packaging have been in the widely used plastic polymers, while biobased materials and their integration into composite packaging are receiving much attention for functionality and sustainability. At this time, active and intelligent packaging are not widely used for antioxidant, antimicrobial, and other functions to stabilize and enhance meat properties, although many options are being developed and investigated. The advances being made in nanotechnology will be incorporated into food packaging and presumably into meat packaging when appropriate and useful. Intelligent packaging, using sensors for transmission of desired information and prompting of subsequent changes in packaging materials, environments or the products to maintain safety and quality, is still in developmental stages. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. Edible packaging materials.

    PubMed

    Janjarasskul, Theeranun; Krochta, John M

    2010-01-01

    Research groups and the food and pharmaceutical industries recognize edible packaging as a useful alternative or addition to conventional packaging to reduce waste and to create novel applications for improving product stability, quality, safety, variety, and convenience for consumers. Recent studies have explored the ability of biopolymer-based food packaging materials to carry and control-release active compounds. As diverse edible packaging materials derived from various by-products or waste from food industry are being developed, the dry thermoplastic process is advancing rapidly as a feasible commercial edible packaging manufacturing process. The employment of nanocomposite concepts to edible packaging materials promises to improve barrier and mechanical properties and facilitate effective incorporation of bioactive ingredients and other designed functions. In addition to the need for a more fundamental understanding to enable design to desired specifications, edible packaging has to overcome challenges such as regulatory requirements, consumer acceptance, and scaling-up research concepts to commercial applications.

  7. Computing contingency statistics in parallel.

    SciTech Connect

    Bennett, Janine Camille; Thompson, David; Pebay, Philippe Pierre

    2010-09-01

    Statistical analysis is typically used to reduce the dimensionality of and infer meaning from data. A key challenge of any statistical analysis package aimed at large-scale, distributed data is to address the orthogonal issues of parallel scalability and numerical stability. Many statistical techniques, e.g., descriptive statistics or principal component analysis, are based on moments and co-moments and, using robust online update formulas, can be computed in an embarrassingly parallel manner, amenable to a map-reduce style implementation. In this paper we focus on contingency tables, through which numerous derived statistics such as joint and marginal probability, point-wise mutual information, information entropy, and χ² independence statistics can be directly obtained. However, contingency tables can become large as data size increases, requiring a correspondingly large amount of communication between processors. This potential increase in communication prevents optimal parallel speedup and is the main difference from moment-based statistics, where the amount of inter-processor communication is independent of data size. Here we present the design trade-offs which we made to implement the computation of contingency tables in parallel. We also study the parallel speedup and scalability properties of our open source implementation. In particular, we observe optimal speedup and scalability when the contingency statistics are used in their appropriate context, namely, when the data input is not quasi-diffuse.
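
    The map-reduce flavor of the computation can be sketched in a few lines (illustrative only, not the authors' implementation): each processor builds a local contingency table, the reduce step merges tables by summation, and derived statistics such as χ² follow from the merged table:

    ```python
    from collections import Counter
    from itertools import product

    def map_count(chunk):
        """Map step: per-chunk contingency table as a Counter of (x, y) pairs."""
        return Counter(chunk)

    def reduce_merge(tables):
        """Reduce step: merging tables is a plain sum -- embarrassingly parallel."""
        total = Counter()
        for t in tables:
            total.update(t)
        return total

    def chi2(table):
        """Pearson chi-squared independence statistic from a joint table."""
        n = sum(table.values())
        xs = {x for x, _ in table}
        ys = {y for _, y in table}
        row = {x: sum(v for (a, _), v in table.items() if a == x) for x in xs}
        col = {y: sum(v for (_, b), v in table.items() if b == y) for y in ys}
        stat = 0.0
        for x, y in product(xs, ys):
            expected = row[x] * col[y] / n
            observed = table.get((x, y), 0)
            stat += (observed - expected) ** 2 / expected
        return stat

    # Two chunks, as if held by two processors.
    chunk1 = [("a", 0), ("a", 1), ("b", 0)] * 10
    chunk2 = [("a", 0), ("b", 1), ("b", 1)] * 10
    table = reduce_merge([map_count(chunk1), map_count(chunk2)])
    stat = chi2(table)
    ```

    The communication cost the abstract warns about is visible here: each merged table, unlike a fixed-size set of moments, grows with the number of distinct (x, y) pairs in the data.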

  8. Large area LED package

    NASA Astrophysics Data System (ADS)

    Goullon, L.; Jordan, R.; Braun, T.; Bauer, J.; Becker, F.; Hutter, M.; Schneider-Ramelow, M.; Lang, K.-D.

    2015-03-01

    Solid state lighting using LED-dies is a rapidly growing market. LED-dies with the needed increasing luminous flux per chip area produce a lot of heat, so an appropriate thermal management is required for general lighting with LED-dies. One way to avoid overheating and shortened lifetime is the use of many small LED-dies (down to 70 μm edge length) on a large-area heat sink, so that heat can spread into a large area while light also appears over a larger area. Handling such small LED-dies is very difficult because they are too small to be picked with common equipment. Therefore a new concept called collective transfer bonding, using a temporary carrier chip, was developed. A further benefit of this new technology is the high-precision assembly as well as the plane-parallel assembly of the LED-dies, which is necessary for wire bonding. It has been shown that a hundred functional LED-dies can be transferred and soldered at the same time. After the assembly, a cost-effective established PCB technology was applied to produce a large-area light source consisting of many small LED-dies electrically connected on a PCB substrate. The top contacts of the LED-dies were realized by laminating an adhesive copper sheet followed by LDI structuring, as known from PCB via technology. This assembly can be completed by adding converting and light-forming optical elements. In summary, two technologies based on standard SMD and PCB technology have been developed for panel-level LED packaging up to 610 x 457 mm² area size.

  9. Parallel hypergraph partitioning for scientific computing.

    SciTech Connect

    Heaphy, Robert; Devine, Karen Dragon; Catalyurek, Umit; Bisseling, Robert; Hendrickson, Bruce Alan; Boman, Erik Gunnar

    2005-07-01

    Graph partitioning is often used for load balancing in parallel computing, but it is known that hypergraph partitioning has several advantages. First, hypergraphs more accurately model communication volume, and second, they are more expressive and can better represent nonsymmetric problems. Hypergraph partitioning is particularly suited to parallel sparse matrix-vector multiplication, a common kernel in scientific computing. We present a parallel software package for hypergraph (and sparse matrix) partitioning developed at Sandia National Labs. The algorithm is a variation on multilevel partitioning. Our parallel implementation is novel in that it uses a two-dimensional data distribution among processors. We present empirical results that show our parallel implementation achieves good speedup on several large problems (up to 33 million nonzeros) with up to 64 processors on a Linux cluster.
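
    The claim that hypergraphs model communication volume more accurately than graphs rests on the standard connectivity-minus-one cut metric, which can be sketched directly (a small illustration, not the package's code): for each hyperedge, every extra partition it touches costs one unit of communication in parallel sparse matrix-vector multiplication:

    ```python
    def hypergraph_cut(hyperedges, part):
        """Connectivity-1 cut: sum over hyperedges of (#parts touched - 1).

        hyperedges: iterable of vertex sets (e.g. one per sparse-matrix row);
        part: dict mapping vertex -> partition id. This metric counts exactly
        the number of values that must cross partition boundaries, which a
        plain graph edge-cut only approximates.
        """
        cut = 0
        for edge in hyperedges:
            cut += len({part[v] for v in edge}) - 1
        return cut

    # 6 vertices split across 2 parts; hyperedges from a sparse-matrix
    # row pattern (hypothetical example data).
    part = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}
    edges = [{0, 1}, {1, 2, 3}, {3, 4, 5}, {0, 5}]
    cut = hypergraph_cut(edges, part)
    ```

    A multilevel partitioner such as the one described above searches for an assignment `part` minimizing this quantity subject to a balance constraint on part sizes.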

  10. Parallel Algebraic Multigrid Methods - High Performance Preconditioners

    SciTech Connect

    Yang, U M

    2004-11-11

    The development of high performance, massively parallel computers and the increasing demands of computationally challenging applications have necessitated the development of scalable solvers and preconditioners. One of the most effective ways to achieve scalability is the use of multigrid or multilevel techniques. Algebraic multigrid (AMG) is a very efficient algorithm for solving large problems on unstructured grids. While much of it can be parallelized in a straightforward way, some components of the classical algorithm, particularly the coarsening process and some of the most efficient smoothers, are highly sequential, and require new parallel approaches. This chapter presents the basic principles of AMG and gives an overview of various parallel implementations of AMG, including descriptions of parallel coarsening schemes and smoothers, some numerical results as well as references to existing software packages.
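
    The two-level correction structure underlying AMG can be sketched with a geometric two-grid cycle for a 1D Poisson problem (illustrative only: real AMG constructs the coarse space and interpolation algebraically from the matrix entries, whereas this sketch uses a fixed linear interpolation on a known grid):

    ```python
    import numpy as np

    def poisson1d(n):
        """1D Poisson stiffness matrix (Dirichlet boundaries)."""
        return (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
                - np.diag(np.ones(n - 1), -1))

    def jacobi(A, x, b, sweeps, omega=2.0 / 3.0):
        """Weighted Jacobi smoother: damps high-frequency error."""
        d = np.diag(A)
        for _ in range(sweeps):
            x = x + omega * (b - A @ x) / d
        return x

    def two_grid(A, b, x, P):
        """One cycle: pre-smooth, coarse-grid correction, post-smooth."""
        x = jacobi(A, x, b, 2)
        r = b - A @ x
        Ac = P.T @ A @ P                      # Galerkin coarse operator
        x = x + P @ np.linalg.solve(Ac, P.T @ r)
        return jacobi(A, x, b, 2)

    n = 63                                    # fine grid; coarse grid: 31 points
    A = poisson1d(n)
    # Linear-interpolation prolongation from the 31-point coarse grid.
    P = np.zeros((n, 31))
    for j in range(31):
        i = 2 * j + 1
        P[i, j] = 1.0
        P[i - 1, j] += 0.5
        P[i + 1, j] += 0.5
    b = np.ones(n)
    x = np.zeros(n)
    for _ in range(10):
        x = two_grid(A, b, x, P)
    err = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
    ```

    The smoother removes oscillatory error and the coarse solve removes smooth error, giving a per-cycle convergence factor independent of n; AMG's contribution, and the part that is hard to parallelize, is building P and the coarsening purely from the matrix.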

  11. Packaged die heater

    DOEpatents

    Spielberger, Richard; Ohme, Bruce Walker; Jensen, Ronald J.

    2011-06-21

    A heater for heating packaged die for burn-in and heat testing is described. The heater may be a ceramic-type heater with a metal filament. The heater may be incorporated into the integrated circuit package as an additional ceramic layer of the package, or may be an external heater placed in contact with the package to heat the die. Many different types of integrated circuit packages may be accommodated. The method provides increased energy efficiency for heating the die while reducing temperature stresses on testing equipment. The method allows the use of multiple heaters to heat die to different temperatures. Faulty die may be heated to weaken die attach material to facilitate removal of the die. The heater filament or a separate temperature thermistor located in the package may be used to accurately measure die temperature.

  12. Smart packaging for photonics

    SciTech Connect

    Smith, J.H.; Carson, R.F.; Sullivan, C.T.; McClellan, G.; Palmer, D.W.

    1997-09-01

    Unlike silicon microelectronics, photonics packaging has proven to be low yield and expensive. One approach to make photonics packaging practical for low cost applications is the use of "smart" packages. "Smart" in this context means the ability of the package to actuate a mechanical change based on either a measurement taken by the package itself or by an input signal based on an external measurement. One avenue of smart photonics packaging, the use of polysilicon micromechanical devices integrated with photonic waveguides, was investigated in this research (LDRD 3505.340). The integration of optical components with polysilicon surface micromechanical actuation mechanisms shows significant promise for signal switching, fiber alignment, and optical sensing applications. The optical and stress properties of the oxides and nitrides considered for optical waveguides and how they are integrated with micromechanical devices were investigated.

  13. First in-situ detection of the cometary ammonium ion NH_4+ (protonated ammonia NH3) in the coma of 67P/C-G near perihelion

    NASA Astrophysics Data System (ADS)

    Beth, A.; Altwegg, K.; Balsiger, H.; Berthelier, J.-J.; Calmonte, U.; Combi, M. R.; De Keyser, J.; Dhooghe, F.; Fiethe, B.; Fuselier, S. A.; Galand, M.; Gasc, S.; Gombosi, T. I.; Hansen, K. C.; Hässig, M.; Héritier, K. L.; Kopp, E.; Le Roy, L.; Mandt, K. E.; Peroy, S.; Rubin, M.; Sémon, T.; Tzou, C.-Y.; Vigren, E.

    2016-11-01

    In this paper, we report the first in situ detection of the ammonium ion NH_4+ at 67P/Churyumov-Gerasimenko (67P/C-G) in a cometary coma, using the Rosetta Orbiter Spectrometer for Ion and Neutral Analysis (ROSINA)/Double Focusing Mass Spectrometer (DFMS). Unlike neutral and ion spectrometers onboard previous cometary missions, the ROSINA/DFMS spectrometer, when operated in ion mode, offers the capability to distinguish NH_4+ from H2O+ in a cometary coma. We present here the ion data analysis of mass-to-charge ratios 18 and 19 at high spectral resolution and compare the results with an ionospheric model to put these results into context. The model confirms that the ammonium ion NH_4+ is one of the most abundant ion species, as predicted, in the coma near perihelion.

  14. First in-situ detection of the cometary ammonium ion NH_4+ (protonated ammonia NH3) in the coma of 67P/C-G near perihelion

    NASA Astrophysics Data System (ADS)

    Beth, A.; Altwegg, K.; Balsiger, H.; Berthelier, J.-J.; Calmonte, U.; Combi, M. R.; De Keyser, J.; Dhooghe, F.; Fiethe, B.; Fuselier, S. A.; Galand, M.; Gasc, S.; Gombosi, T. I.; Hansen, K. C.; Hässig, M.; Héritier, K. L.; Kopp, E.; Le Roy, L.; Mandt, K. E.; Peroy, S.; Rubin, M.; Sémon, T.; Tzou, C.-Y.; Vigren, E.

    2017-01-01

    In this paper, we report the first in-situ detection of the ammonium ion NH_4+ at 67P/Churyumov-Gerasimenko (67P/C-G) in a cometary coma, using the Rosetta Orbiter Spectrometer for Ion and Neutral Analysis (ROSINA) / Double Focusing Mass Spectrometer (DFMS). Unlike neutral and ion spectrometers onboard previous cometary missions, the ROSINA/DFMS spectrometer, when operated in ion mode, offers the capability to distinguish NH_4+ from H2O+ in a cometary coma. We present here the ion data analysis of mass-to-charge ratios 18 and 19 at high spectral resolution and compare the results with an ionospheric model to put these results into context. The model confirms that the ammonium ion NH_4+ is one of the most abundant ion species, as predicted, in the coma near perihelion.

  15. Parallel pivoting combined with parallel reduction

    NASA Technical Reports Server (NTRS)

    Alaghband, Gita

    1987-01-01

    Parallel algorithms for the triangularization of large, sparse, and unsymmetric matrices are presented. The method combines parallel reduction with a new parallel pivoting technique, control over the generation of fill-ins, and a check for numerical stability, all done in parallel with the work distributed over the active processes. The technique uses the compatibility relation between pivots to identify parallel pivot candidates, and uses the Markowitz number of pivots to minimize fill-in. This technique is not a preordering of the sparse matrix and is applied dynamically as the decomposition proceeds.
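
    A toy sketch of the pivot-selection idea (hypothetical, not the paper's implementation): candidates are ordered by Markowitz number to limit fill-in, and a set of mutually compatible pivots — sharing no row or column, with structurally zero cross positions — is grown greedily so that all of them can be eliminated in one parallel step:

    ```python
    def markowitz_number(nonzeros, i, j):
        """Markowitz number (r_i - 1)(c_j - 1): bounds fill-in from pivot (i, j)."""
        r = sum(1 for (a, b) in nonzeros if a == i)
        c = sum(1 for (a, b) in nonzeros if b == j)
        return (r - 1) * (c - 1)

    def compatible_pivot_set(nonzeros):
        """Greedy selection of a compatible pivot set.

        nonzeros: set of (row, col) positions of a sparse matrix. Pivots
        (i, j) and (k, l) are taken as compatible when they share no row or
        column and the cross positions (i, l) and (k, j) are structurally
        zero, so eliminating one does not update the other's row or column.
        """
        cands = sorted(nonzeros, key=lambda p: markowitz_number(nonzeros, *p))
        chosen = []
        for (i, j) in cands:
            ok = all(i != k and j != l
                     and (i, l) not in nonzeros and (k, j) not in nonzeros
                     for (k, l) in chosen)
            if ok:
                chosen.append((i, j))
        return chosen

    # Structure of a small unsymmetric sparse matrix: (row, col) positions.
    nz = {(0, 0), (0, 2), (1, 1), (2, 2), (2, 0), (3, 3)}
    pivots = compatible_pivot_set(nz)
    ```

    In the paper this selection happens dynamically, interleaved with the reduction, rather than once up front as in this static sketch.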

  16. GENERAL PURPOSE ADA PACKAGES

    NASA Technical Reports Server (NTRS)

    Klumpp, A. R.

    1994-01-01

    Ten families of subprograms are bundled together for the General-Purpose Ada Packages. The families bring to Ada many features from HAL/S, PL/I, FORTRAN, and other languages. These families are: string subprograms (INDEX, TRIM, LOAD, etc.); scalar subprograms (MAX, MIN, REM, etc.); array subprograms (MAX, MIN, PROD, SUM, GET, and PUT); numerical subprograms (EXP, CUBIC, etc.); service subprograms (DATE_TIME function, etc.); Linear Algebra II; Runge-Kutta integrators; and three text I/O families of packages. In two cases, a family consists of a single non-generic package. In all other cases, a family comprises a generic package and its instances for a selected group of scalar types. All generic packages are designed to be easily instantiated for the types declared in the user facility. The linear algebra package is LINRAG2. This package includes subprograms supplementing those in NPO-17985, An Ada Linear Algebra Package Modeled After HAL/S (LINRAG). Please note that LINRAG2 cannot be compiled without LINRAG. Most packages have widespread applicability, although some are oriented for avionics applications. All are designed to facilitate writing new software in Ada. Several of the packages use conventions introduced by other programming languages. A package of string subprograms is based on HAL/S (a language designed for the avionics software in the Space Shuttle) and PL/I. Packages of scalar and array subprograms are taken from HAL/S or generalized current Ada subprograms. A package of Runge-Kutta integrators is patterned after a built-in MAC (MIT Algebraic Compiler) integrator. Those packages modeled after HAL/S make it easy to translate existing HAL/S software to Ada. The General-Purpose Ada Packages program source code is available on two 360K 5.25" MS-DOS format diskettes. The software was developed using VAX Ada v1.5 under DEC VMS v4.5. It should be portable to any validated Ada compiler and it should execute either interactively or in batch. 
The largest package

  17. Paperless Work Package Application

    SciTech Connect

    Kilgore, Jr., William R.; Morrell, Jr., Otto K.; Morrison, Dan; Ferrell, Jerrod; Connelley, Sherry; Hall, Tommy; Carzoli, Mary; Hott, Ken; Zylka, Sandy; Wong, Roger; Dang, Ling; Kalyani, Nik; Pearson, Terry; Rogers, Mark; Mannis, Nathan; Bakke, Dave; Shoner, Bruce; Vogel, Loring; Davis, Pat; Hitselberger, Charlie

    2014-07-31

    The Paperless Work Package (PWP) System is a computer program that takes information from Asset Suite, provides a platform for other electronic inputs, processes those inputs into an electronic package that can be downloaded onto an electronic work tablet or laptop computer, provides a platform for electronic inputs on the work tablet, and then transposes those inputs back into Asset Suite and to permanent SRS records. The PWP System essentially eliminates paper requirements from the maintenance work control system. The program electronically relays the instructions given by the planner to work on a piece of equipment, which are currently relayed via a printed work package. The program does not control or approve what is done. The planner continues to plan the work package, and the package continues to be routed, approved, and scheduled. The supervisor reviews and approves the work to be performed and assigns work to individuals or to a work group (the supervisor conducts pre-job briefings with the workers involved in the job). The Operations Manager (Work Controlling Entity) approves the work package electronically for the work that will be done in his facility prior to work starting. The PWP System provides the package in an electronic form. All the reviews, approvals, and safety measures taken by people outside the electronic package do not change from the paper-driven work packages.

  19. The ZOOM minimization package

    SciTech Connect

    Fischler, Mark S.; Sachs, D.; /Fermilab

    2004-11-01

    A new object-oriented Minimization package is available for distribution in the same manner as CLHEP. This package, designed for use in HEP applications, has all the capabilities of Minuit, but is a re-write from scratch, adhering to modern C++ design principles. A primary goal of this package is extensibility in several directions, so that its capabilities can be kept fresh with as little maintenance effort as possible. This package is distinguished by the priority that was assigned to C++ design issues, and the focus on producing an extensible system that will resist becoming obsolete.

  20. Exploring the perspectives of potential consumers and healthcare professionals on the readability of a package insert: a case study of an over-the-counter medicine.

    PubMed

    Pires, Carla M; Cavaco, Afonso M

    2014-05-01

    To explore and compare the opinions of physicians, pharmacists and potential users on the readability of a package insert of an over-the-counter medicine. Exploratory study based on the administration of a semi-open questionnaire. This instrument was developed according to the readability guideline of the European Medicines Agency (EMA) and used to evaluate participants' accessibility to, and comprehension of, the package insert for diclofenac 12.5 mg tablets. Sixty-three participants were recruited from the Lisbon region and enrolled in three groups: physicians (Dg), pharmacists (Pg) and potential consumers (PCg), with a minimum of 20 participants each. Almost all (85 %) of the 20 PCg participants were educated above the 9th grade, although the majority of them (95 %) reported at least one package insert interpretation issue, mainly related to the comprehension of technical terms. Amongst other differences between the groups, the Pg participants (n = 22) had a significantly less favourable opinion regarding the layout of the titles. Furthermore, the Pg and Dg (n = 21) participants proposed technical enhancements, such as the use of a table to explain the posology, precautions in case of renal failure, or the recommendation to take the tablets with meals. Differences in the way the diclofenac tablets are used are expected, considering the comprehension dissimilarities between health professionals and potential consumers. The package insert of diclofenac 12.5 mg could be enhanced for safer use. Regarding the readability assessment of this package insert, the method proposed in the EMA guidelines might not be as effective as expected. Future research is advisable.

  1. Parallel distributed computing using Python

    NASA Astrophysics Data System (ADS)

    Dalcin, Lisandro D.; Paz, Rodrigo R.; Kler, Pablo A.; Cosimo, Alejandro

    2011-09-01

    This work presents two software components aimed to relieve the costs of accessing high-performance parallel computing resources within a Python programming environment: MPI for Python and PETSc for Python. MPI for Python is a general-purpose Python package that provides bindings for the Message Passing Interface (MPI) standard using any back-end MPI implementation. Its facilities allow parallel Python programs to easily exploit multiple processors using the message-passing paradigm. PETSc for Python provides access to the Portable, Extensible Toolkit for Scientific Computation (PETSc) libraries. Its facilities allow sequential and parallel Python applications to exploit state-of-the-art algorithms and data structures readily available in PETSc for the solution of large-scale problems in science and engineering. MPI for Python and PETSc for Python are fully integrated with PETSc-FEM, an MPI- and PETSc-based parallel, multiphysics, finite element code developed at the CIMEC laboratory. This software infrastructure supports research activities related to simulation of fluid flows with applications ranging from the design of microfluidic devices for biochemical analysis to modeling of large-scale stream/aquifer interactions.
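The message-passing paradigm the abstract refers to follows an owner-computes pattern: each rank holds a slice of the data, computes a local partial result, and a reduction combines the partials. The sketch below simulates the ranks sequentially in plain Python so the pattern is visible without an MPI installation; it is an illustration of the paradigm, not mpi4py code (with MPI for Python, the same structure would use `MPI.COMM_WORLD`'s rank/size queries and a `reduce` call).

```python
# A sketch of the message-passing pattern that MPI for Python exposes:
# each rank owns a contiguous slice of the data, computes a local
# partial result, and a reduction combines the partials. The "ranks"
# here are simulated sequentially; the decomposition helper and the
# problem data are invented for the example.

def local_slice(n, size, rank):
    """Contiguous block decomposition of n items over `size` ranks."""
    base, extra = divmod(n, size)
    start = rank * base + min(rank, extra)
    stop = start + base + (1 if rank < extra else 0)
    return start, stop

def parallel_dot(x, y, size=4):
    partials = []
    for rank in range(size):          # each iteration plays one MPI rank
        lo, hi = local_slice(len(x), size, rank)
        partials.append(sum(a * b for a, b in zip(x[lo:hi], y[lo:hi])))
    return sum(partials)              # the reduction step (MPI sum-reduce)

x = list(range(10))
y = [1.0] * 10
result = parallel_dot(x, y)
```

The block decomposition spreads any remainder over the first ranks, so the workload differs by at most one element per rank.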

  2. Parallel Computing in SCALE

    SciTech Connect

    DeHart, Mark D; Williams, Mark L; Bowman, Stephen M

    2010-01-01

    The SCALE computational architecture has remained basically the same since its inception 30 years ago, although constituent modules and capabilities have changed significantly. This SCALE concept was intended to provide a framework whereby independent codes can be linked to provide a more comprehensive capability than possible with the individual programs - allowing flexibility to address a wide variety of applications. However, the current system was designed originally for mainframe computers with a single CPU and with significantly less memory than today's personal computers. It has been recognized that the present SCALE computation system could be restructured to take advantage of modern hardware and software capabilities, while retaining many of the modular features of the present system. Preliminary work is being done to define specifications and capabilities for a more advanced computational architecture. This paper describes the state of current SCALE development activities and plans for future development. With the release of SCALE 6.1 in 2010, a new phase of evolutionary development will be available to SCALE users within the TRITON and NEWT modules. The SCALE (Standardized Computer Analyses for Licensing Evaluation) code system developed by Oak Ridge National Laboratory (ORNL) provides a comprehensive and integrated package of codes and nuclear data for a wide range of applications in criticality safety, reactor physics, shielding, isotopic depletion and decay, and sensitivity/uncertainty (S/U) analysis. Over the last three years, since the release of version 5.1 in 2006, several important new codes have been introduced within SCALE, and significant advances applied to existing codes. Many of these new features became available with the release of SCALE 6.0 in early 2009. However, beginning with SCALE 6.1, a first generation of parallel computing is being introduced. In addition to near-term improvements, a plan for longer term SCALE enhancement

  3. Developing Large CAI Packages.

    ERIC Educational Resources Information Center

    Reed, Mary Jac M.; Smith, Lynn H.

    1983-01-01

    When developing large computer-assisted instructional (CAI) courseware packages, it is suggested that there be more attentive planning to the overall package design before actual lesson development is begun. This process has been simplified by modifying the systems approach used to develop single CAI lessons, followed by planning for the…

  4. Project Information Packages: Overview.

    ERIC Educational Resources Information Center

    RMC Research Corp., Mountain View, CA.

    This brochure describes a new series of Project Information Packages, a U.S. Office of Education response to the need for a systematic approach to disseminating exemplary projects. The packages describe procedures for developing the necessary administrative support and management framework, as well as instructional methods and techniques. The six…

  6. Packaging issues: avoiding delamination.

    PubMed

    Hall, R

    2005-10-01

    Manufacturers can minimise delamination occurrence by applying the appropriate packaging design and process features. The end user can minimise the impact of fibre tear and reduce subsequent delamination by careful package opening. The occasional inconvenient delamination is a small price to pay for the high level of sterility assurance that comes with the use of Tyvek.

  7. The LCDROOT Analysis Package

    SciTech Connect

    Abe, Toshinori

    2001-10-18

    The North American Linear Collider Detector group has developed simulation and analysis program packages. LCDROOT is one of these packages; it is based on ROOT and the C++ programming language to benefit maximally from object-oriented programming techniques. LCDROOT is constantly improved and now has a new topological vertex finder, ZVTOP3. In this proceeding, the features of the LCDROOT simulation are briefly described.

  8. WASTE PACKAGE TRANSPORTER DESIGN

    SciTech Connect

    D.C. Weddle; R. Novotny; J. Cron

    1998-09-23

    The purpose of this Design Analysis is to develop preliminary design of the waste package transporter used for waste package (WP) transport and related functions in the subsurface repository. This analysis refines the conceptual design that was started in Phase I of the Viability Assessment. This analysis supports the development of a reliable emplacement concept and a retrieval concept for license application design. The scope of this analysis includes the following activities: (1) Assess features of the transporter design and evaluate alternative design solutions for mechanical components. (2) Develop mechanical equipment details for the transporter. (3) Prepare a preliminary structural evaluation for the transporter. (4) Identify and recommend the equipment design for waste package transport and related functions. (5) Investigate transport equipment interface tolerances. This analysis supports the development of the waste package transporter for the transport, emplacement, and retrieval of packaged radioactive waste forms in the subsurface repository. Once the waste containers are closed and accepted, the packaged radioactive waste forms are termed waste packages (WP). This terminology was finalized as this analysis neared completion; therefore, the term disposal container is used in several references (i.e., the System Description Document (SDD)) (Ref. 5.6). In this analysis and the applicable reference documents, the term ''disposal container'' is synonymous with ''waste package''.

  9. RH Packaging Operations Manual

    SciTech Connect

    Washington TRU Solutions LLC

    2003-09-17

    This procedure provides operating instructions for the RH-TRU 72-B Road Cask, Waste Shipping Package. In this document, ''Packaging'' refers to the assembly of components necessary to ensure compliance with the packaging requirements (not loaded with a payload). ''Package'' refers to a Type B packaging that, with its radioactive contents, is designed to retain the integrity of its containment and shielding when subject to the normal conditions of transport and hypothetical accident test conditions set forth in 10 CFR Part 71. Loading of the RH 72-B cask can be done in two ways: on the RH cask trailer in the vertical position, or by removing the cask from the trailer and loading it in a facility designed for remote handling (RH). Before loading the 72-B cask, loading procedures and changes to the loading procedures for the 72-B cask must be sent to CBFO at sitedocuments@wipp.ws for approval.

  10. Special parallel processing workshop

    SciTech Connect

    1994-12-01

    This report contains viewgraphs from the Special Parallel Processing Workshop. These viewgraphs deal with topics such as parallel processing performance, message passing, queue structure, and other basic concepts dealing with parallel processing.

  11. Packaging Concerns/Techniques for Large Devices

    NASA Technical Reports Server (NTRS)

    Sampson, Michael J.

    2009-01-01

    This slide presentation reviews packaging challenges and options for electronic parts. The presentation includes information about non-hermetic packages, space challenges for packaging and complex package variations.

  12. STRUMPACK -- STRUctured Matrices PACKage

    SciTech Connect

    2014-12-01

    STRUMPACK - STRUctured Matrices PACKage - is a package for computations with sparse and dense structured matrices, i.e., matrices that exhibit some kind of low-rank property, in particular Hierarchically Semi-Separable (HSS) structure. Such matrices appear in many applications, e.g., FEM, BEM, integral equations, etc. Exploiting this structure using certain compression algorithms allows for fast solution of linear systems and/or fast computation of matrix-vector products, which are the two main building blocks of matrix computations. STRUMPACK presently has two main components: a distributed-memory dense matrix computations package and a shared-memory sparse direct solver.
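The speedup from low-rank compression mentioned above can be illustrated with a minimal sketch. This is not STRUMPACK's API (and real HSS structure is hierarchical, not a single global factorization); it only shows why a rank-r factorization turns an O(n^2) matrix-vector product into O(n*r) work. The matrices and helper are invented for the example.

```python
# Minimal illustration of the low-rank idea behind structured-matrix
# packages: if A = U V with U (n x r) and V (r x n), then A @ x can be
# computed as U @ (V @ x) in O(n*r) operations instead of O(n^2),
# with identical results.

def matvec(M, x):
    """Dense matrix-vector product over lists of rows."""
    return [sum(m, 0.0) if False else sum(a * b for a, b in zip(row, x))
            for row in M for m in [row]][:len(M)] if False else \
           [sum(a * b for a, b in zip(row, x)) for row in M]

n, r = 6, 2
U = [[float(i + j) for j in range(r)] for i in range(n)]      # n x r factor
V = [[float(i * j + 1) for j in range(n)] for i in range(r)]  # r x n factor

# Dense n x n matrix A = U V, formed explicitly only for comparison.
A = [[sum(U[i][k] * V[k][j] for k in range(r)) for j in range(n)]
     for i in range(n)]

x = [1.0] * n
dense = matvec(A, x)                   # O(n^2) work through the full matrix
compressed = matvec(U, matvec(V, x))   # O(n*r) work through the factors
```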

  13. Parallel rendering techniques for massively parallel visualization

    SciTech Connect

    Hansen, C.; Krogh, M.; Painter, J.

    1995-07-01

    As the resolution of simulation models increases, scientific visualization algorithms which take advantage of the large memory and parallelism of Massively Parallel Processors (MPPs) are becoming increasingly important. For large applications, rendering on the MPP tends to be preferable to rendering on a graphics workstation due to the MPP's abundant resources: memory, disk, and numerous processors. The challenge becomes developing algorithms that can exploit these resources while minimizing overhead, typically communication costs. This paper describes recent efforts in parallel rendering for polygonal primitives as well as parallel volumetric techniques. It presents rendering algorithms, developed for massively parallel processors (MPPs), for polygonal, sphere, and volumetric data. The polygon algorithm uses a data-parallel approach, whereas the sphere and volume renderers use a MIMD approach. Implementations of these algorithms are presented for the Thinking Machines Corporation CM-5 MPP.

  14. Packaging for Posterity.

    ERIC Educational Resources Information Center

    Sias, Jim

    1990-01-01

    A project in which students designed environmentally responsible food packaging is described. The problem definition; research on topics such as waste paper, plastic, metal, glass, incineration, recycling, and consumer preferences; and the presentation design are provided. (KR)

  16. CH Packaging Program Guidance

    SciTech Connect

    Washington TRU Solutions LLC

    2002-03-04

    The purpose of this document is to provide the technical requirements for preparation for use, operation, inspection, and maintenance of a Transuranic Package Transporter Model II (TRUPACT-II), a HalfPACT Shipping Package, and directly related components. This document complies with the minimum requirements as specified in the TRUPACT-II Safety Analysis Report for Packaging (SARP), HalfPACT SARP, and Nuclear Regulatory Commission (NRC) Certificates of Compliance (C of C) 9218 and 9279, respectively. In the event there is a conflict between this document and the SARP or C of C, the SARP and/or C of C shall govern. The C of Cs state: ''each package must be prepared for shipment and operated in accordance with the procedures described in Chapter 7.0, Operating Procedures, of the application.'' They further state: ''each package must be tested and maintained in accordance with the procedures described in Chapter 8.0, Acceptance Tests and Maintenance Program of the Application.'' Chapter 9.0 of the SARP charges the WIPP Management and Operation (M&O) contractor with assuring packaging is used in accordance with the requirements of the C of C. Because the packaging is NRC-approved, users need to be familiar with 10 CFR 71.11. Any time a user suspects or has indications that the conditions of approval in the C of C were not met, the Carlsbad Field Office (CBFO) shall be notified immediately. CBFO will evaluate the issue and notify the NRC if required. This document details the instructions to be followed to operate, maintain, and test the TRUPACT-II and HalfPACT packaging. The intent of these instructions is to standardize these operations. All users will follow these instructions or equivalent instructions that assure operations are safe and meet the requirements of the SARPs.

  17. CH Packaging Program Guidance

    SciTech Connect

    Washington TRU Solutions LLC

    2003-04-30

    The purpose of this document is to provide the technical requirements for preparation for use, operation, inspection, and maintenance of a Transuranic Package Transporter Model II (TRUPACT-II), a HalfPACT shipping package, and directly related components. This document complies with the minimum requirements as specified in the TRUPACT-II Safety Analysis Report for Packaging (SARP), HalfPACT SARP, and Nuclear Regulatory Commission (NRC) Certificates of Compliance (C of C) 9218 and 9279, respectively. In the event of a conflict between this document and the SARP or C of C, the C of C shall govern. The C of Cs state: ''each package must be prepared for shipment and operated in accordance with the procedures described in Chapter 7.0, Operating Procedures, of the application.'' They further state: ''each package must be tested and maintained in accordance with the procedures described in Chapter 8.0, Acceptance Tests and Maintenance Program of the Application.'' Chapter 9.0 of the SARP charges the WIPP management and operating (M&O) contractor with assuring packaging is used in accordance with the requirements of the C of C. Because the packaging is NRC-approved, users need to be familiar with 10 CFR 71.11. Any time a user suspects or has indications that the conditions of approval in the C of C were not met, the Carlsbad Field Office (CBFO) shall be notified immediately. CBFO will evaluate the issue and notify the NRC if required. This document provides the instructions to be followed to operate, maintain, and test the TRUPACT-II and HalfPACT packaging. The intent of these instructions is to standardize operations. All users will follow these instructions or equivalent instructions that assure operations are safe and meet the requirements of the SARPs.

  18. Battery packaging - Technology review

    SciTech Connect

    Maiser, Eric

    2014-06-16

    This paper gives a brief overview of battery packaging concepts, their specific advantages and drawbacks, as well as the importance of packaging for performance and cost. Production processes, scaling and automation are discussed in detail to reveal opportunities for cost reduction. Module standardization as an additional path to drive down cost is introduced. A comparison to electronics and photovoltaics production shows 'lessons learned' in those related industries and how they can accelerate learning curves in battery production.

  19. TEX macro packages

    SciTech Connect

    Poggio, M.E.

    1985-02-26

    This manual is documentation for the macro packages that are available with the TEX82 system distributed by the Computer Systems Research Group, Engineering Research Division, Electronics Engineering Department, Lawrence Livermore National Laboratory. TEX is a computerized typesetting system created by Professor Donald E. Knuth at Stanford University. Macro packages have been developed to extend the capabilities of TEX and aid the user in generating various types of output (e.g., chapter format, letter format, memo format, and viewgraphs).

  20. The CONSERT Instrument during Philae's Descent onto 67P/C-G's surface: Insights on Philae's Attitude and the Surface Permittivity Measurements at the Agilkia-Landing-Site

    NASA Astrophysics Data System (ADS)

    Plettemeier, D.; Statz, C.; Hahnel, R.; Hegler, S.; Herique, A.; Pasquero, P.; Rogez, Y.; Zine, S.; Ciarletti, V.; Kofman, W. W.

    2015-12-01

    The main scientific objective of the Comet Nucleus Sounding Experiment by Radiowave Transmission (CONSERT) aboard ESA spacecraft Rosetta is the dielectric characterization of comet 67P/Churyumov-Gerasimenko's nucleus. This was done by means of bi-static radio propagation measurements of the CONSERT instrument between the lander Philae, launched onto the comet's surface, and the orbiter Rosetta. The CONSERT unit aboard the lander received and processed the radio signal emitted by the orbiter counterpart of the instrument, and then retransmitted a signal back to the orbiter, on a time scale of milliseconds. In addition to operating during the first science sequence, CONSERT was operated during the separation and descent of Philae onto the comet's surface. During the descent phase of Philae, the received CONSERT signal was a superposition of the direct propagation path between Rosetta and Philae and indirect paths caused by reflections off 67P/C-G's surface. From peak power measurements of the dominant direct path between Rosetta and Philae during the descent, we were able to reconstruct the lander's attitude and estimate the spin rate of the lander along the descent trajectory. Certain operations and manoeuvres of orbiter and lander, e.g. the deployment of the lander legs and CONSERT antennas, or the orbiter's change of attitude to orient towards the assumed lander position, are also visible in the CONSERT data. The information gained on the lander's attitude is used in the reconstruction of the dielectric properties of 67P/C-G's surface and near subsurface (metric to decametric scale). During roughly the last third of the descent, the comet's surface was visible to the CONSERT instrument, enabling a mean permittivity estimation of the surface and near subsurface covered by the instrument's footprint along the descent path. The comparatively large timespan with surface signatures exhibits a good spatial diversity

  1. The Polycomb group (PcG) protein EZH2 supports the survival of PAX3-FOXO1 alveolar rhabdomyosarcoma by repressing FBXO32 (Atrogin1/MAFbx).

    PubMed

    Ciarapica, R; De Salvo, M; Carcarino, E; Bracaglia, G; Adesso, L; Leoncini, P P; Dall'Agnese, A; Walters, Z S; Verginelli, F; De Sio, L; Boldrini, R; Inserra, A; Bisogno, G; Rosolen, A; Alaggio, R; Ferrari, A; Collini, P; Locatelli, M; Stifani, S; Screpanti, I; Rutella, S; Yu, Q; Marquez, V E; Shipley, J; Valente, S; Mai, A; Miele, L; Puri, P L; Locatelli, F; Palacios, D; Rota, R

    2014-08-07

    The Polycomb group (PcG) proteins regulate stem cell differentiation via the repression of gene transcription, and their deregulation has been widely implicated in cancer development. The PcG protein Enhancer of Zeste Homolog 2 (EZH2) works as a catalytic subunit of the Polycomb Repressive Complex 2 (PRC2) by methylating lysine 27 on histone H3 (H3K27me3), a hallmark of PRC2-mediated gene repression. In skeletal muscle progenitors, EZH2 prevents an unscheduled differentiation by repressing muscle-specific gene expression and is downregulated during the course of differentiation. In rhabdomyosarcoma (RMS), a pediatric soft-tissue sarcoma thought to arise from myogenic precursors, EZH2 is abnormally expressed and its downregulation in vitro leads to muscle-like differentiation of RMS cells of the embryonal variant. However, the role of EZH2 in the clinically aggressive subgroup of alveolar RMS, characterized by the expression of PAX3-FOXO1 oncoprotein, remains unknown. We show here that EZH2 depletion in these cells leads to programmed cell death. Transcriptional derepression of F-box protein 32 (FBXO32) (Atrogin1/MAFbx), a gene associated with muscle homeostasis, was evidenced in PAX3-FOXO1 RMS cells silenced for EZH2. This phenomenon was associated with reduced EZH2 occupancy and H3K27me3 levels at the FBXO32 promoter. Simultaneous knockdown of FBXO32 and EZH2 in PAX3-FOXO1 RMS cells impaired the pro-apoptotic response, whereas the overexpression of FBXO32 facilitated programmed cell death in EZH2-depleted cells. Pharmacological inhibition of EZH2 by either 3-Deazaneplanocin A or a catalytic EZH2 inhibitor mirrored the phenotypic and molecular effects of EZH2 knockdown in vitro and prevented tumor growth in vivo. Collectively, these results indicate that EZH2 is a key factor in the proliferation and survival of PAX3-FOXO1 alveolar RMS cells working, at least in part, by repressing FBXO32. 
They also suggest that reducing the activity of EZH2 could represent a novel

  2. Comparative Packaging Study

    NASA Technical Reports Server (NTRS)

    Perchonok, Michele H.; Oziomek, Thomas V.

    2009-01-01

    Future long duration manned space flights beyond low earth orbit will require the food system to remain safe, acceptable and nutritious. Development of high barrier food packaging will enable this requirement by preventing the ingress and egress of gases and moisture. New high barrier food packaging materials have been identified through a trade study. Practical application of this packaging material within a shelf life test will allow for better determination of whether this material will allow the food system to meet given requirements after the package has undergone processing. The reason to conduct shelf life testing, using a variety of packaging materials, stems from the need to preserve food used for mission durations of several years. Chemical reactions that take place during longer durations may decrease food quality to a point where crew physical or psychological well-being is compromised. This can result in a reduction or loss of mission success. The rate of chemical reactions, including oxidative rancidity and staling, can be controlled by limiting the reactants, reducing the amount of energy available to drive the reaction, and minimizing the amount of water available. Water not only acts as a media for microbial growth, but also as a reactant and means by which two reactants may come into contact with each other. The objective of this study is to evaluate three packaging materials for potential use in long duration space exploration missions.

  3. A survey of packages for large linear systems

    SciTech Connect

    Wu, Kesheng; Milne, Brent

    2000-02-11

    This paper evaluates portable software packages for the iterative solution of very large sparse linear systems on parallel architectures. While we cannot hope to tell individual users which package will best suit their needs, we do hope that our systematic evaluation provides essential unbiased information about the packages, and that the evaluation process may serve as an example of how to evaluate such packages. The information contained here includes feature comparisons, usability evaluations and performance characterizations. This review is primarily focused on self-contained packages that can be easily integrated into an existing program and are capable of computing solutions to very large sparse linear systems of equations. More specifically, it concentrates on portable parallel linear system solution packages that provide iterative solution schemes and related preconditioning schemes, because iterative methods are more frequently used than competing schemes such as direct methods. The eight packages evaluated are: Aztec, BlockSolve, ISIS++, LINSOL, P-SPARSLIB, PARASOL, PETSc, and PINEAPL. Among the eight portable parallel iterative linear system solvers reviewed, we recommend PETSc and Aztec for most application programmers because they have well-designed user interfaces, extensive documentation and very responsive user support. Both PETSc and Aztec are written in the C language and are callable from Fortran. For those users interested in using Fortran 90, PARASOL is a good alternative. ISIS++ is a good alternative for those who prefer the C++ language. Both PARASOL and ISIS++ are relatively new and are continuously evolving, so their user interface may change. In general, those packages written in Fortran 77 are more cumbersome to use because the user may need to directly deal with a number of arrays of varying sizes. Languages like C++ and Fortran 90 offer more convenient data encapsulation mechanisms, which make it easier to implement a clean and intuitive user
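
    The packages surveyed above all center on the same core iteration. As a rough illustration (not code from any of the surveyed packages), a minimal preconditioned conjugate gradient with a Jacobi (diagonal) preconditioner might be sketched as follows; the matrix and right-hand side are invented for the example:

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, max_iter=200):
    """Preconditioned conjugate gradient for a symmetric positive
    definite matrix A. M_inv is the preconditioner, an inexpensive
    approximation to the inverse of A."""
    x = np.zeros_like(b)
    r = b - A @ x            # initial residual
    z = M_inv @ r            # preconditioned residual
    p = z.copy()             # initial search direction
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv @ r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Small SPD test system; Jacobi preconditioner = inverse of the diagonal
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
M_inv = np.diag(1.0 / np.diag(A))
x = pcg(A, b, M_inv)
```

    Production packages such as PETSc or Aztec apply the same recurrence to distributed sparse matrices, with the matrix-vector product and inner products carried out in parallel.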

  4. Modular optimization code package: MOZAIK

    NASA Astrophysics Data System (ADS)

    Bekar, Kursat B.

    This dissertation addresses the development of a modular optimization code package, MOZAIK, for geometric shape optimization problems in nuclear engineering applications. MOZAIK's first mission, determining the optimal shape of the D2O moderator tank for the current and new beam tube configurations for the Penn State Breazeale Reactor's (PSBR) beam port facility, is used to demonstrate its capabilities and test its performance. MOZAIK was designed as a modular optimization sequence including three primary independent modules: the initializer, the physics, and the optimizer, each having a specific task. By using fixed interface blocks among the modules, the code attains its two most important characteristics: generic form and modularity. The benefit of this modular structure is that the contents of the modules can be switched depending on the requirements of accuracy, computational efficiency, or compatibility with the other modules. Oak Ridge National Laboratory's discrete ordinates transport code TORT was selected as the transport solver in the physics module of MOZAIK, and two different optimizers, Min-max and Genetic Algorithms (GA), were implemented in the optimizer module of the code package. Distributed-memory parallelism was also applied to MOZAIK via MPI (Message Passing Interface) to execute the physics module concurrently on a number of processors for various states in the same search. Moreover, dynamic scheduling was enabled to enhance load balance among the processors while running MOZAIK's physics module, thus improving the parallel speedup and efficiency. In this way, the total computation time consumed by the physics module is reduced by a factor close to M, where M is the number of processors. This capability also encourages the use of MOZAIK for shape optimization problems in nuclear applications because many traditional codes related to radiation transport do not have parallel execution capability. A set of computational models based on the
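
    The dynamic scheduling idea described above (each free processor pulls the next pending state rather than receiving a fixed block up front) can be sketched with a worker pool. This is a stand-in illustration only: MOZAIK uses MPI across distributed memory, whereas this sketch uses threads, and `physics_eval` is an invented placeholder for one TORT transport calculation:

```python
from concurrent.futures import ThreadPoolExecutor

def physics_eval(state):
    """Hypothetical stand-in for one transport solve on a candidate
    geometry; returns a placeholder figure of merit."""
    return state * state

states = list(range(8))  # candidate geometries in one search generation

# Executor.map hands each state to the next idle worker, so faster
# workers automatically pick up extra states -- the load-balancing
# behavior that dynamic scheduling provides in MOZAIK's physics module.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(physics_eval, states))
```

    With M workers and evaluations of similar cost, wall-clock time for the batch approaches 1/M of the serial time, matching the speedup factor cited in the abstract.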

  5. RH Packaging Program Guidance

    SciTech Connect

    Washington TRU Solutions LLC

    2006-11-07

    The purpose of this program guidance document is to provide the technical requirements for use, operation, inspection, and maintenance of the RH-TRU 72-B Waste Shipping Package and directly related components. This document complies with the requirements as specified in the RH-TRU 72-B Safety Analysis Report for Packaging (SARP), and Nuclear Regulatory Commission (NRC) Certificate of Compliance (C of C) 9212. If there is a conflict between this document and the SARP and/or C of C, the C of C shall govern. The C of C states: "...each package must be prepared for shipment and operated in accordance with the procedures described in Chapter 7.0, Operating Procedures, of the application." It further states: "...each package must be tested and maintained in accordance with the procedures described in Chapter 8.0, Acceptance Tests and Maintenance Program of the Application." Chapter 9.0 of the SARP tasks the Waste Isolation Pilot Plant (WIPP) Management and Operating (M&O) Contractor with assuring the packaging is used in accordance with the requirements of the C of C. Because the packaging is NRC-approved, users need to be familiar with 10 Code of Federal Regulations (CFR) §71.8, "Deliberate Misconduct." Any time a user suspects or has indications that the conditions of approval in the C of C were not met, the U.S. Department of Energy (DOE) Carlsbad Field Office (CBFO) shall be notified immediately. CBFO will evaluate the issue and notify the NRC if required. In accordance with 10 CFR Part 71, "Packaging and Transportation of Radioactive Material," certificate holders, packaging users, and contractors or subcontractors who use, design, fabricate, test, maintain, or modify the packaging shall post copies of (1) 10 CFR Part 21, "Reporting of Defects and Noncompliance," regulations, (2) Section 206 of the Energy Reorganization Act of 1974, and (3) NRC Form 3, Notice to Employees. These documents must be posted in a conspicuous location where the activities subject to

  6. RH Packaging Program Guidance

    SciTech Connect

    Washington TRU Solutions LLC

    2008-01-12

    The purpose of this program guidance document is to provide the technical requirements for use, operation, inspection, and maintenance of the RH-TRU 72-B Waste Shipping Package (also known as the "RH-TRU 72-B cask") and directly related components. This document complies with the requirements as specified in the RH-TRU 72-B Safety Analysis Report for Packaging (SARP), and Nuclear Regulatory Commission (NRC) Certificate of Compliance (C of C) 9212. If there is a conflict between this document and the SARP and/or C of C, the C of C shall govern. The C of C states: "...each package must be prepared for shipment and operated in accordance with the procedures described in Chapter 7.0, Operating Procedures, of the application." It further states: "...each package must be tested and maintained in accordance with the procedures described in Chapter 8.0, Acceptance Tests and Maintenance Program of the Application." Chapter 9.0 of the SARP tasks the Waste Isolation Pilot Plant (WIPP) Management and Operating (M&O) Contractor with assuring the packaging is used in accordance with the requirements of the C of C. Because the packaging is NRC-approved, users need to be familiar with Title 10 Code of Federal Regulations (CFR) §71.8, "Deliberate Misconduct." Any time a user suspects or has indications that the conditions of approval in the C of C were not met, the U.S. Department of Energy (DOE) Carlsbad Field Office (CBFO) shall be notified immediately. The CBFO will evaluate the issue and notify the NRC if required. In accordance with 10 CFR Part 71, "Packaging and Transportation of Radioactive Material," certificate holders, packaging users, and contractors or subcontractors who use, design, fabricate, test, maintain, or modify the packaging shall post copies of (1) 10 CFR Part 21, "Reporting of Defects and Noncompliance," regulations, (2) Section 206 of the Energy Reorganization Act of 1974, and (3) NRC Form 3, Notice to Employees. These documents must be posted in a

  7. CH Packaging Program Guidance

    SciTech Connect

    None, None

    2009-06-01

    The purpose of this document is to provide the technical requirements for preparation for use, operation, inspection, and maintenance of a Transuranic Package Transporter Model II (TRUPACT-II), a HalfPACT shipping package, and directly related components. This document complies with the minimum requirements as specified in the TRUPACT-II Safety Analysis Report for Packaging (SARP), HalfPACT SARP, and U.S. Nuclear Regulatory Commission (NRC) Certificates of Compliance (C of C) 9218 and 9279, respectively. In the event of a conflict between this document and the SARP or C of C, the C of C shall govern. The C of Cs state: "each package must be prepared for shipment and operated in accordance with the procedures described in Chapter 7.0, Operating Procedures, of the application." They further state: "each package must be tested and maintained in accordance with the procedures described in Chapter 8.0, Acceptance Tests and Maintenance Program of the Application." Chapter 9.0 of the SARP charges the U.S. Department of Energy (DOE) or the Waste Isolation Pilot Plant (WIPP) management and operating (M&O) contractor with assuring packaging is used in accordance with the requirements of the C of C. Because the packaging is NRC-approved, users need to be familiar with Title 10 Code of Federal Regulations (CFR) §71.8. Any time a user suspects or has indications that the conditions of approval in the C of C were not met, the Carlsbad Field Office (CBFO) shall be notified immediately. The CBFO will evaluate the issue and notify the NRC if required. In accordance with 10 CFR Part 71, certificate holders, packaging users, and contractors or subcontractors who use, design, fabricate, test, maintain, or modify the packaging shall post copies of (1) 10 CFR Part 21 regulations, (2) Section 206 of the Energy Reorganization Act of 1974, and (3) NRC Form 3, Notice to Employees. 
These documents must be posted in a conspicuous location where the activities subject to these regulations are

  8. CH Packaging Program Guidance

    SciTech Connect

    None, None

    2008-09-11

    The purpose of this document is to provide the technical requirements for preparation for use, operation, inspection, and maintenance of a Transuranic Package Transporter Model II (TRUPACT-II), a HalfPACT shipping package, and directly related components. This document complies with the minimum requirements as specified in the TRUPACT-II Safety Analysis Report for Packaging (SARP), HalfPACT SARP, and U.S. Nuclear Regulatory Commission (NRC) Certificates of Compliance (C of C) 9218 and 9279, respectively. In the event of a conflict between this document and the SARP or C of C, the C of C shall govern. The C of Cs state: "each package must be prepared for shipment and operated in accordance with the procedures described in Chapter 7.0, Operating Procedures, of the application." They further state: "each package must be tested and maintained in accordance with the procedures described in Chapter 8.0, Acceptance Tests and Maintenance Program of the Application." Chapter 9.0 of the SARP charges the U.S. Department of Energy (DOE) or the Waste Isolation Pilot Plant (WIPP) management and operating (M&O) contractor with assuring packaging is used in accordance with the requirements of the C of C. Because the packaging is NRC-approved, users need to be familiar with Title 10 Code of Federal Regulations (CFR) §71.8. Any time a user suspects or has indications that the conditions of approval in the C of C were not met, the Carlsbad Field Office (CBFO) shall be notified immediately. The CBFO will evaluate the issue and notify the NRC if required. In accordance with 10 CFR Part 71, certificate holders, packaging users, and contractors or subcontractors who use, design, fabricate, test, maintain, or modify the packaging shall post copies of (1) 10 CFR Part 21 regulations, (2) Section 206 of the Energy Reorganization Act of 1974, and (3) NRC Form 3, Notice to Employees. 
These documents must be posted in a conspicuous location where the activities subject to these regulations are

  9. Parallel CFD design on network-based computer

    NASA Technical Reports Server (NTRS)

    Cheung, Samson

    1995-01-01

    Combining multiple engineering workstations into a network-based heterogeneous parallel computer allows application of aerodynamic optimization with advanced computational fluid dynamics codes, which can be computationally expensive on mainframe supercomputers. This paper introduces a nonlinear quasi-Newton optimizer designed for this network-based heterogeneous parallel computing environment utilizing a software called Parallel Virtual Machine. This paper will introduce the methodology behind coupling a Parabolized Navier-Stokes flow solver to the nonlinear optimizer. This parallel optimization package is applied to reduce the wave drag of a body of revolution and a wing/body configuration with results of 5% to 6% drag reduction.
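
    A quasi-Newton optimizer of the kind described above builds curvature information from successive gradients rather than computing a Hessian directly. As a toy illustration (not the paper's multivariate optimizer, which is coupled to a Parabolized Navier-Stokes solver), a one-dimensional secant update applied to an invented quadratic "drag" model looks like this:

```python
def quasi_newton_1d(grad, x0, x1, tol=1e-10, max_iter=50):
    """Drive the gradient of a smooth 1-D objective to zero using a
    secant update: curvature is approximated from the two most recent
    gradient values, the core quasi-Newton idea."""
    g0, g1 = grad(x0), grad(x1)
    for _ in range(max_iter):
        if abs(g1) < tol or g1 == g0:
            break
        # Secant step: replace the second derivative with the
        # finite-difference slope of the gradient.
        x0, x1 = x1, x1 - g1 * (x1 - x0) / (g1 - g0)
        g0, g1 = g1, grad(x1)
    return x1

# Hypothetical drag model d(x) = (x - 2)^2 + 1, gradient 2(x - 2);
# the minimizer should converge to x = 2.
x_min = quasi_newton_1d(lambda x: 2.0 * (x - 2.0), x0=0.0, x1=1.0)
```

    In the network-based setting of the paper, the expensive step is the gradient evaluation (a flow solve), which is what gets farmed out to the workstations via Parallel Virtual Machine.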

  10. CFD Optimization on Network-Based Parallel Computer System

    NASA Technical Reports Server (NTRS)

    Cheung, Samson H.; VanDalsem, William (Technical Monitor)

    1994-01-01

    Combining multiple engineering workstations into a network-based heterogeneous parallel computer allows application of aerodynamic optimization with advanced computational fluid dynamics codes, which can be computationally expensive on mainframe supercomputers. This paper introduces a nonlinear quasi-Newton optimizer designed for this network-based heterogeneous parallel computer, using software called Parallel Virtual Machine. This paper will introduce the methodology behind coupling a Parabolized Navier-Stokes flow solver to the nonlinear optimizer. This parallel optimization package has been applied to reduce the wave drag of a body of revolution and a wing/body configuration with results of 5% to 6% drag reduction.

  12. CFD Optimization on Network-Based Parallel Computer System

    NASA Technical Reports Server (NTRS)

    Cheung, Samson H.; Holst, Terry L. (Technical Monitor)

    1994-01-01

    Combining multiple engineering workstations into a network-based heterogeneous parallel computer allows application of aerodynamic optimization with advanced computational fluid dynamics codes, which can be computationally expensive on mainframe supercomputers. This paper introduces a nonlinear quasi-Newton optimizer designed for this network-based heterogeneous parallel computer, using software called Parallel Virtual Machine. This paper will introduce the methodology behind coupling a Parabolized Navier-Stokes flow solver to the nonlinear optimizer. This parallel optimization package has been applied to reduce the wave drag of a body of revolution and a wing/body configuration with results of 5% to 6% drag reduction.

  13. Hypertext-based computer vision teaching packages

    NASA Astrophysics Data System (ADS)

    Marshall, A. David

    1994-10-01

    The World Wide Web Initiative has provided a means for providing hypertext and multimedia based information across the whole INTERNET. Many applications have been developed on such http servers. At Cardiff we have developed an http hypertext-based multimedia server, the Cardiff Information Server, using the widely available Mosaic system. The server provides a variety of information, ranging from the provision of teaching modules, on-line documentation, and timetables for departmental activities to more light-hearted hobby interests. One important and novel development to the server has been the development of courseware facilities. This ranges from the provision of on-line lecture notes, exercises and their solutions to more interactive teaching packages. A variety of disciplines have benefited, notably Computer Vision and Image Processing, but also C programming, X Windows, Computer Graphics and Parallel Computing. This paper will address the issues of the implementation of the Computer Vision and Image Processing packages, the advantages gained from using a hypertext based system, and will also relate practical experiences of using the packages in a class environment. The paper addresses issues of how best to provide information in such a hypertext based system and how interactive image processing packages can be developed and integrated into courseware. The suite of tools developed facilitates a flexible and powerful courseware package that has proved popular in the classroom and over the Internet. The paper will also detail many future developments we see possible. One of the key points raised in the paper is that Mosaic's hypertext language (html) is extremely powerful and yet relatively straightforward to use. It is also possible to link in Unix calls so that programs and shells can be executed. This provides a powerful suite of utilities that can be exploited to develop many packages.

  14. CH Packaging Program Guidance

    SciTech Connect

    Washington TRU Solutions LLC

    2005-02-28

    The purpose of this document is to provide the technical requirements for preparation for use, operation, inspection, and maintenance of a Transuranic Package Transporter Model II (TRUPACT-II), a HalfPACT shipping package, and directly related components. This document complies with the minimum requirements as specified in the TRUPACT-II Safety Analysis Report for Packaging (SARP), HalfPACT SARP, and U.S. Nuclear Regulatory Commission (NRC) Certificates of Compliance (C of C) 9218 and 9279, respectively. In the event of a conflict between this document and the SARP or C of C, the C of C shall govern. The C of Cs state: "each package must be prepared for shipment and operated in accordance with the procedures described in Chapter 7.0, Operating Procedures, of the application." They further state: "each package must be tested and maintained in accordance with the procedures described in Chapter 8.0, Acceptance Tests and Maintenance Program of the Application." Chapter 9.0 of the SARP charges the Waste Isolation Pilot Plant (WIPP) management and operating (M&O) contractor with assuring packaging is used in accordance with the requirements of the C of C. Because the packaging is NRC-approved, users need to be familiar with Title 10 Code of Federal Regulations (CFR) §71.8. Any time a user suspects or has indications that the conditions of approval in the C of C were not met, the Carlsbad Field Office (CBFO) shall be notified immediately. The CBFO will evaluate the issue and notify the NRC if required.

  15. Food Packaging Materials

    NASA Technical Reports Server (NTRS)

    1978-01-01

    The photos show a few of the food products packaged in Alure, a metallized plastic material developed and manufactured by St. Regis Paper Company's Flexible Packaging Division, Dallas, Texas. The material incorporates a metallized film originally developed for space applications. Among the suppliers of the film to St. Regis is King-Seeley Thermos Company, Winchester, Massachusetts. Initially used by NASA as a signal-bouncing reflective coating for the Echo 1 communications satellite, the film was developed by a company later absorbed by King-Seeley. The metallized film was also used as insulating material for components of a number of other spacecraft. St. Regis developed Alure to meet a multiple packaging material need: good eye appeal, product protection for long periods and the ability to be used successfully on a wide variety of food packaging equipment. When the cost of aluminum foil skyrocketed, packagers sought substitute metallized materials but experiments with a number of them uncovered problems; some were too expensive, some did not adequately protect the product, some were difficult for the machinery to handle. Alure offers a solution. St. Regis created Alure by sandwiching the metallized film between layers of plastics. The resulting laminated metallized material has the superior eye appeal of foil but is less expensive and more easily machined. Alure effectively blocks out light, moisture and oxygen and therefore gives the packaged food long shelf life. A major packaging firm conducted its own tests of the material and confirmed the advantages of machinability and shelf life, adding that it runs faster on machines than materials used in the past and it decreases product waste; the net effect is increased productivity.

  16. Food packages for Space Shuttle

    NASA Technical Reports Server (NTRS)

    Fohey, M. F.; Sauer, R. L.; Westover, J. B.; Rockafeller, E. F.

    1978-01-01

    The paper reviews food packaging techniques used in space flight missions and describes the system developed for the Space Shuttle. Attention is directed to bite-size food cubes used in Gemini, Gemini rehydratable food packages, Apollo spoon-bowl rehydratable packages, thermostabilized flex pouch for Apollo, tear-top commercial food cans used in Skylab, polyethylene beverage containers, Skylab rehydratable food package, Space Shuttle food package configuration, duck-bill septum rehydration device, and a drinking/dispensing nozzle for Space Shuttle liquids. Constraints and testing of packaging is considered, a comparison of food package materials is presented, and typical Shuttle foods and beverages are listed.

  17. Detecting small holes in packages

    DOEpatents

    Kronberg, James W.; Cadieux, James R.

    1996-01-01

    A package containing a tracer gas, and a method for determining the presence of a hole in the package by sensing the presence of the gas outside the package. The preferred tracer gas, especially for food packaging, is sulfur hexafluoride. A quantity of the gas is added to the package and the package is closed. The concentration of the gas in the atmosphere outside the package is measured and compared to a predetermined value of the concentration of the gas in the absence of the package. A measured concentration greater than the predetermined value indicates the presence of a hole in the package. Measuring may be done in a chamber having a lower pressure than that in the package.
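
    The detection logic of the patent reduces to a single comparison: a hole is indicated when the tracer-gas concentration measured outside the package exceeds its predetermined ambient value. A minimal sketch of that criterion (the concentration figures are illustrative, not taken from the patent):

```python
def has_leak(measured_ppb, baseline_ppb, margin=0.0):
    """Return True when the tracer gas (e.g., SF6) concentration
    measured outside the package exceeds the predetermined ambient
    baseline, optionally by a safety margin to absorb sensor noise."""
    return measured_ppb > baseline_ppb + margin

# Hypothetical readings against an assumed 0.01 ppb ambient baseline
leak = has_leak(0.25, 0.01)    # elevated reading: gas escaping a hole
intact = has_leak(0.01, 0.01)  # reading at baseline: package intact
```

    Performing the measurement in a chamber at lower pressure than the package interior, as the patent describes, simply accelerates the escape of tracer gas and so raises the measured concentration when a hole is present.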

  18. Detecting small holes in packages

    DOEpatents

    Kronberg, J.W.; Cadieux, J.R.

    1996-03-19

    A package containing a tracer gas, and a method for determining the presence of a hole in the package by sensing the presence of the gas outside the package are disclosed. The preferred tracer gas, especially for food packaging, is sulfur hexafluoride. A quantity of the gas is added to the package and the package is closed. The concentration of the gas in the atmosphere outside the package is measured and compared to a predetermined value of the concentration of the gas in the absence of the package. A measured concentration greater than the predetermined value indicates the presence of a hole in the package. Measuring may be done in a chamber having a lower pressure than that in the package. 3 figs.

  19. DMA Modulus as a Screening Parameter for Compatibility of Polymeric Containment Materials with Various Solutions for use in Space Shuttle Microgravity Protein Crystal Growth (PCG) Experiments

    NASA Technical Reports Server (NTRS)

    Wingard, Charles Doug; Munafo, Paul M. (Technical Monitor)

    2002-01-01

    Protein crystals are grown in microgravity experiments inside the Space Shuttle during orbit. Such crystals are basically grown in a five-component system containing a salt, buffer, polymer, organic and water. During these experiments, a number of different polymeric containment materials must be compatible with up to hundreds of different PCG solutions in various concentrations for durations up to 180 days. When such compatibility experiments are performed at NASA/MSFC (Marshall Space Flight Center) simultaneously on containment material samples immersed in various solutions in vials, the samples are rather small out of necessity. DMA modulus was often used as the primary screening parameter for such small samples as a pass/fail criterion for incompatibility issues. In particular, the TA Instruments DMA 2980 film tension clamp was used to test rubber O-rings as small in I.D. as 0.091 in. by cutting through the cross-section at one place, then clamping the stretched linear cord stock at each end. The film tension clamp was also used to successfully test short length samples of medical/surgical grade tubing with an O.D. of 0.125 in.

  20. CH Packaging Program Guidance

    SciTech Connect

    None, None

    2006-04-25

    The purpose of this document is to provide the technical requirements for preparation for use, operation, inspection, and maintenance of a Transuranic Package Transporter Model II (TRUPACT-II), a HalfPACT shipping package, and directly related components. This document complies with the minimum requirements as specified in the TRUPACT-II Safety Analysis Report for Packaging (SARP), HalfPACT SARP, and U.S. Nuclear Regulatory Commission (NRC) Certificates of Compliance (C of C) 9218 and 9279, respectively. In the event of a conflict between this document and the SARP or C of C, the C of C shall govern. The C of Cs state: "each package must be prepared for shipment and operated in accordance with the procedures described in Chapter 7.0, Operating Procedures, of the application." They further state: "each package must be tested and maintained in accordance with the procedures described in Chapter 8.0, Acceptance Tests and Maintenance Program of the Application." Chapter 9.0 of the SARP charges the U.S. Department of Energy (DOE) or the Waste Isolation Pilot Plant (WIPP) management and operating (M&O) contractor with assuring packaging is used in accordance with the requirements of the C of C. Because the packaging is NRC-approved, users need to be familiar with Title 10 Code of Federal Regulations (CFR) §71.8. Any time a user suspects or has indications that the conditions of approval in the C of C were not met, the Carlsbad Field Office (CBFO) shall be notified immediately. The CBFO will evaluate the issue and notify the NRC if required. In accordance with 10 CFR Part 71, certificate holders, packaging users, and contractors or subcontractors who use, design, fabricate, test, maintain, or modify the packaging shall post copies of (1) 10 CFR Part 21 regulations, (2) Section 206 of the Energy Reorganization Act of 1974, and (3) NRC Form 3, Notice to Employees. These documents must be posted in a conspicuous location where the activities subject to these regulations are

  1. CH Packaging Program Guidance

    SciTech Connect

    None, None

    2007-12-13

    The purpose of this document is to provide the technical requirements for preparation for use, operation, inspection, and maintenance of a Transuranic Package Transporter Model II (TRUPACT-II), a HalfPACT shipping package, and directly related components. This document complies with the minimum requirements as specified in the TRUPACT-II Safety Analysis Report for Packaging (SARP), HalfPACT SARP, and U.S. Nuclear Regulatory Commission (NRC) Certificates of Compliance (C of C) 9218 and 9279, respectively. In the event of a conflict between this document and the SARP or C of C, the C of C shall govern. The C of Cs state: "each package must be prepared for shipment and operated in accordance with the procedures described in Chapter 7.0, Operating Procedures, of the application." They further state: "each package must be tested and maintained in accordance with the procedures described in Chapter 8.0, Acceptance Tests and Maintenance Program of the Application." Chapter 9.0 of the SARP charges the U.S. Department of Energy (DOE) or the Waste Isolation Pilot Plant (WIPP) management and operating (M&O) contractor with assuring packaging is used in accordance with the requirements of the C of C. Because the packaging is NRC-approved, users need to be familiar with Title 10 Code of Federal Regulations (CFR) §71.8. Any time a user suspects or has indications that the conditions of approval in the C of C were not met, the Carlsbad Field Office (CBFO) shall be notified immediately. The CBFO will evaluate the issue and notify the NRC if required. In accordance with 10 CFR Part 71, certificate holders, packaging users, and contractors or subcontractors who use, design, fabricate, test, maintain, or modify the packaging shall post copies of (1) 10 CFR Part 21 regulations, (2) Section 206 of the Energy Reorganization Act of 1974, and (3) NRC Form 3, Notice to Employees. 
These documents must be posted in a conspicuous location where the activities subject to these regulations are

  2. MMIC packaging with Waffleline

    NASA Astrophysics Data System (ADS)

    Perry, R. W.; Ellis, T. T.; Schineller, E. R.

    1990-06-01

    The design principle of Waffleline, a patented MMIC packaging technology, is discussed, and several recent applications are described and illustrated with drawings, diagrams, and photographs. Standard Waffleline is a foil-covered waffle-iron-like grid with dielectric-coated signal and power wires running in the channels and foil-removed holes for mounting prepackaged chips or chip carriers. With spacing of 50 mils between center conductors, this material is applicable at frequencies up to 40 GHz; EHF devices require Waffleline with 25-mil spacing. Applications characterized include a subassembly for a man-transportable SHF satellite-communication terminal, a transmitter driver for a high-power TWT, and a 60-GHz receiver front end (including an integrated monolithic microstrip antenna, a low-noise amplifier, a mixer, and an IF amplifier in a 0.25-inch-thick 1.6-inch-diameter package). The high package density and relatively low cost of Waffleline are emphasized.

  3. Waste package reliability analysis

    SciTech Connect

    Pescatore, C.; Sastre, C.

    1983-01-01

    Proof of future performance of a complex system such as a high-level nuclear waste package over a period of hundreds to thousands of years cannot be had in the ordinary sense of the word. The general method of probabilistic reliability analysis could provide an acceptable framework to identify, organize, and convey the information necessary to satisfy the criterion of reasonable assurance of waste package performance according to the regulatory requirements set forth in 10 CFR 60. General principles which may be used to evaluate the qualitative and quantitative reliability of a waste package design are indicated and illustrated with a sample calculation of a repository concept in basalt. 8 references, 1 table.

  4. Parallel flow diffusion battery

    DOEpatents

    Yeh, Hsu-Chi; Cheng, Yung-Sung

    1984-08-07

    A parallel flow diffusion battery for determining the mass distribution of an aerosol has a plurality of diffusion cells mounted in parallel to an aerosol stream, each diffusion cell including a stack of mesh wire screens of different density.

  5. Parallel flow diffusion battery

    DOEpatents

    Yeh, H.C.; Cheng, Y.S.

    1984-01-01

    A parallel flow diffusion battery for determining the mass distribution of an aerosol has a plurality of diffusion cells mounted in parallel to an aerosol stream, each diffusion cell including a stack of mesh wire screens of different density.

  6. Ada Namelist Package

    NASA Technical Reports Server (NTRS)

    Klumpp, Allan R.

    1991-01-01

Ada Namelist Package, developed for Ada programming language, enables calling program to read and write FORTRAN-style namelist files. Features are: handling of any combination of types defined by user; ability to read vectors, matrices, and slices of vectors and matrices; handling of mismatches between variables in namelist file and those in programmed list of namelist variables; and ability to avoid searching entire input file for each variable. Principal benefits derived by user: ability to read and write namelist-readable files, ability to detect most file errors in initialization phase, and organization keeping number of instantiated units to few packages rather than to many subprograms.

  7. SPHINX experimenters information package

    SciTech Connect

    Zarick, T.A.

    1996-08-01

    This information package was prepared for both new and experienced users of the SPHINX (Short Pulse High Intensity Nanosecond X-radiator) flash X-Ray facility. It was compiled to help facilitate experiment design and preparation for both the experimenter(s) and the SPHINX operational staff. The major areas covered include: Recording Systems Capabilities, Recording System Cable Plant, Physical Dimensions of SPHINX and the SPHINX Test Cell, SPHINX Operating Parameters and Modes, Dose Rate Map, Experiment Safety Approval Form, and a Feedback Questionnaire. This package will be updated as the SPHINX facilities and capabilities are enhanced.

  8. PARAMESH: A Parallel Adaptive Mesh Refinement Community Toolkit

    NASA Technical Reports Server (NTRS)

    MacNeice, Peter; Olson, Kevin M.; Mobarry, Clark; deFainchtein, Rosalinda; Packer, Charles

    1999-01-01

    In this paper, we describe a community toolkit which is designed to provide parallel support with adaptive mesh capability for a large and important class of computational models, those using structured, logically cartesian meshes. The package of Fortran 90 subroutines, called PARAMESH, is designed to provide an application developer with an easy route to extend an existing serial code which uses a logically cartesian structured mesh into a parallel code with adaptive mesh refinement. Alternatively, in its simplest use, and with minimal effort, it can operate as a domain decomposition tool for users who want to parallelize their serial codes, but who do not wish to use adaptivity. The package can provide them with an incremental evolutionary path for their code, converting it first to uniformly refined parallel code, and then later if they so desire, adding adaptivity.
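The domain-decomposition role described above can be illustrated with a minimal sketch (illustrative Python only, not the PARAMESH Fortran 90 API): a logically Cartesian 2D mesh is split into uniform sub-blocks, one per processor, exactly as a decomposition tool would before any adaptivity is added.

```python
# Illustrative sketch of uniform domain decomposition of a logically
# Cartesian mesh (not the PARAMESH API): each of px*py ranks owns one
# contiguous sub-block of an nx-by-ny cell grid.

def decompose(nx, ny, px, py):
    """Return a dict mapping rank -> (i0, i1, j0, j1) half-open index
    ranges, so every cell is owned by exactly one rank."""
    blocks = {}
    for rj in range(py):
        for ri in range(px):
            rank = rj * px + ri
            i0 = ri * nx // px
            i1 = (ri + 1) * nx // px
            j0 = rj * ny // py
            j1 = (rj + 1) * ny // py
            blocks[rank] = (i0, i1, j0, j1)
    return blocks

blocks = decompose(8, 8, 2, 2)
# Sanity check: the sub-blocks tile the grid with no overlap or gaps.
owned = sum((i1 - i0) * (j1 - j0) for i0, i1, j0, j1 in blocks.values())
assert owned == 64
```

A refinement step in an AMR code would then subdivide individual blocks rather than the whole grid, which is why the block is the natural unit of parallel work.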

  9. Parallel processing ITS

    SciTech Connect

    Fan, W.C.; Halbleib, J.A. Sr.

    1996-09-01

    This report provides a users' guide for parallel processing ITS on a UNIX workstation network, a shared-memory multiprocessor or a massively-parallel processor. The parallelized version of ITS is based on a master/slave model with message passing. Parallel issues such as random number generation, load balancing, and communication software are briefly discussed. Timing results for example problems are presented for demonstration purposes.
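The master/slave model with message passing can be sketched as follows (a toy Python illustration using threads and queues as the "messages"; the actual ITS parallelization is Fortran with a message-passing library, and the toy tally below is invented for the example). The per-slave seeding mirrors the random-number-stream issue the report notes.

```python
# Minimal master/slave task farm: the master sends batch-size messages,
# slaves compute tallies with independently seeded random streams and
# send results back, and the master combines them.

import queue
import random
import threading

def slave(rank, tasks, results):
    rng = random.Random(rank)          # independent stream per slave
    while True:
        n = tasks.get()
        if n is None:                  # "stop" message from the master
            break
        # Toy tally: fraction of histories surviving 3 coin-flip events
        survived = sum(
            all(rng.random() < 0.5 for _ in range(3)) for _ in range(n)
        )
        results.put(survived)          # message back to the master

def master(n_slaves=4, batches=8, histories=500):
    tasks, results = queue.Queue(), queue.Queue()
    workers = [threading.Thread(target=slave, args=(r, tasks, results))
               for r in range(n_slaves)]
    for w in workers:
        w.start()
    for _ in range(batches):
        tasks.put(histories)           # master distributes the work
    for _ in range(n_slaves):
        tasks.put(None)                # then tells each slave to stop
    total = sum(results.get() for _ in range(batches))
    for w in workers:
        w.join()
    return total / (batches * histories)

estimate = master()                    # expected value is 0.5**3 = 0.125
```

Because slaves pull batches as they finish, faster processors naturally take more work, which is the simple load-balancing property of this pattern.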

  10. Introduction to parallel programming

    SciTech Connect

    Brawer, S.

    1989-01-01

    This book describes parallel programming and all the basic concepts illustrated by examples in a simplified FORTRAN. Concepts covered include: The parallel programming model; The creation of multiple processes; Memory sharing; Scheduling; Data dependencies. In addition, a number of parallelized applications are presented, including a discrete-time, discrete-event simulator, numerical integration, Gaussian elimination, and parallelized versions of the traveling salesman problem and the exploration of a maze.
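One of the book's applications, numerical integration, illustrates why it parallelizes so cleanly: the interval splits into chunks with no data dependencies between them. A small sketch (in Python with a thread pool for brevity; the book's examples are in a simplified FORTRAN):

```python
# Parallel numerical integration: each worker integrates one chunk of
# the interval independently, then the partial sums are combined.

from concurrent.futures import ThreadPoolExecutor

def trapezoid(f, a, b, n):
    """Composite trapezoid rule on [a, b] with n subintervals."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return s * h

def parallel_integrate(f, a, b, n=1000, n_procs=4):
    h = (b - a) / n_procs
    chunks = [(a + i * h, a + (i + 1) * h) for i in range(n_procs)]
    with ThreadPoolExecutor(n_procs) as pool:
        parts = pool.map(
            lambda ab: trapezoid(f, ab[0], ab[1], n // n_procs), chunks)
    return sum(parts)                  # the only shared-data step

# Integrate x^2 on [0, 1]; the exact answer is 1/3.
approx = parallel_integrate(lambda x: x * x, 0.0, 1.0)
```

The single reduction at the end is the only point where the processes interact, in contrast to the book's Gaussian-elimination example, where data dependencies between rows force much more coordination.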

  11. Future trends in electronic packaging

    NASA Astrophysics Data System (ADS)

    Elshabini, Aicha; Wang, Gangqiang; Barlow, Fred

    2006-03-01

    Electronic packaging is traditionally defined as the back-end process that transforms bare integrated circuits (IC) into functional products. As the IC feature size decreases and the size of silicon wafer increases, the cost per IC is reduced and the performance is enhanced. The future IC chips will be larger in size, have more input/output terminals (I/Os), and require higher power. In addition to the advancements in IC technology, electronic packaging is also driven by the market requirements for low cost, small size, and multi-functional electronic products. In response to these requirements, packaging related areas such as design, packaging architectures, materials, processes, and manufacturing equipment are all changing rapidly. Wafer-level packaging (WLP) offers the benefits of low cost and smallest size for single chip packages, since packaging is done at the wafer level rather than on individual dies. Once packages reach the horizontal limit of their dimensions, 3D stacking provides more efficient packages by expanding them in the vertical dimension. Functional integration is achieved with 3D stacking architectures. System in package (SiP), one of the solutions to system integration, incorporates electronics, non-electronic devices such as optical devices, biological devices, micro-electro-mechanical systems (MEMS), etc., and interconnections in a single package, to form smart structures or microsystems. MEMS devices require specialized packaging to serve new market applications. This paper and presentation describe the technology requirements and challenges of these advancing packaging areas. The potential solutions and future trends are presented.

  12. Some parallel algorithms on the four processor Cray X-MP4 supercomputer

    SciTech Connect

    Kincaid, D.R.; Oppe, T.C.

    1988-05-01

    Three numerical studies of parallel algorithms on a four-processor Cray X-MP4 supercomputer are presented. These numerical experiments involve the following: a parallel version of ITPACKV 2C, a package for solving large sparse linear systems; a parallel version of the conjugate gradient method with line Jacobi preconditioning; and several parallel algorithms for computing the LU-factorization of dense matrices. 27 refs., 4 tabs.
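The preconditioned conjugate gradient skeleton behind the second study can be sketched in a few lines (pure Python for clarity; this uses point Jacobi, M = diag(A), rather than the line Jacobi variant of the study, and the small test system is invented for illustration):

```python
# Preconditioned conjugate gradients with a Jacobi (diagonal)
# preconditioner, for a symmetric positive definite system A x = b.

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def pcg(A, b, tol=1e-10, max_iter=100):
    n = len(b)
    x = [0.0] * n
    r = b[:]                                   # r = b - A*0
    m_inv = [1.0 / A[i][i] for i in range(n)]  # Jacobi: M = diag(A)
    z = [mi * ri for mi, ri in zip(m_inv, r)]  # z = M^{-1} r
    p = z[:]
    rz = dot(r, z)
    for _ in range(max_iter):
        Ap = matvec(A, p)
        alpha = rz / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if dot(r, r) ** 0.5 < tol:
            break
        z = [mi * ri for mi, ri in zip(m_inv, r)]
        rz_new = dot(r, z)
        beta = rz_new / rz
        p = [zi + beta * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x

# 1D Laplacian (SPD tridiagonal) test system; exact solution is [1, 1, 1].
A = [[2.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 2.0]]
b = [1.0, 0.0, 1.0]
x = pcg(A, b)
```

In a parallel PCG, the vector updates are local to each subdomain while the two dot products and the matrix-vector product require communication, which is what makes the preconditioner and matvec the focus of the studies above.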

  13. AN ADA NAMELIST PACKAGE

    NASA Technical Reports Server (NTRS)

    Klumpp, A. R.

    1994-01-01

    The Ada Namelist Package, developed for the Ada programming language, enables a calling program to read and write FORTRAN-style namelist files. A namelist file consists of any number of assignment statements in any order. Features of the Ada Namelist Package are: the handling of any combination of user-defined types; the ability to read vectors, matrices, and slices of vectors and matrices; the handling of mismatches between variables in the namelist file and those in the programmed list of namelist variables; and the ability to avoid searching the entire input file for each variable. The principal user benefits of this software are the following: the ability to write namelist-readable files, the ability to detect most file errors in the initialization phase, a package organization that reduces the number of instantiated units to a few packages rather than to many subprograms, a reduced number of restrictions, and an increased execution speed. The Ada Namelist Package reads data from an input file into variables declared within a user program. It then writes data from the user program to an output file, printer, or display. The input file contains a sequence of assignment statements in arbitrary order. The output is in namelist-readable form. There is a one-to-one correspondence between namelist I/O statements executed in the user program and variables read or written. Nevertheless, in the input file, mismatches are allowed between assignment statements in the file and the namelist read procedure statements in the user program. The Ada Namelist Package itself is non-generic. However, it has a group of nested generic packages following the non-generic opening portion. The opening portion declares a variety of user-accessible constants, variables and subprograms. The subprograms are procedures for initializing namelists for reading, and for reading and writing strings. The subprograms are also functions for analyzing the content of the current dataset and diagnosing errors.
Two nested
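The file format the package handles, and its tolerance of mismatches, can be shown with a toy reader (a Python sketch, not the Ada package itself; the variable names and the type-driven conversion rule are assumptions made for the example):

```python
# Toy namelist-style reader: a namelist is a sequence of "name = value"
# assignments in any order. Variables are matched by name, so file order
# and program order may differ, and unknown names are reported rather
# than treated as fatal errors.

def read_namelist(text, variables):
    """Update `variables` (dict of name -> value) from namelist text.

    Returns the names found in the file but absent from `variables`,
    mirroring the package's handling of mismatches.
    """
    unknown = []
    for line in text.splitlines():
        line = line.strip()
        if not line or "=" not in line:
            continue
        name, _, value = (part.strip() for part in line.partition("="))
        if name in variables:
            # Convert using the type of the variable's current value.
            variables[name] = type(variables[name])(value)
        else:
            unknown.append(name)
    return unknown

config = {"nx": 0, "tol": 0.0, "title": ""}
leftover = read_namelist("tol = 1e-6\ntitle = run1\nnx = 64\njunk = 9",
                         config)
```

Note the assignments arrive in a different order than the variables are declared, and the unmatched name `junk` is simply collected, which is the mismatch behaviour the abstract describes.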

  15. Parallel simulation today

    NASA Technical Reports Server (NTRS)

    Nicol, David; Fujimoto, Richard

    1992-01-01

    This paper surveys topics that presently define the state of the art in parallel simulation. Included in the tutorial are discussions on new protocols, mathematical performance analysis, time parallelism, hardware support for parallel simulation, load balancing algorithms, and dynamic memory management for optimistic synchronization.

  16. Research in parallel computing

    NASA Technical Reports Server (NTRS)

    Ortega, James M.; Henderson, Charles

    1994-01-01

    This report summarizes work on parallel computations for NASA Grant NAG-1-1529 for the period 1 Jan. - 30 June 1994. Short summaries on highly parallel preconditioners, target-specific parallel reductions, and simulation of delta-cache protocols are provided.

  17. The gputools package enables GPU computing in R.

    PubMed

    Buckner, Joshua; Wilson, Justin; Seligman, Mark; Athey, Brian; Watson, Stanley; Meng, Fan

    2010-01-01

    By default, the R statistical environment does not make use of parallelism. Researchers may resort to expensive solutions such as cluster hardware for large analysis tasks. Graphics processing units (GPUs) provide an inexpensive and computationally powerful alternative. Using R and the CUDA toolkit from Nvidia, we have implemented several functions commonly used in microarray gene expression analysis for GPU-equipped computers. R users can take advantage of the better performance provided by an Nvidia GPU. The package is available from CRAN, the R project's repository of packages, at http://cran.r-project.org/web/packages/gputools. More information about our gputools R package is available at http://brainarray.mbni.med.umich.edu/brainarray/Rgpgpu.

  18. Radioactive waste disposal package

    DOEpatents

    Lampe, Robert F.

    1986-01-01

    A radioactive waste disposal package comprising a canister for containing vitrified radioactive waste material and a sealed outer shell encapsulating the canister. A solid block of filler material is supported in said shell and convertible into a liquid state for flow into the space between the canister and outer shell and subsequently hardened to form a solid, impervious layer occupying such space.

  19. Radioactive waste disposal package

    DOEpatents

    Lampe, Robert F.

    1986-11-04

    A radioactive waste disposal package comprising a canister for containing vitrified radioactive waste material and a sealed outer shell encapsulating the canister. A solid block of filler material is supported in said shell and convertible into a liquid state for flow into the space between the canister and outer shell and subsequently hardened to form a solid, impervious layer occupying such space.

  20. Metric Education Evaluation Package.

    ERIC Educational Resources Information Center

    Kansky, Bob; And Others

    This document was developed out of a need for a complete, carefully designed set of evaluation instruments and procedures that might be applied in metric inservice programs across the nation. Components of this package were prepared in such a way as to permit local adaptation to the evaluation of a broad spectrum of metric education activities.…

  1. Nutrition Learning Packages.

    ERIC Educational Resources Information Center

    World Health Organization, Geneva (Switzerland).

    This book presents nine packages of learning materials for trainers to use in teaching community health workers to carry out the nutrition element of their jobs. Lessons are intended to help health workers acquire skill in presenting to communities the principles and practice of good nutrition. Responding to the most common causes of poor…

  2. Automatic Differentiation Package

    SciTech Connect

    Gay, David M.; Phipps, Eric; Bartlett, Roscoe

    2007-03-01

    Sacado is an automatic differentiation package for C++ codes using operator overloading and C++ templating. Sacado provides forward, reverse, and Taylor polynomial automatic differentiation classes and utilities for incorporating these classes into C++ codes. Users can compute derivatives of computations arising in engineering and scientific applications, including nonlinear equation solving, time integration, sensitivity analysis, stability analysis, optimization and uncertainty quantification.

  3. Learning Activity Package, Algebra.

    ERIC Educational Resources Information Center

    Evans, Diane

    A set of ten teacher-prepared Learning Activity Packages (LAPs) in beginning algebra and nine in intermediate algebra, these units cover sets, properties of operations, number systems, open expressions, solution sets of equations and inequalities in one and two variables, exponents, factoring and polynomials, relations and functions, radicals,…

  4. Radiographic film package

    SciTech Connect

    Muylle, W. E.

    1985-08-27

    A radiographic film package for non-destructive testing, comprising a radiographic film sheet, an intensifying screen with a layer of lead bonded to a paper foil, and a vacuum heat-sealed wrapper with a layer of aluminum and a heat-sealed easy-peelable thermoplastic layer.

  5. YWCA Vocational Readiness Package.

    ERIC Educational Resources Information Center

    Scott, Jeanne

    This document outlines, in detail, the Vocational Readiness Package for young girls, which is a week-long program utilizing simulation games and role-playing, while employing peer group counseling techniques to dramatize the realities concerning women in marriage and careers today. After three years of using this program, the authors have compiled…

  6. Packaging Materials Properties Data

    SciTech Connect

    Leduc, D.

    1991-10-30

    Several energy absorbing materials are used in nuclear weapons component shipping containers recently designed for the Y-12 Plant Program Management Packaging Group. As a part of the independent review procedure leading to Certificates of Compliance, the U.S. Department of Energy Technical Safety Review Panels requested compression versus deflection data on these materials. This report is a compilation of that data.

  7. Packaging materials properties data

    SciTech Connect

    Walker, M.S.

    1991-01-01

    Several energy absorbing materials are used in nuclear weapons component shipping containers recently designed for the Y-12 Plant Program Management Packaging Group. As a part of the independent review procedure leading to Certificates of Compliance, the US Department of Energy Technical Safety Review Panels requested compression versus deflection data on these materials. This report is a compilation of that data.

  8. Project Information Packages Kit.

    ERIC Educational Resources Information Center

    RMC Research Corp., Mountain View, CA.

    Presented are an overview booklet, a project selection guide, and six Project Information Packages (PIPs) for six exemplary projects serving underachieving students in grades k through 9. The overview booklet outlines the PIP projects and includes a chart of major project features. A project selection guide reviews the PIP history, PIP contents,…

  9. Type B drum packages

    SciTech Connect

    McCoy, J.C.

    1994-08-01

    The Type B drum packages (TBD) are conceptualized as a family of containers in which a single 208 L or 114 L (55 gal or 30 gal) drum containing Type B quantities of radioactive material (RAM) can be packaged for shipment. The TBD containers are being developed to fill a void in the packaging and transportation capabilities of the U.S. Department of Energy, because no existing container for single drums of Type B RAM offers double containment. Several multiple-drum containers currently exist, as well as a number of shielded casks, but the size and weight of these containers present many operational challenges for single-drum shipments. As an alternative, the TBD containers will offer up to three shielded versions (light, medium, and heavy) and one unshielded version, each offering single or optional double containment for a single drum. To reduce operational complexity, all versions will share similar design and operational features where possible. The primary users of the TBD containers are envisioned to be any organization desiring to ship single drums of Type B RAM, such as laboratories, waste retrieval activities, emergency response teams, etc. Currently, the TBD conceptual design is being developed with the final design and analysis to be completed in 1995 to 1996. Testing and certification of the unshielded version are planned to be completed in 1996 to 1997 with production to begin in 1997 to 1998.

  10. Packaging, transportation of LLW

    SciTech Connect

    Shelton, P.

    1994-12-31

    This presentation is an overview of the regulations and requirements for the packaging and transportation of low-level radioactive wastes. United States Environmental Protection Agency and Department of Transportation regulations governing the classification of wastes and the transport documentation are also described.

  11. The Superintendent's Compensation Package.

    ERIC Educational Resources Information Center

    Hertzke, Eugene R.

    Guidelines are presented to help school boards evaluate superintendents and set up their compensation packages. The author describes informal and formal evaluation procedures and states his preference for the latter, since they promote mutual understanding between the superintendent and the board. A flow chart illustrates a superintendent…

  12. Electro-Microfluidic Packaging

    NASA Astrophysics Data System (ADS)

    Benavides, G. L.; Galambos, P. C.

    2002-06-01

    There are many examples of electro-microfluidic products that require cost effective packaging solutions. Industry has responded to a demand for products such as drop ejectors, chemical sensors, and biological sensors. Drop ejectors have consumer applications such as ink jet printing and scientific applications such as patterning self-assembled monolayers or ejecting picoliters of expensive analytes/reagents for chemical analysis. Drop ejectors can be used to perform chemical analysis, combinatorial chemistry, drug manufacture, drug discovery, drug delivery, and DNA sequencing. Chemical and biological micro-sensors can sniff the ambient environment for traces of dangerous materials such as explosives, toxins, or pathogens. Other biological sensors can be used to improve world health by providing timely diagnostics and applying corrective measures to the human body. Electro-microfluidic packaging can easily represent over fifty percent of the product cost and, as with Integrated Circuits (IC), the industry should evolve to standard packaging solutions. Standard packaging schemes will minimize cost and bring products to market sooner.

  13. Waste disposal package

    DOEpatents

    Smith, M.J.

    1985-06-19

    This is a claim for a waste disposal package including an inner or primary canister for containing hazardous and/or radioactive wastes. The primary canister is encapsulated by an outer or secondary barrier formed of a porous ceramic material to control ingress of water to the canister and the release rate of wastes upon breach of the canister. 4 figs.

  14. Parallel algorithm development

    SciTech Connect

    Adams, T.F.

    1996-06-01

    Rapid changes in parallel computing technology are causing significant changes in the strategies being used for parallel algorithm development. One approach is simply to write computer code in a standard language like FORTRAN 77 or with the expectation that the compiler will produce executable code that will run in parallel. The alternatives are: (1) to build explicit message passing directly into the source code; or (2) to write source code without explicit reference to message passing or parallelism, but use a general communications library to provide efficient parallel execution. Application of these strategies is illustrated with examples of codes currently under development.
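The second alternative, source code with no explicit message passing that relies on a general communications library, can be sketched as follows (an illustrative Python sketch; the class and method names are invented for the example, and a real layer would be backed by something like MPI):

```python
# Strategy (2): numerical code calls a small communications layer rather
# than a message-passing API directly, so the same source runs serially
# or in parallel depending on the layer's implementation.

class SerialComm:
    """Trivial one-process 'communications library'. A parallel
    implementation would present the same interface, with rank/size
    taken from the runtime and allreduce_sum doing real communication."""
    rank = 0
    size = 1

    def allreduce_sum(self, value):
        return value               # nothing to combine with one process

    def partition(self, n):
        """Half-open index range [lo, hi) owned by this process."""
        lo = self.rank * n // self.size
        hi = (self.rank + 1) * n // self.size
        return lo, hi

def global_dot(comm, u, v):
    # The numerical kernel: purely local work plus one library call,
    # with no explicit sends or receives in the source.
    lo, hi = comm.partition(len(u))
    local = sum(u[i] * v[i] for i in range(lo, hi))
    return comm.allreduce_sum(local)

result = global_dot(SerialComm(), [1.0, 2.0, 3.0], [4.0, 5.0, 6.0])
```

The trade-off the abstract identifies is visible here: strategy (1) would put the sends and receives directly inside `global_dot`, gaining control at the cost of portability, while this layering keeps the numerical source unchanged across machines.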

  15. High Efficiency Integrated Package

    SciTech Connect

    Ibbetson, James

    2013-09-15

    Solid-state lighting based on LEDs has emerged as a superior alternative to inefficient conventional lighting, particularly incandescent. LED lighting can lead to 80 percent energy savings; can last 50,000 hours – 2-50 times longer than most bulbs; and contains no toxic lead or mercury. However, to enable mass adoption, particularly at the consumer level, the cost of LED luminaires must be reduced by an order of magnitude while achieving superior efficiency, light quality and lifetime. To become viable, energy-efficient replacement solutions must deliver system efficacies of ≥ 100 lumens per watt (LPW) with excellent color rendering (CRI > 85) at a cost that enables payback cycles of two years or less for commercial applications. This development will enable significant site energy savings as it targets commercial and retail lighting applications that are most sensitive to the lifetime operating costs with their extended operating hours per day. If costs are reduced substantially, dramatic energy savings can be realized by replacing incandescent lighting in the residential market as well. In light of these challenges, Cree proposed to develop a multi-chip integrated LED package with an output of > 1000 lumens of warm white light operating at an efficacy of at least 128 LPW with a CRI > 85. This product will serve as the light engine for replacement lamps and luminaires. At the end of the proposed program, this integrated package was to be used in a proof-of-concept lamp prototype to demonstrate the component’s viability in a common form factor. During this project Cree SBTC developed an efficient, compact warm-white LED package with an integrated remote color down-converter. Via a combination of intensive optical, electrical, and thermal optimization, a package design was obtained that met nearly all project goals. This package emitted 1295 lm under instant-on, room-temperature testing conditions, with an efficacy of 128.4 lm/W at a color temperature of ~2873

  16. Developing a training package.

    PubMed

    Minogue, Virginia; Donskoy, Anne-Laure

    2017-06-12

    Purpose: The purpose of this paper is to outline the development of a training package for service users and carers with an interest in NHS health and social care research. It demonstrates how the developers used their unique experience and expertise as service users and carers to inform their work. Design/methodology/approach: Service users and carers, NHS Research and Development Forum working group members, supported by health professionals, identified a need for research training that was tailored to other service user and carer needs. After reviewing existing provision and drawing on their training and support experience, they developed a training package. Sessions from the training package were piloted, which evaluated positively. In trying to achieve programme accreditation and training roll-out beyond the pilots, the group encountered several challenges. Findings: The training package development group formed good working relationships and a co-production model that proved sustainable. However, challenges were difficult to overcome owing to external factors and financial constraints. Practical implications: Lessons learnt by the team are useful for other service users and carer groups working with health service professionals. Training for service users and carers should be designed to meet their needs; quality and consistency are also important. The relationships between service user and carer groups, and professionals are important to understanding joint working. Recognising and addressing challenges at the outset can help develop strategies to overcome challenges and ensure project success. Originality/value: The training package was developed by service users and carers for other service users and carers. Their unique health research experience underpinned the group's values and training development.

  17. Parallel Atomistic Simulations

    SciTech Connect

    HEFFELFINGER,GRANT S.

    2000-01-18

    Algorithms developed to enable the use of atomistic molecular simulation methods with parallel computers are reviewed. Methods appropriate for bonded as well as non-bonded (and charged) interactions are included. While strategies for obtaining parallel molecular simulations have been developed for the full variety of atomistic simulation methods, molecular dynamics and Monte Carlo have received the most attention. Three main types of parallel molecular dynamics simulations have been developed: the replicated-data decomposition, the spatial decomposition, and the force decomposition. For Monte Carlo simulations, parallel algorithms have been developed which can be divided into two categories: those which require a modified Markov chain and those which do not. Parallel algorithms developed for other simulation methods such as Gibbs ensemble Monte Carlo, grand canonical molecular dynamics, and Monte Carlo methods for protein structure determination are also reviewed, and issues such as how to measure parallel efficiency, especially in the case of parallel Monte Carlo algorithms with modified Markov chains, are discussed.
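The first of the three molecular dynamics decompositions can be sketched concretely (sequential Python for illustration only; the pair "force" is a toy linear expression, not a physical potential): in the replicated-data approach, every process holds all atom coordinates, the force loop is split by atom index, and the partial forces are combined by a global sum.

```python
# Replicated-data decomposition sketch: each rank computes forces for
# its share of atoms over all pairs, then partial arrays are summed
# (the step a real parallel code performs with a global reduce).

def pair_force(xi, xj):
    # Toy linear spring between every pair (illustrative only)
    return xj - xi

def partial_forces(x, rank, size):
    n = len(x)
    f = [0.0] * n
    for i in range(rank, n, size):     # this rank's share of atoms
        for j in range(n):
            if i != j:
                f[i] += pair_force(x[i], x[j])
    return f

def replicated_data_forces(x, size=4):
    # Run each "rank" in turn; a parallel code runs these concurrently.
    parts = [partial_forces(x, r, size) for r in range(size)]
    # Global sum over ranks (an allreduce in a real parallel code)
    return [sum(p[i] for p in parts) for i in range(len(x))]

x = [0.0, 1.0, 3.0]
forces = replicated_data_forces(x)     # net force sums to zero
```

The spatial and force decompositions differ in what is distributed (regions of space, or blocks of the pairwise force matrix) rather than in this basic compute-then-reduce structure.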

  18. An Object-Oriented Serial DSMC Simulation Package

    NASA Astrophysics Data System (ADS)

    Liu, Hongli; Cai, Chunpei

    2011-05-01

    A newly developed three-dimensional direct simulation Monte Carlo (DSMC) simulation package, named GRASP ("Generalized Rarefied gAs Simulation Package"), is reported in this paper. This package utilizes the concept of a simulation engine, many C++ features and software design patterns. The package has an open architecture which can benefit further development and maintenance of the code. To reduce the engineering time for three-dimensional models, a hybrid grid scheme, combined with a flexible data structure implemented in C++, is used in this package. This scheme utilizes a local data structure based on the computational cell to achieve high performance on workstation processors. This data structure allows the DSMC algorithm to be very efficiently parallelized with domain decomposition and it provides much flexibility in terms of grid types. This package can utilize traditional structured, unstructured or hybrid grids within the framework of a single code to model arbitrarily complex geometries and to simulate rarefied gas flows. Benchmark test cases indicate that this package has satisfactory accuracy for complex rarefied gas flows.
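The cell-local data structure that DSMC codes rely on can be illustrated with a toy sketch (Python, not GRASP's C++ internals; the 1D grid and the one-pair-per-cell selection are simplifications made for the example). Particles are binned into cells, and collision partners are chosen only among particles sharing a cell, which is why per-cell storage is the natural unit for both performance and domain decomposition.

```python
# Toy DSMC-style cell data structure: bin particles into cells, then
# draw collision-candidate pairs from within each cell.

import random

def bin_particles(positions, n_cells, length=1.0):
    """Map 1D particle positions in [0, length) to per-cell index lists."""
    cells = [[] for _ in range(n_cells)]
    for idx, xpos in enumerate(positions):
        c = min(int(xpos / length * n_cells), n_cells - 1)
        cells[c].append(idx)
    return cells

def candidate_pairs(cells, rng):
    """Pick one random candidate pair per cell holding >= 2 particles
    (real DSMC draws a computed number of pairs per cell per step)."""
    pairs = []
    for members in cells:
        if len(members) >= 2:
            pairs.append(tuple(rng.sample(members, 2)))
    return pairs

rng = random.Random(0)
positions = [rng.random() for _ in range(50)]
cells = bin_particles(positions, n_cells=10)
pairs = candidate_pairs(cells, rng)
```

Because all collision work touches only one cell's list, assigning contiguous blocks of cells to different processors decomposes the computation with communication needed only when particles move between cells.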

  19. Security Package for the VAX

    NASA Technical Reports Server (NTRS)

    Marks, V. J.; Benigue, C. E.

    1983-01-01

    Four programs deal with intruders and resource management. Package available from COSMIC provides DEC VAX-11/780 with certain "deterrent" security features. Although package is not comprehensive security system, of interest for any VAX installation where security is concern.

  20. Food packaging history and innovations.

    PubMed

    Risch, Sara J

    2009-09-23

    Food packaging has evolved from simply a container to hold food to something today that can play an active role in food quality. Many packages are still simply containers, but they have properties that have been developed to protect the food. These include barriers to oxygen, moisture, and flavors. Active packaging, or that which plays an active role in food quality, includes some microwave packaging as well as packaging that has absorbers built in to remove oxygen from the atmosphere surrounding the product or to provide antimicrobials to the surface of the food. Packaging has allowed access to many foods year-round that otherwise could not be preserved. It is interesting to note that some packages have actually allowed the creation of new categories in the supermarket. Examples include microwave popcorn and fresh-cut produce, which owe their existence to the unique packaging that has been developed.

  1. PAYLOAD PACKAGING DESIGN - ALOUETTE SATELLITE,

    DTIC Science & Technology

    A description of satellite packaging design, a discussion of factors influencing design, and an enumeration of some practical rules found generally useful during the payload packaging of the Alouette spacecraft are presented. (Author)

  2. Packaging legislation. Objectives and consequences.

    PubMed

    Christmann, H

    1995-05-01

    The recently published Directive on packaging and packaging waste makes new demands on the industry. This article highlights the key areas and raises some of the issues that must be confronted in the future.

  3. Sustainable Library Development Training Package

    ERIC Educational Resources Information Center

    Peace Corps, 2012

    2012-01-01

    This Sustainable Library Development Training Package supports Peace Corps' Focus In/Train Up strategy, which was implemented following the 2010 Comprehensive Agency Assessment. Sustainable Library Development is a technical training package in Peace Corps programming within the Education sector. The training package addresses the Volunteer…

  4. Parenteral packaging waste reduction.

    PubMed

    Baetz, B W

    1990-08-01

    The consumption of pharmaceutical products generates waste materials which can cause significant environmental impact when incinerated or landfilled. The purpose of this work is to stimulate discussion among hospital pharmacists and purchasing managers relating to the waste management aspects of their purchasing decisions. As a case study example, a number of commercially available "single use" parenterals are evaluated from a waste reduction perspective, for both the product container and for the packaging of these containers. Glass vials are non-incinerable, and are currently non-recyclable due to the higher melting temperatures required for borosilicate glass. However, plastic vials are potentially both incinerable and recyclable. Packaging quantities are considerably lower for plastic vials on a unit container basis, and also vary to a measurable degree between different manufacturers for a given type of container material. From an environmental perspective, waste reduction potential should become an important criterion in the selection of pharmaceutical products for hospital use.

  5. RH Packaging Program Guidance

    SciTech Connect

    Washington TRU Solutions, LLC

    2003-08-25

    The purpose of this program guidance document is to provide technical requirements for use, operation, inspection, and maintenance of the RH-TRU 72-B Waste Shipping Package and directly related components. This document complies with the requirements as specified in the RH-TRU 72-B Safety Analysis Report for Packaging (SARP), and Nuclear Regulatory Commission (NRC) Certificate of Compliance (C of C) 9212. If there is a conflict between this document and the SARP and/or C of C, the SARP and/or C of C shall govern. The C of C states: ''...each package must be prepared for shipment and operated in accordance with the procedures described in Chapter 7.0, ''Operating Procedures,'' of the application.'' It further states: ''...each package must be tested and maintained in accordance with the procedures described in Chapter 8.0, ''Acceptance Tests and Maintenance Program of the Application.'' Chapter 9.0 of the SARP tasks the Waste Isolation Pilot Plant (WIPP) Management and Operating (M&O) contractor with assuring the packaging is used in accordance with the requirements of the C of C. Because the packaging is NRC approved, users need to be familiar with 10 CFR {section} 71.11, ''Deliberate Misconduct.'' Any time a user suspects or has indications that the conditions of approval in the C of C were not met, the Carlsbad Field Office (CBFO) shall be notified immediately. CBFO will evaluate the issue and notify the NRC if required. This document details the instructions to be followed to operate, maintain, and test the RH-TRU 72-B packaging. This Program Guidance standardizes instructions for all users. Users shall follow these instructions. Following these instructions assures that operations are safe and meet the requirements of the SARP. This document is available on the Internet at: ttp://www.ws/library/t2omi/t2omi.htm. Users are responsible for ensuring they are using the current revision and change notices. Sites may prepare their own document using the word

  6. TIDEV: Tidal Evolution package

    NASA Astrophysics Data System (ADS)

    Cuartas-Restrepo, P.; Melita, M.; Zuluaga, J.; Portilla, B.; Sucerquia, M.; Miloni, O.

    2016-09-01

    TIDEV (Tidal Evolution package) calculates the evolution of rotation for tidally interacting bodies using Efroimsky-Makarov-Williams (EMW) formalism. The package integrates tidal evolution equations and computes the rotational and dynamical evolution of a planet under tidal and triaxial torques. TIDEV accounts for the perturbative effects due to the presence of the other planets in the system, especially the secular variations of the eccentricity. Bulk parameters include the mass and radius of the planet (and those of the other planets involved in the integration), the size and mass of the host star, the Maxwell time and Andrade's parameter. TIDEV also calculates the time scale that a planet takes to be tidally locked as well as the periods of rotation reached at the end of the spin-orbit evolution.

  7. Anticounterfeit packaging technologies

    PubMed Central

    Shah, Ruchir Y.; Prajapati, Prajesh N.; Agrawal, Y. K.

    2010-01-01

    Packaging is the coordinated system that encloses and protects the dosage form. Counterfeit drugs are a major cause of morbidity, mortality, and erosion of public confidence in the healthcare system. High prices and well-known brands make the pharma market most vulnerable, which accounts for top-priority cardiovascular, obesity, and antihyperlipidemic drugs and drugs like sildenafil. Packaging includes overt and covert technologies like barcodes, holograms, sealing tapes, and radio frequency identification devices to preserve the integrity of the pharmaceutical product. To date, however, all available techniques are synthetic and, although they provide considerable protection against counterfeiting, have certain limitations that can be overcome by the application of natural approaches and the principles of nanotechnology. PMID:22247875

  8. Fair Package Assignment

    NASA Astrophysics Data System (ADS)

    Lahaie, Sébastien; Parkes, David C.

    We consider the problem of fair allocation in the package assignment model, where a set of indivisible items, held by a single seller, must be efficiently allocated to agents with quasi-linear utilities. A fair assignment is one that is efficient and envy-free. We consider a model where bidders have superadditive valuations, meaning that items are pure complements. Our central result is that core outcomes are fair and even coalition-fair over this domain, while fair distributions may not even exist for general valuations. Of relevance to auction design, we also establish that the core is equivalent to the set of anonymous-price competitive equilibria, and that superadditive valuations are a maximal domain that guarantees the existence of anonymous-price competitive equilibrium. Our results are analogs of core equivalence results for linear prices in the standard assignment model, and for nonlinear, non-anonymous prices in the package assignment model with general valuations.
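
    Envy-freeness in this quasi-linear setting is straightforward to check computationally: no agent may prefer another agent's bundle-and-payment pair to its own. A minimal Python sketch of that check (a toy illustration under assumed data structures, not the paper's mechanism):

    ```python
    def is_envy_free(values, alloc, payments):
        """Check envy-freeness of a package assignment: with quasi-linear
        utility (value minus payment), no agent should prefer another
        agent's (bundle, payment) pair to its own.
        values[i] maps bundles (frozensets of items) to agent i's value."""
        n = len(alloc)
        for i in range(n):
            u_own = values[i].get(alloc[i], 0) - payments[i]
            for j in range(n):
                if values[i].get(alloc[j], 0) - payments[j] > u_own + 1e-9:
                    return False  # agent i envies agent j's outcome
        return True
    ```

    A core-selecting payment rule would then be one whose outcomes always pass this check on the superadditive domain.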

  9. Parallel digital forensics infrastructure.

    SciTech Connect

    Liebrock, Lorie M.; Duggan, David Patrick

    2009-10-01

    This report documents the architecture and implementation of a parallel digital forensics (PDF) infrastructure. This infrastructure is necessary for supporting the design, implementation, and testing of new classes of parallel digital forensics tools. Digital forensics has become extremely difficult with data sets of one terabyte and larger; the only way to overcome the processing time of these large sets is to identify and develop new parallel algorithms for performing the analysis. To support algorithm research, a flexible base infrastructure is required. A candidate architecture for this base infrastructure was designed, instantiated, and tested by this project, in collaboration with New Mexico Tech. Previous infrastructures were not designed and built specifically for the development and testing of parallel algorithms. With the size of forensics data sets expected to keep increasing significantly, this type of infrastructure support is necessary for continued research in parallel digital forensics.

  10. Aquaculture information package

    SciTech Connect

    Boyd, T.; Rafferty, K.

    1998-08-01

    This package of information is intended to provide background information to developers of geothermal aquaculture projects. The material is divided into eight sections and includes information on market and price information for typical species, aquaculture water quality issues, typical species culture information, pond heat loss calculations, an aquaculture glossary, regional and university aquaculture offices and state aquaculture permit requirements. A bibliography containing 68 references is also included.

  11. Software packager user's guide

    NASA Technical Reports Server (NTRS)

    Callahan, John R.

    1995-01-01

    Software integration is a growing area of concern for many programmers and software managers because the need to build new programs quickly from existing components is greater than ever. This includes building versions of software products for multiple hardware platforms and operating systems, building programs from components written in different languages, and building systems from components that must execute on different machines in a distributed network. The goal of software integration is to make building new programs from existing components more seamless -- programmers should pay minimal attention to the underlying configuration issues involved. Libraries of reusable components and classes are important tools but only partial solutions to software development problems. Even though software components may have compatible interfaces, there may be other reasons, such as differences between execution environments, why they cannot be integrated. Often, components must be adapted or reimplemented to fit into another application because of implementation differences -- they are implemented in different programming languages, dependent on different operating system resources, or must execute on different physical machines. The software packager is a tool that allows programmers to deal with interfaces between software components and ignore complex integration details. The packager takes modular descriptions of the structure of a software system written in the package specification language and produces an integration program in the form of a makefile. If complex integration tools are needed to integrate a set of components, such as remote procedure call stubs, their use is implied by the packager automatically and stub generation tools are invoked in the corresponding makefile. The programmer deals only with the components themselves and not the details of how to build the system on any given platform.
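
    The packager's core translation step (package specification in, makefile out) can be illustrated with a much-simplified sketch. The spec format and helper name below are hypothetical, chosen only to show the idea of deriving build rules from a modular description; the real package specification language is considerably richer:

    ```python
    def emit_makefile(spec):
        """Emit makefile text from a toy package description: spec maps
        each target program to the list of C source files it is built
        from. This mimics, in highly simplified form, the packager's
        translation of package specifications into makefiles."""
        lines = []
        for target, sources in spec.items():
            objs = [s.rsplit(".", 1)[0] + ".o" for s in sources]
            lines.append(f"{target}: {' '.join(objs)}")
            lines.append(f"\tcc -o {target} {' '.join(objs)}")
            for src, obj in zip(sources, objs):
                lines.append(f"{obj}: {src}")
                lines.append(f"\tcc -c {src}")
        return "\n".join(lines) + "\n"
    ```

    The actual packager additionally detects when integration tools such as RPC stub generators are needed and invokes them from the generated makefile.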

  12. Trilinos Web Interface Package

    SciTech Connect

    Hu, Jonathan; Phenow, Michael N.; Sala, Marzio; Tuminaro, Ray S.

    2006-09-01

    WebTrilinos is a scientific portal, a web-based environment for using several Trilinos packages through the web. If you are teaching sparse linear algebra, you can use WebTrilinos to present code snippets and simple scripts and let the students execute them from their browsers. If you want to test linear algebra solvers, you can use the MatrixPortal module: just select problems and options, then plot the results in graphs.

  13. Navy packaging standardization thrusts

    NASA Astrophysics Data System (ADS)

    Kidwell, J. R.

    1982-11-01

    Standardization is a concept that is basic to our world today. The idea of reducing costs through the economics of mass production is an easy one to grasp. Henry Ford started the process of large-scale standardization in this country with the Detroit production lines for his automobiles. In the process, additional benefits accrued, such as improved reliability through design maturity, off-the-shelf repair parts, faster repair time, and a resultant lower cost of ownership (lower life-cycle cost). The need to attain standardization benefits with military equipment exists now. Defense budgets, although recently increased, will not permit us to continue the tremendous investment required to maintain even the status quo and develop new hardware at the same time. More reliable, maintainable, and testable hardware is needed in the Fleet. It is imperative to recognize the obsolescence problems created by the use of high-technology devices in our equipment and to find ways to combat these shortfalls. The Navy has two packaging standardization programs that will be addressed in this paper: the Standard Electronic Modules and Modular Avionics Packaging programs. Following a brief overview of the salient features of each program, the packaging technology aspects of each will be addressed, and developmental areas currently being investigated will be identified.

  14. The GITEWS ocean bottom sensor packages

    NASA Astrophysics Data System (ADS)

    Boebel, O.; Busack, M.; Flueh, E. R.; Gouretski, V.; Rohr, H.; Macrander, A.; Krabbenhoeft, A.; Motz, M.; Radtke, T.

    2010-08-01

    The German-Indonesian Tsunami Early Warning System (GITEWS) aims at reducing the risks posed by events such as the 26 December 2004 Indian Ocean tsunami. To minimize the lead time for tsunami alerts, to avoid false alarms, and to accurately predict tsunami wave heights, real-time observations of ocean bottom pressure from the deep ocean are required. As part of the GITEWS infrastructure, the parallel development of two ocean bottom sensor packages, PACT (Pressure based Acoustically Coupled Tsunameter) and OBU (Ocean Bottom Unit), was initiated. The sensor package requirements included bidirectional acoustic links between the bottom sensor packages and the hosting surface buoys, which are moored nearby. Furthermore, compatibility between these sensor systems and the overall GITEWS data-flow structure and command hierarchy was mandatory. While PACT aims at providing highly reliable, long term bottom pressure data only, OBU is based on ocean bottom seismometers to concurrently record sea-floor motion, necessitating highest data rates. This paper presents the technical design of PACT, OBU and the HydroAcoustic Modem (HAM.node) which is used by both systems, along with first results from instrument deployments off Indonesia.

  15. Languages for parallel architectures

    SciTech Connect

    Bakker, J.W.

    1989-01-01

    This book presents mathematical methods for modelling parallel computer architectures, based on the results of ESPRIT project 415 on computer languages for parallel architectures. The investigations presented incorporate a wide variety of programming styles, including functional, logic, and object-oriented paradigms. Topics covered include Philips's parallel object-oriented language POOL, lazy functional languages, the languages IDEAL, K-LEAF, and FP2, and Petri-net semantics for the AADL language.

  16. Introduction to Parallel Computing

    DTIC Science & Technology

    1992-05-01

    As parallel machines become the wave of the present, tools are increasingly needed to assist programmers in creating parallel tasks and coordinating their activities. Linda was designed to be such a tool, with three important goals in mind: to be portable, efficient, and easy to use.
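
    Linda's coordination primitives (out to deposit a tuple into the shared space, in to withdraw a matching tuple, rd to read one without removing it) can be sketched as a tiny thread-safe tuple space. This is an illustrative toy in Python with hypothetical names and a simplified matching rule, not the actual Linda implementation:

    ```python
    import threading

    class TupleSpace:
        """Minimal sketch of Linda's coordination model. Matching is by
        tuple length with None as a wildcard; in_() and rd() block until
        a matching tuple is available."""
        def __init__(self):
            self._tuples = []
            self._cv = threading.Condition()

        def out(self, *tup):
            with self._cv:
                self._tuples.append(tup)
                self._cv.notify_all()

        def _match(self, pattern):
            for t in self._tuples:
                if len(t) == len(pattern) and all(
                        p is None or p == v for p, v in zip(pattern, t)):
                    return t
            return None

        def in_(self, *pattern):
            with self._cv:
                while (t := self._match(pattern)) is None:
                    self._cv.wait()
                self._tuples.remove(t)  # in withdraws the tuple
                return t

        def rd(self, *pattern):
            with self._cv:
                while (t := self._match(pattern)) is None:
                    self._cv.wait()
                return t  # rd leaves the tuple in place
    ```

    Worker threads coordinate by depositing and withdrawing tuples rather than by sharing variables directly, which is what makes the model portable across architectures.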

  17. Parallel Wolff Cluster Algorithms

    NASA Astrophysics Data System (ADS)

    Bae, S.; Ko, S. H.; Coddington, P. D.

    The Wolff single-cluster algorithm is the most efficient method known for Monte Carlo simulation of many spin models. Due to the irregular size, shape and position of the Wolff clusters, this method does not easily lend itself to efficient parallel implementation, so that simulations using this method have thus far been confined to workstations and vector machines. Here we present two parallel implementations of this algorithm, and show that one gives fairly good performance on a MIMD parallel computer.
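
    For reference, the serial Wolff single-cluster update that such parallel implementations start from can be sketched for the 2D Ising model. This is a standard textbook formulation in Python (names are illustrative), not the paper's parallel code:

    ```python
    import math
    import random

    def wolff_update(spins, L, beta, rng=random):
        """One Wolff single-cluster update for the 2D Ising model on an
        L x L periodic lattice (spins: flat list of +/-1, row-major).
        Returns the size of the flipped cluster."""
        p_add = 1.0 - math.exp(-2.0 * beta)  # bond-activation probability
        seed = rng.randrange(L * L)
        s0 = spins[seed]
        cluster = {seed}
        frontier = [seed]
        while frontier:
            site = frontier.pop()
            x, y = site % L, site // L
            for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                nb = (nx % L) + (ny % L) * L
                if nb not in cluster and spins[nb] == s0 and rng.random() < p_add:
                    cluster.add(nb)
                    frontier.append(nb)
        for site in cluster:  # flip the entire cluster at once
            spins[site] = -s0
        return len(cluster)
    ```

    The irregular shape of the set grown by the frontier loop is exactly what makes distributing this update across processors difficult.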

  18. CALTRANS: A parallel, deterministic, 3D neutronics code

    SciTech Connect

    Carson, L.; Ferguson, J.; Rogers, J.

    1994-04-01

    Our efforts to parallelize the deterministic solution of the neutron transport equation have culminated in a new neutronics code CALTRANS, which has full 3D capability. In this article, we describe the layout and algorithms of CALTRANS and present performance measurements of the code on a variety of platforms. Explicit implementations of the parallel algorithms of CALTRANS using both the function calls of the Parallel Virtual Machine software package (PVM 3.2) and the Meiko CS-2 tagged message passing library (based on the Intel NX/2 interface) are provided in appendices.

  19. 78 FR 19007 - Certain Products Having Laminated Packaging, Laminated Packaging, and Components Thereof...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-28

    ... COMMISSION Certain Products Having Laminated Packaging, Laminated Packaging, and Components Thereof.... 1337, on behalf of Lamina Packaging Innovations LLC of Longview, Texas. An amended complaint was filed... importation of certain products having laminated packaging, laminated packaging, and components thereof...

  20. Application Portable Parallel Library

    NASA Technical Reports Server (NTRS)

    Cole, Gary L.; Blech, Richard A.; Quealy, Angela; Townsend, Scott

    1995-01-01

    Application Portable Parallel Library (APPL) computer program is subroutine-based message-passing software library intended to provide consistent interface to variety of multiprocessor computers on market today. Minimizes effort needed to move application program from one computer to another. User develops application program once and then easily moves application program from parallel computer on which created to another parallel computer. ("Parallel computer" also includes heterogeneous collection of networked computers). Written in C language with one FORTRAN 77 subroutine for UNIX-based computers and callable from application programs written in C language or FORTRAN 77.

  2. PCLIPS: Parallel CLIPS

    NASA Technical Reports Server (NTRS)

    Hall, Lawrence O.; Bennett, Bonnie H.; Tello, Ivan

    1994-01-01

    A parallel version of CLIPS 5.1 has been developed to run on Intel Hypercubes. The user interface is the same as that for CLIPS, with some added commands to allow for parallel calls. A complete version of CLIPS runs on each node of the hypercube. The system has been instrumented to display the time spent in the match, recognize, and act cycles on each node. Only rule-level parallelism is supported. Parallel commands enable the assertion and retraction of facts to/from remote nodes' working memory. Parallel CLIPS was used to implement a knowledge-based command, control, communications, and intelligence (C(sup 3)I) system to demonstrate the fusion of high-level, disparate sources. We discuss the nature of the information fusion problem, our approach, and implementation. Parallel CLIPS has also been used to run several benchmark parallel knowledge bases, such as one to set up a cafeteria. Results from running Parallel CLIPS with parallel knowledge-base partitions indicate that significant speed increases, including superlinear speedup in some cases, are possible.

  3. Parallel Algorithms and Patterns

    SciTech Connect

    Robey, Robert W.

    2016-06-16

    This is a powerpoint presentation on parallel algorithms and patterns. A parallel algorithm is a well-defined, step-by-step computational procedure that emphasizes concurrency to solve a problem. Examples of problems include: Sorting, searching, optimization, matrix operations. A parallel pattern is a computational step in a sequence of independent, potentially concurrent operations that occurs in diverse scenarios with some frequency. Examples are: Reductions, prefix scans, ghost cell updates. We only touch on parallel patterns in this presentation. It really deserves its own detailed discussion which Gabe Rockefeller would like to develop.
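
    One of the patterns named above, the prefix scan, can be sketched with the classic Hillis-Steele doubling scheme. Each pass of the loop below is independent over i and could run in parallel across processors; this Python sketch simulates the passes sequentially:

    ```python
    def prefix_scan(xs):
        """Inclusive prefix sum via the Hillis-Steele doubling pattern:
        log2(n) passes, each combining every element with the one
        `step` positions to its left. Each pass reads only the previous
        pass's array, so all positions could update concurrently."""
        out = list(xs)
        step = 1
        while step < len(out):
            out = [out[i] + (out[i - step] if i >= step else 0)
                   for i in range(len(out))]
            step *= 2
        return out
    ```

    For example, prefix_scan([1, 2, 3, 4]) yields [1, 3, 6, 10]. A reduction is the same idea kept only to its final combined value.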

  4. Plutonium stabilization and packaging system

    SciTech Connect

    1996-05-01

    This document describes the functional design of the Plutonium Stabilization and Packaging System (Pu SPS). The objective of this system is to stabilize and package plutonium metals and oxides of greater than 50% wt, as well as other selected isotopes, in accordance with the requirements of the DOE standard for safe storage of these materials for 50 years. This system will support completion of stabilization and packaging campaigns of the inventory at a number of affected sites before the year 2002. The package will be standard for all sites and will provide a minimum of two uncontaminated, organics free confinement barriers for the packaged material.

  5. Optimal segmentation and packaging process

    DOEpatents

    Kostelnik, Kevin M.; Meservey, Richard H.; Landon, Mark D.

    1999-01-01

    A process for improving packaging efficiency uses three-dimensional, computer-simulated models with various optimization algorithms to determine the optimal segmentation process and packaging configurations based on constraints including container limitations. The present invention is applied to a process for decontaminating, decommissioning (D&D), and remediating a nuclear facility involving the segmentation and packaging of contaminated items in waste containers in order to minimize the number of cuts, maximize packaging density, and reduce worker radiation exposure. A three-dimensional, computer-simulated facility model of the contaminated items is created. The contaminated items are differentiated. The optimal location, orientation, and sequence of the segmentation and packaging of the contaminated items is determined using the simulated model, the algorithms, and various constraints including container limitations. The cut locations and orientations are transposed to the simulated model. The contaminated items are then actually segmented and packaged. The segmentation and packaging may be simulated beforehand. In addition, the contaminated items may be cataloged and recorded.
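
    The patented process uses 3D models and optimization algorithms; as a much-simplified illustration of the packing-density side of such a problem, here is the classic first-fit-decreasing heuristic in one dimension (a hypothetical stand-in for exposition, not the patented method):

    ```python
    def first_fit_decreasing(items, capacity):
        """Pack item sizes into as few fixed-capacity containers as
        possible: sort items largest-first, then place each into the
        first container with enough remaining room, opening a new
        container only when none fits."""
        bins = []  # each bin: [remaining_capacity, [packed items]]
        for size in sorted(items, reverse=True):
            for b in bins:
                if b[0] >= size:
                    b[0] -= size
                    b[1].append(size)
                    break
            else:
                bins.append([capacity - size, [size]])
        return [b[1] for b in bins]
    ```

    The real problem adds cut minimization, item geometry, and radiation-exposure constraints, which is why the patent relies on simulation rather than a simple heuristic.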

  6. ISSUES ASSOCIATED WITH SAFE PACKAGING AND TRANSPORT OF NANOPARTICLES

    SciTech Connect

    Gupta, N.; Smith, A.

    2011-02-14

    Nanoparticles have long been recognized as hazardous substances by personnel working in the field. They are not, however, listed as a separate, distinct category of dangerous goods at present. As dangerous goods or hazardous substances, they require packaging and transportation practices which parallel the established practices for hazardous materials transport. Pending establishment of a distinct category for such materials by the Department of Transportation, existing consensus or industrial protocols must be followed. Action by DOT to establish appropriate packaging and transport requirements is recommended.

  7. Parallel and Distributed Computing.

    DTIC Science & Technology

    1986-12-12

    program was devoted to parallel and distributed computing. Support for this part of the program was obtained from the present Army contract and a... Umesh Vazirani. A workshop on parallel and distributed computing was held from May 19 to May 23, 1986 and drew 141 participants. Keywords: Mathematical programming; Protocols; Randomized algorithms. (Author)

  8. Parallel Lisp simulator

    SciTech Connect

    Weening, J.S.

    1988-05-01

    CSIM is a simulator for parallel Lisp, based on a continuation passing interpreter. It models a shared-memory multiprocessor executing programs written in Common Lisp, extended with several primitives for creating and controlling processes. This paper describes the structure of the simulator, measures its performance, and gives an example of its use with a parallel Lisp program.
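
    The continuation-passing technique underlying such an interpreter can be illustrated for a tiny expression language: every evaluation step receives an explicit continuation k, which is what lets an interpreter suspend, resume, and interleave simulated processes. A generic Python sketch of the style, not CSIM itself:

    ```python
    def cps_eval(expr, env, k):
        """Evaluate a tiny expression language in continuation-passing
        style. Expressions: numbers, variable names (strings),
        ('+', a, b), and ('let', name, val, body). Instead of returning
        values, every step passes its result to the continuation k."""
        if isinstance(expr, (int, float)):
            return k(expr)
        if isinstance(expr, str):
            return k(env[expr])
        op = expr[0]
        if op == '+':
            # evaluate left operand, then right, then combine via k
            return cps_eval(expr[1], env,
                            lambda a: cps_eval(expr[2], env,
                                               lambda b: k(a + b)))
        if op == 'let':
            _, name, val, body = expr
            return cps_eval(val, env,
                            lambda v: cps_eval(body, {**env, name: v}, k))
        raise ValueError(f"unknown form: {op}")
    ```

    Because the "rest of the computation" is reified as k, a simulator can capture it at process-switch points and schedule another simulated processor, which is the essence of CSIM's approach.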

  9. Parallels in History.

    ERIC Educational Resources Information Center

    Mugleston, William F.

    2000-01-01

    Believes that by focusing on the recurrent situations and problems, or parallels, throughout history, students will understand the relevance of history to their own times and lives. Provides suggestions for parallels in history that may be introduced within lectures or as a means to class discussions. (CMK)

  10. Parallel Analog-to-Digital Image Processor

    NASA Technical Reports Server (NTRS)

    Lokerson, D. C.

    1987-01-01

    Proposed integrated-circuit network of many identical units converts analog outputs of imaging arrays of x-ray or infrared detectors to digital outputs. Converter located near imaging detectors, within cryogenic detector package. Because converter output is digital, it lends itself well to multiplexing and to postprocessing for correction of gain and offset errors peculiar to each picture element and its sampling and conversion circuits. Analog-to-digital image processor is massively parallel system for processing data from array of photodetectors. System built as compact integrated circuit located near focal plane. Buffer amplifier for each picture element has different offset.
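
    The postprocessing step described, correcting each picture element's gain and offset after digitization, amounts to an elementwise affine map over the frame. A minimal sketch with hypothetical names, using plain nested lists to stand in for the detector array:

    ```python
    def correct_frame(raw, gain, offset):
        """Apply per-pixel offset subtraction and gain scaling to a
        digitized frame. raw, gain, and offset are same-shaped 2D lists;
        each pixel stores the calibration of its own sampling and
        conversion circuit."""
        return [[(r - o) * g for r, o, g in zip(rr, oo, gg)]
                for rr, oo, gg in zip(raw, offset, gain)]
    ```

    For a raw row [[10, 20]] with offsets [[2, 4]] and gains [[1.0, 0.5]], this yields [[8.0, 8.0]].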

  11. Parallel computing works

    SciTech Connect

    Not Available

    1991-10-23

    An account of the Caltech Concurrent Computation Program (C{sup 3}P), a five-year project that focused on answering the question: Can parallel computers be used to do large-scale scientific computations? As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C{sup 3}P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high-performance computing facility based exclusively on parallel computers. While the initial focus of C{sup 3}P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

  12. Massively parallel mathematical sieves

    SciTech Connect

    Montry, G.R.

    1989-01-01

    The Sieve of Eratosthenes is a well-known algorithm for finding all prime numbers in a given subset of integers. A parallel version of the Sieve is described that produces computational speedups over 800 on a hypercube with 1,024 processing elements for problems of fixed size. Computational speedups as high as 980 are achieved when the problem size per processor is fixed. The method of parallelization generalizes to other sieves and will be efficient on any ensemble architecture. We investigate two highly parallel sieves using scattered decomposition and compare their performance on a hypercube multiprocessor. A comparison of different parallelization techniques for the sieve illustrates the trade-offs necessary in the design and implementation of massively parallel algorithms for large ensemble computers.
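
    The decomposition idea generalizes naturally: once the base primes up to the square root of the range limit are known, each segment of the range can be sieved independently, which is what makes scattering segments across processors effective. A serial Python sketch of one segment's work (illustrative, not the paper's hypercube code):

    ```python
    def sieve_segment(lo, hi, base_primes):
        """Return the primes in [lo, hi), given base_primes = all primes
        up to sqrt(hi). Each segment depends only on the base primes,
        so segments can be assigned to processors independently."""
        is_prime = [True] * (hi - lo)
        for p in base_primes:
            # first multiple of p in the segment, but never p itself
            start = max(p * p, ((lo + p - 1) // p) * p)
            for m in range(start, hi, p):
                is_prime[m - lo] = False
        return [lo + i for i, flag in enumerate(is_prime)
                if flag and lo + i > 1]
    ```

    In a scattered decomposition, each processor receives several such non-contiguous segments, balancing the uneven marking cost across the range.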

  13. Totally parallel multilevel algorithms

    NASA Technical Reports Server (NTRS)

    Frederickson, Paul O.

    1988-01-01

    Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, Robust Multigrid (RMG) of Hackbusch, the FFT-based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, which is referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.

  14. Laboratory Measurements of Synthetic Pyroxenes and their Mixtures with Iron Sulfides as Inorganic Refractory Analogues for Rosetta/VIRTIS' Surface Composition Analysis of 67P/CG

    NASA Astrophysics Data System (ADS)

    Markus, Kathrin; Arnold, Gabriele; Moroz, Ljuba; Henckel, Daniela; Kappel, David; Capaccioni, Fabrizio; Filacchione, Gianrico; Schmitt, Bernard; Tosi, Federico; Érard, Stéphane; Bockelee-Morvan, Dominique; Leyrat, Cedric; VIRTIS Team

    2016-10-01

    The Visible and InfraRed Thermal Imaging Spectrometer VIRTIS on board Rosetta provided 0.25-5.1 µm spectra of 67P/CG's surface (Capaccioni et al., 2015). Thermally corrected reflectance spectra display a low albedo of 0.06 at 0.65 µm, different red VIS and IR spectral slopes, and a broad 3.2 µm band. This absorption feature is due to refractory surface constituents attributed to organic components, but other refractory constituents influence albedo and spectral slopes. Possible contributions of inorganic components to spectral characteristics and spectral variations across the surface should be understood based on laboratory studies and spectral modeling. Although a wide range of silicate compositions was found in "cometary" anhydrous IDPs and cometary dust, Mg-rich crystalline mafic minerals are dominant silicate components. A large fraction of silicate grains are Fe-free enstatites and forsterites that are not found in terrestrial rocks but can be synthesized in order to provide a basis for laboratory studies and comparison with VIRTIS data. We report the results of the synthesis, analyses, and spectral reflectance measurements of Fe-free low-Ca pyroxenes (ortho- and clinoenstatites). These minerals are generally very bright and almost spectrally featureless. However, even trace amounts of Fe-ions produce a significant decrease in the near-UV reflectance and hence can contribute to slope variations. Iron sulfides (troilite, pyrrhotite) are among the most plausible phases responsible for the low reflectance of 67P's surface from the VIS to the NIR. The darkening efficiency of these opaque phases is strongly particle-size dependent. Here we present a series of reflectance spectra of fine-grained synthetic enstatite powders mixed in various proportions with iron sulfide powders. The influence of dark sulfides on reflectance in the near-UV to near-IR spectral ranges is investigated. This study can contribute to understanding the shape of reflectance spectra of 67P

  15. Distribution of H2O and CO2 in the inner coma of 67P/CG as observed by VIRTIS-M onboard Rosetta

    NASA Astrophysics Data System (ADS)

    Capaccioni, F.

    2015-10-01

    VIRTIS (Visible, Infrared and Thermal Imaging Spectrometers) is a dual channel spectrometer; VIRTIS-M (M for Mapper) is a hyper-spectral imager covering a wide spectral range with two detectors: a CCD (VIS) ranging from 0.25 through 1.0 μm and an HgCdTe detector (IR) covering the 1.0 through 5.1 μm region. VIRTIS-M uses a slit and a scan mirror to generate images with spatial resolution of 250 μrad over a FOV of 64 mrad. The second channel is VIRTIS-H (H for High resolution), a point spectrometer with high spectral resolution (λ/Δλ=3000@3 μm) in the range 2-5 μm [1].The VIRTIS instrument has been used to investigate the molecular composition of the coma of 67P/CG by observing resonant fluorescent excitation in the 2 to 5 μm spectral region. The spectrum consists of emission bands superimposed on a background continuum. The strongest features are the bands of H2O at 2.7 μm and the CO2 band at 4.27 μm [1]. The high spectral resolution of VIRTIS-H obtains a detailed description of the fluorescent bands, while the mapping capability of VIRTIS-M extends the coverage in the spatial dimension to map and monitor the abundance of water and carbon dioxide in space and time. We have already reported [2,3,4] some preliminary observations by VIRTIS of H2O and CO2 in the coma. In the present work we perform a systematic mapping of the distribution and variability of these molecules using VIRTIS-M measurements of their band areas. All the spectra were carefully selected to avoid contamination due to nucleus radiance. A median filter is applied on the spatial dimensions of each data cube to minimise the pixel-to-pixel residual variability. This is at the expense of some reduction in the spatial resolution, which is still in the order of few tens of metres and thus adequate for the study of the spatial distribution of the volatiles. Typical spectra are shown in Figure 1

  16. Packaging - Materials review

    SciTech Connect

    Herrmann, Matthias

    2014-06-16

Nowadays, a large number of different electrochemical energy storage systems are known. In the last two decades, development was driven largely by the continuously growing market for portable electronic devices (e.g. cellular phones, laptop computers, camcorders, cameras, tools). Intensive efforts are currently under way to develop systems for the automotive industry within the framework of electrically propelled mobility (e.g. hybrid electric vehicles, plug-in hybrid electric vehicles, full electric vehicles) and also for the energy storage market (e.g. electrical grid stability, renewable energies). Besides the different systems (cell chemistries), electrochemical cells and batteries have been developed and are offered in many shapes, sizes and designs in order to meet the performance and design requirements of widespread applications. Proper packaging is thereby one important technological step in designing optimal, reliable and safe batteries. In this contribution, current packaging approaches for cells and batteries, together with the corresponding materials, are discussed. The focus is on rechargeable systems for industrial applications (i.e. alkaline systems, lithium-ion, lead-acid). In principle, four different cell types (shapes) can be identified: button, cylindrical, prismatic and pouch. Cell size can either be in accordance with international (e.g. International Electrotechnical Commission, IEC) or other standards or can meet application-specific dimensions. Since the cell housing or container, terminals and, if necessary, safety installations are inactive (non-reactive) materials that reduce the energy density of the battery, the development of low-weight packages is a challenging task. In addition, other requirements have to be fulfilled: mechanical stability and durability, sealing (e.g. a high permeation barrier against humidity for lithium-ion technology), high packing efficiency, possible installation of safety devices (current interrupt device

  17. Packaging - Materials review

    NASA Astrophysics Data System (ADS)

    Herrmann, Matthias

    2014-06-01

Nowadays, a large number of different electrochemical energy storage systems are known. In the last two decades, development was driven largely by the continuously growing market for portable electronic devices (e.g. cellular phones, laptop computers, camcorders, cameras, tools). Intensive efforts are currently under way to develop systems for the automotive industry within the framework of electrically propelled mobility (e.g. hybrid electric vehicles, plug-in hybrid electric vehicles, full electric vehicles) and also for the energy storage market (e.g. electrical grid stability, renewable energies). Besides the different systems (cell chemistries), electrochemical cells and batteries have been developed and are offered in many shapes, sizes and designs in order to meet the performance and design requirements of widespread applications. Proper packaging is thereby one important technological step in designing optimal, reliable and safe batteries. In this contribution, current packaging approaches for cells and batteries, together with the corresponding materials, are discussed. The focus is on rechargeable systems for industrial applications (i.e. alkaline systems, lithium-ion, lead-acid). In principle, four different cell types (shapes) can be identified: button, cylindrical, prismatic and pouch. Cell size can either be in accordance with international (e.g. International Electrotechnical Commission, IEC) or other standards or can meet application-specific dimensions. Since the cell housing or container, terminals and, if necessary, safety installations are inactive (non-reactive) materials that reduce the energy density of the battery, the development of low-weight packages is a challenging task. In addition, other requirements have to be fulfilled: mechanical stability and durability, sealing (e.g. a high permeation barrier against humidity for lithium-ion technology), high packing efficiency, possible installation of safety devices (current interrupt device

  18. Components of Adenovirus Genome Packaging

    PubMed Central

    Ahi, Yadvinder S.; Mittal, Suresh K.

    2016-01-01

Adenoviruses (AdVs) are icosahedral viruses with double-stranded DNA (dsDNA) genomes. Genome packaging in AdV is thought to be similar to that seen in dsDNA-containing icosahedral bacteriophages and herpesviruses. Specific recognition of the AdV genome relies on a packaging domain located close to the left end of the viral genome and is carried out by the viral packaging machinery. Our understanding of the role of various components of the viral packaging machinery in AdV genome packaging has greatly advanced in recent years. Characterization of empty capsids assembled in the absence of one or more components involved in packaging, identification of the unique vertex, and demonstration of the role of IVa2, the putative packaging ATPase, in genome packaging have provided compelling evidence that AdVs follow a sequential assembly pathway. This review provides a detailed discussion of the functions of the various viral and cellular factors involved in AdV genome packaging. We conclude by briefly discussing the roles of the empty capsids, assembly intermediates, scaffolding proteins, portal vertex and DNA encapsidating enzymes in AdV assembly and packaging. PMID:27721809

  19. New package for CMOS sensors

    NASA Astrophysics Data System (ADS)

    Diot, Jean-Luc; Loo, Kum Weng; Moscicki, Jean-Pierre; Ng, Hun Shen; Tee, Tong Yan; Teysseyre, Jerome; Yap, Daniel

    2004-02-01

Cost is the main drawback of existing packages for CMOS sensors (mainly the CLCC family), and alternative packages are thus being developed worldwide. In particular, STMicroelectronics has studied a low-cost alternative package based on a QFN structure, still with a cavity. Intensive work was done to optimize the over-molding operation forming the cavity on a metallic lead-frame (a metallic lead-frame is a low-cost substrate allowing very good mechanical definition of the final package). Material selection (thermoset resin and glue for glass sealing) was done through standard reliability tests for cavity packages (Moisture Sensitivity Level 3 followed by temperature cycling, humidity storage and high-temperature storage). As this package concept is new (without leads protruding from the molded cavity), the effect of variations in package dimensions, as well as board layout design, on package lifetime is simulated (during temperature cycling, thermal mismatch between board and package leads to thermal fatigue of the solder joints). These simulations are correlated with an experimental temperature cycling test on daisy-chain packages.

  20. Aristos Optimization Package

    SciTech Connect

    Ridzal, Danis

    2007-03-01

    Aristos is a Trilinos package for nonlinear continuous optimization, based on full-space sequential quadratic programming (SQP) methods. Aristos is specifically designed for the solution of large-scale constrained optimization problems in which the linearized constraint equations require iterative (i.e. inexact) linear solver techniques. Aristos' unique feature is an efficient handling of inexactness in linear system solves. Aristos currently supports the solution of equality-constrained convex and nonconvex optimization problems. It has been used successfully in the area of PDE-constrained optimization, for the solution of nonlinear optimal control, optimal design, and inverse problems.

  1. TASC Graphics Software Package.

    DTIC Science & Technology

    1982-12-01

Extensions were made to allow TGSP to use color graphics. NCAR was designed to be a general plot package for use with many different plotting devices. It is designed to accept high-level commands and generate an intermediate set of commands called metacode and to then use device

  2. Safety Analysis Report for packaging (onsite) steel waste package

    SciTech Connect

    BOEHNKE, W.M.

    2000-07-13

The steel waste package is used primarily for the shipment of remote-handled radioactive waste from the 324 Building to the 200 Area for interim storage. The steel waste package is authorized for shipment of transuranic isotopes. The maximum allowable radioactive material that is authorized is 500,000 Ci. This exceeds the highway route controlled quantity (3,000 A{sub 2}s) and is a Type B packaging.

  3. Parallel State Estimation Assessment with Practical Data

    SciTech Connect

    Chen, Yousu; Jin, Shuangshuang; Rice, Mark J.; Huang, Zhenyu

    2013-07-31

This paper presents a parallel state estimation (PSE) implementation using a preconditioned gradient algorithm and an orthogonal decomposition-based algorithm. Preliminary tests against a commercial Energy Management System (EMS) State Estimation (SE) tool using real-world data are performed. The results show that while the preconditioned gradient algorithm can solve the SE problem more quickly with the help of parallel computing techniques, it may not be well suited to real-world data because of the large condition number of the gain matrix introduced by the wide range of measurement weights. With the help of the PETSc package, and considering one iteration of the SE process, the orthogonal decomposition-based PSE algorithm achieves a 5-20 times speedup over the commercial EMS tool. The developed PSE is thus a promising way to solve the SE problem for large power systems at the SCADA rate and thereby improve grid reliability.
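The conditioning issue the abstract points to can be illustrated with a small weighted least-squares sketch (NumPy, synthetic data; the Jacobian H, weights, and measurements below are illustrative stand-ins, not actual power-system quantities). Forming the gain matrix G = HᵀWH squares the condition number of W^{1/2}H, which is why an orthogonal (QR) decomposition of W^{1/2}H is more robust when measurement weights span many decades:

```python
import numpy as np

rng = np.random.default_rng(1)
H = rng.standard_normal((40, 10))           # measurement Jacobian (stand-in)
w = 10.0 ** rng.uniform(0, 6, size=40)      # weights spanning six decades
x_true = rng.standard_normal(10)
z = H @ x_true                              # noise-free measurements

Wh = np.sqrt(w)[:, None] * H                # W^{1/2} H
zh = np.sqrt(w) * z                         # W^{1/2} z

# 1) Normal equations via the gain matrix, as in the gradient-based solver.
#    cond(G) = cond(Wh)**2, so wide weight ranges hurt this route first.
G = Wh.T @ Wh
x_ne = np.linalg.solve(G, Wh.T @ zh)

# 2) Orthogonal (QR) decomposition of W^{1/2} H, as in the more robust
#    decomposition-based PSE variant: no squaring of the condition number.
Q, R = np.linalg.qr(Wh)
x_qr = np.linalg.solve(R, Q.T @ zh)
```

Both routes recover the state here, but the gain-matrix condition number is the square of that of W^{1/2}H, so the QR route degrades far more gracefully as the weight spread grows.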

  4. Bilingual parallel programming

    SciTech Connect

    Foster, I.; Overbeek, R.

    1990-01-01

Numerous experiments have demonstrated that computationally intensive algorithms support adequate parallelism to exploit the potential of large parallel machines. Yet successful parallel implementations of serious applications are rare. The limiting factor is clearly programming technology. None of the approaches to parallel programming that have been proposed to date -- whether parallelizing compilers, language extensions, or new concurrent languages -- seem to adequately address the central problems of portability, expressiveness, efficiency, and compatibility with existing software. In this paper, we advocate an alternative approach to parallel programming based on what we call bilingual programming. We present evidence that this approach provides an effective solution to parallel programming problems. The key idea in bilingual programming is to construct the upper levels of applications in a high-level language while coding selected low-level components in low-level languages. This approach permits the advantages of a high-level notation (expressiveness, elegance, conciseness) to be obtained without the cost in performance normally associated with high-level approaches. In addition, it provides a natural framework for reusing existing code.

  5. Japan's electronic packaging technologies

    NASA Technical Reports Server (NTRS)

    Tummala, Rao R.; Pecht, Michael

    1995-01-01

    The JTEC panel found Japan to have significant leadership over the United States in the strategic area of electronic packaging. Many technologies and products once considered the 'heart and soul' of U.S. industry have been lost over the past decades to Japan and other Asian countries. The loss of consumer electronics technologies and products is the most notable of these losses, because electronics is the United States' largest employment sector and is critical for growth businesses in consumer products, computers, automobiles, aerospace, and telecommunications. In the past there was a distinction between consumer and industrial product technologies. While Japan concentrated on the consumer market, the United States dominated the industrial sector. No such distinction is anticipated in the future; the consumer-oriented technologies Japan has dominated are expected to characterize both domains. The future of U.S. competitiveness will, therefore, depend on the ability of the United States to rebuild its technological capabilities in the area of portable electronic packaging.

  6. The Ettention software package.

    PubMed

    Dahmen, Tim; Marsalek, Lukas; Marniok, Nico; Turoňová, Beata; Bogachev, Sviatoslav; Trampert, Patrick; Nickels, Stefan; Slusallek, Philipp

    2016-02-01

We present a novel software package for the problem "reconstruction from projections" in electron microscopy. The Ettention framework consists of a set of modular building-blocks for tomographic reconstruction algorithms. The well-known block iterative reconstruction method based on the Kaczmarz algorithm is implemented using these building-blocks, including adaptations specific to electron tomography. Ettention simultaneously features (1) a modular, object-oriented software design, (2) optimized access to high-performance computing (HPC) platforms such as graphics processing units (GPUs) or many-core architectures like Xeon Phi, and (3) accessibility to microscopy end-users via integration in the IMOD package and eTomo user interface. We also provide developers with a clean and well-structured application programming interface (API) that allows for extending the software easily and thus makes it an ideal platform for algorithmic research while hiding most of the technical details of high-performance computing. Copyright © 2015 Elsevier B.V. All rights reserved.

  7. Japan's electronic packaging technologies

    NASA Astrophysics Data System (ADS)

    Tummala, Rao R.; Pecht, Michael

    1995-02-01

    The JTEC panel found Japan to have significant leadership over the United States in the strategic area of electronic packaging. Many technologies and products once considered the 'heart and soul' of U.S. industry have been lost over the past decades to Japan and other Asian countries. The loss of consumer electronics technologies and products is the most notable of these losses, because electronics is the United States' largest employment sector and is critical for growth businesses in consumer products, computers, automobiles, aerospace, and telecommunications. In the past there was a distinction between consumer and industrial product technologies. While Japan concentrated on the consumer market, the United States dominated the industrial sector. No such distinction is anticipated in the future; the consumer-oriented technologies Japan has dominated are expected to characterize both domains. The future of U.S. competitiveness will, therefore, depend on the ability of the United States to rebuild its technological capabilities in the area of portable electronic packaging.

  8. Signal processor packaging design

    NASA Astrophysics Data System (ADS)

    McCarley, Paul L.; Phipps, Mickie A.

    1993-10-01

    The Signal Processor Packaging Design (SPPD) program was a technology development effort to demonstrate that a miniaturized, high throughput programmable processor could be fabricated to meet the stringent environment imposed by high speed kinetic energy guided interceptor and missile applications. This successful program culminated with the delivery of two very small processors, each about the size of a large pin grid array package. Rockwell International's Tactical Systems Division in Anaheim, California developed one of the processors, and the other was developed by Texas Instruments' (TI) Defense Systems and Electronics Group (DSEG) of Dallas, Texas. The SPPD program was sponsored by the Guided Interceptor Technology Branch of the Air Force Wright Laboratory's Armament Directorate (WL/MNSI) at Eglin AFB, Florida and funded by SDIO's Interceptor Technology Directorate (SDIO/TNC). These prototype processors were subjected to rigorous tests of their image processing capabilities, and both successfully demonstrated the ability to process 128 X 128 infrared images at a frame rate of over 100 Hz.

  10. Space station power semiconductor package

    NASA Technical Reports Server (NTRS)

    Balodis, Vilnis; Berman, Albert; Devance, Darrell; Ludlow, Gerry; Wagner, Lee

    1987-01-01

A package of high-power switching semiconductors for the space station has been designed and fabricated. The package includes a high-voltage (600 volts), high-current (50 amps) NPN fast-switching power transistor and a high-voltage (1200 volts), high-current (50 amps) fast-recovery diode. The package features an isolated collector for the transistor and an isolated anode for the diode. Beryllia is used as the isolation material, resulting in a thermal resistance for both devices of 0.2 degrees per watt. Additional features include a hermetic seal for long life -- greater than 10 years in a space environment. The package design also resulted in low electrical energy loss through the reduction of eddy currents, stray inductances, circuit inductance, and capacitance. The required package design and device parameters have been achieved. Test results for the transistor and diode utilizing the space station package are given.

  11. The NAS parallel benchmarks

    NASA Technical Reports Server (NTRS)

    Bailey, David (Editor); Barton, John (Editor); Lasinski, Thomas (Editor); Simon, Horst (Editor)

    1993-01-01

    A new set of benchmarks was developed for the performance evaluation of highly parallel supercomputers. These benchmarks consist of a set of kernels, the 'Parallel Kernels,' and a simulated application benchmark. Together they mimic the computation and data movement characteristics of large scale computational fluid dynamics (CFD) applications. The principal distinguishing feature of these benchmarks is their 'pencil and paper' specification - all details of these benchmarks are specified only algorithmically. In this way many of the difficulties associated with conventional benchmarking approaches on highly parallel systems are avoided.

  12. IN-PACKAGE CHEMISTRY ABSTRACTION

    SciTech Connect

    E. Thomas

    2005-07-14

    This report was developed in accordance with the requirements in ''Technical Work Plan for Postclosure Waste Form Modeling'' (BSC 2005 [DIRS 173246]). The purpose of the in-package chemistry model is to predict the bulk chemistry inside of a breached waste package and to provide simplified expressions of that chemistry as a function of time after breach to Total Systems Performance Assessment for the License Application (TSPA-LA). The scope of this report is to describe the development and validation of the in-package chemistry model. The in-package model is a combination of two models, a batch reactor model, which uses the EQ3/6 geochemistry-modeling tool, and a surface complexation model, which is applied to the results of the batch reactor model. The batch reactor model considers chemical interactions of water with the waste package materials, and the waste form for commercial spent nuclear fuel (CSNF) waste packages and codisposed (CDSP) waste packages containing high-level waste glass (HLWG) and DOE spent fuel. The surface complexation model includes the impact of fluid-surface interactions (i.e., surface complexation) on the resulting fluid composition. The model examines two types of water influx: (1) the condensation of water vapor diffusing into the waste package, and (2) seepage water entering the waste package as a liquid from the drift. (1) Vapor-Influx Case: The condensation of vapor onto the waste package internals is simulated as pure H{sub 2}O and enters at a rate determined by the water vapor pressure for representative temperature and relative humidity conditions. (2) Liquid-Influx Case: The water entering a waste package from the drift is simulated as typical groundwater and enters at a rate determined by the amount of seepage available to flow through openings in a breached waste package.

  13. Packaging investigation of optoelectronic devices

    NASA Astrophysics Data System (ADS)

    Zhike, Zhang; Yu, Liu; Jianguo, Liu; Ninghua, Zhu

    2015-10-01

    Compared with microelectronic packaging, optoelectronic packaging as a new packaging type has been developed rapidly and it will play an essential role in optical communication. In this paper, we try to summarize the development history, research status, technology issues and future prospects, and hope to provide a meaningful reference. Project supported by the National High Technology Research and Development Program of China (Nos. 2013AA014201, 2013AA014203) and the National Natural Science Foundation of China (Nos. 61177080, 61335004, 61275031).

  14. Hazardous materials package performance regulations

    SciTech Connect

    Russell, N. A.; Glass, R. E.; McClure, J. D.; Finley, N. C.

    1991-01-01

This paper discusses a Hazmat Packaging Performance Evaluation (HPPE) project being conducted at Sandia National Laboratories for the US Department of Transportation Research and Special Programs Administration (DOT-RSPA) to examine the subset of bulk packagings larger than 2000 gallons. The objectives of this project are to evaluate current hazmat specification packagings and to develop supporting documentation for determining performance requirements for packagings in excess of 2000 gallons that transport hazardous materials classified as extremely toxic by inhalation (METBI).

  15. Naval Waste Package Design Report

    SciTech Connect

    M.M. Lewis

    2004-03-15

A design methodology for the waste packages and ancillary components, viz., the emplacement pallets and drip shields, has been developed to provide designs that satisfy the safety and operational requirements of the Yucca Mountain Project. This methodology is described in the ''Waste Package Design Methodology Report'' (Mecham 2004 [DIRS 166168]). To demonstrate the practicability of this design methodology, four waste package design configurations have been selected to illustrate the application of the methodology. These four design configurations are the 21-pressurized water reactor (PWR) Absorber Plate waste package, the 44-boiling water reactor (BWR) waste package, the 5-defense high-level waste (DHLW)/United States (U.S.) Department of Energy (DOE) spent nuclear fuel (SNF) Co-disposal Short waste package, and the Naval Canistered SNF Long waste package. Also included in this demonstration is the emplacement pallet and continuous drip shield. The purpose of this report is to document how that design methodology has been applied to the waste package design configurations intended to accommodate naval canistered SNF. This demonstrates that the design methodology can be applied successfully to this waste package design configuration and supports the License Application for construction of the repository.

  16. About the ZOOM minimization package

    SciTech Connect

    Fischler, M.; Sachs, D.; /Fermilab

    2004-11-01

    A new object-oriented Minimization package is available for distribution in the same manner as CLHEP. This package, designed for use in HEP applications, has all the capabilities of Minuit, but is a re-write from scratch, adhering to modern C++ design principles. A primary goal of this package is extensibility in several directions, so that its capabilities can be kept fresh with as little maintenance effort as possible. This package is distinguished by the priority that was assigned to C++ design issues, and the focus on producing an extensible system that will resist becoming obsolete.

  17. Performance characteristics of a cosmology package on leading HPC architectures

    SciTech Connect

    Carter, Jonathan; Borrill, Julian; Oliker, Leonid

    2004-01-01

    The Cosmic Microwave Background (CMB) is a snapshot of the Universe some 400,000 years after the Big Bang. The pattern of anisotropies in the CMB carries a wealth of information about the fundamental parameters of cosmology. Extracting this information is an extremely computationally expensive endeavor, requiring massively parallel computers and software packages capable of exploiting them. One such package is the Microwave Anisotropy Dataset Computational Analysis Package (MADCAP) which has been used to analyze data from a number of CMB experiments. In this work, we compare MADCAP performance on the vector-based Earth Simulator (ES) and Cray X1 architectures and two leading superscalar systems, the IBM Power3 and Power4. Our results highlight the complex interplay between the problem size, architectural paradigm, interconnect, and vendor-supplied numerical libraries, while isolating the I/O file system as the key bottleneck across all the platforms.

  18. The Parallel Axiom

    ERIC Educational Resources Information Center

    Rogers, Pat

    1972-01-01

Criteria for a reasonable axiomatic system are discussed. A discussion of the historical attempts to prove the independence of Euclid's parallel postulate introduces non-Euclidean geometries. Poincaré's model for a non-Euclidean geometry is defined and analyzed. (LS)

  19. Parallels with nature

    NASA Astrophysics Data System (ADS)

    2014-10-01

    Adam Nelson and Stuart Warriner, from the University of Leeds, talk with Nature Chemistry about their work to develop viable synthetic strategies for preparing new chemical structures in parallel with the identification of desirable biological activity.

  20. Parallel programming with PCN

    SciTech Connect

    Foster, I.; Tuecke, S.

    1991-12-01

PCN is a system for developing and executing parallel programs. It comprises a high-level programming language, tools for developing and debugging programs in this language, and interfaces to Fortran and C that allow the reuse of existing code in multilingual parallel programs. Programs developed using PCN are portable across many different workstations, networks, and parallel computers. This document provides all the information required to develop parallel programs with the PCN programming system. It includes both tutorial and reference material. It also presents the basic concepts that underlie PCN, particularly where these are likely to be unfamiliar to the reader, and provides pointers to other documentation on the PCN language, programming techniques, and tools. PCN is in the public domain. The latest version of both the software and this manual can be obtained by anonymous FTP from Argonne National Laboratory in the directory pub/pcn at info.mcs.anl.gov (cf. Appendix A).

  1. Parallel programming with PCN

    SciTech Connect

    Foster, I.; Tuecke, S.

    1991-09-01

    PCN is a system for developing and executing parallel programs. It comprises a high-level programming language, a set of tools for developing and debugging programs in this language, and interfaces to Fortran and C that allow the reuse of existing code in multilingual parallel programs. Programs developed using PCN are portable across many different workstations, networks, and parallel computers. This document provides all the information required to develop parallel programs with the PCN programming system. It includes both tutorial and reference material. It also presents the basic concepts that underlie PCN, particularly where these are likely to be unfamiliar to the reader, and provides pointers to other documentation on the PCN language, programming techniques, and tools. PCN is in the public domain. The latest version of both the software and this manual can be obtained by anonymous FTP from Argonne National Laboratory at info.mcs.anl.gov.

  3. Scalable parallel communications

    NASA Technical Reports Server (NTRS)

    Maly, K.; Khanna, S.; Overstreet, C. M.; Mukkamala, R.; Zubair, M.; Sekhar, Y. S.; Foudriat, E. C.

    1992-01-01

Coarse-grain parallelism in networking (that is, the use of multiple protocol processors running replicated software sending over several physical channels) can be used to provide gigabit communications for a single application. Since parallel network performance is highly dependent on real issues such as hardware properties (e.g., memory speeds and cache hit rates), operating system overhead (e.g., interrupt handling), and protocol performance (e.g., effect of timeouts), we have performed detailed simulation studies of both a bus-based multiprocessor workstation node (based on the Sun Galaxy MP multiprocessor) and a distributed-memory parallel computer node (based on the Touchstone DELTA) to evaluate the behavior of coarse-grain parallelism. Our results indicate: (1) coarse-grain parallelism can deliver multiple 100 Mbps with currently available hardware platforms and existing networking protocols (such as Transmission Control Protocol/Internet Protocol (TCP/IP) and parallel Fiber Distributed Data Interface (FDDI) rings); (2) scale-up is near linear in n, the number of protocol processors and channels (for small n and up to a few hundred Mbps); and (3) since these results are based on existing hardware without specialized devices (except perhaps for some simple modifications of the FDDI boards), this is a low-cost solution to providing multiple 100 Mbps on current machines. In addition, from both the performance analysis and the properties of these architectures, we conclude: (1) multiple processors providing identical services and the use of space-division multiplexing for the physical channels can provide better reliability than monolithic approaches (it also provides graceful degradation and low-cost load balancing); (2) coarse-grain parallelism supports running several transport protocols in parallel to provide different types of service (for example, one TCP handles small messages for many users, other TCPs running in parallel provide high bandwidth

  4. Revisiting and parallelizing SHAKE

    NASA Astrophysics Data System (ADS)

    Weinbach, Yael; Elber, Ron

    2005-10-01

    An algorithm is presented for running SHAKE in parallel. SHAKE is a widely used approach to compute molecular dynamics trajectories with constraints. An essential step in SHAKE is the solution of a sparse linear problem of the type Ax = b, where x is a vector of unknowns. Conjugate gradient minimization (that can be done in parallel) replaces the widely used iteration process that is inherently serial. Numerical examples present good load balancing and are limited only by communication time.
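The core step described above -- replacing SHAKE's inherently serial iteration for the sparse system Ax = b with conjugate gradient minimization -- can be sketched as follows (a minimal serial illustration; the SPD matrix below is a random stand-in for an actual SHAKE constraint matrix, and the distribution of the matrix-vector products across processors is omitted):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=200):
    """Solve A x = b for symmetric positive-definite A.

    Each iteration is dominated by one matrix-vector product and a few
    dot products, all of which parallelize naturally -- the property
    that makes CG attractive as a replacement for serial iteration.
    """
    x = np.zeros_like(b)
    r = b - A @ x              # residual
    p = r.copy()               # search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# Small SPD test system standing in for the SHAKE constraint matrix.
rng = np.random.default_rng(0)
M = rng.standard_normal((20, 20))
A = M @ M.T + 20 * np.eye(20)  # symmetric positive definite by construction
b = rng.standard_normal(20)
x = conjugate_gradient(A, b)
```

In a parallel setting, A, x, and r would be partitioned across processors and the `A @ p` product and dot products replaced by their distributed counterparts, with a global reduction per iteration.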

  5. Parallel image compression

    NASA Technical Reports Server (NTRS)

    Reif, John H.

    1987-01-01

    A parallel compression algorithm for the 16,384-processor MPP machine was developed. The serial version of the algorithm can be viewed as a combination of on-line dynamic lossless text compression techniques (which employ simple learning strategies) and vector quantization. These concepts are described. How these concepts are combined to form a new strategy for performing dynamic on-line lossy compression is discussed. Finally, the implementation of this algorithm in a massively parallel fashion on the MPP is discussed.
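
    The vector-quantization half of such a scheme can be sketched serially (this is an illustrative modern reconstruction, not the MPP implementation; the data are random stand-ins for image blocks): a codebook is trained with Lloyd (k-means) iterations and each block is then encoded as the index of its nearest codeword:

```python
import numpy as np

rng = np.random.default_rng(0)
blocks = rng.random((500, 16))      # 500 flattened 4x4 "image" blocks
codebook = blocks[rng.choice(500, 8, replace=False)].copy()   # 8 codewords

for _ in range(20):                 # a few Lloyd (k-means) iterations
    # distance from every block to every codeword
    d = np.linalg.norm(blocks[:, None, :] - codebook[None, :, :], axis=2)
    labels = d.argmin(axis=1)       # nearest-codeword index per block
    for k in range(8):
        if np.any(labels == k):
            codebook[k] = blocks[labels == k].mean(axis=0)

# Lossy "compression": each 16-value block is replaced by a 3-bit index.
encoded = labels
decoded = codebook[encoded]         # reconstruction from the codebook
```

    On a SIMD machine like the MPP, the distance computation and nearest-codeword search would be carried out by the processor array, one block (or block tile) per processor.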

  6. The Packaging Handbook -- A guide to package design

    SciTech Connect

    Shappert, L.B.

    1995-12-31

    The Packaging Handbook is a compilation of 14 technical chapters and five appendices that address the life cycle of a packaging which is intended to transport radioactive material by any transport mode in normal commerce. Although many topics are discussed in depth, this document focuses on the design aspects of a packaging. The Handbook, which is being prepared under the direction of the US Department of Energy, is intended to provide a wealth of technical guidance that will give designers a better understanding of the regulatory approval process, preferences of regulators in specific aspects of packaging design, and the types of analyses that should be seriously considered when developing the packaging design. Even though the Handbook is concerned with all packagings, most of the emphasis is placed on large packagings that are capable of transporting large radioactive sources that are also fissile (e.g., spent fuel). These are the types of packagings that must address the widest range of technical topics in order to meet domestic and international regulations. Most of the chapters in the Handbook have been drafted and submitted to the Oak Ridge National Laboratory for editing; the majority of these have been edited. This report summarizes the contents.

  7. Anhydrous Ammonia Training Module. Trainer's Package. Participant's Package.

    ERIC Educational Resources Information Center

    Beaudin, Bart; And Others

    This document contains a trainer's and a participant's package for teaching employees safe on-site handling procedures for working with anhydrous ammonia, especially on farms. The trainer's package includes the following: a description of the module; a competency; objectives; suggested instructional aids; a training outline (or lesson plan) for…

  8. Package Up Your Troubles--An Introduction to Package Libraries

    ERIC Educational Resources Information Center

    Frank, Colin

    1978-01-01

    Discusses a "package deal" library--a prefabricated building including interior furnishing--in terms of costs, fitness for purpose, and interior design, i.e., shelving, flooring, heating, lighting, and humidity. Advantages and disadvantages of the package library are also considered. (Author/MBR)

  10. Tritium waste package

    DOEpatents

    Rossmassler, R.; Ciebiera, L.; Tulipano, F.J.; Vinson, S.; Walters, R.T.

    1995-11-07

    A containment and waste package system for processing and shipping tritium oxide waste received from a process gas includes an outer drum and an inner drum containing a disposable molecular sieve bed (DMSB) seated within the outer drum. The DMSB includes an inlet diffuser assembly, an outlet diffuser assembly, and a hydrogen catalytic recombiner. The DMSB absorbs tritium oxide from the process gas and converts it to a solid form so that the tritium is contained during shipment to a disposal site. The DMSB is filled with type 4A molecular sieve pellets capable of adsorbing up to 1000 curies of tritium. The recombiner contains a sufficient amount of catalyst to cause any hydrogen and oxygen present in the process gas to recombine to form water vapor, which is then adsorbed onto the DMSB. 1 fig.

  11. Tritium waste package

    DOEpatents

    Rossmassler, Rich; Ciebiera, Lloyd; Tulipano, Francis J.; Vinson, Sylvester; Walters, R. Thomas

    1995-01-01

    A containment and waste package system for processing and shipping tritium oxide waste received from a process gas includes an outer drum and an inner drum containing a disposable molecular sieve bed (DMSB) seated within the outer drum. The DMSB includes an inlet diffuser assembly, an outlet diffuser assembly, and a hydrogen catalytic recombiner. The DMSB absorbs tritium oxide from the process gas and converts it to a solid form so that the tritium is contained during shipment to a disposal site. The DMSB is filled with type 4A molecular sieve pellets capable of adsorbing up to 1000 curies of tritium. The recombiner contains a sufficient amount of catalyst to cause any hydrogen and oxygen present in the process gas to recombine to form water vapor, which is then adsorbed onto the DMSB.

  12. Balloon gondola diagnostics package

    NASA Technical Reports Server (NTRS)

    Cantor, K. M.

    1986-01-01

    In order to define a new gondola structural specification and to quantify the balloon termination environment, NASA developed a balloon gondola diagnostics package (GDP). This addition to the balloon flight train comprises a large array of electronic sensors employed to define the forces and accelerations imposed on a gondola during the termination event. These sensors include the following: a load cell, a three-axis accelerometer, two three-axis rate gyros, two magnetometers, and a two-axis inclinometer. A transceiver couple allows the data to be telemetered across any in-line rotator to the gondola-mounted memory system. The GDP is commanded 'ON' just prior to parachute deployment in order to record the entire event.

  13. The LISA Technology Package

    NASA Technical Reports Server (NTRS)

    Livas, Jeff

    2009-01-01

    The LISA Technology Package (LTP) is the payload of the European Space Agency's LISA Pathfinder mission. LISA Pathfinder was initiated to test, in a flight environment, the critical technologies required by LISA; namely, the inertial sensing subsystem and associated control laws and micro-Newton thrusters required to place a macroscopic test mass in pure free-fall. The LTP is in the late stages of development -- all subsystems are currently either in the final stages of manufacture or in test. Available flight units are being integrated into the real-time testbeds for system verification tests. This poster will describe the LTP and its subsystems, give the current status of the hardware and test campaign, and outline the future milestones leading to the LTP delivery.

  14. Balloon gondola diagnostics package

    NASA Astrophysics Data System (ADS)

    Cantor, K. M.

    1986-10-01

    In order to define a new gondola structural specification and to quantify the balloon termination environment, NASA developed a balloon gondola diagnostics package (GDP). This addition to the balloon flight train comprises a large array of electronic sensors employed to define the forces and accelerations imposed on a gondola during the termination event. These sensors include the following: a load cell, a three-axis accelerometer, two three-axis rate gyros, two magnetometers, and a two-axis inclinometer. A transceiver couple allows the data to be telemetered across any in-line rotator to the gondola-mounted memory system. The GDP is commanded 'ON' just prior to parachute deployment in order to record the entire event.

  15. Piecewise Cubic Interpolation Package

    SciTech Connect

    Fritsch, F. N.; LLNL,

    1982-04-23

    PCHIP (Piecewise Cubic Interpolation Package) is a set of subroutines for piecewise cubic Hermite interpolation of data. It features software to produce a monotone and "visually pleasing" interpolant to monotone data. Such an interpolant may be more reasonable than a cubic spline if the data contain both 'steep' and 'flat' sections. Interpolation of cumulative probability distribution functions is another application. In PCHIP, all piecewise cubic functions are represented in cubic Hermite form; that is, f(x) is determined by its values f(i) and derivatives d(i) at the breakpoints x(i), i=1(1)N. PCHIP contains three routines (PCHIM, PCHIC, and PCHSP) to determine derivative values; six routines (CHFEV, PCHFE, CHFDV, PCHFD, PCHID, and PCHIA) to evaluate, differentiate, or integrate the resulting cubic Hermite function; and one routine to check for monotonicity. A FORTRAN 77 version and a SLATEC version of PCHIP are included.
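
    The monotone interpolant PCHIP pioneered survives in modern libraries; for instance, SciPy's PchipInterpolator implements the same Fritsch-Carlson idea. A brief sketch (the data are invented) on monotone data with both 'steep' and 'flat' sections:

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

# Monotone data with both 'steep' and 'flat' sections, the case where a
# PCHIP-style interpolant beats an ordinary cubic spline.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([0.0, 0.0, 0.1, 5.0, 5.1, 5.1])

pchip = PchipInterpolator(x, y)
xs = np.linspace(0.0, 5.0, 501)
ys = pchip(xs)

# No overshoot: the interpolant is monotone wherever the data are monotone,
# whereas an ordinary cubic spline would typically ring near the jump.
assert np.all(np.diff(ys) >= -1e-12)
```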

  16. Romanian experience on packaging testing

    SciTech Connect

    Vieru, G.

    2007-07-01

    More than twenty years ago, the Institute for Nuclear Research Pitesti (INR), through its Reliability and Testing Laboratory, was licensed by the Romanian Nuclear Regulatory Body (CNCAN) to carry out qualification tests [1] for packages intended for the transport and storage of radioactive materials. Radioactive materials generated by Romanian nuclear facilities [2] are packaged in accordance with national [3] and IAEA [1,6] regulations for safe transport to the disposal center. Subjecting these packages to normal and simulated test conditions accomplishes the evaluation and certification needed to prove the package's technical performance. The paper describes the qualification tests for type A and B packages used for the transport and storage of radioactive materials over a period of 20 years of experience. Testing is used to substantiate assumptions in analytical models and to demonstrate package structural response. The Romanian test facilities [1,3,6] used to simulate the required qualification tests were developed at INR Pitesti, the main supplier of type A packages used for the transport and storage of low-level radioactive wastes in Romania. The testing programme will continue to be a strong option to support future package development and to perform a broad range of verification and certification tests on radioactive material packages or component sections, such as packages used for the transport of radioactive sources for industrial or medical purposes [2,8]. The paper describes and illustrates some of the various package tests performed during this period and how they relate to normal conditions and minor mishaps during transport. Quality assurance and quality control measures taken to meet the technical specifications of the design are also presented and discussed. (authors)

  17. Electro-Microfluidic Packaging

    SciTech Connect

    BENAVIDES, GILBERT L.; GALAMBOS, PAUL C.

    2002-06-01

    Electro-microfluidics is experiencing explosive growth in new product developments. There are many commercial applications for electro-microfluidic devices such as chemical sensors, biological sensors, and drop ejectors for both printing and chemical analysis. The number of silicon surface micromachined electro-microfluidic products is likely to increase. Manufacturing efficiency and integration of microfluidics with electronics will become important. Surface micromachined microfluidic devices are manufactured with the same tools as IC's (integrated circuits) and their fabrication can be incorporated into the IC fabrication process. In order to realize applications for surface micromachined electro-microfluidic devices, a practical method for getting fluid into these devices must be developed. An Electro-Microfluidic Dual In-line Package (EMDIP{trademark}) was developed to be a standard solution that allows for both the electrical and the fluidic connections needed to operate a great variety of electro-microfluidic devices. The EMDIP{trademark} includes a fan-out manifold that, on one side, mates directly with the 200 micron diameter Bosch-etched holes found on the device and, on the other side, mates to larger 1 mm diameter holes. To minimize cost the EMDIP{trademark} can be injection molded in a great variety of thermoplastics, which also serve to optimize fluid compatibility. The EMDIP{trademark} plugs directly into a fluidic printed wiring board using a standard dual in-line package pattern for the electrical connections and a grid of multiple 1 mm diameter fluidic connections that mate to the underside of the EMDIP{trademark}.

  18. Chip packaging technique

    NASA Technical Reports Server (NTRS)

    Jayaraj, Kumaraswamy (Inventor); Noll, Thomas E. (Inventor); Lockwood, Harry F. (Inventor)

    2001-01-01

    A hermetically sealed package for at least one semiconductor chip is provided which is formed of a substrate having electrical interconnects thereon to which the semiconductor chips are selectively bonded, and a lid which preferably functions as a heat sink, with a hermetic seal being formed around the chips between the substrate and the heat sink. The substrate is either formed of or includes a layer of a thermoplastic material having low moisture permeability which material is preferably a liquid crystal polymer (LCP) and is a multiaxially oriented LCP material for preferred embodiments. Where the lid is a heat sink, the heat sink is formed of a material having high thermal conductivity and preferably a coefficient of thermal expansion which substantially matches that of the chip. A hermetic bond is formed between the side of each chip opposite that connected to the substrate and the heat sink. The thermal bond between the substrate and the lid/heat sink may be a pinched seal or may be provided, for example by an LCP frame which is hermetically bonded or sealed on one side to the substrate and on the other side to the lid/heat sink. The chips may operate in the RF or microwave bands with suitable interconnects on the substrate and the chips may also include optical components with optical fibers being sealed into the substrate and aligned with corresponding optical components to transmit light in at least one direction. A plurality of packages may be physically and electrically connected together in a stack to form a 3D array.

  19. Parallelization of Rocket Engine System Software (Press)

    NASA Technical Reports Server (NTRS)

    Cezzar, Ruknet

    1996-01-01

    The main goal is to assess parallelization requirements for the Rocket Engine Numeric Simulator (RENS) project which, aside from gathering information on liquid-propelled rocket engines and setting forth requirements, involves a large FORTRAN-based package at NASA Lewis Research Center and TDK software developed by SUBR/UWF. The ultimate aim is to develop, test, integrate, and suitably deploy a family of software packages on various aspects and facets of rocket engines using liquid propellants. At present, all project efforts by the funding agency, NASA Lewis Research Center, and the HBCU participants are disseminated over the internet using world wide web home pages. Considering the obviously expensive methods of actual field trials, the benefits of software simulators are potentially enormous. When realized, these benefits will be analogous to those provided by numerous CAD/CAM packages and flight-training simulators. According to the overall task assignments, Hampton University's role is to collect all available software, place them in a common format, assess and evaluate, define interfaces, and provide integration. Most importantly, HU's mission is to see to it that real-time performance is assured. This involves source code translations, porting, and distribution. The porting will be done in two phases: first, place all software on the Cray X-MP platform using FORTRAN. After testing and evaluation on the Cray X-MP, the code will be translated to C++ and ported to the parallel nCUBE platform. At present, we are evaluating another option of distributed processing over local area networks using Sun NFS, Ethernet, and TCP/IP. Considering the heterogeneous nature of the present software (e.g., it first started as an expert system using LISP machines) which now involves FORTRAN code, the effort is expected to be quite challenging.

  1. Solar water heater design package

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Package describes commercial domestic-hot-water heater with roof or rack mounted solar collectors. System is adjustable to pre-existing gas or electric hot-water house units. Design package includes drawings, description of automatic control logic, evaluation measurements, possible design variations, list of materials and installation tools, and trouble-shooting guide and manual.

  2. Oral Hygiene. Learning Activity Package.

    ERIC Educational Resources Information Center

    Hime, Kirsten

    This learning activity package on oral hygiene is one of a series of 12 titles developed for use in health occupations education programs. Materials in the package include objectives, a list of materials needed, a list of definitions, information sheets, reviews (self evaluations) of portions of the content, and answers to reviews. These topics…

  3. Floriculture. Selected Learning Activity Packages.

    ERIC Educational Resources Information Center

    Clemson Univ., SC. Vocational Education Media Center.

    This series of learning activity packages is based on a catalog of performance objectives, criterion-referenced measures, and performance guides for gardening/groundskeeping developed by the Vocational Education Consortium of States (V-TECS). Learning activity packages are presented in four areas: (1) preparation of soils and planting media, (2)…

  4. Packaging perspective, 1910-1985

    Treesearch

    John W. Koning; James F. Laundrie

    1985-01-01

    For 75 years the Forest Products Laboratory has been concerned for the wise use of wood. One of the major uses of wood is packaging. This report summarizes the research reports completed in packaging and relates the output in terms of forest management and return on the taxpayer’s investment.

  5. Microelectronics/electronic packaging potential

    NASA Technical Reports Server (NTRS)

    Sandeau, R. F.

    1977-01-01

    The trend toward smaller and lighter electronic packages was examined. It is suggested that electronic packaging engineers and microelectronic designers closely associate and give full attention to optimization of both disciplines on all product lines. Extensive research and development work underway to explore innovative ideas and make new inroads into the technology base is expected to satisfy the demands of the 1980's.

  6. Status of PERST-5 package

    SciTech Connect

    Gomin, E. A.; Gurevich, M. I.; Kalugin, M. A.; Lazarenko, A. P.; Pryanichnikov, A. V.; Sidorenko, V. D.; Druzhinin, V. E.; Zhirnov, A. P.; Rozhdestvenskiy, I. M.

    2012-12-15

    The methods and algorithms used in the PERST-5 package are described. This package is part of the MCU-5 code and is intended for neutron-physical calculation of the cells and parts of nuclear reactors using a generalized method of first collision probabilities.

  7. Chemical Energy: A Learning Package.

    ERIC Educational Resources Information Center

    Cohen, Ita; Ben-Zvi, Ruth

    1982-01-01

    A comprehensive teaching/learning chemical energy package was developed to overcome conceptual/experimental difficulties and the time required for calculation of enthalpy changes. The package consists of five types of activities occurring in repeated cycles: group activities, laboratory experiments, inquiry questionnaires, teacher-led class…

  9. Individualized Learning Package about Etching.

    ERIC Educational Resources Information Center

    Sauer, Michael J.

    An individualized learning package provides step-by-step instruction in the fundamentals of the etching process. Thirteen specific behavioral objectives are listed. A pretest, consisting of matching 15 etching terms with their definitions, is provided along with an answer key. The remainder of the learning package teaches the 13 steps of the…

  11. The Macro - TIPS Course Package.

    ERIC Educational Resources Information Center

    Heriot-Watt Univ., Edinburgh (Scotland). Esmee Fairbairn Economics Research Centre.

    The TIPS (Teaching Information Processing System) Course Package was designed to be used with the Macro-Games Course Package (SO 011 930) in order to train college students to apply the tools of economic analysis to current problems. TIPS is used to provide feedback and individualized assignments to students, as well as information about the…

  12. Negotiating a fair compensation package.

    PubMed

    Snyder, Thomas L

    2005-01-01

    At the end of the day, compensation packages must be fair for both you and your employer. Employers should conduct an economic analysis to determine what they can afford to offer and calculate the economic return that they should rightfully receive. Understanding the employer's side of the equation is equally important in developing a win/win compensation package for yourself.

  13. 19 CFR 191.13 - Packaging materials.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 19 Customs Duties 2 2010-04-01 2010-04-01 false Packaging materials. 191.13 Section 191.13 Customs... (CONTINUED) DRAWBACK General Provisions § 191.13 Packaging materials. (a) Imported packaging material... packaging material when used to package or repackage merchandise or articles exported or destroyed...

  14. 19 CFR 191.13 - Packaging materials.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 19 Customs Duties 2 2012-04-01 2012-04-01 false Packaging materials. 191.13 Section 191.13 Customs... (CONTINUED) DRAWBACK General Provisions § 191.13 Packaging materials. (a) Imported packaging material... packaging material when used to package or repackage merchandise or articles exported or destroyed...

  15. Nanocomposite Sensors for Food Packaging

    NASA Astrophysics Data System (ADS)

    Avella, Maurizio; Errico, Maria Emanuela; Gentile, Gennaro; Volpe, Maria Grazia

    Nowadays, nanotechnologies applied to the food packaging sector are finding ever more applications due to the wide range of benefits they can offer, such as improved barrier properties, improved mechanical performance, antimicrobial properties, and so on. Recently, much research has been directed at the development of new food packaging materials in which polymer nanocomposites incorporate nanosensors, creating the so-called "smart" packaging. Some examples of nanocomposite sensors realised specifically for the food packaging industry are reported. The second part of this work deals with the preparation and characterisation of two new polymer-based nanocomposite systems that can be used as food packaging materials. In particular, the results concerning the following systems are illustrated: isotactic polypropylene (iPP) filled with CaCO3 nanoparticles and polycaprolactone (PCL) filled with SiO2 nanoparticles.

  16. CDIAC catalog of numeric data packages and computer model packages

    SciTech Connect

    Boden, T.A.; O'Hara, F.M. Jr.; Stoss, F.W.

    1993-05-01

    The Carbon Dioxide Information Analysis Center acquires, quality-assures, and distributes to the scientific community numeric data packages (NDPs) and computer model packages (CMPs) dealing with topics related to atmospheric trace-gas concentrations and global climate change. These packages include data on historic and present atmospheric CO{sub 2} and CH{sub 4} concentrations, historic and present oceanic CO{sub 2} concentrations, historic weather and climate around the world, sea-level rise, storm occurrences, volcanic dust in the atmosphere, sources of atmospheric CO{sub 2}, plants' response to elevated CO{sub 2} levels, sunspot occurrences, and many other indicators of, contributors to, or components of climate change. This catalog describes the packages presently offered by CDIAC, reviews the processes used by CDIAC to assure the quality of the data contained in these packages, notes the media on which each package is available, describes the documentation that accompanies each package, and provides ordering information. Numeric data are available in the printed NDPs and CMPs, in CD-ROM format, and from an anonymous FTP area via Internet. All CDIAC information products are available at no cost.

  17. Sublattice parallel replica dynamics

    NASA Astrophysics Data System (ADS)

    Martínez, Enrique; Uberuaga, Blas P.; Voter, Arthur F.

    2014-06-01

    Exascale computing presents a challenge for the scientific community as new algorithms must be developed to take full advantage of the new computing paradigm. Atomistic simulation methods that offer full fidelity to the underlying potential, i.e., molecular dynamics (MD) and parallel replica dynamics, fail to use the whole machine speedup, leaving a region in time and sample size space that is unattainable with current algorithms. In this paper, we present an extension of the parallel replica dynamics algorithm [A. F. Voter, Phys. Rev. B 57, R13985 (1998), 10.1103/PhysRevB.57.R13985] by combining it with the synchronous sublattice approach of Shim and Amar [Y. Shim and J. G. Amar, Phys. Rev. B 71, 125432 (2005), 10.1103/PhysRevB.71.125432], thereby exploiting event locality to improve the algorithm scalability. This algorithm is based on a domain decomposition in which events happen independently in different regions in the sample. We develop an analytical expression for the speedup given by this sublattice parallel replica dynamics algorithm and compare it with parallel MD and traditional parallel replica dynamics. We demonstrate how this algorithm, which introduces a slight additional approximation of event locality, enables the study of physical systems unreachable with traditional methodologies and promises to better utilize the resources of current high performance and future exascale computers.

  18. Parallel architectures for vision

    SciTech Connect

    Maresca, M.; Lavin, M.A.; Li, H.

    1988-08-01

    Vision computing involves the execution of a large number of operations on large sets of structured data. Sequential computers cannot achieve the speed required by most of the current applications and therefore parallel architectural solutions have to be explored. In this paper the authors examine the options that drive the design of a vision oriented computer, starting with the analysis of the basic vision computation and communication requirements. They briefly review the classical taxonomy for parallel computers, based on the multiplicity of the instruction and data stream, and apply a recently proposed criterion, the degree of autonomy of each processor, to further classify fine-grain SIMD massively parallel computers. They identify three types of processor autonomy, namely operation autonomy, addressing autonomy, and connection autonomy. For each type they give the basic definitions and show some examples. They focus on the concept of connection autonomy, which they believe is a key point in the development of massively parallel architectures for vision. They show two examples of parallel computers featuring different types of connection autonomy - the Connection Machine and the Polymorphic-Torus - and compare their cost and benefit.

  19. Parallel Analysis Tools for Ultra-Large Climate Data Sets

    NASA Astrophysics Data System (ADS)

    Jacob, Robert; Krishna, Jayesh; Xu, Xiabing; Mickelson, Sheri; Wilde, Mike; Peterson, Kara; Bochev, Pavel; Latham, Robert; Tautges, Tim; Brown, David; Brownrigg, Richard; Haley, Mary; Shea, Dennis; Huang, Wei; Middleton, Don; Schuchardt, Karen; Yin, Jian

    2013-04-01

    While climate models have used parallelism for several years, the post-processing tools are still mostly single-threaded applications and many are closed source. These tools are becoming a bottleneck in the production of new climate knowledge when they confront terabyte-sized output from high-resolution climate models. The ParVis project is using and creating Free and Open Source tools that bring data and task parallelism to climate model analysis to enable analysis of large climate data sets. ParVis is using the Swift task-parallel language to implement a diagnostic suite that generates over 600 plots of atmospheric quantities. ParVis has also created a Parallel Gridded Analysis Library (ParGAL) which implements many common climate analysis operations in a data-parallel fashion using the Message Passing Interface. ParGAL has in turn been built on sophisticated packages for describing grids in parallel (the Mesh-Oriented Database, MOAB), performing vector operations on arbitrary grids (Intrepid), and reading data in parallel (PnetCDF). ParGAL is being used to implement a parallel version of the NCAR Command Language (NCL) called ParNCL. ParNCL/ParGAL not only speeds up analysis of large datasets but also allows operations to be performed on native grids, eliminating the need to transform data to latitude-longitude grids. All of the tools ParVis is creating are available as free and open source software.

  20. Evaluation of RDBMS packages for use in astronomy

    NASA Technical Reports Server (NTRS)

    Page, C. G.; Davenhall, A. C.

    1992-01-01

    Tabular data sets arise in many areas of astronomical data analysis, from raw data (such as photon event lists) to final results (such as source catalogs). The Starlink catalog access and reporting package, SCAR, was originally developed to handle IRAS data and it has been the principal relational DBMS in the Starlink software collection for several years. But SCAR has many limitations and is VMS-specific, while Starlink is in transition from VMS to Unix. Rather than attempt a major re-write of SCAR for Unix, it seemed more sensible to see whether any existing database packages are suitable for general astronomical use. The authors first drew up a list of desirable properties for such a system and then used these criteria to evaluate a number of packages, both free ones and those commercially available. It is already clear that most commercial DBMS packages are not very well suited to the requirements; for example, most cannot carry out efficiently even fairly basic operations such as joining two catalogs on an approximate match of celestial positions. This paper reports the results of the evaluation exercise and notes the problems in using a standard DBMS package to process scientific data. In parallel with this the authors have started to develop a simple database engine that can handle tabular data in a range of common formats including simple direct-access files (such as SCAR and Exosat DBMS tables) and FITS tables (both ASCII and binary).
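
    The operation the authors found missing from commercial packages, joining two catalogs on an approximate match of celestial positions, can be made concrete. The sketch below is illustrative only (not SCAR code): it joins two catalogs of (id, RA, Dec) rows within an angular tolerance by scanning all pairs, where a production engine would first build a spatial index on the sky:

```python
import math

def angular_sep_deg(ra1, dec1, ra2, dec2):
    """Great-circle separation in degrees (spherical law of cosines)."""
    r1, d1, r2, d2 = map(math.radians, (ra1, dec1, ra2, dec2))
    c = (math.sin(d1) * math.sin(d2)
         + math.cos(d1) * math.cos(d2) * math.cos(r1 - r2))
    return math.degrees(math.acos(max(-1.0, min(1.0, c))))

def cross_match(cat1, cat2, radius_deg):
    """Join two catalogs on approximate positional coincidence.

    cat1, cat2: lists of (id, ra_deg, dec_deg). Returns (id1, id2) pairs
    whose separation is within radius_deg. O(n*m) for clarity.
    """
    matches = []
    for id1, ra1, dec1 in cat1:
        for id2, ra2, dec2 in cat2:
            if angular_sep_deg(ra1, dec1, ra2, dec2) <= radius_deg:
                matches.append((id1, id2))
    return matches
```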

  1. User's Guide for ENSAERO_FE Parallel Finite Element Solver

    NASA Technical Reports Server (NTRS)

    Eldred, Lloyd B.; Guruswamy, Guru P.

    1999-01-01

    A high fidelity parallel static structural analysis capability is created and interfaced to the multidisciplinary analysis package ENSAERO-MPI of Ames Research Center. This new module replaces ENSAERO's lower fidelity simple finite element and modal modules. Full aircraft structures may be more accurately modeled using the new finite element capability. Parallel computation is performed by breaking the full structure into multiple substructures. This approach is conceptually similar to ENSAERO's multizonal fluid analysis capability. The new substructure code is used to solve the structural finite element equations for each substructure in parallel. NASTRAN/COSMIC is utilized as a front end for this code. Its full library of elements can be used to create an accurate and realistic aircraft model. It is used to create the stiffness matrices for each substructure. The new parallel code then uses an iterative preconditioned conjugate gradient method to solve the global structural equations for the substructure boundary nodes.
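
    A minimal serial sketch of the kind of solver named above (a Jacobi-preconditioned conjugate gradient with dense storage; not the ENSAERO_FE implementation, which operates on distributed substructures) is:

```python
def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=200):
    """Preconditioned conjugate gradient for a symmetric positive-definite A.

    A: list of rows; b: right-hand side; M_inv_diag: 1/diag(A), i.e., the
    Jacobi preconditioner applied by elementwise multiplication.
    """
    n = len(b)
    matvec = lambda v: [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))

    x = [0.0] * n
    r = list(b)                                       # residual b - A x, x = 0
    z = [mi * ri for mi, ri in zip(M_inv_diag, r)]    # preconditioned residual
    p = list(z)
    rz = dot(r, z)
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rz / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if dot(r, r) ** 0.5 < tol:                    # converged
            break
        z = [mi * ri for mi, ri in zip(M_inv_diag, r)]
        rz_new = dot(r, z)
        p = [zi + (rz_new / rz) * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x
```

    In the parallel setting, the matrix-vector product and the dot products are the only operations that need interprocessor communication, which is why they dominate the design of packages like the one described here.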

  2. Parallelizing AT with MatlabMPI

    SciTech Connect

    Li, Evan Y.; /Brown U. /SLAC

    2011-06-22

    The Accelerator Toolbox (AT) is a high-level collection of tools and scripts specifically oriented toward solving problems in computational accelerator physics. It is integrated into the MATLAB environment, which provides an accessible, intuitive interface for accelerator physicists, allowing researchers to focus the majority of their efforts on simulations and calculations rather than on programming and debugging difficulties. Efforts toward parallelization of AT have been put in place to upgrade its performance to modern standards of computing. We utilized the packages MatlabMPI and pMatlab, developed by MIT Lincoln Laboratory, to set up a message-passing environment that can be called from within MATLAB, providing the prerequisites for multiprocess computation. On local quad-core CPUs, we were able to demonstrate processor efficiencies of roughly 95% and speed increases of nearly 380%. By exploiting the efficacy of modern-day parallel computing, we demonstrated highly efficient per-processor speedup in AT's beam-tracking functions. Extrapolating from these results, we expect to reduce week-long computation runtimes to less than 15 minutes. This is a substantial performance improvement with significant implications for the future computing power of the accelerator physics group at SSRL. However, one of the drawbacks of parringpass is its current lack of transparency; the pMatlab and MatlabMPI packages must first be well understood by the user before the system can be configured to run the scripts. In addition, the instantiation of argument parameters requires internal modification of the source code. Thus, parringpass cannot be directly run from the MATLAB command line, which detracts from its flexibility and user-friendliness. Future work in AT's parallelization will focus on development of external functions and scripts that can be called from within MATLAB and configured on multiple nodes, while

  3. Parallel optical sampler

    DOEpatents

    Tauke-Pedretti, Anna; Skogen, Erik J; Vawter, Gregory A

    2014-05-20

    An optical sampler includes first and second 1×n optical beam splitters splitting an input optical sampling signal and an optical analog input signal into n parallel channels, respectively, a plurality of optical delay elements providing n parallel delayed input optical sampling signals, n photodiodes converting the n parallel optical analog input signals into n respective electrical output signals, and n optical modulators modulating the input optical sampling signal or the optical analog input signal by the respective electrical output signals, and providing n successive optical samples of the optical analog input signal. A plurality of output photodiodes and eADCs convert the n successive optical samples to n successive digital samples. The optical modulator may be a photodiode-interconnected Mach-Zehnder modulator. A method of sampling the optical analog input signal is disclosed.

  4. Collisionless parallel shocks

    SciTech Connect

    Khabibrakhmanov, I.K. ); Galeev, A.A.; Galinsky, V.L. )

    1993-02-01

    A collisionless parallel shock model is presented which is based on solitary-type solutions of the modified derivative nonlinear Schrodinger equation (MDNLS) for parallel Alfven waves. We generalize the standard derivative nonlinear Schrodinger equation in order to include the possible anisotropy of the plasma distribution function and higher-order Korteweg-de Vries type dispersion. Stationary solutions of MDNLS are discussed. The new mechanism of ion reflection from the magnetic mirror of the parallel shock structure, which can be called "adiabatic", is a natural and essential feature of the parallel shock that introduces irreversible properties into the nonlinear wave structure and may significantly contribute to the plasma heating upstream as well as downstream of the shock. The anisotropic nature of "adiabatic" reflections leads to an asymmetric particle distribution in the upstream as well as in the downstream regions of the shock. As a result, a nonzero heat flux appears near the front of the shock. It is shown that this causes the stochastic behavior of the nonlinear waves, which can significantly contribute to the shock thermalization. The number of adiabatically reflected ions defines the threshold conditions of the fire-hose and mirror-type instabilities in the downstream and upstream regions and thus determines a parameter region in which the described laminar parallel shock structure can exist. 29 refs., 4 figs.

  5. 40 CFR 157.27 - Unit packaging.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS PACKAGING REQUIREMENTS FOR PESTICIDES AND DEVICES Child-Resistant Packaging § 157.27 Unit packaging. Pesticide products... for risk reduction....

  6. 40 CFR 157.27 - Unit packaging.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS PACKAGING REQUIREMENTS FOR PESTICIDES AND DEVICES Child-Resistant Packaging § 157.27 Unit packaging. Pesticide products... for risk reduction....

  7. 40 CFR 157.27 - Unit packaging.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS PACKAGING REQUIREMENTS FOR PESTICIDES AND DEVICES Child-Resistant Packaging § 157.27 Unit packaging. Pesticide products... for risk reduction....

  8. 40 CFR 157.27 - Unit packaging.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS PACKAGING REQUIREMENTS FOR PESTICIDES AND DEVICES Child-Resistant Packaging § 157.27 Unit packaging. Pesticide products... for risk reduction....

  9. 49 CFR 178.602 - Preparation of packagings and packages for testing.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... testing at periodic intervals only (i.e., other than initial design qualification testing), at ambient... 49 Transportation 3 2014-10-01 2014-10-01 false Preparation of packagings and packages for testing...) SPECIFICATIONS FOR PACKAGINGS Testing of Non-bulk Packagings and Packages § 178.602 Preparation of packagings...

  10. Highly parallel computation

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.; Tichy, Walter F.

    1990-01-01

    Among the highly parallel computing architectures required for advanced scientific computation, those designated 'MIMD' and 'SIMD' have yielded the best results to date. An evaluation of the present development status shows that neither architecture has attained a decisive advantage for most near-homogeneous problems; for problems composed of numerous dissimilar parts, however, such currently speculative architectures as 'neural networks' or 'data flow' machines may be required. Data flow computers are the most practical form of MIMD fine-grained parallel computers yet conceived; they automatically solve the problem of assigning virtual processors to the real processors in the machine.

  11. Ion parallel closures

    NASA Astrophysics Data System (ADS)

    Ji, Jeong-Young; Lee, Hankyu Q.; Held, Eric D.

    2017-02-01

    Ion parallel closures are obtained for arbitrary atomic weights and charge numbers. For arbitrary collisionality, the heat flow and viscosity are expressed as kernel-weighted integrals of the temperature and flow-velocity gradients. Simple, fitted kernel functions are obtained from the 1600 parallel moment solution and the asymptotic behavior in the collisionless limit. The fitted kernel parameters are tabulated for various temperature ratios of ions to electrons. The closures can be used conveniently without solving the kinetic equation or higher order moment equations in closing ion fluid equations.
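
    The kernel-weighted integral form referred to above can be written schematically. The following is a generic sketch of such a nonlocal closure (the notation is assumed for illustration and is not the paper's exact expression), with the parallel heat flow and viscosity driven by gradients along the field-line coordinate \(\ell\):

```latex
q_\parallel(\ell) = -n \int K_q(\ell - \ell')\,
    \frac{\partial T}{\partial \ell'}\, \mathrm{d}\ell',
\qquad
\pi_\parallel(\ell) = -\eta_0 \int K_\pi(\ell - \ell')\,
    \frac{\partial u_\parallel}{\partial \ell'}\, \mathrm{d}\ell'
```

    In the highly collisional limit such kernels become sharply peaked, recovering local gradient-driven (Braginskii-type) expressions, while in the collisionless limit they acquire the long-range behavior fitted from the moment solution.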

  12. Parallel programming with Ada

    SciTech Connect

    Kok, J.

    1988-01-01

    To the human programmer the ease of coding distributed computing is highly dependent on the suitability of the employed programming language. But with a particular language it is also important whether the possibilities of one or more parallel architectures can efficiently be addressed by available language constructs. In this paper the possibilities are discussed of the high-level language Ada and in particular of its tasking concept as a descriptional tool for the design and implementation of numerical and other algorithms that allow execution of parts in parallel. Language tools are explained and their use for common applications is shown. Conclusions are drawn about the usefulness of several Ada concepts.

  13. The NAS Parallel Benchmarks

    SciTech Connect

    Bailey, David H.

    2009-11-15

    The NAS Parallel Benchmarks (NPB) are a suite of parallel computer performance benchmarks. They were originally developed at the NASA Ames Research Center in 1991 to assess high-end parallel supercomputers. Although they are no longer used as widely as they once were for comparing high-end system performance, they continue to be studied and analyzed a great deal in the high-performance computing community. The acronym 'NAS' originally stood for the Numerical Aerodynamic Simulation Program at NASA Ames. The name of this organization was subsequently changed to the Numerical Aerospace Simulation Program, and more recently to the NASA Advanced Supercomputing Center, although the acronym remains 'NAS.' The developers of the original NPB suite were David H. Bailey, Eric Barszcz, John Barton, David Browning, Russell Carter, Leo Dagum, Rod Fatoohi, Samuel Fineberg, Paul Frederickson, Thomas Lasinski, Rob Schreiber, Horst Simon, V. Venkatakrishnan and Sisira Weeratunga. The original NAS Parallel Benchmarks consisted of eight individual benchmark problems, each of which focused on some aspect of scientific computing. The principal focus was computational aerophysics, although most of these benchmarks have much broader relevance, since in a much larger sense they are typical of many real-world scientific computing applications. The NPB suite grew out of the need for a more rational procedure to select new supercomputers for acquisition by NASA. The emergence of commercially available highly parallel computer systems in the late 1980s offered an attractive alternative to the parallel vector supercomputers that had been the mainstay of high-end scientific computing. However, the introduction of highly parallel systems was accompanied by a regrettable level of hype, not only on the part of the commercial vendors but even, in some cases, by scientists using the systems. As a result, it was difficult to discern whether the new systems offered any fundamental performance advantage

  14. Speeding up parallel processing

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.

    1988-01-01

    In 1967 Amdahl expressed doubts about the ultimate utility of multiprocessors. The formulation, now called Amdahl's law, became part of the computing folklore and has inspired much skepticism about the ability of the current generation of massively parallel processors to efficiently deliver all their computing power to programs. The widely publicized recent results of a group at Sandia National Laboratory, which showed speedup on a 1024 node hypercube of over 500 for three fixed size problems and over 1000 for three scalable problems, have convincingly challenged this bit of folklore and have given new impetus to parallel scientific computing.
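
    The two speedup figures quoted above follow from the two competing models. As a hedged illustration (the 0.1% serial fraction is assumed for round numbers, not taken from the Sandia report), Amdahl's fixed-size law and the Gustafson-Barsis scaled-size law can be compared directly:

```python
def amdahl_speedup(serial_frac, p):
    # Fixed-size problem: the serial fraction s bounds speedup by 1/s,
    # no matter how many processors p are applied.
    return 1.0 / (serial_frac + (1.0 - serial_frac) / p)

def gustafson_speedup(serial_frac, p):
    # Scaled problem: grow the parallel work with p, so speedup stays
    # nearly linear in p for small serial fractions.
    return p - serial_frac * (p - 1)
```

    With s = 0.001 on a 1024-node machine the fixed-size model gives a speedup near 500 while the scaled model gives over 1000, mirroring the two sets of Sandia results for fixed-size and scalable problems.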

  15. CRUNCH_PARALLEL

    SciTech Connect

    Shumaker, Dana E.; Steefel, Carl I.

    2016-06-21

    The code CRUNCH_PARALLEL is a parallel version of the CRUNCH code. CRUNCH code version 2.0 was previously released by LLNL (UCRL-CODE-200063). CRUNCH is a general-purpose reactive transport code developed by Carl Steefel and Yabusake (Steefel and Yabusake 1996). The code handles non-isothermal transport and reaction in one, two, and three dimensions. The reaction algorithm is generic in form, handling an arbitrary number of aqueous and surface complexation reactions as well as mineral dissolution/precipitation. A standardized database is used containing thermodynamic and kinetic data. The code includes advective, dispersive, and diffusive transport.
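
    The transport operator can be illustrated in isolation. The sketch below is a generic one-dimensional explicit advection-diffusion update (upwind advection, fixed-value boundaries); it is not CRUNCH's actual numerics, and the reaction terms that CRUNCH couples to transport are omitted:

```python
def step_1d(c, v, D, dx, dt):
    """One explicit finite-difference step of 1-D advection-diffusion:
    dc/dt = -v dc/dx + D d2c/dx2, upwind advection for v > 0,
    with the two end cells held fixed."""
    n = len(c)
    new = list(c)
    for i in range(1, n - 1):
        adv = -v * (c[i] - c[i - 1]) / dx                  # upwind advection
        dif = D * (c[i + 1] - 2 * c[i] + c[i - 1]) / dx**2 # central diffusion
        new[i] = c[i] + dt * (adv + dif)
    return new
```

    A reactive transport code applies such a transport step together with a (often implicit, operator-split or globally coupled) chemical reaction solve in every cell.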

  16. Adaptive parallel logic networks

    NASA Technical Reports Server (NTRS)

    Martinez, Tony R.; Vidal, Jacques J.

    1988-01-01

    Adaptive, self-organizing concurrent systems (ASOCS) that combine self-organization with massive parallelism for such applications as adaptive logic devices, robotics, process control, and system malfunction management, are presently discussed. In ASOCS, an adaptive network composed of many simple computing elements operating in combinational and asynchronous fashion is used and problems are specified by presenting if-then rules to the system in the form of Boolean conjunctions. During data processing, which is a different operational phase from adaptation, the network acts as a parallel hardware circuit.

  17. In-Package Chemistry Abstraction

    SciTech Connect

    E. Thomas

    2004-11-09

    This report was developed in accordance with the requirements in ''Technical Work Plan for: Regulatory Integration Modeling and Analysis of the Waste Form and Waste Package'' (BSC 2004 [DIRS 171583]). The purpose of the in-package chemistry model is to predict the bulk chemistry inside a breached waste package and to provide simplified expressions of that chemistry as a function of time after breach to Total Systems Performance Assessment for the License Application (TSPA-LA). The scope of this report is to describe the development and validation of the in-package chemistry model. The in-package model is a combination of two models: a batch reactor model that uses the EQ3/6 geochemistry-modeling tool, and a surface complexation model that is applied to the results of the batch reactor model. The batch reactor model considers chemical interactions of water with the waste package materials and the waste form for commercial spent nuclear fuel (CSNF) waste packages and codisposed waste packages that contain both high-level waste glass (HLWG) and DOE spent fuel. The surface complexation model includes the impact of fluid-surface interactions (i.e., surface complexation) on the resulting fluid composition. The model examines two types of water influx: (1) the condensation of water vapor that diffuses into the waste package, and (2) seepage water that enters the waste package from the drift as a liquid. (1) Vapor Influx Case: The condensation of vapor onto the waste package internals is simulated as pure H2O entering at a rate determined by the water vapor pressure for representative temperature and relative humidity conditions. (2) Water Influx Case: The water entering a waste package from the drift is simulated as typical groundwater entering at a rate determined by the amount of seepage available to flow through openings in a breached waste package. TSPA-LA uses the vapor influx case for the nominal scenario for simulations where the waste package has been

  18. Optimal segmentation and packaging process

    DOEpatents

    Kostelnik, K.M.; Meservey, R.H.; Landon, M.D.

    1999-08-10

    A process for improving packaging efficiency uses three-dimensional, computer simulated models with various optimization algorithms to determine the optimal segmentation process and packaging configurations based on constraints including container limitations. The present invention is applied to a process for decontaminating, decommissioning (D and D), and remediating a nuclear facility involving the segmentation and packaging of contaminated items in waste containers in order to minimize the number of cuts, maximize packaging density, and reduce worker radiation exposure. A three-dimensional, computer simulated, facility model of the contaminated items is created. The contaminated items are differentiated. The optimal location, orientation and sequence of the segmentation and packaging of the contaminated items is determined using the simulated model, the algorithms, and various constraints including container limitations. The cut locations and orientations are transposed to the simulated model. The contaminated items are actually segmented and packaged. The segmentation and packaging may be simulated beforehand. In addition, the contaminated items may be cataloged and recorded. 3 figs.
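
    The container-limited packing subproblem above is a variant of classical bin packing. A minimal greedy sketch (first-fit decreasing on one-dimensional item sizes; illustrative only, since the patented process uses 3-D simulation-driven optimization) is:

```python
def first_fit_decreasing(items, capacity):
    """Pack item sizes into containers using the first-fit-decreasing heuristic.

    Sort items largest-first, place each into the first container with room,
    and open a new container when none fits. Returns the list of containers
    (each a list of item sizes).
    """
    bins = []
    for size in sorted(items, reverse=True):
        for b in bins:
            if sum(b) + size <= capacity:
                b.append(size)
                break
        else:
            bins.append([size])  # no existing container fits: open a new one
    return bins
```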

  19. Laser Welding in Electronic Packaging

    NASA Technical Reports Server (NTRS)

    2000-01-01

    The laser has proven its worth in numerous high-reliability electronic packaging applications ranging from medical to missile electronics. In particular, the pulsed YAG laser is an extremely flexible and versatile tool capable of hermetically sealing microelectronics packages containing sensitive components without damaging them. This paper presents an overview of details that must be considered for successful use of laser welding when addressing electronic package sealing. These include: metallurgical considerations such as alloy and plating selection, weld joint configuration, design of optics, use of protective gases, and control of thermal distortions. The primary limitations on the use of laser welding for electronic packaging applications are economic ones. The laser itself is a relatively costly device when compared to competing welding equipment. Further, the cost of consumables and repairs can be significant. These facts have relegated laser welding to use only where it presents distinct quality or reliability advantages over other techniques of electronic package sealing. Because of the unique noncontact and low-heat-input characteristics of laser welding, it is an ideal candidate for sealing electronic packages containing MEMS devices (microelectromechanical systems). This paper addresses how the unique advantages of the pulsed YAG laser can be used to simplify MEMS packaging and deliver a product of improved quality.

  1. Naval Waste Package Design Sensitivity

    SciTech Connect

    T. Schmitt

    2006-12-13

    The purpose of this calculation is to determine the sensitivity of the structural response of the Naval waste packages to varying inner cavity dimensions when subjected to a corner drop and tip-over from an elevated surface. This calculation will also determine the sensitivity of the structural response of the Naval waste packages to the upper bound of the naval canister masses. The scope of this document is limited to reporting the calculation results in terms of through-wall stress intensities in the outer corrosion barrier. This calculation is intended for use in support of the preliminary design activities for the license application design of the Naval waste package. It examines the effects of small changes in the dimensions between the naval canister and the inner vessel; in these dimensions, the Naval Long waste package and Naval Short waste package are similar. Therefore, only the Naval Long waste package is used in this calculation, based on the proposed potential designs presented by the drawings and sketches in References 2.1.10 to 2.1.17 and 2.1.20. All conclusions are valid for both the Naval Long and Naval Short waste packages.

  2. Vacuum Packaging for Microelectromechanical Systems (MEMS)

    DTIC Science & Technology

    2002-10-01

    The Vacuum Packaging for MEMS Program focused on the development of an integrated set of packaging technologies which in totality provide a low-cost, high-volume, product-neutral vacuum packaging capability addressing all MEMS vacuum packaging requirements. The program balanced the need for near-term component and wafer-level vacuum packaging with the development of advanced high-density wafer-level packaging solutions. Three vacuum

  3. Safety evaluation for packaging (onsite) concrete-lined waste packaging

    SciTech Connect

    Romano, T.

    1997-09-25

    The Pacific Northwest National Laboratory developed a package to ship Type A, non-transuranic, fissile excepted quantities of liquid or solid radioactive material and radioactive mixed waste to the Central Waste Complex for storage on the Hanford Site.

  4. Recent progress and advances in iterative software (including parallel aspects)

    SciTech Connect

    Carey, G.; Young, D.M.; Kincaid, D.

    1994-12-31

    The purpose of the workshop is to provide a forum for discussion of the current state of iterative software packages. Of particular interest is software for large scale engineering and scientific applications, especially for distributed parallel systems. However, the authors will also review the state of software development for conventional architectures. This workshop will complement the other proposed workshops on iterative BLAS kernels and applications. The format for the workshop is as follows: To provide some structure, there will be brief presentations, each of less than five minutes duration and dealing with specific facets of the subject. These will be designed to focus the discussion and to stimulate an exchange with the participants. Issues to be covered include: The evolution of iterative packages, current state of the art, the parallel computing challenge, applications viewpoint, standards, and future directions and open problems.

  5. Packaging of solid state devices

    DOEpatents

    Glidden, Steven C.; Sanders, Howard D.

    2006-01-03

    A package for one or more solid state devices in a single module that allows for operation at high voltage, high current, or both high voltage and high current. Low thermal resistance between the solid state devices and an exterior of the package and matched coefficient of thermal expansion between the solid state devices and the materials used in packaging enables high power operation. The solid state devices are soldered between two layers of ceramic with metal traces that interconnect the devices and external contacts. This approach provides a simple method for assembling and encapsulating high power solid state devices.

  6. Microelectronics packaging research directions for aerospace applications

    NASA Technical Reports Server (NTRS)

    Galbraith, L.

    2003-01-01

    The Roadmap begins with an assessment of needs from the microelectronics for aerospace applications viewpoint. Needs Assessment is divided into materials, packaging components, and radiation characterization of packaging.

  7. Xyce(™) Parallel Electronic Simulator

    SciTech Connect

    2013-10-03

    The Xyce Parallel Electronic Simulator simulates electronic circuit behavior in DC, AC, HB, MPDE and transient modes using standard analog (DAE) and/or device (PDE) device models, including several age- and radiation-aware devices. It supports a variety of computing platforms, both serial and parallel, and uses modern solution algorithms such as dynamic parallel load balancing and iterative solvers. Xyce is primarily used to simulate the voltage and current behavior of a circuit network (a network of electronic devices connected via a conductive network). As a tool, it is mainly used for the design and analysis of electronic circuits. Kirchhoff's conservation laws are enforced over the network using modified nodal analysis. This results in a set of differential algebraic equations (DAEs). The resulting nonlinear problem is solved iteratively using a fully coupled Newton method, which in turn results in a linear system that is solved by either a standard sparse-direct solver or iteratively using Trilinos linear solver packages, also developed at Sandia National Laboratories.
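
    A tiny instance of modified nodal analysis can make the formulation concrete. The sketch below is illustrative only (not Xyce code): it assembles the MNA system for a resistive divider driven by an ideal voltage source and solves it with dense Gaussian elimination; for a purely resistive network the DAE collapses to a linear algebraic system.

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting, for small dense systems."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        piv = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[piv] = M[piv], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
    return x

def divider_mna(Vs, R1, R2):
    """MNA for: ideal source Vs at node 2, R1 between nodes 2 and 1,
    R2 from node 1 to ground. Unknowns: v1, v2, and the source branch
    current i (defined as leaving node 2 into the source)."""
    g1, g2 = 1.0 / R1, 1.0 / R2
    A = [[g1 + g2, -g1, 0.0],   # KCL at node 1
         [-g1,      g1, 1.0],   # KCL at node 2 includes the branch current
         [0.0,     1.0, 0.0]]   # branch (constitutive) equation: v2 = Vs
    return solve(A, [0.0, 0.0, Vs])
```

    The extra branch-current unknown for the voltage source is the "modified" part of MNA; nonlinear devices would make the KCL rows nonlinear, which is where the Newton iteration enters.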

  8. Massively parallel processor computer

    NASA Technical Reports Server (NTRS)

    Fung, L. W. (Inventor)

    1983-01-01

    An apparatus for processing multidimensional data with strong spatial characteristics, such as raw image data, characterized by a large number of parallel data streams in an ordered array is described. It comprises a large number (e.g., 16,384 in a 128 x 128 array) of parallel processing elements operating simultaneously and independently on single bit slices of a corresponding array of incoming data streams under control of a single set of instructions. Each of the processing elements comprises a bidirectional data bus in communication with a register for storing single bit slices together with a random access memory unit and associated circuitry, including a binary counter/shift register device, for performing logical and arithmetical computations on the bit slices, and an I/O unit for interfacing the bidirectional data bus with the data stream source. The massively parallel processor architecture enables very high speed processing of large amounts of ordered parallel data, including spatial translation by shifting or sliding of bits vertically or horizontally to neighboring processing elements.

  9. Parallel Total Energy

    SciTech Connect

    Wang, Lin-Wang

    2004-10-21

    This is a total energy electronic structure code using the Local Density Approximation (LDA) of density functional theory. It uses plane waves as the wave-function basis set. It can use both norm-conserving pseudopotentials and ultrasoft pseudopotentials. It can relax the atomic positions according to the total energy. It is a parallel code using MPI.

  10. Parallel hierarchical radiosity rendering

    SciTech Connect

    Carter, Michael

    1993-07-01

    In this dissertation, the step-by-step development of a scalable parallel hierarchical radiosity renderer is documented. First, a new look is taken at the traditional radiosity equation, and a new form is presented in which the matrix of linear system coefficients is transformed into a symmetric matrix, thereby simplifying the problem and enabling a new solution technique to be applied. Next, the state-of-the-art hierarchical radiosity methods are examined for their suitability to parallel implementation, and scalability. Significant enhancements are also discovered which both improve their theoretical foundations and improve the images they generate. The resultant hierarchical radiosity algorithm is then examined for sources of parallelism, and for an architectural mapping. Several architectural mappings are discussed. A few key algorithmic changes are suggested during the process of making the algorithm parallel. Next, the performance, efficiency, and scalability of the algorithm are analyzed. The dissertation closes with a discussion of several ideas which have the potential to further enhance the hierarchical radiosity method, or provide an entirely new forum for the application of hierarchical methods.
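
    The symmetrization mentioned above can be sketched. One standard route (which may or may not be the dissertation's exact transformation) starts from the classical radiosity system and the form-factor reciprocity relation, then scales each equation by area over reflectance:

```latex
B_i = E_i + \rho_i \sum_j F_{ij} B_j,
\qquad A_i F_{ij} = A_j F_{ji} \ \text{(reciprocity)},
\qquad\Longrightarrow\qquad
\frac{A_i}{\rho_i} B_i - \sum_j A_i F_{ij} B_j = \frac{A_i}{\rho_i} E_i
```

    The resulting coefficient matrix \(M_{ij} = (A_i/\rho_i)\,\delta_{ij} - A_i F_{ij}\) is symmetric, since its off-diagonal part inherits the reciprocity relation, which opens the system to solvers that require (or exploit) symmetry.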

  11. Parallel programming with PCN

    SciTech Connect

    Foster, I.; Tuecke, S.

    1993-01-01

    PCN is a system for developing and executing parallel programs. It comprises a high-level programming language, tools for developing and debugging programs in this language, and interfaces to Fortran and C that allow the reuse of existing code in multilingual parallel programs. Programs developed using PCN are portable across many different workstations, networks, and parallel computers. This document provides all the information required to develop parallel programs with the PCN programming system. It includes both tutorial and reference material. It also presents the basic concepts that underlie PCN, particularly where these are likely to be unfamiliar to the reader, and provides pointers to other documentation on the PCN language, programming techniques, and tools. PCN is in the public domain. The latest version of both the software and this manual can be obtained by anonymous ftp from Argonne National Laboratory in the directory pub/pcn at info.mcs.anl.gov (cf. Appendix A). This version of this document describes PCN version 2.0, a major revision of the PCN programming system. It supersedes earlier versions of this report.

  12. [The parallel saw blade].

    PubMed

    Mühldorfer-Fodor, M; Hohendorff, B; Prommersberger, K-J; van Schoonhoven, J

    2011-04-01

    For a shortening osteotomy, two exactly parallel osteotomies are needed to ensure congruent adaptation of the shortened bone after segment resection. This is required for regular bone healing. In addition, it is difficult to shorten a bone to a precise distance using an oblique segment resection. A mobile spacer between two saw blades keeps the distance of the blades exactly parallel during an osteotomy cut. The parallel saw blades from Synthes® are designed for 2, 2.5, 3, 4, and 5 mm shortening distances. Two types of blades are available (for transverse or oblique osteotomies) to assure precise shortening. Preoperatively, the desired type of osteotomy (transverse or oblique) and the shortening distance have to be determined. Then, the appropriate parallel saw blade is chosen, which is compatible with the Synthes® Colibri with an oscillating saw attachment. During the osteotomy cut, the spacer should be kept as close to the bone as possible. Excessive force that may deform the blades should be avoided. Before manipulating the bone ends, it is important to confirm that the bone has been completely cut by both saw blades, to prevent fracture of the cortical bone with bony spurs. The shortening osteotomy is usually fixed by plate osteosynthesis. For compression of the bone ends, the screws should be placed eccentrically in the plate holes. For an oblique osteotomy, an additional lag screw should be used.

  13. Parallel Coordinate Axes.

    ERIC Educational Resources Information Center

    Friedlander, Alex; And Others

    1982-01-01

    Several methods of numerical mappings other than the usual cartesian coordinate system are considered. Some examples using parallel axes representation, which are seen to lead to aesthetically pleasing or interesting configurations, are presented. Exercises with alternative representations can stimulate pupil imagination and exploration in…
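
    The point-line duality underlying parallel-axes representations can be explored numerically: a Cartesian point (a, b) becomes the line through (0, a) on the first axis and (1, b) on the second, and collinear Cartesian points map to lines through a single common point. A small sketch (the function names are my own):

```python
def pc_image(point):
    """Parallel-axes image of Cartesian (a, b): the line y = a + (b - a) x
    through (0, a) on the first axis and (1, b) on the second."""
    a, b = point
    return (a, b - a)                     # (intercept, slope)

def meet(l1, l2):
    """Intersection of two non-parallel lines given as (intercept, slope)."""
    (a1, s1), (a2, s2) = l1, l2
    x = (a2 - a1) / (s1 - s2)
    return (x, a1 + s1 * x)

# Three collinear points on y = 2x + 1.
points = [(0, 1), (1, 3), (2, 5)]
lines = [pc_image(p) for p in points]
p12 = meet(lines[0], lines[1])
p13 = meet(lines[0], lines[2])
# For y = m x + c the common point is (1/(1 - m), c/(1 - m)); here (-1, -1).
```

    Exercises like this let pupils verify the duality themselves: every pair of image lines from the same Cartesian line meets at the same point.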

  14. Parallel Dislocation Simulator

    SciTech Connect

    2006-10-30

    ParaDiS is software capable of simulating the motion, evolution, and interaction of dislocation networks in single crystals using massively parallel computer architectures. The software is capable of outputting the stress-strain response of a single crystal whose plastic deformation is controlled by the dislocation processes.

  15. Parallel fast gauss transform

    SciTech Connect

    Sampath, Rahul S; Sundar, Hari; Veerapaneni, Shravan

    2010-01-01

    We present fast adaptive parallel algorithms to compute the sum of N Gaussians at N points. Direct sequential computation of this sum would take O(N^2) time. The parallel time complexity estimates for our algorithms are O(N/n_p) for uniform point distributions and O((N/n_p) log(N/n_p) + n_p log n_p) for non-uniform distributions using n_p CPUs. We incorporate a plane-wave representation of the Gaussian kernel which permits 'diagonal translation'. We use parallel octrees and a new scheme for translating the plane-waves to efficiently handle non-uniform distributions. Computing the transform to six-digit accuracy at 120 billion points took approximately 140 seconds using 4096 cores on the Jaguar supercomputer. Our implementation is 'kernel-independent' and can handle other 'Gaussian-type' kernels even when an explicit analytic expression for the kernel is not known. These algorithms form a new class of core computational machinery for solving parabolic PDEs on massively parallel architectures.
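
    For contrast with the fast algorithm, the direct O(N^2) evaluation that the paper's method avoids can be written in a few lines of numpy (a baseline sketch, not the authors' code):

```python
import numpy as np

def gauss_sum_direct(sources, targets, weights, delta):
    """Direct evaluation of G(t_j) = sum_i w_i exp(-|t_j - s_i|^2 / delta).
    Costs O(N * M) work for N sources and M targets."""
    d2 = ((targets[:, None, :] - sources[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / delta) @ weights

rng = np.random.default_rng(0)
src = rng.random((200, 2))                  # random 2-D source points
w = rng.random(200)                         # positive weights
g = gauss_sum_direct(src, src, w, delta=0.1)  # self-evaluation at the sources
```

    The pairwise distance matrix makes the quadratic cost explicit; the fast transform replaces it with tree-based translations of plane-wave expansions.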

  16. Progress in parallelizing XOOPIC

    NASA Astrophysics Data System (ADS)

    Mardahl, Peter; Verboncoeur, J. P.

    1997-11-01

    XOOPIC (Object-Oriented Particle-in-Cell code for X11-based Unix workstations) is presently a serial 2-D 3v particle-in-cell plasma simulation (J.P. Verboncoeur, A.B. Langdon, and N.T. Gladd, ``An object-oriented electromagnetic PIC code,'' Computer Physics Communications 87 (1995) 199-211). The present effort focuses on using parallel and distributed processing to optimize the simulation for large problems. The benefits include increased capacity for memory-intensive problems and improved performance for processor-intensive problems. The MPI library is used to enable the parallel version to be easily ported to massively parallel, SMP, and distributed computers. The philosophy employed here is to spatially decompose the system into computational regions separated by 'virtual boundaries': objects which contain the local data and algorithms needed to perform the local field solve and the particle communication between regions. This implementation confines the changes required by parallelization to a small portion of the program. Specific implementation details, such as hiding communication latency behind local computation, will also be discussed.
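
    The 'virtual boundary' idea can be emulated serially: each spatial region keeps one ghost cell per edge, and after each local update the regions exchange edge values, exactly the data an MPI rank would send to its neighbor. A schematic 1-D relaxation sketch (the real XOOPIC decomposition is 2-D, electromagnetic, and MPI-based; all names here are illustrative):

```python
import numpy as np

def exchange_ghosts(regions):
    """Copy each region's edge interior cell into its neighbor's ghost cell."""
    for left, right in zip(regions[:-1], regions[1:]):
        right[0] = left[-2]       # left region's last interior value
        left[-1] = right[1]       # right region's first interior value

def local_update(region):
    """One Jacobi-style relaxation sweep over the interior cells."""
    region[1:-1] = 0.5 * (region[:-2] + region[2:])

# Global 1-D field with fixed end values, split into two overlapping regions
# of 9 cells each; the overlap cells serve as ghosts.
g = np.linspace(1.0, 0.0, 16)
g[1:-1] = 0.0                     # arbitrary interior initial condition
regions = [g[0:9].copy(), g[7:16].copy()]
for _ in range(5):
    for r in regions:
        local_update(r)           # independent (parallelizable) local work
    exchange_ghosts(regions)      # the virtual-boundary communication step
```

    Because ghosts are refreshed after every sweep, the decomposed iteration reproduces the single-domain iteration exactly.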

  17. Parallel hierarchical global illumination

    SciTech Connect

    Snell, Quinn O.

    1997-10-08

    Solving the global illumination problem is equivalent to determining the intensity of every wavelength of light in all directions at every point in a given scene. The complexity of the problem has led researchers to use approximation methods for solving the problem on serial computers. Rather than using an approximation method, such as backward ray tracing or radiosity, the authors have chosen to solve the Rendering Equation by direct simulation of light transport from the light sources. This paper presents an algorithm that solves the Rendering Equation to any desired accuracy, and can be run in parallel on distributed memory or shared memory computer systems with excellent scaling properties. It appears superior in both speed and physical correctness to recently published methods involving bidirectional ray tracing or hybrid treatments of diffuse and specular surfaces. Like progressive radiosity methods, it dynamically refines the geometry decomposition where required, but does so without the excessive storage requirements for ray histories. The algorithm, called Photon, produces a scene which converges to the global illumination solution. This amounts to a huge task for a 1997-vintage serial computer, but using the power of a parallel supercomputer significantly reduces the time required to generate a solution. Currently, Photon can be run on most parallel environments from a shared memory multiprocessor to a parallel supercomputer, as well as on clusters of heterogeneous workstations.
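
    The heart of direct light-transport simulation is photon counting: emit many photons from a source in random directions and tally the fraction reaching each surface, which converges to the exact flux as the photon count grows. A toy estimate for the fraction intercepted by a cone of directions (a minimal illustration of the principle, not the Photon code itself):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000
# Isotropic point source: uniform random directions on the unit sphere,
# obtained by normalizing 3-D Gaussian samples.
d = rng.normal(size=(n, 3))
d /= np.linalg.norm(d, axis=1, keepdims=True)
# Tally photons emitted within 60 degrees of +z; the expected fraction is
# the fractional solid angle (1 - cos 60deg) / 2 = 0.25.
hit_fraction = (d[:, 2] > 0.5).mean()
```

    Each photon is independent, which is what makes this style of simulation embarrassingly parallel across processors.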

  18. High performance parallel architectures

    SciTech Connect

    Anderson, R.E.

    1989-09-01

    In this paper the author describes current high performance parallel computer architectures. A taxonomy is presented to show computer architecture from the user programmer's point-of-view. The effects of the taxonomy upon the programming model are described. Some current architectures are described with respect to the taxonomy. Finally, some predictions about future systems are presented. 5 refs., 1 fig.

  19. Parallel Multigrid Equation Solver

    SciTech Connect

    Adams, Mark

    2001-09-07

    Prometheus is a fully parallel multigrid equation solver for matrices that arise in unstructured grid finite element applications. It includes a geometric and an algebraic multigrid method and has solved problems of up to 76 million degrees of freedom, including problems in linear elasticity on the ASCI Blue Pacific and ASCI Red machines.

  20. Electrical Performance of a High Temperature 32-I/O HTCC Alumina Package

    NASA Technical Reports Server (NTRS)

    Chen, Liang-Yu; Neudeck, Philip G.; Spry, David J.; Beheim, Glenn M.; Hunter, Gary W.

    2016-01-01

    A high temperature co-fired ceramic (HTCC) alumina material was previously electrically tested at temperatures up to 550 C, and demonstrated improved dielectric performance at high temperatures compared with the 96% alumina substrate that we used before, suggesting its potential use for high temperature packaging applications. This paper introduces a prototype 32-I/O (input/output) HTCC alumina package with platinum conductor for 500 C low-power silicon carbide (SiC) integrated circuits. The design and electrical performance of this package including parasitic capacitance and parallel conductance of neighboring I/Os from 100 Hz to 1 MHz in a temperature range from room temperature to 550 C are discussed in detail. The parasitic capacitance and parallel conductance of this package in the entire frequency and temperature ranges measured does not exceed 1.5 pF and 0.05 microsiemens, respectively. SiC integrated circuits using this package and compatible printed circuit board have been successfully tested at 500 C for over 3736 hours continuously, and at 700 C for over 140 hours. Some test examples of SiC integrated circuits with this packaging system are presented. This package is the key to prolonged T greater than or equal to 500 C operational testing of the new generation of SiC high temperature integrated circuits and other devices currently under development at NASA Glenn Research Center.

  1. A portable implementation of ARPACK for distributed memory parallel architectures

    SciTech Connect

    Maschhoff, K.J.; Sorensen, D.C.

    1996-12-31

    ARPACK is a package of Fortran 77 subroutines which implement the Implicitly Restarted Arnoldi Method used for solving large sparse eigenvalue problems. A parallel implementation of ARPACK is presented which is portable across a wide range of distributed memory platforms and requires minimal changes to the serial code. The communication layers used for message passing are the Basic Linear Algebra Communication Subprograms (BLACS) developed for the ScaLAPACK project and the Message Passing Interface (MPI).
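
    The same ARPACK routines are exposed through SciPy's sparse eigensolvers, so the Implicitly Restarted Arnoldi Method (Lanczos in the symmetric case) can be exercised from Python; a small example on a sparse 1-D Laplacian (this uses SciPy's serial wrapper, not the parallel BLACS/MPI build described above):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh   # symmetric ARPACK driver

n = 100
A = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format='csr')
# The exact spectrum of this stencil is 2 - 2*cos(k*pi/(n+1)), k = 1..n;
# ask ARPACK for the four eigenvalues of largest magnitude.
vals = eigsh(A, k=4, which='LM', return_eigenvectors=False)
```

    Only matrix-vector products with A are needed, which is exactly the property that makes the method attractive for large sparse and matrix-free problems.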

  2. Loss of the DNA Methyltransferase MET1 Induces H3K9 Hypermethylation at PcG Target Genes and Redistribution of H3K27 Trimethylation to Transposons in Arabidopsis thaliana

    PubMed Central

    Bernatavichute, Yana; Johnson, Elizabeth; Klein, Gregor; Schubert, Daniel; Jacobsen, Steven E.

    2012-01-01

    Dimethylation of histone H3 lysine 9 (H3K9m2) and trimethylation of histone H3 lysine 27 (H3K27m3) are two hallmarks of transcriptional repression in many organisms. In Arabidopsis thaliana, H3K27m3 is targeted by Polycomb Group (PcG) proteins and is associated with silent protein-coding genes, while H3K9m2 is correlated with DNA methylation and is associated with transposons and repetitive sequences. Recently, ectopic genic DNA methylation in the CHG context (where H is any base except G) has been observed in globally DNA hypomethylated mutants such as met1, but neither the nature of the hypermethylated loci nor the biological significance of this epigenetic phenomenon has been investigated. Here, we generated high-resolution, genome-wide maps of both H3K9m2 and H3K27m3 in wild-type and met1 plants, which we integrated with transcriptional data, to explore the relationships between these two marks. We found that ectopic H3K9m2 observed in met1 can be due to defects in IBM1-mediated H3K9m2 demethylation at some sites, but most importantly targets H3K27m3-marked genes, suggesting an interplay between these two silencing marks. Furthermore, H3K9m2/DNA-hypermethylation at these PcG targets in met1 is coupled with a decrease in H3K27m3 marks, whereas CG/H3K9m2 hypomethylated transposons become ectopically H3K27m3 hypermethylated. Our results bear interesting similarities with cancer cells, which show global losses of DNA methylation but ectopic hypermethylation of genes previously marked by H3K27m3. PMID:23209430

  3. Spack: the Supercomputing Package Manager

    SciTech Connect

    Gamblin, T.

    2013-11-09

    The HPC software ecosystem is growing larger and more complex, but software distribution mechanisms have not kept up with this trend. Tools, libraries, and applications need to run on multiple platforms and build with multiple compilers. Increasingly, packages leverage common software components, and building any one component requires building all of its dependencies. In HPC environments, ABI-incompatible interfaces (like MPI), binary-incompatible compilers, and cross-compiled environments converge to make the build process a combinatoric nightmare. This obstacle deters many users from adopting useful tools, and others waste countless hours building and rebuilding tools. Many package managers exist to solve these problems for typical desktop environments, but none suits the unique needs of supercomputing facilities or users. To address these problems, we have developed Spack, a package manager that eases the task of managing software for end-users, across multiple platforms, package versions, compilers, and ABI incompatibilities.

  4. High Frequency Electronic Packaging Technology

    NASA Technical Reports Server (NTRS)

    Herman, M.; Lowry, L.; Lee, K.; Kolawa, E.; Tulintseff, A.; Shalkhauser, K.; Whitaker, J.; Piket-May, M.

    1994-01-01

    Commercial and government communication, radar, and information systems face the challenge of cost and mass reduction via the application of advanced packaging technology. A majority of both government and industry support has been focused on low frequency digital electronics.

  5. New Packaging for Amplifier Slabs

    SciTech Connect

    Riley, M.; Thorsness, C.; Suratwala, T.; Steele, R.; Rogowski, G.

    2015-03-18

    The following memo provides a discussion and detailed procedure for a new finished amplifier slab shipping and storage container. The new package is designed to maintain an environment of <5% RH to minimize weathering.

  7. Packaged bulk micromachined triglyceride biosensor

    NASA Astrophysics Data System (ADS)

    Mohanasundaram, S. V.; Mercy, S.; Harikrishna, P. V.; Rani, Kailash; Bhattacharya, Enakshi; Chadha, Anju

    2010-02-01

    Estimation of triglyceride concentration is important for the health and food industries. Use of solid-state biosensors like Electrolyte Insulator Semiconductor Capacitors (EISCAPs) ensures ease of operation with good accuracy and sensitivity compared to conventional sensors. In this paper we report on the packaging of miniaturized EISCAP sensors on silicon. The packaging involves glass-to-silicon bonding using adhesive. Because this bonding is done at room temperature, it does not damage the thin dielectric layers on the silicon wafer, unlike the high-temperature anodic bonding technique, and it can be used for sensors with immobilized enzyme without denaturing the enzyme. The packaging also involves a Teflon capping arrangement which helps in easy handling of the bio-analyte solutions. The capping solves two problems: first, it ensures that enzyme immobilization happens only on one pit; second, it allows easy transport of the bio-analyte into the sensor pit for measurements.

  8. Handling difficult materials: Aseptic packaging

    SciTech Connect

    Lieb, K.

    1994-03-01

    Since aseptic packages, or drink boxes, were introduced in the US in the early 1980s, they have been praised for their convenience and berated for their lack of recyclability. As a result, aseptic packaging collection has been linked with that of milk cartons to increase the volume. The intervening years since the introduction of aseptic packaging have seen the drink box industry aggressively trying to create a recycling market for the boxes. Communities and schools have initiated programs, and recycling firms have allocated resources to see whether recycling aseptic packaging can work. Drink boxes are now recycled in 2.3 million homes in 15 states, and in 1,655 schools in 17 states. They are typically collected in school and curbside programs with other polyethylene coated (laminated) paperboard products such as milk cartons, and then baled and shipped to five major paper companies for recycling at eight facilities.

  9. Packaging Review Guide for Reviewing Safety Analysis Reports for Packagings

    SciTech Connect

    DiSabatino, A; Biswas, D; DeMicco, M; Fisher, L E; Hafner, R; Haslam, J; Mok, G; Patel, C; Russell, E

    2007-04-12

    This Packaging Review Guide (PRG) provides guidance for Department of Energy (DOE) review and approval of packagings to transport fissile and Type B quantities of radioactive material. It fulfills, in part, the requirements of DOE Order 460.1B for the Headquarters Certifying Official to establish standards and to provide guidance for the preparation of Safety Analysis Reports for Packagings (SARPs). This PRG is intended for use by the Headquarters Certifying Official and his or her review staff, DOE Secretarial offices, operations/field offices, and applicants for DOE packaging approval. It is generally organized at the section level in a format similar to that recommended in Regulatory Guide 7.9 (RG 7.9); one notable exception is the addition of Section 9 (Quality Assurance), which is not included as a separate chapter in RG 7.9. Within each section, this PRG addresses the technical and regulatory bases for the review, the manner in which the review is accomplished, and findings that are generally applicable for a package that meets the approval standards. The primary objectives of this PRG are to: (1) summarize the regulatory requirements for package approval; (2) describe the technical review procedures by which DOE determines that these requirements have been satisfied; (3) establish and maintain the quality and uniformity of reviews; and (4) define the base from which to evaluate proposed changes in scope.

  10. Watermarking spot colors in packaging

    NASA Astrophysics Data System (ADS)

    Reed, Alastair; Filler, Tomáš; Falkenstern, Kristyn; Bai, Yang

    2015-03-01

    In January 2014, Digimarc announced Digimarc® Barcode for the packaging industry to improve the check-out efficiency and customer experience for retailers. Digimarc Barcode is a machine readable code that carries the same information as a traditional Universal Product Code (UPC) and is introduced by adding a robust digital watermark to the package design. It is imperceptible to the human eye but can be read by a modern barcode scanner at the Point of Sale (POS) station. Compared to a traditional linear barcode, Digimarc Barcode covers the whole package with minimal impact on the graphic design. This significantly improves the Items per Minute (IPM) metric, which retailers use to track the checkout efficiency since it closely relates to their profitability. Increasing IPM by a few percent could lead to potential savings of millions of dollars for retailers, giving them a strong incentive to add the Digimarc Barcode to their packages. Testing performed by Digimarc showed increases in IPM of at least 33% using the Digimarc Barcode, compared to using a traditional barcode. A method of watermarking print ready image data used in the commercial packaging industry is described. A significant proportion of packages are printed using spot colors; therefore, spot colors need to be supported by an embedder for Digimarc Barcode. Digimarc Barcode supports the PANTONE spot color system, which is commonly used in the packaging industry. The Digimarc Barcode embedder allows a user to insert the UPC code in an image while minimizing perceptibility to the Human Visual System (HVS). The Digimarc Barcode is inserted in the printing ink domain, using an Adobe Photoshop plug-in as the last step before printing. Since Photoshop is an industry standard widely used by pre-press shops in the packaging industry, a Digimarc Barcode can be easily inserted and proofed.

  11. Polyhydroxyalkanoates (PHA) Bioplastic Packaging Materials

    DTIC Science & Technology

    2010-05-01

    Final report for SERDP Project WP-1478, "Polyhydroxyalkanoates (PHA) Bioplastic Packaging Materials," May 2010, by Dr. Chris Schwier, Metabolix. Packaging polymers were produced using blends of branched, long chain-length PHA polymers with linear PHA polymers.

  12. TRU waste transportation package development

    SciTech Connect

    Eakes, R. G.; Lamoreaux, G. H.; Romesberg, L. E.; Sutherland, S. H.; Duffey, T. A.

    1980-01-01

    Inventories of the transuranic wastes buried or stored at various US DOE sites are tabulated. The leading conceptual design of Type-B packaging for contact-handled transuranic waste is the Transuranic Package Transporter (TRUPACT), a large metal container comprising inner and outer tubular steel frameworks which are separated by rigid polyurethane foam and sheathed with steel plate. Testing of TRUPACT is reported. The schedule for its development is given. 6 figures. (DLC)

  13. Xyce™ Parallel Electronic Simulator Users' Guide, Version 6.5.

    SciTech Connect

    Keiter, Eric R.; Aadithya, Karthik V.; Mei, Ting; Russo, Thomas V.; Schiek, Richard L.; Sholander, Peter E.; Thornquist, Heidi K.; Verley, Jason C.

    2016-06-01

    This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state-of-the-art in the following areas: capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors), including support for most popular parallel and serial computers; a differential-algebraic-equation (DAE) formulation, which better isolates the device model package from solver algorithms and allows one to develop new types of analysis without requiring the implementation of analysis-specific device models; device models that are specifically tailored to meet Sandia's needs, including some radiation-aware devices (for Sandia users only); and object-oriented code design and implementation using modern coding practices. Xyce is a parallel code in the most general sense of the phrase -- a message passing parallel implementation -- which allows it to run efficiently on a wide range of computing platforms. These include serial, shared-memory and distributed-memory parallel platforms. Attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows. The information herein is subject to change without notice. Copyright © 2002-2016 Sandia Corporation. All rights reserved.

  14. PVFS : a parallel file system for linux clusters

    SciTech Connect

    Carns, P. H.; Ligon, W. B., III; Ross, R. B.; Thakur, R.

    2000-04-27

    As Linux clusters have matured as platforms for low-cost, high-performance parallel computing, software packages to provide many key services have emerged, especially in areas such as message passing and networking. One area devoid of support, however, has been parallel file systems, which are critical for high-performance I/O on such clusters. We have developed a parallel file system for Linux clusters, called the Parallel Virtual File System (PVFS). PVFS is intended both as a high-performance parallel file system that anyone can download and use and as a tool for pursuing further research in parallel I/O and parallel file systems for Linux clusters. In this paper, we describe the design and implementation of PVFS and present performance results on the Chiba City cluster at Argonne. We provide performance results for a workload of concurrent reads and writes for various numbers of compute nodes, I/O nodes, and I/O request sizes. We also present performance results for MPI-IO on PVFS, both for a concurrent read/write workload and for the BTIO benchmark. We compare the I/O performance when using a Myrinet network versus a fast-ethernet network for I/O-related communication in PVFS. We obtained read and write bandwidths as high as 700 Mbytes/sec with Myrinet and 225 Mbytes/sec with fast ethernet.

  15. Packaging food for radiation processing

    NASA Astrophysics Data System (ADS)

    Komolprasert, Vanee

    2016-12-01

    Irradiation can play an important role in reducing pathogens that cause food borne illness. Food processors and food safety experts prefer that food be irradiated after packaging to prevent post-irradiation contamination. Food irradiation has been studied for the last century. However, the implementation of irradiation on prepackaged food still faces challenges on how to assess the suitability and safety of these packaging materials used during irradiation. Irradiation is known to induce chemical changes to the food packaging materials resulting in the formation of breakdown products, so called radiolysis products (RP), which may migrate into foods and affect the safety of the irradiated foods. Therefore, the safety of the food packaging material (both polymers and adjuvants) must be determined to ensure safety of irradiated packaged food. Evaluating the safety of food packaging materials presents technical challenges because of the range of possible chemicals generated by ionizing radiation. These challenges and the U.S. regulations on food irradiation are discussed in this article.

  16. Rapid Active Sampling Package

    NASA Technical Reports Server (NTRS)

    Peters, Gregory

    2010-01-01

    A field-deployable, battery-powered Rapid Active Sampling Package (RASP), originally designed for sampling strong materials during lunar and planetary missions, shows strong utility for terrestrial geological use. The technology is proving to be simple and effective for sampling and processing hard materials, and the RASP is very useful as a powered hand tool for geologists and the mining industry to quickly sample and process rocks in the field on Earth. The RASP allows geologists to surgically acquire samples of rock for later laboratory analysis. This tool, roughly the size of a wrench, allows the user to cut away swaths of weathering rinds, revealing pristine rock surfaces for observation and subsequent sampling with the same tool. RASPing deeper (3.5 cm) exposes single rock strata in situ. Where a geologist's hammer can only expose unweathered layers of rock, the RASP can do the same, and it has the added ability to capture and process samples into powder with particle sizes less than 150 microns, making them easier to analyze by XRD/XRF (x-ray diffraction/x-ray fluorescence). The tool uses a rotating rasp bit (or two counter-rotating bits) that resides inside or above the catch container. The container has an open slot to allow the bit to extend outside the container and to allow cuttings to enter and be caught. When the slot and rasp bit are in contact with a substrate, the bit is plunged into it in a matter of seconds to reach pristine rock. A user in the field may sample a rock multiple times at multiple depths in minutes, instead of having to cut out huge, heavy rock samples for transport back to a lab for analysis. Because of the speed and accuracy of the RASP, hundreds of samples can be taken in one day. RASP-acquired samples are small and easily carried. A user can characterize more area in less time than by using conventional methods. The field-deployable RASP used a Ni

  17. Experiences with different parallel programming paradigms for Monte Carlo particle transport leads to a portable toolkit for parallel Monte Carlo

    SciTech Connect

    Martin, W.R.; Majumdar, A.; Rathkopf, J.A.; Litvin, M.

    1993-04-01

    Monte Carlo particle transport is easy to implement on massively parallel computers relative to other methods of transport simulation. This paper describes experiences of implementing a realistic demonstration Monte Carlo code on a variety of parallel architectures. Our ``pool of tasks`` technique, which allows reproducibility from run to run regardless of the number of processors, is discussed. We present detailed timing studies of simulations performed on the 128 processor BBN-ACI TC2000 and preliminary timing results for the 32 processor Kendall Square Research KSR-1. Given sufficient workload to distribute across many computational nodes, the BBN achieves nearly linear speedup for a large number of nodes. The KSR, with which we have had less experience, performs poorly with more than ten processors. A simple model incorporating known causes of overhead accurately predicts observed behavior. A general-purpose communication and control package to facilitate the implementation of existing Monte Carlo packages is described together with timings on the BBN. This package adds insignificantly to the computational costs of parallel simulations.

  19. Parallel multilevel preconditioners

    SciTech Connect

    Bramble, J.H.; Pasciak, J.E.; Xu, Jinchao.

    1989-01-01

    In this paper, we shall report on some techniques for the development of preconditioners for the discrete systems which arise in the approximation of solutions to elliptic boundary value problems. Here we shall only state the resulting theorems. It has been demonstrated that preconditioned iteration techniques often lead to the most computationally effective algorithms for the solution of the large algebraic systems corresponding to boundary value problems in two and three dimensional Euclidean space. The use of preconditioned iteration will become even more important on computers with parallel architecture. This paper discusses an approach for developing completely parallel multilevel preconditioners. In order to illustrate the resulting algorithms, we shall describe the simplest application of the technique to a model elliptic problem.
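
    The preconditioned conjugate gradient iteration that such multilevel preconditioners plug into is compact enough to sketch in full; here a simple Jacobi (diagonal) preconditioner stands in for the multilevel one (a generic PCG sketch, not the paper's algorithm):

```python
import numpy as np

def pcg(A, b, apply_Minv, tol=1e-10, maxiter=500):
    """Preconditioned conjugate gradients for SPD A; apply_Minv(r) ~ M^{-1} r."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = apply_Minv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        z = apply_Minv(r)                 # the only place M^{-1} is applied
        rz, rz_old = r @ z, rz
        p = z + (rz / rz_old) * p
    return x

# SPD test matrix: 1-D Laplacian; Jacobi preconditioner is its diagonal.
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = pcg(A, b, lambda r: r / np.diag(A))
```

    Replacing the Jacobi solve with a multilevel cycle changes only the `apply_Minv` callback, which is exactly why multilevel preconditioners compose so cleanly with CG.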

  20. Homology, convergence and parallelism.

    PubMed

    Ghiselin, Michael T

    2016-01-05

    Homology is a relation of correspondence between parts of parts of larger wholes. It is used when tracking objects of interest through space and time and in the context of explanatory historical narratives. Homologues can be traced through a genealogical nexus back to a common ancestral precursor. Homology being a transitive relation, homologues remain homologous however much they may come to differ. Analogy is a relationship of correspondence between parts of members of classes having no relationship of common ancestry. Although homology is often treated as an alternative to convergence, the latter is not a kind of correspondence: rather, it is one of a class of processes that also includes divergence and parallelism. These often give rise to misleading appearances (homoplasies). Parallelism can be particularly hard to detect, especially when not accompanied by divergences in some parts of the body. © 2015 The Author(s).

  1. Ultrascalable petaflop parallel supercomputer

    DOEpatents

    Blumrich, Matthias A.; Chen, Dong; Chiu, George; Cipolla, Thomas M.; Coteus, Paul W.; Gara, Alan G.; Giampapa, Mark E.; Hall, Shawn; Haring, Rudolf A.; Heidelberger, Philip; Kopcsay, Gerard V.; Ohmacht, Martin; Salapura, Valentina; Sugavanam, Krishnan; Takken, Todd

    2010-07-20

    A massively parallel supercomputer of petaOPS-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC) having up to four processing elements. The ASIC nodes are interconnected by multiple independent networks that maximize the throughput of packet communications between nodes while minimizing latency. The multiple networks may include three high-speed networks for parallel algorithm message passing, including a Torus, a collective network, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be collaboratively or independently utilized according to the needs or phases of an algorithm for optimizing algorithm processing performance. The use of a DMA engine is provided to facilitate message passing among the nodes without the expenditure of processing resources at the node.

  2. Parallel Anisotropic Tetrahedral Adaptation

    NASA Technical Reports Server (NTRS)

    Park, Michael A.; Darmofal, David L.

    2008-01-01

    An adaptive method that robustly produces high aspect ratio tetrahedra to a general 3D metric specification without introducing hybrid semi-structured regions is presented. The elemental operators and higher-level logic are described with their respective domain-decomposed parallelizations. An anisotropic tetrahedral grid adaptation scheme is demonstrated for 1000:1 stretching for a simple cube geometry. This form of adaptation is applicable to more complex domain boundaries via a cut-cell approach, as demonstrated by a parallel 3D supersonic simulation of a complex fighter aircraft. To avoid the assumptions and approximations required to form a metric to specify adaptation, an approach is introduced that directly evaluates interpolation error. The grid is adapted to reduce and equidistribute this interpolation error without the use of an intervening anisotropic metric. Direct interpolation error adaptation is illustrated for 1D and 3D domains.

  3. Parallel grid population

    DOEpatents

    Wald, Ingo; Ize, Santiago

    2015-07-28

    Parallel population of a grid with a plurality of objects using a plurality of processors. One example embodiment is a method for parallel population of a grid with a plurality of objects using a plurality of processors. The method includes a first act of dividing a grid into n distinct grid portions, where n is the number of processors available for populating the grid. The method also includes acts of dividing a plurality of objects into n distinct sets of objects, assigning a distinct set of objects to each processor such that each processor determines by which distinct grid portion(s) each object in its distinct set of objects is at least partially bounded, and assigning a distinct grid portion to each processor such that each processor populates its distinct grid portion with any objects that were previously determined to be at least partially bounded by its distinct grid portion.
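The two-phase scheme in the claim can be sketched for 1D interval objects on slab-shaped grid portions. The helper names, and the thread pool standing in for the n processors, are assumptions for illustration only:

```python
from concurrent.futures import ThreadPoolExecutor

def portions_for(obj, n, grid_size=100.0):
    # Indices of the slab-shaped grid portions that an interval object
    # (xmin, xmax) at least partially overlaps.
    w = grid_size / n
    lo = max(0, int(obj[0] // w))
    hi = min(n - 1, int(obj[1] // w))
    return list(range(lo, hi + 1))

def populate(objects, n):
    # Phase 1 (parallel): each "processor" classifies its distinct share
    # of the objects, finding which grid portions each one overlaps.
    shares = [objects[i::n] for i in range(n)]
    with ThreadPoolExecutor(max_workers=n) as ex:
        classified = list(ex.map(
            lambda share: [(o, portions_for(o, n)) for o in share], shares))
    # Phase 2 (serial here for brevity): each grid portion is populated
    # with the objects previously determined to overlap it.
    grid = [[] for _ in range(n)]
    for share in classified:
        for obj, idxs in share:
            for i in idxs:
                grid[i].append(obj)
    return grid

grid = populate([(5, 15), (40, 60), (90, 95)], 4)
```

An object spanning a portion boundary, such as (40, 60) above, lands in every portion it touches, matching the "at least partially bounded" test in the claim.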

  4. Homology, convergence and parallelism

    PubMed Central

    Ghiselin, Michael T.

    2016-01-01

    Homology is a relation of correspondence between parts of parts of larger wholes. It is used when tracking objects of interest through space and time and in the context of explanatory historical narratives. Homologues can be traced through a genealogical nexus back to a common ancestral precursor. Homology being a transitive relation, homologues remain homologous however much they may come to differ. Analogy is a relationship of correspondence between parts of members of classes having no relationship of common ancestry. Although homology is often treated as an alternative to convergence, the latter is not a kind of correspondence: rather, it is one of a class of processes that also includes divergence and parallelism. These often give rise to misleading appearances (homoplasies). Parallelism can be particularly hard to detect, especially when not accompanied by divergences in some parts of the body. PMID:26598721

  5. Parallel Subconvolution Filtering Architectures

    NASA Technical Reports Server (NTRS)

    Gray, Andrew A.

    2003-01-01

    These architectures are based on methods of vector processing and the discrete-Fourier-transform/inverse-discrete-Fourier-transform (DFT-IDFT) overlap-and-save method, combined with time-block separation of digital filters into frequency-domain subfilters implemented by use of subconvolutions. The parallel-processing method implemented in these architectures enables the use of relatively small DFT-IDFT pairs, while filter tap lengths are theoretically unlimited. The size of a DFT-IDFT pair is determined by the desired reduction in processing rate, rather than by the order of the filter that one seeks to implement. The emphasis in this report is on those aspects of the underlying theory and design rules that promote computational efficiency, parallel processing at reduced data rates, and simplification of the designs of very-large-scale integrated (VLSI) circuits needed to implement high-order filters and correlators.
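The DFT-IDFT overlap-and-save method at the core of these architectures can be sketched serially with NumPy; the block size and filter below are arbitrary assumptions, and a hardware realization would split the subfilters across parallel paths:

```python
import numpy as np

def overlap_save(x, h, nfft=64):
    # Filter x with FIR h by overlap-and-save: transform fixed-size
    # blocks, multiply by the filter spectrum, and discard the
    # len(h)-1 circularly wrapped samples at the head of each IDFT.
    M = len(h)
    step = nfft - (M - 1)              # valid output samples per block
    H = np.fft.fft(h, nfft)
    xp = np.concatenate([np.zeros(M - 1), np.asarray(x, float)])
    out = []
    for start in range(0, len(x), step):
        block = xp[start:start + nfft]
        if len(block) < nfft:
            block = np.concatenate([block, np.zeros(nfft - len(block))])
        yb = np.fft.ifft(np.fft.fft(block) * H).real
        out.append(yb[M - 1:])         # drop circular wrap-around
    return np.concatenate(out)[:len(x)]

rng = np.random.default_rng(0)
x = rng.standard_normal(1000)
h = np.ones(8) / 8                     # toy 8-tap moving-average filter
y = overlap_save(x, h)
```

Note that `nfft` is chosen for the desired throughput per block, not by the filter order, which mirrors the point made in the abstract.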

  6. Xyce parallel electronic simulator.

    SciTech Connect

    Keiter, Eric R; Mei, Ting; Russo, Thomas V.; Rankin, Eric Lamont; Schiek, Richard Louis; Thornquist, Heidi K.; Fixel, Deborah A.; Coffey, Todd S; Pawlowski, Roger P; Santarelli, Keith R.

    2010-05-01

    This document is a reference guide to the Xyce Parallel Electronic Simulator and is a companion document to the Xyce Users Guide. The focus of this document is to list, as exhaustively as possible, device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial. Users who are new to circuit simulation are better served by the Xyce Users Guide.

  7. Parallel Computing in Optimization.

    DTIC Science & Technology

    1984-10-01

    References include: Heller [1978] and Sameh [1977] (surveys of algorithms); Duff [1983]; Fong and Jordan [1977]; Jordan [1979]; and Rodrigue [1982] (all mainly...constrained concave function by partition of feasible domain", Mathematics of Operations Research 8; A. Sameh [1977], "Numerical parallel algorithms: a survey", in High Speed Computer and Algorithm Organization, D. Kuck, D. Lawrie, and A. Sameh, eds., Academic Press, pp. 207-228; J. Siegel

  8. Development of Parallel GSSHA

    DTIC Science & Technology

    2013-09-01

    Paul R. Eller, Jing-Ru C. Cheng, Aaron R. Byrd, Charles W. Downer, and Nawa Pradhan. September 2013. Approved for public release... ERDC TR-13-8, September 2013: Development of Parallel GSSHA, by Paul R. Eller and Jing-Ru C. Cheng, Information Technology Laboratory, US Army Engineer...

  9. Parallel unstructured grid generation

    NASA Technical Reports Server (NTRS)

    Loehner, Rainald; Camberos, Jose; Merriam, Marshal

    1991-01-01

    A parallel unstructured grid generation algorithm is presented and implemented on the Hypercube. Different processor hierarchies are discussed, and the appropriate hierarchies for mesh generation and mesh smoothing are selected. A domain-splitting algorithm for unstructured grids which tries to minimize the surface-to-volume ratio of each subdomain is described. This splitting algorithm is employed both for grid generation and grid smoothing. Results obtained on the Hypercube demonstrate the effectiveness of the algorithms developed.

  10. Implementation of Parallel Algorithms

    DTIC Science & Technology

    1993-06-30

    their social relations or to achieve some goals. For example, we define a pair-wise force law of repulsion and attraction for a group of identical...quantization-based compression schemes. Photo-refractive crystals, which provide high-density recording in real time, are used as our holographic media. The...of Parallel Algorithms (J. Reif, ed.), Kluwer Academic Publishers, 1993. (4) "A Dynamic Separator Algorithm", D. Armon and J. Reif. To appear in

  11. Parallel sphere rendering

    SciTech Connect

    Krogh, M.; Painter, J.; Hansen, C.

    1996-10-01

    Sphere rendering is an important method for visualizing molecular dynamics data. This paper presents a parallel algorithm that is almost 90 times faster than current graphics workstations. To render extremely large data sets and large images, the algorithm uses the MIMD features of the supercomputers to divide up the data, render independent partial images, and then finally composite the multiple partial images using an optimal method. The algorithm and performance results are presented for the CM-5 and the M.

  12. Prevention policies addressing packaging and packaging waste: Some emerging trends.

    PubMed

    Tencati, Antonio; Pogutz, Stefano; Moda, Beatrice; Brambilla, Matteo; Cacia, Claudia

    2016-10-01

    Packaging waste is a major issue in several countries. In industrialized countries it represents around 30-35% of the municipal solid waste generated yearly, and this waste stream has steadily grown over the years even though specific recycling and recovery targets have been set, especially in Europe. Therefore, increasing attention is being devoted to prevention measures and interventions. Filling a gap in the current literature, this explorative paper is a first attempt to map the increasingly important phenomenon of prevention policies in the packaging sector. Through theoretical sampling, 11 countries/states (7 in and 4 outside Europe) were selected and analyzed by gathering and studying primary and secondary data. Results show evidence of three specific trends in packaging waste prevention policies: fostering the adoption of measures directed at improving packaging design and production through extensive use of life cycle assessment; raising the awareness of final consumers by increasing the accountability of firms; and promoting collaborative efforts along packaging supply chains.

  13. 9 CFR 381.144 - Packaging materials.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 9 Animals and Animal Products 2 2011-01-01 2011-01-01 false Packaging materials. 381.144 Section... Packaging materials. (a) Edible products may not be packaged in a container which is composed in whole or in... to health. All packaging materials must be safe for the intended use within the meaning of...

  14. 9 CFR 381.144 - Packaging materials.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 9 Animals and Animal Products 2 2012-01-01 2012-01-01 false Packaging materials. 381.144 Section... Packaging materials. (a) Edible products may not be packaged in a container which is composed in whole or in... to health. All packaging materials must be safe for the intended use within the meaning of...

  15. 9 CFR 381.144 - Packaging materials.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 9 Animals and Animal Products 2 2014-01-01 2014-01-01 false Packaging materials. 381.144 Section... Packaging materials. (a) Edible products may not be packaged in a container which is composed in whole or in... to health. All packaging materials must be safe for the intended use within the meaning of...

  16. Think INSIDE the Box: Package Engineering

    ERIC Educational Resources Information Center

    Snyder, Mark; Painter, Donna

    2014-01-01

    Most products people purchase, keep in their homes, and often discard are typically packaged in some way. Packaging is so prevalent in daily life that many of us take it for granted. That is by design: the expectation of good packaging is that it exists for the sake of the product. The primary purposes of any package (to contain, inform, display,…

  17. 7 CFR 58.626 - Packaging equipment.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 3 2010-01-01 2010-01-01 false Packaging equipment. 58.626 Section 58.626 Agriculture....626 Packaging equipment. Packaging equipment designed to mechanically fill and close single service... Standards for Equipment for Packaging Frozen Desserts and Cottage Cheese. Quality Specifications for...

  18. 40 CFR 157.27 - Unit packaging.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 23 2010-07-01 2010-07-01 false Unit packaging. 157.27 Section 157.27 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS PACKAGING REQUIREMENTS FOR PESTICIDES AND DEVICES Child-Resistant Packaging § 157.27 Unit packaging. Pesticide...

  19. 49 CFR 173.29 - Empty packagings.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 2 2010-10-01 2010-10-01 false Empty packagings. 173.29 Section 173.29... SHIPMENTS AND PACKAGINGS Preparation of Hazardous Materials for Transportation § 173.29 Empty packagings. (a) General. Except as otherwise provided in this section, an empty packaging containing only the residue of...

  20. 27 CFR 19.276 - Package scales.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    .... Scales used to weigh packages designed to hold 10 wine gallons or less shall indicate weight in ounces or... 27 Alcohol, Tobacco Products and Firearms 1 2010-04-01 2010-04-01 false Package scales. 19.276... Package scales. Proprietors shall ensure the accuracy of scales used for weighing packages of spirits...

  1. 9 CFR 381.144 - Packaging materials.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 9 Animals and Animal Products 2 2010-01-01 2010-01-01 false Packaging materials. 381.144 Section... Packaging materials. (a) Edible products may not be packaged in a container which is composed in whole or in... to health. All packaging materials must be safe for the intended use within the meaning of...

  2. Think INSIDE the Box: Package Engineering

    ERIC Educational Resources Information Center

    Snyder, Mark; Painter, Donna

    2014-01-01

    Most products people purchase, keep in their homes, and often discard are typically packaged in some way. Packaging is so prevalent in daily life that many of us take it for granted. That is by design: the expectation of good packaging is that it exists for the sake of the product. The primary purposes of any package (to contain, inform, display,…

  3. Optimising a parallel conjugate gradient solver

    SciTech Connect

    Field, M.R.

    1996-12-31

    This work arises from the introduction of a parallel iterative solver to a large structural analysis finite element code. The code is called FEX and it was developed at Hitachi's Mechanical Engineering Laboratory. The FEX package can deal with a wide range of structural analysis problems using many finite element techniques. FEX can solve either stress or thermal analysis problems of types ranging from plane stress to a full three-dimensional model. These problems can consist of a number of different materials, which can be modelled by a range of material models. The structure being modelled can have the load applied at either a point or a surface, or by a pressure, a centrifugal force or just gravity. Alternatively a thermal load can be applied with a given initial temperature. The displacement of the structure can be constrained by having a fixed boundary or by prescribing the displacement at a boundary.

  4. Green Packaging Management of Logistics Enterprises

    NASA Astrophysics Data System (ADS)

    Zhang, Guirong; Zhao, Zongjian

    From the connotation of green logistics management, we discuss the principles of green packaging, and from the two levels of government and enterprises we put forward a specific management strategy. The management of green packaging can be promoted directly and indirectly by laws, regulations, taxation, institutional and other measures. The government can also direct new investment to the development of green packaging materials and establish specialized institutions to certify new packaging materials; standardization of packaging must also be accomplished through the power of the government. Large-scale business units can reduce the use of packaging materials through standardized packaging and containerization, and can develop and use green, easily recyclable packaging materials for proper packaging.

  5. Material efficiency in Dutch packaging policy.

    PubMed

    Worrell, Ernst; van Sluisveld, Mariësse A E

    2013-03-13

    Packaging materials are one of the largest contributors to municipal solid waste generation. In this paper, we evaluate the material impacts of packaging policy in The Netherlands, focusing on the role of material efficiency (or waste prevention). Since 1991, five different policies have been implemented to reduce the environmental impact of packaging. The analysis shows that Dutch packaging policies helped to reduce the total packaging volume until 1999. After 2000, packaging consumption increased more rapidly than the baseline, suggesting that policy measures were not effective. Generally, we see limited attention to material efficiency as a way to reduce packaging material use. For this purpose, we tried to gain more insight into recent activities on material efficiency by building a database of packaging prevention initiatives. We identified 131 alterations to packaging implemented in the period 2005-2010, of which weight reduction was the predominant approach. More appropriate packaging policy is needed to increase the effectiveness of policies, with special attention to material efficiency.

  6. PARAMESH: A Parallel, Adaptive Mesh Refinement Toolkit and Performance of the ASCI/FLASH code

    NASA Astrophysics Data System (ADS)

    Olson, K. M.; MacNeice, P.; Fryxell, B.; Ricker, P.; Timmes, F. X.; Zingale, M.

    1999-12-01

    We describe a package of routines known as PARAMESH which enables a user to easily convert an existing serial, uniform grid code to a parallel code with adaptive-mesh refinement. The package does this through the use of a block-structured form of AMR in combination with a tree data structure for distributing blocks to processors. We also describe some of the applications which have been developed using PARAMESH, with special emphasis on the ASCI/FLASH code. Performance results are also discussed for a variety of parallel architectures.
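The block-structured AMR plus tree combination behind PARAMESH can be caricatured with a tiny quadtree; the class and refinement criterion below are illustrative assumptions, not PARAMESH's actual (Fortran) data structures:

```python
class Block:
    # A node in the AMR tree: a square region that either is a leaf
    # (holding a uniform sub-grid in a real code) or refines into
    # four children.
    def __init__(self, x, y, size, level=0):
        self.x, self.y, self.size, self.level = x, y, size, level
        self.children = []

    def refine_where(self, needs_refinement, max_level):
        if self.level < max_level and needs_refinement(self):
            h = self.size / 2
            self.children = [
                Block(self.x + dx * h, self.y + dy * h, h, self.level + 1)
                for dx in (0, 1) for dy in (0, 1)
            ]
            for c in self.children:
                c.refine_where(needs_refinement, max_level)

    def leaves(self):
        # The leaf blocks are what gets distributed across processors.
        if not self.children:
            return [self]
        return [leaf for c in self.children for leaf in c.leaves()]

# Refine wherever a block sits near a feature at (0.3, 0.3)
root = Block(0.0, 0.0, 1.0)
near = lambda b: (abs(b.x + b.size / 2 - 0.3) < b.size
                  and abs(b.y + b.size / 2 - 0.3) < b.size)
root.refine_where(near, max_level=3)
```

A serial uniform-grid solver then only needs to loop over `root.leaves()`, which is the conversion path the abstract describes.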

  7. 75 FR 60333 - Hazardous Material; Miscellaneous Packaging Amendments

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-30

    ... Hazardous Material; Miscellaneous Packaging Amendments AGENCY: Pipeline and Hazardous Materials Safety... materials packages may be considered a bulk packaging. The September 1, 2006 NPRM definition for ``bulk... erroneously stated Large Packagings would contain hazardous materials without an intermediate packaging,...

  8. Trajectory optimization using parallel shooting method on parallel computer

    SciTech Connect

    Wirthman, D.J.; Park, S.Y.; Vadali, S.R.

    1995-03-01

    The efficiency of a parallel shooting method on a parallel computer for solving a variety of optimal control guidance problems is studied. Several examples are considered to demonstrate that a speedup of nearly 7 to 1 is achieved with the use of 16 processors. It is suggested that further improvements in performance can be achieved by parallelizing in the state domain. 10 refs.

  9. Method of forming a package for mems-based fuel cell

    DOEpatents

    Morse, Jeffrey D.; Jankowski, Alan F.

    2004-11-23

    A MEMS-based fuel cell package and method thereof is disclosed. The fuel cell package comprises seven layers: (1) a sub-package fuel reservoir interface layer, (2) an anode manifold support layer, (3) a fuel/anode manifold and resistive heater layer, (4) a Thick Film Microporous Flow Host Structure layer containing a fuel cell, (5) an air manifold layer, (6) a cathode manifold support structure layer, and (7) a cap. Fuel cell packages with more than one fuel cell are formed by positioning stacks of these layers in series and/or parallel. The fuel cell package materials such as a molded plastic or a ceramic green tape material can be patterned, aligned and stacked to form three dimensional microfluidic channels that provide electrical feedthroughs from various layers which are bonded together and mechanically support a MEMS-based miniature fuel cell. The package incorporates resistive heating elements to control the temperature of the fuel cell stack. The package is fired to form a bond between the layers and one or more microporous flow host structures containing fuel cells are inserted within the Thick Film Microporous Flow Host Structure layer of the package.

  10. Method of forming a package for MEMS-based fuel cell

    DOEpatents

    Morse, Jeffrey D; Jankowski, Alan F

    2013-05-21

    A MEMS-based fuel cell package and method thereof is disclosed. The fuel cell package comprises seven layers: (1) a sub-package fuel reservoir interface layer, (2) an anode manifold support layer, (3) a fuel/anode manifold and resistive heater layer, (4) a Thick Film Microporous Flow Host Structure layer containing a fuel cell, (5) an air manifold layer, (6) a cathode manifold support structure layer, and (7) a cap. Fuel cell packages with more than one fuel cell are formed by positioning stacks of these layers in series and/or parallel. The fuel cell package materials such as a molded plastic or a ceramic green tape material can be patterned, aligned and stacked to form three dimensional microfluidic channels that provide electrical feedthroughs from various layers which are bonded together and mechanically support a MEMS-based miniature fuel cell. The package incorporates resistive heating elements to control the temperature of the fuel cell stack. The package is fired to form a bond between the layers and one or more microporous flow host structures containing fuel cells are inserted within the Thick Film Microporous Flow Host Structure layer of the package.

  11. The reduction of packaging waste

    SciTech Connect

    Raney, E.A.; Hogan, J.J.; McCollom, M.L.; Meyer, R.J.

    1994-04-01

    Nationwide, packaging waste comprises approximately one-third of the waste disposed in sanitary landfills. The US Department of Energy (DOE) generated close to 90,000 metric tons of sanitary waste. With roughly one-third of that being packaging waste, approximately 30,000 metric tons are generated per year. The purpose of the Reduction of Packaging Waste project was to investigate opportunities to reduce this packaging waste through source reduction and recycling. The project was divided into three areas: procurement, onsite packaging and distribution, and recycling. Waste minimization opportunities were identified and investigated within each area, several of which were chosen for further study and small-scale testing at the Hanford Site. Test results were compiled into five "how-to" recipes for implementation at other sites. The subjects of the recipes are as follows: (1) Vendor Participation Program; (2) Reusable Containers System; (3) Shrink-wrap System -- Plastic and Corrugated Cardboard Waste Reduction; (4) Cardboard Recycling; and (5) Wood Recycling.

  12. Reference waste package environment report

    SciTech Connect

    Glassley, W.E.

    1986-10-01

    One of three candidate repository sites for high-level radioactive waste packages is located at Yucca Mountain, Nevada, in rhyolitic tuff 700 to 1400 ft above the static water table. Calculations indicate that the package environment will experience a maximum temperature of ~230°C at 9 years after emplacement. For the next 300 years the rock within 1 m of the waste packages will remain dehydrated. Preliminary results suggest that the waste package radiation field will have very little effect on the mechanical properties of the rock. Radiolysis products will have a negligible effect on the rock even after rehydration. Unfractured specimens of repository rock show no change in hydrologic characteristics during repeated dehydration-rehydration cycles. Fractured samples with initially high permeabilities show a striking permeability decrease during dehydration-rehydration cycling, which may be due to fracture healing via deposition of silica. Rock-water interaction studies demonstrate low and benign levels of anions and most cations. The development of sorptive secondary phases such as zeolites and clays suggests that anticipated rock-water interaction may produce beneficial changes in the package environment.

  13. Capillary-driven automatic packaging.

    PubMed

    Ding, Yuzhe; Hong, Lingfei; Nie, Baoqing; Lam, Kit S; Pan, Tingrui

    2011-04-21

    Packaging continues to be one of the most challenging steps in micro-nanofabrication, as many emerging techniques (e.g., soft lithography) are incompatible with the standard high-precision alignment and bonding equipment. In this paper, we present a simple-to-operate, easy-to-adapt packaging strategy, referred to as Capillary-driven Automatic Packaging (CAP), to achieve an automatic packaging process with the desired features of spontaneous alignment and bonding, wide applicability to various materials, potential scalability, and direct incorporation in the layout. Specifically, self-alignment and self-engagement of the CAP process, induced by the interfacial capillary interactions between a liquid capillary bridge and the top and bottom substrates, have been experimentally characterized and theoretically analyzed with scalable implications. High-precision alignment (of less than 10 µm) and outstanding bonding performance (up to 300 kPa) have been reliably obtained. In addition, a 3D microfluidic network, aligned and bonded by the CAP technique, has been devised to demonstrate the applicability of this facile yet robust packaging technique for emerging microfluidic and bioengineering applications.

  14. Nanocellulose in green food packaging.

    PubMed

    Vilarinho, Fernanda; Sanches Silva, Ana; Vaz, M Fátima; Farinha, José Paulo

    2017-01-26

    The development of packaging materials with new functionalities and lower environmental impact is now an urgent need of our society. On one hand, the shelf-life extension of packaged products can be an answer to the exponential increase of worldwide demand for food. On the other hand, uncertainty of crude oil prices and reserves has imposed the necessity to find raw materials to replace oil-derived polymers. Additionally, consumers' awareness toward environmental issues increasingly pushes industries to look with renewed interest to "green" solutions. In response to these issues, numerous polymers have been exploited to develop biodegradable food packaging materials. Although the use of biopolymers has been limited due to their poor mechanical and barrier properties, these can be enhanced by adding reinforcing nanosized components to form nanocomposites. Cellulose is probably the most used and well-known renewable and sustainable raw material. The mechanical properties, reinforcing capabilities, abundance, low density, and biodegradability of nanosized cellulose make it an ideal candidate for polymer nanocomposites processing. Here we review the potential applications of cellulose based nanocomposites in food packaging materials, highlighting the several types of biopolymers with nanocellulose fillers that have been used to form bio-nanocomposite materials. The trends in nanocellulose packaging applications are also addressed.

  15. Modified Atmosphere Packaging and Its Feasibility for Military Feeding Systems

    DTIC Science & Technology

    1994-12-01

    prevent mold growth... [Figure: four packaging types: a) skintight packaging; b) barrier/gas-atmosphere packaging; c) sous vide packaging; d) active packaging. Barrier packaging incorporates: oxygen scavenger, moisture absorbent, custom permeability; the product is slow cooked after being packaged.] Used mainly for starch or bread products at ambient temperatures. An active packaging with a

  16. QCMPI: A parallel environment for quantum computing

    NASA Astrophysics Data System (ADS)

    Tabakin, Frank; Juliá-Díaz, Bruno

    2009-06-01

    QCMPI is a quantum computer (QC) simulation package written in Fortran 90 with parallel processing capabilities. It is an accessible research tool that permits rapid evaluation of quantum algorithms for a large number of qubits and for various "noise" scenarios. The prime motivation for developing QCMPI is to facilitate numerical examination of not only how QC algorithms work, but also to include noise, decoherence, and attenuation effects and to evaluate the efficacy of error correction schemes. The present work builds on an earlier Mathematica code QDENSITY, which is mainly a pedagogic tool. In that earlier work, although the density matrix formulation was featured, the description using state vectors was also provided. In QCMPI, the stress is on state vectors, in order to employ a large number of qubits. The parallel processing feature is implemented by using the Message-Passing Interface (MPI) protocol. A description of how to spread the wave function components over many processors is provided, along with how to efficiently describe the action of general one- and two-qubit operators on these state vectors. These operators include the standard Pauli, Hadamard, CNOT and CPHASE gates and also Quantum Fourier transformation. These operators make up the actions needed in QC. Codes for Grover's search and Shor's factoring algorithms are provided as examples. A major feature of this work is that concurrent versions of the algorithms can be evaluated with each version subject to alternate noise effects, which corresponds to the idea of solving a stochastic Schrödinger equation. The density matrix for the ensemble of such noise cases is constructed using parallel distribution methods to evaluate its eigenvalues and associated entropy. Potential applications of this powerful tool include studies of the stability and correction of QC processes using Hamiltonian based dynamics. Program summary: Program title: QCMPI. Catalogue identifier: AECS_v1_0. Program summary URL
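The core operation described above, applying a one-qubit gate to a state vector by pairing amplitudes whose indices differ only in the target bit, can be sketched serially with NumPy. QCMPI itself distributes the amplitudes over MPI ranks; this single-process version and its function name are assumptions for illustration:

```python
import numpy as np

def apply_one_qubit(state, gate, target):
    # Pair amplitudes whose indices differ only in the target bit and
    # mix each pair with the 2x2 gate.
    stride = 1 << target
    out = state.copy()
    for i in range(len(state)):
        if not (i & stride):               # partner index is i | stride
            a, b = state[i], state[i | stride]
            out[i] = gate[0, 0] * a + gate[0, 1] * b
            out[i | stride] = gate[1, 0] * a + gate[1, 1] * b
    return out

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
n = 3
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0                             # |000>
for q in range(n):                         # Hadamard on every qubit
    state = apply_one_qubit(state, H, q)
```

Applying Hadamard to all three qubits of |000> yields the uniform superposition, each of the 8 amplitudes equal to 1/sqrt(8). In a distributed code the pairing pattern determines which amplitude pairs live on the same rank and which require communication.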

  17. Resistor Combinations for Parallel Circuits.

    ERIC Educational Resources Information Center

    McTernan, James P.

    1978-01-01

    To help simplify both teaching and learning of parallel circuits, a high school electricity/electronics teacher presents and illustrates the use of tables of values for parallel resistive circuits in which total resistances are whole numbers. (MF)
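The kind of table the article describes can be generated by brute force: enumerate whole-ohm resistor pairs and keep those whose parallel combination is also a whole number. A minimal sketch:

```python
def parallel(r1, r2):
    """Equivalent resistance of two resistors in parallel."""
    return r1 * r2 / (r1 + r2)

# Whole-ohm pairs up to 20 ohms whose parallel combination is whole-valued,
# e.g. 3 || 6 = 2 and 6 || 12 = 4.
table = [(r1, r2, int(parallel(r1, r2)))
         for r1 in range(1, 21)
         for r2 in range(r1, 21)
         if parallel(r1, r2).is_integer()]
```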

  18. Status of TRANSP Parallel Services

    NASA Astrophysics Data System (ADS)

    Indireshkumar, K.; Andre, Robert; McCune, Douglas; Randerson, Lewis

    2006-10-01

    The PPPL TRANSP code suite has been used successfully over many years to carry out time dependent simulations of tokamak plasmas. However, accurately modeling certain phenomena such as RF heating and fast ion behavior using TRANSP requires extensive computational power and will benefit from parallelization. Not all of TRANSP requires parallelization; some parts run sequentially while others run in parallel. To efficiently use a site's parallel services, the parallelized TRANSP modules are deployed to a shared ``parallel service'' on a separate cluster. The PPPL Monte Carlo fast ion module NUBEAM and the MIT RF module TORIC are the first TRANSP modules to be so deployed. This poster will show the performance scaling of these modules within the parallel server. Communications between the serial client and the parallel server will be described in detail, and measurements of startup and communications overhead will be shown. Physics modeling benefits for TRANSP users will be assessed.

  19. Asynchronous interpretation of parallel microprograms

    SciTech Connect

    Bandman, O.L.

    1984-03-01

    In this article, the authors demonstrate how to pass from a given synchronous interpretation of a parallel microprogram to an equivalent asynchronous interpretation, and investigate the cost associated with the rejection of external synchronization in parallel microprogram structures.

  20. The Galley Parallel File System

    NASA Technical Reports Server (NTRS)

    Nieuwejaar, Nils; Kotz, David

    1996-01-01

    As the I/O needs of parallel scientific applications increase, file systems for multiprocessors are being designed to provide applications with parallel access to multiple disks. Many parallel file systems present applications with a conventional Unix-like interface that allows the application to access multiple disks transparently. The interface conceals the parallelism within the file system, which increases the ease of programmability, but makes it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. Furthermore, most current parallel file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic parallel workloads. We discuss Galley's file structure and application interface, as well as an application that has been implemented using that interface.

  2. Visualizing tumor evolution with the fishplot package for R.

    PubMed

    Miller, Christopher A; McMichael, Joshua; Dang, Ha X; Maher, Christopher A; Ding, Li; Ley, Timothy J; Mardis, Elaine R; Wilson, Richard K

    2016-11-07

    Massively-parallel sequencing at depth is now enabling tumor heterogeneity and evolution to be characterized in unprecedented detail. Tracking these changes in clonal architecture often provides insight into therapeutic response and resistance. In complex cases involving multiple timepoints, standard visualizations, such as scatterplots, can be difficult to interpret. Current data visualization methods are also typically manual and laborious, and often only approximate subclonal fractions. We have developed an R package that accurately and intuitively displays changes in clonal structure over time. It requires simple input data and produces illustrative and easy-to-interpret graphs suitable for diagnosis, presentation, and publication. The simplicity, power, and flexibility of this tool make it valuable for visualizing tumor evolution, and it has potential utility in both research and clinical settings. The fishplot package is available at https://github.com/chrisamiller/fishplot .

  3. Traffic simulations on parallel computers using domain decomposition techniques

    SciTech Connect

    Hanebutte, U.R.; Tentner, A.M.

    1995-12-31

    Large scale simulations of Intelligent Transportation Systems (ITS) can only be achieved by using the computing resources offered by parallel computing architectures. Domain decomposition techniques are proposed which allow the performance of traffic simulations with the standard simulation package TRAF-NETSIM on a 128-node IBM SPx parallel supercomputer as well as on a cluster of SUN workstations. Whilst this particular parallel implementation is based on NETSIM, a microscopic traffic simulation model, the presented strategy is applicable to a broad class of traffic simulations. An outer iteration loop must be introduced in order to converge to a global solution. A performance study that utilizes a scalable test network consisting of square grids is presented, which addresses the performance penalty introduced by the additional iteration loop.
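The outer iteration loop can be illustrated on a toy problem: relax u'' = 0 on [0, 1] on two subdomains, with each subdomain using frozen interface values from the previous outer iteration, until the pieces agree on the global solution (the straight line u(x) = x). This is a sketch of the convergence pattern only; TRAF-NETSIM itself exchanges vehicles between subnetworks, not grid values.

```python
import numpy as np

N = 11                           # global grid points
x = np.linspace(0.0, 1.0, N)
u = np.zeros(N)
u[-1] = 1.0                      # boundary conditions u(0)=0, u(1)=1

for outer in range(20000):       # outer iterations = interface exchanges
    u_old = u.copy()
    mid = N // 2
    # subdomain 1 updates points 1..mid, reading neighbors from u_old
    for i in range(1, mid + 1):
        u[i] = 0.5 * (u_old[i - 1] + u_old[i + 1])
    # subdomain 2 updates points mid+1..N-2, likewise from u_old
    for i in range(mid + 1, N - 1):
        u[i] = 0.5 * (u_old[i - 1] + u_old[i + 1])
    if np.max(np.abs(u - u_old)) < 1e-12:
        break                    # global solution reached
```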

  4. A parallel implementation of symmetric band reduction using PLAPACK

    SciTech Connect

    Wu, Yuan-Jye J.; Bischof, C.H.; Alpatov, P.A.

    1996-12-31

    Successive band reduction (SBR) is a two-phase approach for reducing a full symmetric matrix to tridiagonal (or narrow banded) form. In its simplest case, it consists of a full-to-band reduction followed by a band-to-tridiagonal reduction. Its richness in BLAS-3 operations makes it potentially more efficient on high-performance architectures than the traditional tridiagonalization method. However, a scalable, portable, general-purpose parallel implementation of SBR is still not available. In this article, we review some existing parallel tridiagonalization routines and describe the implementation of a full-to-band reduction routine using PLAPACK as a first step toward a parallel SBR toolbox. The PLAPACK-based routine turns out to be simple and efficient and, unlike the other existing packages, does not suffer restrictions on physical data layout or algorithmic block size.

  5. The Structure of Parallel Algorithms.

    DTIC Science & Technology

    1979-08-01

    parallel architectures and parallel algorithms see [Anderson and Jensen 75, Stone 75, Kung 76, Enslow 77, Kuck 77, Ramamoorthy and Li 77, Sameh 77, Heller...the Routing Time on a Parallel Computer with a Fixed Interconnection Network. In Kuck, D.J., Lawrie, D.H. and Sameh, A.H., editors, High Speed Computer and Algorithm Organization...Letters 5(4):107-112, October 1976. [Sameh 77] Sameh, A.H. Numerical Parallel Algorithms -- A Survey. In High Speed Computer and Algorithm Organization

  6. Parallel Debugging Using Graphical Views

    DTIC Science & Technology

    1988-03-01

    Voyeur, a prototype system for creating graphical views of parallel programs, provides a cost-effective way to construct such views for any parallel...programming system. We illustrate Voyeur by discussing four views created for debugging Poker programs. One is a general trace facility for any Poker...Graphical views are essential for debugging parallel programs because of the large quantity of state information contained in parallel programs. Voyeur

  7. Parallel Pascal - An extended Pascal for parallel computers

    NASA Technical Reports Server (NTRS)

    Reeves, A. P.

    1984-01-01

    Parallel Pascal is an extended version of the conventional serial Pascal programming language which includes a convenient syntax for specifying array operations. It is upward compatible with standard Pascal and involves only a small number of carefully chosen new features. Parallel Pascal was developed to reduce the semantic gap between standard Pascal and a large range of highly parallel computers. Two important design goals of Parallel Pascal were efficiency and portability. Portability is particularly difficult to achieve since different parallel computers frequently have very different capabilities.
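The flavor of the extension can be suggested with a modern analogue: standard Pascal requires an explicit element-by-element loop, while Parallel Pascal admits whole-array expressions. NumPy serves here as a stand-in for that array syntax; the Pascal code itself is not reproduced.

```python
import numpy as np

a = [1.0, 2.0, 3.0, 4.0]
b = [10.0, 20.0, 30.0, 40.0]

# serial style: one element at a time, as in conventional Pascal
c_loop = [a[i] + 2.0 * b[i] for i in range(len(a))]

# whole-array style: a single expression exposes the elementwise parallelism
c_array = np.asarray(a) + 2.0 * np.asarray(b)
```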

  9. Massively Parallel Genetics.

    PubMed

    Shendure, Jay; Fields, Stanley

    2016-06-01

    Human genetics has historically depended on the identification of individuals whose natural genetic variation underlies an observable trait or disease risk. Here we argue that new technologies now augment this historical approach by allowing the use of massively parallel assays in model systems to measure the functional effects of genetic variation in many human genes. These studies will help establish the disease risk of both observed and potential genetic variants and to overcome the problem of "variants of uncertain significance." Copyright © 2016 by the Genetics Society of America.

  10. Parallel Eclipse Project Checkout

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas M.; Joswig, Joseph C.; Shams, Khawaja S.; Powell, Mark W.; Bachmann, Andrew G.

    2011-01-01

    Parallel Eclipse Project Checkout (PEPC) is a program written to leverage parallelism and to automate the checkout process of plug-ins created in Eclipse RCP (Rich Client Platform). Eclipse plug-ins can be aggregated in a feature project. This innovation digests a feature description (xml file) and automatically checks out all of the plug-ins listed in the feature. This resolves the issue of manually checking out each plug-in required to work on the project. To minimize the amount of time necessary to checkout the plug-ins, this program makes the plug-in checkouts parallel. After parsing the feature, a checkout request is created for each plug-in listed in it. These requests are handled by a thread pool with a configurable number of threads. By checking out the plug-ins in parallel, the checkout process is streamlined before getting started on the project. For instance, projects that took 30 minutes to checkout now take less than 5 minutes. The effect is especially clear on a Mac, which has a network monitor displaying the bandwidth use. When running the client from a developer's home, the checkout process now saturates the bandwidth in order to get all the plug-ins checked out as fast as possible. For comparison, a checkout process that ranged from 8-200 Kbps from a developer's home is now able to saturate a pipe of 1.3 Mbps, resulting in significantly faster checkouts. Eclipse IDE (integrated development environment) tries to build a project as soon as it is downloaded. As part of another optimization, this innovation programmatically tells Eclipse to stop building while checkouts are happening, which dramatically reduces lock contention and enables plug-ins to continue downloading until all of them finish. Furthermore, the software re-enables automatic building, and forces Eclipse to do a clean build once it finishes checking out all of the plug-ins. This software is fully generic and does not contain any NASA-specific code. It can be applied to any
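The thread-pool pattern is straightforward to sketch. The `checkout` function and plug-in names below are hypothetical stand-ins for the real network-bound checkout; the point is that a configurable pool overlaps the requests instead of serializing them.

```python
from concurrent.futures import ThreadPoolExecutor
import time

def checkout(plugin):
    """Hypothetical stand-in for a single plug-in checkout."""
    time.sleep(0.02)                  # simulate network latency
    return plugin + ": checked out"

# plug-in names as would be parsed from the feature xml (illustrative)
plugins = ["com.example.plugin%d" % i for i in range(8)]

# configurable thread pool, as in PEPC; checkouts proceed concurrently
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(checkout, plugins))
```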

  11. Highly parallel computation

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.; Tichy, Walter F.

    1990-01-01

    Highly parallel computing architectures are the only means to achieve the computation rates demanded by advanced scientific problems. A decade of research has demonstrated the feasibility of such machines, and current research focuses on which architectures are best suited to particular classes of problems. Architectures designated as multiple instruction multiple datastream (MIMD) and single instruction multiple datastream (SIMD) have both produced the best results to date; neither shows a decisive advantage for most near-homogeneous scientific problems. For scientific problems with many dissimilar parts, more speculative architectures such as neural networks or data flow may be needed.

  12. Parallel sphere rendering

    SciTech Connect

    Krogh, M.; Hansen, C.; Painter, J.; de Verdiere, G.C.

    1995-05-01

    Sphere rendering is an important method for visualizing molecular dynamics data. This paper presents a parallel divide-and-conquer algorithm that is almost 90 times faster than current graphics workstations. To render extremely large data sets and large images, the algorithm uses the MIMD features of the supercomputers to divide up the data, render independent partial images, and then finally composite the multiple partial images using an optimal method. The algorithm and performance results are presented for the CM-5 and the T3D.
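The final compositing step can be sketched with per-pixel depth comparison: each node renders its share of the spheres into a color buffer plus a z-buffer, and the merge keeps the nearest sample at every pixel. This is the idea only; the paper's optimal compositing method (and its MIMD distribution) is not reproduced here.

```python
import numpy as np

def composite(partials):
    """Merge (color, depth) partial images; the nearer depth wins per pixel."""
    color, depth = (buf.copy() for buf in partials[0])
    for c, d in partials[1:]:
        nearer = d < depth
        color[nearer] = c[nearer]
        depth[nearer] = d[nearer]
    return color, depth

h = w = 4
c1, d1 = np.full((h, w), 1), np.full((h, w), 5.0)   # node 1's partial image
c2, d2 = np.full((h, w), 2), np.full((h, w), 3.0)   # node 2's partial image
d2[0, 0] = 9.0              # one pixel where node 1's sphere is nearer
img, z = composite([(c1, d1), (c2, d2)])
```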

  13. Comparing Aperture Photometry Software Packages

    NASA Astrophysics Data System (ADS)

    Bajaj, V.; Khandrika, H.

    2017-04-01

    Multiple software packages exist to perform aperture photometry on HST data. Three of the most widely used are the Python package PhotUtils, the IDL function APER, and the IRAF/PyRAF package DAOPHOT. The results produced by DAOPHOT are slightly incorrect, at approximately 0.1% too large for WFC3/IR images measured with a 3-pixel aperture (PhotUtils and APER produce the correct results). The magnitude of the DAOPHOT discrepancy is dependent on the type of source and filter used (as this impacts the PSF) due to DAOPHOT's approximation of a circle as a slightly larger irregular polygon. We present a quantification of this error for WFC3/IR data, though the analysis is applicable for any small-aperture photometry.
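The scale of such a bias is easy to see with idealized geometry: a circumscribed regular polygon covers slightly more area than its inscribed circle, so the summed flux of a flat source comes out slightly too large. This is illustrative only, not DAOPHOT's actual polygon construction.

```python
import math

def polygon_over_circle(n):
    """Area ratio of a circumscribed regular n-gon to its inscribed circle."""
    return n * math.tan(math.pi / n) / math.pi

excess_32 = polygon_over_circle(32) - 1.0    # about 0.3% extra area
excess_64 = polygon_over_circle(64) - 1.0    # about 0.08% extra area
```

The excess shrinks roughly as 1/n², so a modest increase in polygon vertex count brings the error down to the 0.1% level quoted above.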

  14. Roo: A parallel theorem prover

    SciTech Connect

    Lusk, E.L.; McCune, W.W.; Slaney, J.K.

    1991-11-01

    We describe a parallel theorem prover based on the Argonne theorem-proving system OTTER. The parallel system, called Roo, runs on shared-memory multiprocessors such as the Sequent Symmetry. We explain the parallel algorithm used and give performance results that demonstrate near-linear speedups on large problems.

  15. CSM parallel structural methods research

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf O.

    1989-01-01

    Parallel structural methods, research team activities, advanced architecture computers for parallel computational structural mechanics (CSM) research, the FLEX/32 multicomputer, a parallel structural analyses testbed, blade-stiffened aluminum panel with a circular cutout and the dynamic characteristics of a 60 meter, 54-bay, 3-longeron deployable truss beam are among the topics discussed.

  16. Parallelized direct execution simulation of message-passing parallel programs

    NASA Technical Reports Server (NTRS)

    Dickens, Phillip M.; Heidelberger, Philip; Nicol, David M.

    1994-01-01

    As massively parallel computers proliferate, there is growing interest in finding ways by which performance of massively parallel codes can be efficiently predicted. This problem arises in diverse contexts such as parallelizing compilers, parallel performance monitoring, and parallel algorithm development. In this paper we describe one solution where one directly executes the application code, but uses a discrete-event simulator to model details of the presumed parallel machine such as operating system and communication network behavior. Because this approach is computationally expensive, we are interested in its own parallelization, specifically the parallelization of the discrete-event simulator. We describe methods suitable for parallelized direct execution simulation of message-passing parallel programs, and report on the performance of such a system, Large Application Parallel Simulation Environment (LAPSE), we have built on the Intel Paragon. On all codes measured to date, LAPSE predicts performance well, typically within 10 percent relative error. Depending on the nature of the application code, we have observed low slowdowns (relative to natively executing code) and high relative speedups using up to 64 processors.

  17. Flexible packaging for PV modules

    NASA Astrophysics Data System (ADS)

    Dhere, Neelkanth G.

    2008-08-01

    Economic, flexible packages that provide needed level of protection to organic and some other PV cells over >25-years have not yet been developed. However, flexible packaging is essential in niche large-scale applications. The typical configuration used in flexible photovoltaic (PV) module packaging is transparent frontsheet/encapsulant/PV cells/flexible substrate. Besides flexibility of various components, the solder bonds should also be flexible and resistant to fatigue due to cyclic loading. Flexible front sheets should provide optical transparency, mechanical protection, scratch resistance, dielectric isolation, water resistance, UV stability and adhesion to encapsulant. Examples are Tefzel, Tedlar and Silicone. Dirt can get embedded in soft layers such as silicone and obscure light. Water vapor transmittance rates (WVTR) of polymer films used in the food packaging industry as moisture barriers are ~0.05 g/(m2.day) under ambient conditions. In comparison, light emitting diodes employ packaging components that have WVTR of ~10^-6 g/(m2.day). WVTR of polymer sheets can be improved by coating them with dense inorganic/organic multilayers. Ethylene vinyl acetate, an amorphous copolymer used predominantly by the PV industry, has very high O2 and H2O diffusivity. Quaternary carbon chains (such as acetate) in a polymer lead to cleavage and loss of adhesional strength at relatively low exposures. Reactivity of PV module components increases in presence of O2 and H2O. Adhesional strength degrades due to the breakdown of structure of polymer by reactive, free radicals formed by high-energy radiation. Free radical formation in polymers is reduced when the aromatic rings are attached at regular intervals. This paper will review flexible packaging for PV modules.
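The gap between the two WVTR figures quoted above is easier to appreciate as cumulative ingress over a module lifetime. A back-of-envelope sketch, assuming an illustrative 1 m^2 of package area and a 25-year service life:

```python
def total_ingress_grams(wvtr_g_per_m2_day, area_m2, years):
    """Cumulative water ingress through a barrier at a constant WVTR."""
    return wvtr_g_per_m2_day * area_m2 * years * 365.0

food_grade = total_ingress_grams(0.05, 1.0, 25)   # roughly 456 g of water
led_grade = total_ingress_grams(1e-6, 1.0, 25)    # roughly 0.009 g of water
```

The five-orders-of-magnitude difference in WVTR translates directly into grams versus milligrams of moisture reaching the cells over the package lifetime.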

  18. CRRES Microelectronics Test Package (MEP)

    NASA Astrophysics Data System (ADS)

    Mullen, E. G.; Ray, K. P.

    1993-04-01

    The Microelectronics Test Package (MEP) flown on board the Combined Release and Radiation Effects Satellite (CRRES) contained over 60 device types and approximately 400 total devices which were tested for both single event upset (SEU) and total dose (parametric degradation and annealing). A description of the experiment, the method of testing devices, and the structure of data acquisition are presented. Sample flight data are shown. These included SEUs from a GaAs 1 K RAM during the March 1991 solar flare, and a comparison between passive shielding and a specially designed spot shielding package.

  19. CRRES microelectronics test package (MEP)

    SciTech Connect

    Mullen, E.G.; Ray, K.P.

    1993-04-01

    The Microelectronics Test Package (MEP) flown on board the Combined Release and Radiation Effects Satellite (CRRES) contained over 60 device types and approximately 400 total devices which were tested for both single event upset (SEU) and total dose (parametric degradation and annealing). A description of the experiment, the method of testing devices, and the structure of data acquisition are presented. Sample flight data are shown. These included SEUs from a GaAs 1 K RAM during the March 1991 solar flare, and a comparison between passive shielding and a specially designed spot shielding package.

  20. An Arbitrary Precision Computation Package

    SciTech Connect

    2003-06-14

    This package permits a scientist to perform computations using an arbitrarily high level of numeric precision (the equivalent of hundreds or even thousands of digits), by making only minor changes to conventional C++ or Fortran-90 source code. This software takes advantage of certain properties of IEEE floating-point arithmetic, together with advanced numeric algorithms, custom data types and operator overloading. Also included in this package is the "Experimental Mathematician's Toolkit", which incorporates many of these facilities into an easy-to-use interactive program.
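Python's standard decimal module gives the same flavor as the approach described: a custom numeric type with operator overloading, computing at whatever precision the context requests. This is an analogy, not the package's own API.

```python
from decimal import Decimal, getcontext

getcontext().prec = 60            # work with ~60 significant digits
two = Decimal(2)
root = two.sqrt()                 # sqrt(2), correctly rounded to 60 digits
residual = root * root - two      # vanishes to roughly the working precision
```

As in the package, ordinary-looking arithmetic (`root * root`) dispatches to high-precision routines through operator overloading, so existing formulas need little rewriting.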

  1. Truss Performance and Packaging Metrics

    NASA Technical Reports Server (NTRS)

    Mikulas, Martin M.; Collins, Timothy J.; Doggett, William; Dorsey, John; Watson, Judith

    2006-01-01

    In the present paper a set of performance metrics are derived from first principles to assess the efficiency of competing space truss structural concepts in terms of mass, stiffness, and strength, for designs that are constrained by packaging. The use of these performance metrics provides unique insight into the primary drivers for lowering structural mass and packaging volume as well as enabling quantitative concept performance evaluation and comparison. To demonstrate the use of these performance metrics, data for existing structural concepts are plotted and discussed. Structural performance data is presented for various mechanical deployable concepts, for erectable structures, and for rigidizable structures.

  2. The role of packaging film permselectivity in modified atmosphere packaging.

    PubMed

    Al-Ati, Tareq; Hotchkiss, Joseph H

    2003-07-02

    Modified atmosphere packaging (MAP) is commercially used to increase the shelf life of packaged produce by reducing the produce respiration rate, delaying senescence, and inhibiting the growth of many spoilage organisms, ultimately increasing product shelf life. MAP systems typically optimize O(2) levels to achieve these effects while preventing anaerobic fermentation but fail to optimize CO(2) concentrations. Altering film permselectivity (i.e., beta, which is the ratio of CO(2)/O(2) permeation coefficients) could be utilized to concurrently optimize levels of both CO(2) and O(2) in MAP systems. We investigated the effect of modifying film permselectivity on the equilibrium gas composition of a model MAP produce system packaged in containers incorporating modified poly(ethylene) ionomer films with CO(2)/O(2) permselectivities between 4-5 and 0.8-1.3. To compare empirical to calculated data of the effect of permselectivity on the equilibrium gas composition of the MAP produce system, a mathematical model commonly used to optimize MAP of respiring produce was applied. The calculated gas composition agreed with observed values, using empirical respiration data from fresh cut apples as a test system and permeability data from tested and theoretical films. The results suggest that packaging films with CO(2)/O(2) permselectivities lower than those commercially available (<3) would further optimize O(2) and CO(2) concentration in MAP of respiring produce, particularly highly respiring and minimally processed produce.
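The role of beta can be sketched with a minimal steady-state balance (a textbook-style model, not the paper's exact parameterization, and with purely illustrative units): at equilibrium, gas permeation through the film balances the produce's respiration, so lowering beta raises the CO2 level the package retains without changing the O2 set point.

```python
def equilibrium_gas(beta, p_o2=1.0, area=1.0, thickness=0.5,
                    r_o2=10.0, r_co2=10.0, mass=2.0,
                    o2_out=20.9, co2_out=0.03):
    """Steady-state (%O2, %CO2) inside the package (illustrative units).

    Balance: (P * area / thickness) * concentration difference = R * mass.
    """
    k = thickness * mass / (p_o2 * area)
    o2_in = o2_out - r_o2 * k              # O2 drawn down by respiration
    co2_in = co2_out + r_co2 * k / beta    # CO2 retained; beta = P_CO2/P_O2
    return o2_in, co2_in

high_beta = equilibrium_gas(beta=4.0)      # film vents CO2 readily
low_beta = equilibrium_gas(beta=1.0)       # film retains CO2
```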

  3. Benchmarking massively parallel architectures

    SciTech Connect

    Lubeck, O.; Moore, J.; Simmons, M.; Wasserman, H.

    1993-01-01

    The purpose of this paper is to summarize some initial experiences related to measuring the performance of massively parallel processors (MPPs) at Los Alamos National Laboratory (LANL). Actually, the range of MPP architectures the authors have used is rather limited, being confined mostly to the Thinking Machines Corporation (TMC) Connection Machine CM-2 and CM-5. Some very preliminary work has been carried out on the Kendall Square KSR-1, and efforts related to other machines, such as the Intel Paragon and the soon-to-be-released CRAY T3D are planned. This paper will concentrate more on methodology rather than discuss specific architectural strengths and weaknesses; the latter is expected to be the subject of future reports. MPP benchmarking is a field in critical need of structure and definition. As the authors have stated previously, such machines have enormous potential, and there is certainly a dire need for orders of magnitude computational power over current supercomputers. However, performance reports for MPPs must emphasize actual sustainable performance from real applications in a careful, responsible manner. Such has not always been the case. A recent paper has described in some detail, the problem of potentially misleading performance reporting in the parallel scientific computing field. Thus, in this paper, the authors briefly offer a few general ideas on MPP performance analysis.

  4. Parallelizing quantum circuit synthesis

    NASA Astrophysics Data System (ADS)

    Di Matteo, Olivia; Mosca, Michele

    2016-03-01

    Quantum circuit synthesis is the process in which an arbitrary unitary operation is decomposed into a sequence of gates from a universal set, typically one which a quantum computer can implement both efficiently and fault-tolerantly. As physical implementations of quantum computers improve, the need is growing for tools that can effectively synthesize components of the circuits and algorithms they will run. Existing algorithms for exact, multi-qubit circuit synthesis scale exponentially in the number of qubits and circuit depth, leaving synthesis intractable for circuits on more than a handful of qubits. Even modest improvements in circuit synthesis procedures may lead to significant advances, pushing forward the boundaries of not only the size of solvable circuit synthesis problems, but also in what can be realized physically as a result of having more efficient circuits. We present a method for quantum circuit synthesis using deterministic walks. Also termed pseudorandom walks, these are walks in which once a starting point is chosen, its path is completely determined. We apply our method to construct a parallel framework for circuit synthesis, and implement one such version performing optimal T-count synthesis over the Clifford+T gate set. We use our software to present examples where parallelization offers a significant speedup on the runtime, as well as directly confirm that the 4-qubit 1-bit full adder has optimal T-count 7 and T-depth 3.

  5. Parallel Eigenvalue extraction

    NASA Technical Reports Server (NTRS)

    Akl, Fred A.

    1989-01-01

    A new numerical algorithm for the solution of large-order eigenproblems typically encountered in linear elastic finite element systems is presented. The architecture of parallel processing is utilized in the algorithm to achieve increased speed and efficiency of calculations. The algorithm is based on the frontal technique for the solution of linear simultaneous equations and the modified subspace eigenanalysis method for the solution of the eigenproblem. Assembly, elimination and back-substitution of degrees of freedom are performed concurrently, using a number of fronts. All fronts converge to and diverge from a predefined global front during elimination and back-substitution, respectively. In the meantime, reduction of the stiffness and mass matrices required by the modified subspace method can be completed during the convergence/divergence cycle and an estimate of the required eigenpairs obtained. Successive cycles of convergence and divergence are repeated until the desired accuracy of calculations is achieved. The advantages of this new algorithm in parallel computer architecture are discussed.

  6. Massively Parallel QCD

    SciTech Connect

    Soltz, R; Vranas, P; Blumrich, M; Chen, D; Gara, A; Giampapa, M; Heidelberger, P; Salapura, V; Sexton, J; Bhanot, G

    2007-04-11

    The theory of the strong nuclear force, Quantum Chromodynamics (QCD), can be numerically simulated from first principles on massively-parallel supercomputers using the method of Lattice Gauge Theory. We describe the special programming requirements of lattice QCD (LQCD) as well as the optimal supercomputer hardware architectures that it suggests. We demonstrate these methods on the BlueGene massively-parallel supercomputer and argue that LQCD and the BlueGene architecture are a natural match. This can be traced to the simple fact that LQCD is a regular lattice discretization of space into lattice sites while the BlueGene supercomputer is a discretization of space into compute nodes, and that both are constrained by requirements of locality. This simple relation is both technologically important and theoretically intriguing. The main result of this paper is the speedup of LQCD using up to 131,072 CPUs on the largest BlueGene/L supercomputer. The speedup is perfect with sustained performance of about 20% of peak. This corresponds to a maximum of 70.5 sustained TFlop/s. At these speeds LQCD and BlueGene are poised to produce the next generation of strong interaction physics theoretical results.
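The quoted figures are mutually consistent, as a quick arithmetic check shows, assuming BlueGene/L's per-CPU peak of 2.8 GFlop/s (700 MHz dual-pipeline FPU with fused multiply-add; an assumption of this sketch, not stated in the abstract):

```python
cpus = 131072
peak_per_cpu_tflops = 2.8e-3                 # 2.8 GFlop/s in TFlop/s
peak_tflops = cpus * peak_per_cpu_tflops     # aggregate peak, ~367 TFlop/s
sustained_tflops = 70.5
fraction_of_peak = sustained_tflops / peak_tflops   # ~0.19, about 20% of peak
```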

  7. Parallel ptychographic reconstruction

    PubMed Central

    Nashed, Youssef S. G.; Vine, David J.; Peterka, Tom; Deng, Junjing; Ross, Rob; Jacobsen, Chris

    2014-01-01

    Ptychography is an imaging method whereby a coherent beam is scanned across an object, and an image is obtained by iterative phasing of the set of diffraction patterns. It is able to be used to image extended objects at a resolution limited by scattering strength of the object and detector geometry, rather than at an optics-imposed limit. As technical advances allow larger fields to be imaged, computational challenges arise for reconstructing the correspondingly larger data volumes, yet at the same time there is also a need to deliver reconstructed images immediately so that one can evaluate the next steps to take in an experiment. Here we present a parallel method for real-time ptychographic phase retrieval. It uses a hybrid parallel strategy to divide the computation between multiple graphics processing units (GPUs) and then employs novel techniques to merge sub-datasets into a single complex phase and amplitude image. Results are shown on a simulated specimen and a real dataset from an X-ray experiment conducted at a synchrotron light source. PMID:25607174

  8. Parallel ptychographic reconstruction

    SciTech Connect

    Nashed, Youssef S. G.; Vine, David J.; Peterka, Tom; Deng, Junjing; Ross, Rob; Jacobsen, Chris

    2014-12-19

    Ptychography is an imaging method whereby a coherent beam is scanned across an object, and an image is obtained by iterative phasing of the set of diffraction patterns. It is able to be used to image extended objects at a resolution limited by scattering strength of the object and detector geometry, rather than at an optics-imposed limit. As technical advances allow larger fields to be imaged, computational challenges arise for reconstructing the correspondingly larger data volumes, yet at the same time there is also a need to deliver reconstructed images immediately so that one can evaluate the next steps to take in an experiment. Here we present a parallel method for real-time ptychographic phase retrieval. It uses a hybrid parallel strategy to divide the computation between multiple graphics processing units (GPUs) and then employs novel techniques to merge sub-datasets into a single complex phase and amplitude image. Results are shown on a simulated specimen and a real dataset from an X-ray experiment conducted at a synchrotron light source.

  9. Applied Parallel Metadata Indexing

    SciTech Connect

    Jacobi, Michael R

    2012-08-01

    The GPFS Archive is a parallel archive used by hundreds of users in the Turquoise collaboration network. It houses 4+ petabytes of data in more than 170 million files. Currently, users must navigate the file system to retrieve their data, requiring them to remember file paths and names. A better solution might allow users to tag data with meaningful labels and search the archive using standard and user-defined metadata, while maintaining security. Last summer, I developed the backend to a tool that adheres to these design goals. The backend works by importing GPFS metadata into a MongoDB cluster, which is then indexed on each attribute. This summer, I implemented security and developed the user interface for the search tool. To meet security requirements, each database table is associated with a single user, stores only records that the user may read, and requires a set of credentials to access. The interface to the search tool is implemented using FUSE (Filesystem in USErspace). FUSE is an intermediate layer that intercepts file system calls and allows the developer to redefine how those calls behave. In the case of this tool, FUSE interfaces with MongoDB to issue queries and populate output. A FUSE implementation is desirable because it allows users to interact with the search tool using commands they are already familiar with. These security and interface additions are essential for a usable product.
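    The backend design, one index per metadata attribute queried with AND semantics, can be sketched without a running MongoDB cluster. The toy below uses plain Python dictionaries in place of MongoDB collections; the record schema (`path`, `owner`, `tags`) is a hypothetical stand-in for the actual GPFS attributes.

```python
from collections import defaultdict

def index_metadata(records):
    """Build one inverted index per metadata attribute, mimicking the
    per-attribute MongoDB indexes described above.  Each record is a
    dict with a 'path' key plus arbitrary attributes; list-valued
    attributes (e.g. user tags) are indexed per element."""
    indexes = defaultdict(lambda: defaultdict(set))
    for rec in records:
        for attr, value in rec.items():
            if attr == 'path':
                continue
            values = value if isinstance(value, (list, tuple)) else [value]
            for v in values:
                indexes[attr][v].add(rec['path'])
    return indexes

def search(indexes, **criteria):
    """Return paths matching all attribute=value criteria (AND query)."""
    hits = None
    for attr, value in criteria.items():
        paths = indexes[attr].get(value, set())
        hits = paths if hits is None else hits & paths
    return hits or set()

# Hypothetical archive records.
records = [
    {'path': '/archive/run1.h5', 'owner': 'alice', 'tags': ['simulation', '2012']},
    {'path': '/archive/run2.h5', 'owner': 'alice', 'tags': ['experiment']},
    {'path': '/archive/notes.txt', 'owner': 'bob', 'tags': ['2012']},
]
idx = index_metadata(records)
```

    The per-user security model described above would correspond to building one such index per user over only the records that user may read.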

  10. Vacuum-Packaging Technology for IRFPAs

    NASA Astrophysics Data System (ADS)

    Matsumura, Takeshi; Tokuda, Takayuki; Tsutinaga, Akinobu; Kimata, Masafumi; Abe, Hideyuki; Tokashiki, Naotaka

    We developed vacuum-packaging equipment and low-cost vacuum packaging technology for IRFPAs. The equipment is versatile and can process packages with various materials and structures. Getters are activated before vacuum packaging, and we can solder caps/ceramic-packages and caps/windows in a high-vacuum condition using this equipment. We also developed a micro-vacuum gauge to measure pressure in vacuum packages. The micro-vacuum gauge uses the principle of thermal conduction of gases. We use a multi-ceramic package that consists of six packages fabricated on a ceramic sheet, and confirm that the pressure in the processed packages is sufficiently low for high-performance IRFPA.

  11. Package for integrated optic circuit and method

    DOEpatents

    Kravitz, S.H.; Hadley, G.R.; Warren, M.E.; Carson, R.F.; Armendariz, M.G.

    1998-08-04

    A structure and method are disclosed for packaging an integrated optic circuit. The package comprises a first wall having a plurality of microlenses formed therein to establish channels of optical communication with an integrated optic circuit within the package. A first registration pattern is provided on an inside surface of one of the walls of the package for alignment and attachment of the integrated optic circuit. The package in one embodiment may further comprise a fiber holder for aligning and attaching a plurality of optical fibers to the package and extending the channels of optical communication to the fibers outside the package. In another embodiment, a fiber holder may be used to hold the fibers and align the fibers to the package. The fiber holder may be detachably connected to the package. 6 figs.

  12. Package for integrated optic circuit and method

    DOEpatents

    Kravitz, Stanley H.; Hadley, G. Ronald; Warren, Mial E.; Carson, Richard F.; Armendariz, Marcelino G.

    1998-01-01

    A structure and method for packaging an integrated optic circuit. The package comprises a first wall having a plurality of microlenses formed therein to establish channels of optical communication with an integrated optic circuit within the package. A first registration pattern is provided on an inside surface of one of the walls of the package for alignment and attachment of the integrated optic circuit. The package in one embodiment may further comprise a fiber holder for aligning and attaching a plurality of optical fibers to the package and extending the channels of optical communication to the fibers outside the package. In another embodiment, a fiber holder may be used to hold the fibers and align the fibers to the package. The fiber holder may be detachably connected to the package.

  13. Efficient Parallel Kernel Solvers for Computational Fluid Dynamics Applications

    NASA Technical Reports Server (NTRS)

    Sun, Xian-He

    1997-01-01

    Distributed-memory parallel computers dominate today's parallel computing arena. These machines, such as the Intel Paragon, IBM SP2, and Cray Origin2000, have successfully delivered high-performance computing power for solving some of the so-called "grand-challenge" problems. Despite initial success, parallel machines have not been widely accepted in production engineering environments due to the complexity of parallel programming. On a parallel computing system, a task has to be partitioned and distributed appropriately among processors to reduce communication cost and to attain load balance. More importantly, even with careful partitioning and mapping, the performance of an algorithm may still be unsatisfactory, since conventional sequential algorithms may be serial in nature and may not be implemented efficiently on parallel machines. In many cases, new algorithms have to be introduced to increase parallel performance. In order to achieve optimal performance, in addition to partitioning and mapping, a careful performance study should be conducted for a given application to find a good algorithm-machine combination. This process, however, is usually painful and elusive. The goal of this project is to design and develop efficient parallel algorithms for highly accurate Computational Fluid Dynamics (CFD) simulations and other engineering applications. The work plan is 1) developing highly accurate parallel numerical algorithms, 2) conducting preliminary testing to verify the effectiveness and potential of these algorithms, and 3) incorporating newly developed algorithms into actual simulation packages. The work plan has been well achieved. Two highly accurate, efficient Poisson solvers have been developed and tested based on two different approaches: (1) adopting a mathematical geometry which has a better capacity to describe the fluid, and (2) using a compact scheme to gain high-order accuracy in the numerical discretization.
The previously developed Parallel Diagonal Dominant (PDD) algorithm

  14. A systolic array parallelizing compiler

    SciTech Connect

    Tseng, P.S.

    1990-01-01

    This book presents a completely new approach to the problem of systolic array parallelizing compilers. It describes the AL parallelizing compiler for the Warp systolic array, the first working systolic array parallelizing compiler that can generate efficient parallel code for complete LINPACK routines. The book begins by analyzing the architectural strengths of the Warp systolic array. It proposes a model for mapping programs onto the machine and introduces the notion of data relations for optimizing the program mapping. Also presented are successful applications of the AL compiler in matrix computation and image processing. A complete listing of the source program and the compiler-generated parallel code is given to clarify the overall picture of the compiler. The book concludes that a systolic array parallelizing compiler can produce efficient parallel code, almost identical to what the user would have written by hand.

  15. Hanford Site radioactive hazardous materials packaging directory

    SciTech Connect

    McCarthy, T.L.

    1995-12-01

    The Hanford Site Radioactive Hazardous Materials Packaging Directory (RHMPD) provides information concerning packagings owned or routinely leased by Westinghouse Hanford Company (WHC) for offsite shipments or onsite transfers of hazardous materials. Specific information is provided for selected packagings including the following: general description; approval documents/specifications (Certificates of Compliance and Safety Analysis Reports for Packaging); technical information (drawing numbers and dimensions); approved contents; areas of operation; and general information. Packaging Operations & Development (PO&D) maintains the RHMPD and may be contacted for additional information or assistance in obtaining referenced documentation or assistance concerning packaging selection, availability, and usage.

  16. Small planar packaging system for high-throughput ATM switching systems

    NASA Astrophysics Data System (ADS)

    Kishimoto, T.; Yasuda, K.; Oka, H.; Kaneko, Y.; Kawauchi, M.

    1995-03-01

    A small planar packaging (SPP) system is described that can be combined with card-on-board (COB) packaging in ATM switching systems with throughputs of over 40 Gbit/s. Using a newly developed quasi-coaxial zero-insertion-force connector, point-to-point transmission of 8-bit parallel signals at 311 Mbit/s is achieved at arbitrary locations on the SPP system's shelf. In addition, 5400 I/O connections are made in the region of the planar packaging system, so the SPP system eliminates the I/O pin-count limitation. Furthermore, the heat flux of the SPP system is five times higher than that of conventional COB packaging because of its air-flow control structure.

  17. RAGG - R EPISODIC AGGREGATION PACKAGE

    EPA Science Inventory

    The RAGG package is an R implementation of the CMAQ episodic model aggregation method developed by Constella Group and the Environmental Protection Agency. RAGG is a tool to provide climatological seasonal and annual deposition of sulphur and nitrogen for multimedia management. ...

  18. Food Nanotechnology - Food Packaging Applications

    USDA-ARS?s Scientific Manuscript database

    Astonishing growth in the market for nanofoods is predicted in the future, from the current market of $2.6 billion to $20.4 billion in 2010. The market for nanotechnology in food packaging alone is expected to reach $360 million in 2008. In large part, the impetus for this predicted growth is the ...

  19. ULFEM time series analysis package

    USGS Publications Warehouse

    Karl, Susan M.; McPhee, Darcy K.; Glen, Jonathan M. G.; Klemperer, Simon L.

    2013-01-01

    This manual describes how to use the Ultra-Low-Frequency ElectroMagnetic (ULFEM) software package. Casual users can read the quick-start guide and will probably not need any more information than this. For users who may wish to modify the code, we provide further description of the routines.

  20. Food Nanotechnology: Food Packaging Applications

    USDA-ARS?s Scientific Manuscript database

    Astonishing growth in the market for nanofoods is predicted in the future, from the current market of $2.6 billion to $20.4 billion in 2010. The market for nanotechnology in food packaging alone is expected to reach $360 million in 2008. In large part the impetus for this predicted growth is the e...

  1. COLDMON -- Cold File Analysis Package

    NASA Astrophysics Data System (ADS)

    Rawlinson, D. J.

    The COLDMON package has been written to allow system managers to identify those items of software that are not used (or used infrequently) on their systems. It consists of a few command procedures and a Fortran program to analyze the results. It makes use of the AUDIT facility and security ACLs in VMS.
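    COLDMON's core idea, flagging files whose last access predates a threshold, translates directly to other systems. The sketch below is a POSIX analogue, not the VMS AUDIT/ACL mechanism COLDMON actually uses, and access times are only meaningful on filesystems that record them (many are mounted noatime/relatime today).

```python
import os
import time

def cold_files(root, days=180):
    """Yield files under `root` whose access time is older than `days`.

    COLDMON itself analyzes VMS AUDIT records and security ACLs; this
    POSIX sketch approximates the same report using atime.
    """
    cutoff = time.time() - days * 86400
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                if os.stat(path).st_atime < cutoff:
                    yield path
            except OSError:
                continue  # file vanished or is unreadable; skip it

# Demo: one stale file and one fresh file in a scratch directory.
import tempfile
scratch = tempfile.mkdtemp()
stale = os.path.join(scratch, 'old.dat')
fresh = os.path.join(scratch, 'new.dat')
for p in (stale, fresh):
    open(p, 'w').close()
os.utime(stale, (time.time() - 365 * 86400,) * 2)  # backdate access time
found = set(cold_files(scratch, days=180))
```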

  2. Pascal Statistical Procedures Package (PSPP).

    DTIC Science & Technology

    1983-12-01

    microcomputer center and as a research tool for users to do a 'ball-park' analysis of a data base. Included in the package are procedures to handle data base...

  3. The Macro - Games Course Package.

    ERIC Educational Resources Information Center

    Heriot-Watt Univ., Edinburgh (Scotland). Esmee Fairbairn Economics Research Centre.

    Part of an Economic Education Series, the course package is designed to teach basic concepts and fundamental principles of macroeconomics and how they can be applied to various world problems. For use with college students, learning is gained through lectures, discussion, simulation games, programmed learning, and text. Time allotment is a 15-week…

  4. A Computerized Petroleum Geology Package.

    ERIC Educational Resources Information Center

    Moser, Louise E.

    1983-01-01

    Describes a package of computer programs developed to implement an oil exploration game that gives undergraduate students practical experience in applying theoretical principles of petroleum geology. The programs facilitate management of the game by the instructor and enhance the learning experience. (Author/MBR)

  5. RAGG - R EPISODIC AGGREGATION PACKAGE

    EPA Science Inventory

    The RAGG package is an R implementation of the CMAQ episodic model aggregation method developed by Constella Group and the Environmental Protection Agency. RAGG is a tool to provide climatological seasonal and annual deposition of sulphur and nitrogen for multimedia management. ...

  6. MagiC: Software Package for Multiscale Modeling.

    PubMed

    Mirzoev, Alexander; Lyubartsev, Alexander P

    2013-03-12

    We present software package MagiC, which is designed to perform systematic structure-based coarse graining of molecular models. The effective pairwise potentials between coarse-grained sites of low-resolution molecular models are constructed to reproduce structural distribution functions obtained from the modeling of the system in a high resolution (atomistic) description. The software supports coarse-grained tabulated intramolecular bond and angle interactions, as well as tabulated nonbonded interactions between different site types in the coarse-grained system, with the treatment of long-range electrostatic forces by the Ewald summation. Two methods of effective potential refinement are implemented: iterative Boltzmann inversion and inverse Monte Carlo, the latter accounting for cross-correlations between pair interactions. MagiC uses its own Metropolis Monte Carlo sampling engine, allowing parallel simulation of many copies of the system with subsequent averaging of the properties, which provides fast convergence of the method with nearly linear scaling at parallel execution.
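    The iterative Boltzmann inversion refinement mentioned above has a compact core update: the pair potential is corrected by the logarithmic mismatch between the model and target radial distribution functions. The NumPy sketch below shows a single IBI step; the damping factor and the handling of unsampled bins are illustrative choices, not necessarily MagiC's.

```python
import numpy as np

def ibi_update(U, g_model, g_target, kT=1.0, damping=0.5):
    """One iterative Boltzmann inversion step.

    Corrects the coarse-grained pair potential U(r) toward the target
    radial distribution function:
        U_new(r) = U(r) + damping * kT * ln( g_model(r) / g_target(r) )
    A damping factor < 1 is commonly used for stability.
    """
    with np.errstate(divide='ignore', invalid='ignore'):
        correction = kT * np.log(g_model / g_target)
    # Leave the potential untouched where either RDF has no samples.
    correction = np.where(np.isfinite(correction), correction, 0.0)
    return U + damping * correction
```

    Inverse Monte Carlo, the second refinement method in the abstract, replaces this diagonal update with one that accounts for cross-correlations between pair interactions.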

  7. The Canon package: a fast kernel for tensor manipulators

    NASA Astrophysics Data System (ADS)

    Manssur, L. R. U.; Portugal, R.

    2004-02-01

    This paper describes the Canon package written in the Maple programming language. Canon's purpose is to work as a kernel for complete Maple tensor packages or any Maple package for manipulating indexed objects obeying generic permutation symmetries and possibly having dummy indices. Canon uses Computational Group Theory algorithms to efficiently simplify or manipulate generic tensor expressions. We describe the main command to access the package, give examples, and estimate typical computation timings. Program summary: Title of program: Canon. Catalogue identifier: ADSP. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADSP. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Computers: any machine running Maple versions 6 to 9. Operating systems under which the program has been tested: Microsoft Windows, Linux. Programming language used: Maple. Memory required to execute with typical data: up to 10 Mb. No. of bits in word: 32 or 64. No. of processors used: 1. Has the code been vectorized or parallelized?: No. No. of bytes in distributed program, including test data, etc.: 45 910. Distribution format: tar gzip file. Nature of physical problem: Manipulation and simplification of tensor expressions (or any expression in terms of indexed objects) in explicit index notation, where the indices obey generic permutation symmetries and there may exist dummy (summed over) indices. Method of solution: Computational Group Theory algorithms have been used, especially algorithms for finding canonical representations of single and double cosets, and algorithms for creating strong generating sets. Restriction on the complexity of the problem: Computer memory. With current equipment, expressions with hundreds of indices have been manipulated successfully. Typical running time: Simplification of expressions with 15 Riemann tensors was done in less than one minute on a personal computer. Unusual features: The use of Computational Group Theory algorithms

  8. Parallel Polarization State Generation

    NASA Astrophysics Data System (ADS)

    She, Alan; Capasso, Federico

    2016-05-01

    The control of polarization, an essential property of light, is of wide scientific and technological interest. The general problem of generating arbitrary time-varying states of polarization (SOP) has always been mathematically formulated by a series of linear transformations, i.e. a product of matrices, imposing a serial architecture. Here we show a parallel architecture described by a sum of matrices. The theory is experimentally demonstrated by modulating spatially-separated polarization components of a laser using a digital micromirror device that are subsequently beam combined. This method greatly expands the parameter space for engineering devices that control polarization. Consequently, performance characteristics, such as speed, stability, and spectral range, are entirely dictated by the technologies of optical intensity modulation, including absorption, reflection, emission, and scattering. This opens up important prospects for polarization state generation (PSG) with unique performance characteristics with applications in spectroscopic ellipsometry, spectropolarimetry, communications, imaging, and security.
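    The "sum of matrices" idea can be made concrete with Jones vectors: each spatially separated arm contributes a fixed polarization component set by static optics, and only the nonnegative amplitudes are modulated before the arms are beam-combined. The two-arm layout and the sign convention for S3 below are illustrative assumptions, not the paper's actual DMD configuration.

```python
import numpy as np

def stokes(E):
    """Stokes parameters [S0, S1, S2, S3] of a Jones vector E = (Ex, Ey),
    using the convention S3 = 2 Im(Ex* Ey)."""
    Ex, Ey = E
    p = np.conj(Ex) * Ey
    return np.array([abs(Ex)**2 + abs(Ey)**2,
                     abs(Ex)**2 - abs(Ey)**2,
                     2 * p.real,
                     2 * p.imag])

# Fixed per-arm fields set by static optics; the intensity modulator
# (e.g. one DMD pixel block per arm) scales each by a nonnegative a_k.
arms = [np.array([1.0, 0.0]),     # horizontal arm
        np.array([0.0, 1.0j])]    # vertical arm with a fixed 90-degree phase

def generate(a):
    """Parallel SOP generation: the output field is a weighted SUM of
    arm fields, E_out = sum_k a_k e_k, rather than a product of
    transformation matrices applied in series."""
    return sum(ak * ek for ak, ek in zip(a, arms))

E_h = generate([1.0, 0.0])                 # pure horizontal
E_c = generate([1.0, 1.0]) / np.sqrt(2)    # circular: equal weights
```

    Changing only the intensity weights moves the output across states, which is why the achievable speed and stability are set entirely by the intensity-modulation technology.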

  9. Parallel Polarization State Generation.

    PubMed

    She, Alan; Capasso, Federico

    2016-05-17

    The control of polarization, an essential property of light, is of wide scientific and technological interest. The general problem of generating arbitrary time-varying states of polarization (SOP) has always been mathematically formulated by a series of linear transformations, i.e. a product of matrices, imposing a serial architecture. Here we show a parallel architecture described by a sum of matrices. The theory is experimentally demonstrated by modulating spatially-separated polarization components of a laser using a digital micromirror device that are subsequently beam combined. This method greatly expands the parameter space for engineering devices that control polarization. Consequently, performance characteristics, such as speed, stability, and spectral range, are entirely dictated by the technologies of optical intensity modulation, including absorption, reflection, emission, and scattering. This opens up important prospects for polarization state generation (PSG) with unique performance characteristics with applications in spectroscopic ellipsometry, spectropolarimetry, communications, imaging, and security.

  10. Toward Parallel Document Clustering

    SciTech Connect

    Mogill, Jace A.; Haglin, David J.

    2011-09-01

    A key challenge to automated clustering of documents in large text corpora is the high cost of comparing documents in a multimillion-dimensional document space. The Anchors Hierarchy is a fast data structure and algorithm for localizing data based on a triangle-inequality-obeying distance metric; the algorithm strives to minimize the number of distance calculations needed to cluster the documents into “anchors” around reference documents called “pivots”. We extend the original algorithm to increase the amount of available parallelism and consider two implementations: a complex data structure which affords efficient searching, and a simple data structure which requires repeated sorting. The sorting implementation is integrated with a text-corpora “Bag of Words” program, and initial performance results of an end-to-end document-processing workflow are reported.
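    The triangle-inequality pruning at the heart of the Anchors Hierarchy can be shown in a few lines: if the current best pivot p is at distance d(x, p) and another pivot q satisfies d(p, q) >= 2 d(x, p), then d(x, q) >= d(p, q) - d(x, p) >= d(x, p), so q can be skipped without computing d(x, q). This is a simplified sketch of that one idea, not the full algorithm.

```python
import numpy as np

def assign_to_anchors(points, pivots):
    """Assign each point to its nearest pivot, using the triangle
    inequality to skip distance computations.  Pivot-to-pivot distances
    are precomputed once; a pivot j cannot beat the current best pivot
    for x when their separation is at least twice the current best
    distance."""
    dist = lambda u, v: np.linalg.norm(u - v)
    pp = np.array([[dist(p, q) for q in pivots] for p in pivots])
    labels, skipped = [], 0
    for x in points:
        best, best_d = 0, dist(x, pivots[0])
        for j in range(1, len(pivots)):
            if pp[best, j] >= 2 * best_d:
                skipped += 1          # pruned without a distance call
                continue
            d = dist(x, pivots[j])
            if d < best_d:
                best, best_d = j, d
        labels.append(best)
    return labels, skipped
```

    For document clustering the expensive `dist` would be a high-dimensional comparison, so every pruned call matters.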

  11. Parallel tridiagonal equation solvers

    NASA Technical Reports Server (NTRS)

    Stone, H. S.

    1974-01-01

    Three parallel algorithms were compared for the direct solution of tridiagonal linear systems of equations. The algorithms are suitable for computers such as ILLIAC 4 and CDC STAR. For array computers similar to ILLIAC 4, cyclic odd-even reduction has the least operation count for highly structured sets of equations, and recursive doubling has the least count for relatively unstructured sets of equations. Since the difference in operation counts for these two algorithms is not substantial, their relative running times may be more related to overhead operations, which are not measured in this paper. The third algorithm, based on Buneman's Poisson solver, has more arithmetic operations than the others, and appears to be the least favorable. For pipeline computers similar to CDC STAR, cyclic odd-even reduction appears to be the most preferable algorithm for all cases.
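    Of the three algorithms compared, cyclic odd-even reduction is the easiest to sketch: each level eliminates every other remaining unknown, and all eliminations within a level are independent, which is exactly what an array machine like the ILLIAC IV exploits. A serial NumPy sketch for systems of size 2^k - 1 (the inner loops are the sweeps that would run in parallel):

```python
import numpy as np

def cyclic_reduction(a, b, c, d):
    """Solve a tridiagonal system by cyclic odd-even reduction.

    a, b, c hold the sub-, main-, and super-diagonals (a[0] and c[-1]
    are unused and forced to zero); the system size must be 2**k - 1.
    """
    a, b, c, d = (np.asarray(v, dtype=float).copy() for v in (a, b, c, d))
    n = len(b)
    assert n & (n + 1) == 0, "size must be 2**k - 1"
    a[0] = 0.0
    c[-1] = 0.0
    # Forward reduction: eliminate the odd-indexed unknowns level by level.
    s = 1
    while 2 * s <= n:
        for i in range(2 * s - 1, n, 2 * s):   # independent -> parallel
            al = a[i] / b[i - s]
            ga = c[i] / b[i + s]
            b[i] -= al * c[i - s] + ga * a[i + s]
            d[i] -= al * d[i - s] + ga * d[i + s]
            a[i] = -al * a[i - s]
            c[i] = -ga * c[i + s]
        s *= 2
    # Back substitution, filling in twice as many unknowns per level.
    x = np.zeros(n)
    while s >= 1:
        for i in range(s - 1, n, 2 * s):       # independent -> parallel
            left = x[i - s] if i - s >= 0 else 0.0
            right = x[i + s] if i + s < n else 0.0
            x[i] = (d[i] - a[i] * left - c[i] * right) / b[i]
        s //= 2
    return x
```

    Each forward level roughly doubles the arithmetic per remaining equation, which is why the paper's operation counts favor cyclic reduction only for certain problem structures.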

  12. Parallel Polarization State Generation

    PubMed Central

    She, Alan; Capasso, Federico

    2016-01-01

    The control of polarization, an essential property of light, is of wide scientific and technological interest. The general problem of generating arbitrary time-varying states of polarization (SOP) has always been mathematically formulated by a series of linear transformations, i.e. a product of matrices, imposing a serial architecture. Here we show a parallel architecture described by a sum of matrices. The theory is experimentally demonstrated by modulating spatially-separated polarization components of a laser using a digital micromirror device that are subsequently beam combined. This method greatly expands the parameter space for engineering devices that control polarization. Consequently, performance characteristics, such as speed, stability, and spectral range, are entirely dictated by the technologies of optical intensity modulation, including absorption, reflection, emission, and scattering. This opens up important prospects for polarization state generation (PSG) with unique performance characteristics with applications in spectroscopic ellipsometry, spectropolarimetry, communications, imaging, and security. PMID:27184813

  13. Parallel imaging microfluidic cytometer.

    PubMed

    Ehrlich, Daniel J; McKenna, Brian K; Evans, James G; Belkina, Anna C; Denis, Gerald V; Sherr, David H; Cheung, Man Ching

    2011-01-01

    By adding an additional degree of freedom from multichannel flow, the parallel microfluidic cytometer (PMC) combines some of the best features of fluorescence-activated flow cytometry (FCM) and microscope-based high-content screening (HCS). The PMC (i) lends itself to fast processing of large numbers of samples, (ii) adds a 1D imaging capability for intracellular localization assays (HCS), (iii) has a high rare-cell sensitivity, and (iv) has an unusual capability for time-synchronized sampling. An inability to practically handle large sample numbers has restricted applications of conventional flow cytometers and microscopes in combinatorial cell assays, network biology, and drug discovery. The PMC promises to relieve a bottleneck in these previously constrained applications. The PMC may also be a powerful tool for finding rare primary cells in the clinic. The multichannel architecture of current PMC prototypes allows 384 unique samples for a cell-based screen to be read out in ∼6-10 min, about 30 times the speed of most current FCM systems. In 1D intracellular imaging, the PMC can obtain protein localization using HCS marker strategies at many times the sample throughput of charge-coupled device (CCD)-based microscopes or CCD-based single-channel flow cytometers. The PMC also permits the signal integration time to be varied over a larger range than is practical in conventional flow cytometers. The signal-to-noise advantages are useful, for example, in counting rare positive cells in the most difficult early stages of genome-wide screening. We review the status of parallel microfluidic cytometry and discuss some of the directions the new technology may take.

  14. A parallel programming environment supporting multiple data-parallel modules

    SciTech Connect

    Seevers, B.K.; Quinn, M.J.; Hatcher, P.J.

    1992-10-01

    We describe a system that allows programmers to take advantage of both control and data parallelism through multiple intercommunicating data-parallel modules. This programming environment extends C-type stream I/O to include intermodule communication channels. The programmer writes each module as a separate data-parallel program, then develops a channel-linker specification describing how to connect the modules together. A channel linker we have developed loads the separate modules on the parallel machine and binds the communication channels together as specified. We present performance data demonstrating that a mixed control- and data-parallel solution can yield better performance than a strictly data-parallel solution. The system described currently runs on the Intel iWarp multicomputer.
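    The channel abstraction, separate modules connected by streams and bound together by a linker, can be mimicked in miniature with threads and queues. The original system targets the Intel iWarp with its own channel linker; this is only a shared-memory analogue, with the module names and end-of-stream marker as illustrative choices.

```python
import threading
import queue

def producer(chan):
    """Module 1: computes a block of squares and streams them down its
    output channel (stand-in for the stream-I/O-style channels above)."""
    for x in range(10):
        chan.put(x * x)
    chan.put(None)  # end-of-stream marker

def consumer(chan, result):
    """Module 2: reduces whatever arrives on its input channel."""
    total = 0
    while (v := chan.get()) is not None:
        total += v
    result.append(total)

def link_and_run():
    """Toy 'channel linker': create the channel, bind the module ports
    together, and launch both modules concurrently."""
    chan = queue.Queue()
    result = []
    mods = [threading.Thread(target=producer, args=(chan,)),
            threading.Thread(target=consumer, args=(chan, result))]
    for m in mods:
        m.start()
    for m in mods:
        m.join()
    return result[0]
```

    The point of the mixed model is that each module can itself be data-parallel internally while the channels carry the control-parallel structure between them.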

  15. Combinatorial parallel and scientific computing.

    SciTech Connect

    Pinar, Ali; Hendrickson, Bruce Alan

    2005-04-01

    Combinatorial algorithms have long played a pivotal enabling role in many applications of parallel computing. Graph algorithms in particular arise in load balancing, scheduling, mapping and many other aspects of the parallelization of irregular applications. These are still active research areas, mostly due to evolving computational techniques and rapidly changing computational platforms. But the relationship between parallel computing and discrete algorithms is much richer than the mere use of graph algorithms to support the parallelization of traditional scientific computations. Important, emerging areas of science are fundamentally discrete, and they are increasingly reliant on the power of parallel computing. Examples include computational biology, scientific data mining, and network analysis. These applications are changing the relationship between discrete algorithms and parallel computing. In addition to their traditional role as enablers of high performance, combinatorial algorithms are now customers for parallel computing. New parallelization techniques for combinatorial algorithms need to be developed to support these nontraditional scientific approaches. This chapter will describe some of the many areas of intersection between discrete algorithms and parallel scientific computing. Due to space limitations, this chapter is not a comprehensive survey, but rather an introduction to a diverse set of techniques and applications with a particular emphasis on work presented at the Eleventh SIAM Conference on Parallel Processing for Scientific Computing. Some topics highly relevant to this chapter (e.g. load balancing) are addressed elsewhere in this book, and so we will not discuss them here.

  16. Automated packaging employing real-time vision

    NASA Astrophysics Data System (ADS)

    Chang, Wen-Chung; Wu, Chia-Hung

    2016-07-01

    Existing packaging systems rely on human operators to position a box in the packaging device and perform the packaging task. Current facilities are not capable of handling boxes of different sizes in a flexible way. To address these problems, an eye-to-hand visual-servo automated packaging approach is proposed in this paper. The system employs two cameras to observe the box and the gripper mounted on the robotic manipulator, and precisely controls the manipulator to complete the packaging task. The system first employs two-camera vision to determine the box pose. With appropriate task encoding, a closed-loop visual servoing controller is designed to drive the manipulator to accomplish packaging tasks. The proposed approach can complete automated packaging tasks even when the location and size of the box are uncertain. The system has been successfully validated by experiments with an industrial robotic manipulator for postal-box packaging.

  17. 49 CFR 173.411 - Industrial packagings.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ...), and Industrial Packaging Type 3 (IP-3). (b) Industrial packaging certification and tests. (1) Each IP... Organization for Standardization document ISO 1496-1: “Series 1 Freight Containers—Specifications and Testing...

  18. 40 CFR 262.30 - Packaging.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... APPLICABLE TO GENERATORS OF HAZARDOUS WASTE Pre-Transport Requirements § 262.30 Packaging. Before transporting hazardous waste or offering hazardous waste for transportation off-site, a generator must package...

  19. 40 CFR 262.30 - Packaging.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... APPLICABLE TO GENERATORS OF HAZARDOUS WASTE Pre-Transport Requirements § 262.30 Packaging. Before transporting hazardous waste or offering hazardous waste for transportation off-site, a generator must package...

  20. 40 CFR 262.30 - Packaging.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... APPLICABLE TO GENERATORS OF HAZARDOUS WASTE Pre-Transport Requirements § 262.30 Packaging. Before transporting hazardous waste or offering hazardous waste for transportation off-site, a generator must package...