Science.gov

Sample records for parallel pcg package

  1. Improving matrix-vector product performance and multi-level preconditioning for the parallel PCG package

    SciTech Connect

    McLay, R.T.; Carey, G.F.

    1996-12-31

    In this study we consider the parallel solution of sparse linear systems arising from discretized PDEs. As part of our continuing work on our parallel PCG solver package, we have made improvements in two areas. The first is improving the performance of the matrix-vector product: on regular finite-difference grids, we are able to use cache memory more efficiently for smaller domains or where there are multiple degrees of freedom. The second problem of interest in the present work is the construction of preconditioners in the context of the parallel PCG solver we are developing. Here the problem is partitioned over a set of processor subdomains and the matrix-vector product for PCG is carried out in parallel for overlapping grid subblocks. For problems of scaled speedup, the actual rate of convergence of the unpreconditioned system deteriorates as the mesh is refined. Multigrid and subdomain strategies provide a logical approach to resolving the problem. We consider the parallel trade-offs between communication and computation and provide a complexity analysis of a representative algorithm. Some preliminary calculations using the parallel package and comparisons with other preconditioners are provided together with parallel performance results.
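
The preconditioned conjugate gradient iteration at the heart of such a package can be sketched serially. The following is a minimal pure-Python illustration with a 1-D finite-difference Laplacian and a Jacobi (diagonal) preconditioner of our choosing, not the package's distributed implementation:

```python
# Minimal serial PCG sketch (illustrative assumptions: 1-D Laplacian matrix,
# Jacobi preconditioner). The parallel package distributes matvec and the
# inner products across processors; here everything runs on one core.

def matvec(x):
    """y = A x for the tridiagonal 1-D Laplacian (2 on diagonal, -1 off)."""
    n = len(x)
    y = [0.0] * n
    for i in range(n):
        y[i] = 2.0 * x[i]
        if i > 0:
            y[i] -= x[i - 1]
        if i < n - 1:
            y[i] -= x[i + 1]
    return y

def pcg(b, tol=1e-10, max_iter=200):
    n = len(b)
    x = [0.0] * n
    r = b[:]                             # residual r = b - A*0
    z = [ri / 2.0 for ri in r]           # Jacobi preconditioner: z = D^{-1} r
    p = z[:]
    rz = sum(ri * zi for ri, zi in zip(r, z))
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rz / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if max(abs(ri) for ri in r) < tol:
            break
        z = [ri / 2.0 for ri in r]
        rz_new = sum(ri * zi for ri, zi in zip(r, z))
        beta = rz_new / rz
        rz = rz_new
        p = [zi + beta * pi for zi, pi in zip(z, p)]
    return x

x = pcg([1.0] * 8)
residual = max(abs(bi - axi) for bi, axi in zip([1.0] * 8, matvec(x)))
```

In a distributed setting the two inner products and the matvec are the communication points, which is why the abstract focuses on overlapping subblocks for the matrix-vector product.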

  2. PCG: A software package for the iterative solution of linear systems on scalar, vector and parallel computers

    SciTech Connect

    Joubert, W.; Carey, G.F.

    1994-12-31

    A great need exists for high-performance numerical software libraries transportable across parallel machines. This talk concerns the PCG package, which solves systems of linear equations by iterative methods on parallel computers. The features of the package are discussed, as well as the techniques used to obtain both high performance and transportability across architectures. Representative numerical results are presented for several machines including the Connection Machine CM-5, Intel Paragon and Cray T3D parallel computers.

  3. A parallel PCG solver for MODFLOW.

    PubMed

    Dong, Yanhui; Li, Guomin

    2009-01-01

    In order to simulate large-scale ground water flow problems more efficiently with MODFLOW, the OpenMP programming paradigm was used in this study to parallelize the preconditioned conjugate-gradient (PCG) solver. Incremental parallelization, the significant advantage supported by OpenMP on a shared-memory computer, allowed the solver to transition smoothly to a parallel program one block of code at a time. The parallel PCG solver, suitable for both MODFLOW-2000 and MODFLOW-2005, is verified using an 8-processor computer. Both the impact of compilers and different model domain sizes were considered in the numerical experiments. Based on the timing results, execution times using the parallel PCG solver are typically about 1.40 to 5.31 times faster than those using the serial one. In addition, the simulation results are identical to those of the original PCG solver, because the majority of the serial code was not changed. It is worth noting that this parallelizing approach reduces cost in terms of software maintenance because only a single-source PCG solver code needs to be maintained in the MODFLOW source tree.

  4. HOPSPACK: Hybrid Optimization Parallel Search Package.

    SciTech Connect

    Gray, Genetha Anne.; Kolda, Tamara G.; Griffin, Joshua; Taddy, Matt; Martinez-Canales, Monica L.

    2008-12-01

    In this paper, we describe the technical details of HOPSPACK (Hybrid Optimization Parallel Search Package), a new software platform which facilitates combining multiple optimization routines into a single, tightly-coupled, hybrid algorithm that supports parallel function evaluations. The framework is designed such that existing optimization source code can be easily incorporated with minimal code modification. By maintaining the integrity of each individual solver, the strengths and code sophistication of the original optimization package are retained and exploited.

  5. Hybrid Optimization Parallel Search PACKage

    SciTech Connect

    2009-11-10

    HOPSPACK is open source software for solving optimization problems without derivatives. Application problems may have a fully nonlinear objective function, bound constraints, and linear and nonlinear constraints. Problem variables may be continuous, integer-valued, or a mixture of both. The software provides a framework that supports any derivative-free type of solver algorithm. Through the framework, solvers request parallel function evaluation, which may use MPI (multiple machines) or multithreading (multiple processors/cores on one machine). The framework provides a Cache and Pending Cache of saved evaluations that reduces execution time and facilitates restarts. Solvers can dynamically create other algorithms to solve subproblems, a useful technique for handling multiple start points and integer-valued variables. HOPSPACK ships with the Generating Set Search (GSS) algorithm, developed at Sandia as part of the APPSPACK open source software project.

  6. Parallel Climate Data Assimilation PSAS Package

    NASA Technical Reports Server (NTRS)

    Ding, Hong Q.; Chan, Clara; Gennery, Donald B.; Ferraro, Robert D.

    1996-01-01

    We have designed and implemented a set of highly efficient and highly scalable algorithms for an unstructured computational package, the PSAS data assimilation package, as demonstrated by detailed performance analysis of systematic runs on up to a 512-node Intel Paragon. The equation solver achieves a sustained 18 Gflops performance. As a result, we achieved an unprecedented 100-fold solution time reduction on the Intel Paragon parallel platform over the Cray C90. This not only meets and exceeds the DAO time requirements, but also significantly enlarges the window of exploration in climate data assimilation.

  7. High density packaging and interconnect of massively parallel image processors

    NASA Technical Reports Server (NTRS)

    Carson, John C.; Indin, Ronald J.

    1991-01-01

    This paper presents conceptual designs for high density packaging of parallel processing systems. The systems fall into two categories: global memory systems where many processors are packaged into a stack, and distributed memory systems where a single processor and many memory chips are packaged into a stack. Thermal behavior and performance are discussed.

  8. AZTEC: A parallel iterative package for solving linear systems

    SciTech Connect

    Hutchinson, S.A.; Shadid, J.N.; Tuminaro, R.S.

    1996-12-31

    We describe a parallel linear system package, AZTEC. The package incorporates a number of parallel iterative methods (e.g. GMRES, biCGSTAB, CGS, TFQMR) and preconditioners (e.g. Jacobi, Gauss-Seidel, polynomial, domain decomposition with LU or ILU within subdomains). Additionally, AZTEC allows for the reuse of previous preconditioning factorizations within Newton schemes for nonlinear methods. Currently, a number of different users are using this package to solve a variety of PDE applications.

  9. On the performance of a simple parallel implementation of the ILU-PCG for the Poisson equation on irregular domains

    NASA Astrophysics Data System (ADS)

    Gibou, Frédéric; Min, Chohong

    2012-05-01

    We report on the performance of a parallel algorithm for solving the Poisson equation on irregular domains. We use the spatial discretization of Gibou et al. (2002) [6] for the Poisson equation with Dirichlet boundary conditions, while we use a finite volume discretization for imposing Neumann boundary conditions (Ng et al., 2009; Purvis and Burkhalter, 1979) [8,10]. The parallelization algorithm is based on the Cuthill-McKee ordering. Its implementation is straightforward, especially in the case of shared-memory machines, and produces significant speedup: about three times on a standard quad-core desktop computer and about seven times on an octa-core shared-memory cluster. The implementation code is posted on the authors' web pages for reference.
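
The Cuthill-McKee ordering on which the parallelization is based can be sketched as a breadth-first traversal that starts from a low-degree node and visits neighbours in order of increasing degree. This is a pure-Python illustration; the function and the example graph are ours, not the authors' code:

```python
# Cuthill-McKee ordering sketch on an adjacency-list graph. Reordering a
# sparse matrix this way clusters nonzeros near the diagonal, which exposes
# independent work in the ILU-PCG sweeps the abstract refers to.

from collections import deque

def cuthill_mckee(adj):
    """Return a Cuthill-McKee ordering of nodes 0..n-1 given adjacency lists."""
    n = len(adj)
    visited = [False] * n
    order = []
    # Handle every connected component, starting from a minimum-degree node.
    for start in sorted(range(n), key=lambda v: len(adj[v])):
        if visited[start]:
            continue
        visited[start] = True
        queue = deque([start])
        while queue:
            v = queue.popleft()
            order.append(v)
            # Enqueue unvisited neighbours in order of increasing degree.
            for w in sorted(adj[v], key=lambda u: len(adj[u])):
                if not visited[w]:
                    visited[w] = True
                    queue.append(w)
    return order

# Small example: a path graph 0-2-4-3-1 given in scrambled vertex order.
adj = [[2], [3], [0, 4], [1, 4], [2, 3]]
ordering = cuthill_mckee(adj)
```

Reversing the result gives the reverse Cuthill-McKee (RCM) ordering commonly used for fill-in reduction.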

  10. P-SPARSLIB: A parallel sparse iterative solution package

    SciTech Connect

    Saad, Y.

    1994-12-31

    Iterative methods are gaining popularity in engineering and sciences at a time where the computational environment is changing rapidly. P-SPARSLIB is a project to build a software library for sparse matrix computations on parallel computers. The emphasis is on iterative methods and the use of distributed sparse matrices, an extension of the domain decomposition approach to general sparse matrices. One of the goals of this project is to develop a software package geared towards specific applications. For example, the author will test the performance and usefulness of P-SPARSLIB modules on linear systems arising from CFD applications. Equally important is the goal of portability. In the long run, the author wishes to ensure that this package is portable on a variety of platforms, including SIMD environments and shared memory environments.

  11. Massively Parallel Post-Packaging for Microelectromechanical Systems (MEMS)

    DTIC Science & Technology

    2003-03-01

    Keywords: MEMS, Microelectromechanical Systems, Vacuum Packaging, Localized Heating, Localized Bonding, Packaging, Trimming, Resonator, Encapsulation. Section headings recovered from the report include: Selective Encapsulation for MEMS Post-Packaging; Vacuum Packaging Technology Using Localized Aluminum ...; Vacuum Packaging Using Localized CVD Deposition.

  12. (PCG) Protein Crystal Growth Canavalin

    NASA Technical Reports Server (NTRS)

    1989-01-01

    (PCG) Protein Crystal Growth Canavalin. The major storage protein of leguminous plants and a major source of dietary protein for humans and domestic animals. It is studied in efforts to enhance the nutritional value of proteins through protein engineering. It is isolated from Jack Bean because of its potential as a nutritional substance. Principal Investigator on STS-26 was Alex McPherson.

  13. VisIt: a component based parallel visualization package

    SciTech Connect

    Ahern, S; Bonnell, K; Brugger, E; Childs, H; Meredith, J; Whitlock, B

    2000-12-18

    We are currently developing a component-based, parallel visualization and graphical analysis tool for visualizing and analyzing data on two- and three-dimensional (2D, 3D) meshes. The tool consists of three primary components: a graphical user interface (GUI), a viewer, and a parallel compute engine. The components are designed to be operated in a distributed fashion with the GUI and viewer typically running on a high performance visualization server and the compute engine running on a large parallel platform. The viewer and compute engine are both based on the Visualization Toolkit (VTK), an open source object oriented data manipulation and visualization library. The compute engine will make use of parallel extensions to VTK, based on MPI, developed by Los Alamos National Laboratory in collaboration with the originators of VTK. The compute engine will make use of meta-data so that it only operates on the portions of the data necessary to generate the image. The meta-data can either be created as the post-processing data is generated or as a pre-processing step to using VisIt. VisIt will be integrated with the VIEWS Tera-Scale Browser, which will provide a high performance visual data browsing capability based on multi-resolution techniques.

  14. A C++ Thread Package for Concurrent and Parallel Programming

    SciTech Connect

    Jie Chen; William Watson

    1999-11-01

    Recently, thread libraries have become a common entity on various operating systems such as Unix, Windows NT and VxWorks. These thread libraries offer significant performance enhancement by allowing applications to use multiple threads running either concurrently or in parallel on multiprocessors. However, the incompatibilities between native libraries introduce challenges for those who wish to develop portable applications.

  15. penORNL: a parallel Monte Carlo photon and electron transport package using PENELOPE

    SciTech Connect

    Bekar, Kursat B.; Miller, Thomas Martin; Patton, Bruce W.; Weber, Charles F.

    2015-01-01

    The parallel Monte Carlo photon and electron transport code package penORNL was developed at Oak Ridge National Laboratory to enable advanced scanning electron microscope (SEM) simulations on high-performance computing systems. This paper discusses the implementations, capabilities and parallel performance of the new code package. penORNL uses PENELOPE for its physics calculations and provides all available PENELOPE features to the users, as well as some new features including source definitions specifically developed for SEM simulations, a pulse-height tally capability for detailed simulations of gamma and x-ray detectors, and a modified interaction forcing mechanism to enable accurate energy deposition calculations. The parallel performance of penORNL was extensively tested with several model problems, and very good linear parallel scaling was observed with up to 512 processors. penORNL, along with its new features, will be available for SEM simulations upon completion of the new pulse-height tally implementation.

  16. DL_POLY_2.0: a general-purpose parallel molecular dynamics simulation package.

    PubMed

    Smith, W; Forester, T R

    1996-06-01

    DL_POLY_2.0 is a general-purpose parallel molecular dynamics simulation package developed at Daresbury Laboratory under the auspices of the Council for the Central Laboratory of the Research Councils. Written to support academic research, it has a wide range of applications and is designed to run on a wide range of computers: from single processor workstations to parallel supercomputers. Its structure, functionality, performance, and availability are described.

  17. Cleanup Verification Package for the 100-F-20, Pacific Northwest Laboratory Parallel Pits

    SciTech Connect

    M. J. Appel

    2007-01-22

    This cleanup verification package documents completion of remedial action for the 100-F-20, Pacific Northwest Laboratory Parallel Pits waste site. This waste site consisted of two earthen trenches thought to have received both radioactive and nonradioactive material related to the 100-F Experimental Animal Farm.

  18. (PCG) Protein Crystal Growth Porcine Elastase

    NASA Technical Reports Server (NTRS)

    1989-01-01

    (PCG) Protein Crystal Growth Porcine Elastase. This enzyme is associated with the degradation of lung tissue in people suffering from emphysema. It is useful in studying causes of this disease. Principal Investigator on STS-26 was Charles Bugg.

  19. JTpack90: A parallel, object-based, Fortran 90 linear algebra package

    SciTech Connect

    Turner, J.A.; Kothe, D.B.; Ferrell, R.C.

    1997-03-01

    The authors have developed an object-based linear algebra package, currently with emphasis on sparse Krylov methods, driven primarily by the needs of the Los Alamos National Laboratory parallel unstructured-mesh casting simulation tool Telluride. Support for a number of sparse storage formats, methods, and preconditioners has been implemented, driven primarily by application needs. They describe the object-based Fortran 90 approach, which enhances maintainability, performance, and extensibility; the parallelization approach using a new portable gather/scatter library (PGSLib); and current capabilities and future plans; and they present preliminary performance results on a variety of platforms.

  20. parallelMCMCcombine: An R Package for Bayesian Methods for Big Data and Analytics

    PubMed Central

    Miroshnikov, Alexey; Conlon, Erin M.

    2014-01-01

    Recent advances in big data and analytics research have provided a wealth of large data sets that are too big to be analyzed in their entirety, due to restrictions on computer memory or storage size. New Bayesian methods have been developed for data sets that are large only due to large sample sizes. These methods partition big data sets into subsets and perform independent Bayesian Markov chain Monte Carlo analyses on the subsets. The methods then combine the independent subset posterior samples to estimate a posterior density given the full data set. These approaches were shown to be effective for Bayesian models including logistic regression models, Gaussian mixture models and hierarchical models. Here, we introduce the R package parallelMCMCcombine which carries out four of these techniques for combining independent subset posterior samples. We illustrate each of the methods using a Bayesian logistic regression model for simulation data and a Bayesian Gamma model for real data; we also demonstrate features and capabilities of the R package. The package assumes the user has carried out the Bayesian analysis and has produced the independent subposterior samples outside of the package. The methods are primarily suited to models with unknown parameters of fixed dimension that exist in continuous parameter spaces. We envision this tool will allow researchers to explore the various methods for their specific applications and will assist future progress in this rapidly developing field. PMID:25259608
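
One combination technique of this kind, a precision-weighted average of the s-th draw from each subset chain (the consensus Monte Carlo idea), can be sketched in a few lines. This is a 1-D toy illustration with helper names of our own, not the R package's code:

```python
# Consensus-style combination sketch: independent subset posterior chains are
# merged draw-by-draw, each chain weighted by its inverse sample variance.
# Toy 1-D Gaussian "subposteriors"; illustrative assumptions throughout.

import random
import statistics

def consensus_combine(subchains):
    """Precision-weighted average of the s-th draw across M equal-length
    subset chains (1-D case; weight of chain m = 1 / sample variance)."""
    w = [1.0 / statistics.variance(c) for c in subchains]
    wsum = sum(w)
    n = len(subchains[0])
    return [sum(wi * c[s] for wi, c in zip(w, subchains)) / wsum
            for s in range(n)]

rng = random.Random(1)
# Two toy subset chains: draws roughly Normal(1, 0.2) and Normal(3, 0.2).
chains = [[rng.gauss(mu, 0.2) for _ in range(2000)] for mu in (1.0, 3.0)]
combined = consensus_combine(chains)
combined_mean = sum(combined) / len(combined)
```

With near-equal chain variances the combined mean lands midway between the two subset means, which is the intended pooling behaviour.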

  1. parallelnewhybrid: an R package for the parallelization of hybrid detection using newhybrids.

    PubMed

    Wringe, Brendan F; Stanley, Ryan R E; Jeffery, Nicholas W; Anderson, Eric C; Bradbury, Ian R

    2017-01-01

    Hybridization among populations and species is a central theme in many areas of biology, and the study of hybridization has direct applicability to testing hypotheses about evolution, speciation and genetic recombination, as well as having conservation, legal and regulatory implications. Yet, despite being a topic of considerable interest, the identification of hybrid individuals, and quantification of the (un)certainty surrounding the identifications, remains difficult. Unlike other programs that exist to identify hybrids based on genotypic information, newhybrids is able to assign individuals to specific hybrid classes (e.g. F1, F2) because it makes use of patterns of gene inheritance within each locus, rather than just the proportions of gene inheritance within each individual. For each comparison and set of markers, multiple independent runs of each data set should be used to develop an estimate of the hybrid class assignment accuracy. The necessity of analysing multiple simulated data sets, constructed from large genomewide data sets, presents significant computational challenges. To address these challenges, we present parallelnewhybrid, an R package designed to decrease user burden when undertaking multiple newhybrids analyses. parallelnewhybrid does so by taking advantage of the parallel computational capabilities inherent in modern computers to efficiently and automatically execute separate newhybrids runs in parallel. We show that parallelization of analyses using this package affords users several-fold reductions in time over a traditional serial analysis. parallelnewhybrid consists of an example data set, a readme and three operating system-specific functions to execute parallel newhybrids analyses on each of a computer's c cores. parallelnewhybrid is freely available on the long-term software hosting site GitHub (www.github.com/bwringe/parallelnewhybrid).
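
The parallelization pattern described, fanning independent runs of an external analysis out across a computer's cores, can be sketched with Python's standard library. The real package is written in R and launches newhybrids; `run_one()` below is a hypothetical stand-in for one such run:

```python
# Fan-out sketch for embarrassingly parallel, independent analysis runs.
# Threads are appropriate here because in the real tool each worker spends
# its time waiting on an external program (e.g. launched via subprocess).

from concurrent.futures import ThreadPoolExecutor

def run_one(seed):
    """Hypothetical stand-in for one independent analysis of one simulated
    data set; a real implementation would invoke the external program here."""
    total = 0
    for _ in range(1000):
        seed = (1103515245 * seed + 12345) % 2**31   # toy deterministic work
        total += seed % 10
    return total

# Eight independent "runs" distributed over four worker threads.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(run_one, range(8)))
```

Because the runs share no state, results are identical to a serial loop; only wall-clock time changes.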

  2. (PCG) Protein Crystal Growth Human Serum Albumin

    NASA Technical Reports Server (NTRS)

    1989-01-01

    (PCG) Protein Crystal Growth Human Serum Albumin. Contributes to many transport and regulatory processes and has multifunctional binding properties which range from various metals, to fatty acids, hormones, and a wide spectrum of therapeutic drugs. The most abundant protein of the circulatory system. It binds and transports an incredible variety of biological and pharmaceutical ligands throughout the blood stream. Principal Investigator on STS-26 was Larry DeLucas.

  3. (PCG) Protein Crystal Growth Gamma-Interferon

    NASA Technical Reports Server (NTRS)

    1989-01-01

    (PCG) Protein Crystal Growth Gamma-Interferon. Stimulates the body's immune system and is used clinically in the treatment of cancer. It has potential as an anti-tumor agent against solid tumors as well as leukemias and lymphomas. It has additional utility as an anti-infective agent, with antiviral, anti-bacterial, and anti-parasitic activities. Principal Investigator on STS-26 was Charles Bugg.

  4. 3-D readout-electronics packaging for high-bandwidth massively paralleled imager

    DOEpatents

    Kwiatkowski, Kris; Lyke, James

    2007-12-18

    Dense, massively parallel signal processing electronics are co-packaged behind associated sensor pixels. Microchips containing a linear or bilinear arrangement of photo-sensors, together with associated complex electronics, are integrated into a simple 3-D structure (a "mirror cube"). An array of photo-sensitive cells is disposed on a stacked CMOS chip's surface at a 45.degree. angle from light-reflecting mirror surfaces formed on a neighboring CMOS chip surface. Image processing electronics are held within the stacked CMOS chip layers. Electrical connections couple each of said stacked CMOS chip layers and a distribution grid, the connections for distributing power and signals to components associated with each stacked CMOS chip layer.

  5. Parallel distributed free-space optoelectronic computer engine using flat plug-on-top optics package

    NASA Astrophysics Data System (ADS)

    Berger, Christoph; Ekman, Jeremy T.; Wang, Xiaoqing; Marchand, Philippe J.; Spaanenburg, Henk; Kiamilev, Fouad E.; Esener, Sadik C.

    2000-05-01

    We report on ongoing work on a free-space optical interconnect system, which will demonstrate a Fast Fourier Transform calculation distributed among six processor chips. Logically, the processors are arranged in two linear chains, where each element communicates optically with its nearest neighbors. Physically, the setup consists of a large motherboard; several multi-chip carrier modules, which hold the processor/driver chips and the optoelectronic chips (arrays of lasers and detectors); and several plug-on-top optics modules, which provide the optical links between the chip carrier modules. The system design tries to satisfy numerous constraints, such as compact size, potential for mass production, suitability for large arrays (up to 1024 parallel channels), compatibility with standard electronics fabrication and packaging technology, potential for active misalignment compensation by integrating MEMS technology, and suitability for testing different imaging topologies. We present the system architecture together with details of key components and modules, and report on first experiences with prototype modules of the setup.

  6. A package of Linux scripts for the parallelization of Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Badal, Andreu; Sempau, Josep

    2006-09-01

    also provided. Typical running time: the execution time of each script depends largely on the number of computers used, the actions to be performed and, to a lesser extent, on the network connection bandwidth. Unusual features of the program: any computer on the Internet with a Secure Shell client/server program installed can be used as a node of a virtual computer cluster for parallel calculations with the sequential source code. The simplicity of the parallelization scheme makes the use of this package a straightforward task, which does not require installing any additional libraries. Program summary 2. Title of program: seedsMLCG. Catalogue identifier: ADYE_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADYE_v1_0. Program obtainable from: CPC Program Library, Queen's University of Belfast, Northern Ireland. Computer for which the program is designed and others on which it is operable: any computer with a FORTRAN compiler. Operating systems under which the program has been tested: Linux (RedHat 8.0, SuSe 8.1, Debian Woody 3.1), MS Windows (2000, XP). Compilers: GNU FORTRAN g77 (Linux and Windows); g95 (Linux); Intel Fortran Compiler 7.1 (Linux); Compaq Visual Fortran 6.1 (Windows). Programming language used: FORTRAN 77. No. of bits in a word: 32. Memory required to execute with typical data: 500 kilobytes. No. of lines in distributed program, including test data, etc.: 492. No. of bytes in distributed program, including test data, etc.: 5582. Distribution format: tar.gz. Nature of the physical problem: statistically independent results from different runs of a Monte Carlo code can be obtained using uncorrelated sequences of random numbers on each execution. Multiplicative linear congruential generators (MLCG), or other generators that are based on them such as RANECU, can be adapted to produce these sequences. Method of solution: for a given MLCG, the presented program calculates initialization values that produce disjoint, consecutive sequences of pseudorandom numbers.
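
The seed computation described under "Method of solution" can be sketched as a modular jump-ahead: for an MLCG with update s(n+1) = a*s(n) mod m, the seed located L steps downstream of s is a^L * s mod m, computed by modular exponentiation. The sketch below uses the constants of one of the two RANECU component generators; the helper names are ours, and this is not the distributed FORTRAN code:

```python
# Jump-ahead sketch for a multiplicative linear congruential generator.
# Constants a = 40014, m = 2147483563 are one RANECU component generator.

A, M = 40014, 2147483563

def step(s):
    """Advance the MLCG by one step."""
    return (A * s) % M

def jump(s, L):
    """Seed located L steps ahead of s, via modular exponentiation."""
    return (pow(A, L, M) * s) % M

# Seeds for 4 disjoint, consecutive blocks of 10**6 numbers each, so four
# parallel Monte Carlo runs consume non-overlapping stretches of the stream.
block = 10**6
seeds = [jump(12345, j * block) for j in range(4)]

# Stepping block times from one seed must land exactly on the next seed.
s = seeds[0]
for _ in range(block):
    s = step(s)
```

The jump costs O(log L) multiplications regardless of block size, which is what makes precomputing per-run seeds cheap.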

  7. ParallelStructure: an R package to distribute parallel runs of the population genetics program STRUCTURE on multi-core computers.

    PubMed

    Besnier, Francois; Glover, Kevin A

    2013-01-01

    This software package provides an R-based framework to make use of multi-core computers when running analyses in the population genetics program STRUCTURE. It is especially addressed to those users of STRUCTURE dealing with numerous and repeated data analyses, who could take advantage of an efficient script to automatically distribute STRUCTURE jobs among multiple processors. It also provides additional functions to divide analyses among combinations of populations within a single data set without the need to manually produce multiple projects, as is currently the case in STRUCTURE. The package consists of two main functions, MPI_structure() and parallel_structure(), as well as an example data file. We compared the performance in computing time for this example data on two computer architectures and showed that the use of the present functions can result in several-fold improvements in terms of computation time. ParallelStructure is freely available at https://r-forge.r-project.org/projects/parallstructure/.

  8. Improving the performance of cardiac abnormality detection from PCG signal

    NASA Astrophysics Data System (ADS)

    Sujit, N. R.; Kumar, C. Santhosh; Rajesh, C. B.

    2016-03-01

    The phonocardiogram (PCG) signal contains important information about the condition of the heart, and PCG signal analysis enables early recognition of coronary illness. In this work, we developed a biomedical system for the detection of abnormality in the heart, and methods to enhance the performance of the system using the SMOTE and AdaBoost techniques are presented. Time- and frequency-domain features extracted from the PCG signal are input to the system. The back-end classifier of the baseline system is a decision tree built using CART (Classification and Regression Trees), with an overall classification accuracy of 78.33% and sensitivity (alarm accuracy) of 40%. Here, sensitivity denotes the accuracy obtained in classifying abnormal heart sounds, an essential parameter for such a system. We further improve the performance of the baseline system using the SMOTE and AdaBoost algorithms. The proposed approach outperforms the baseline system by an absolute improvement of 5% in overall accuracy and 44.92% in sensitivity.
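
The SMOTE step used to balance the minority (abnormal) class can be sketched as interpolation between a minority sample and one of its nearest minority-class neighbours. This is a pure-Python toy with 2-feature data; the parameters and names are ours, not the paper's implementation:

```python
# SMOTE-style oversampling sketch: each synthetic minority sample lies on
# the line segment between an existing minority sample and one of its
# k nearest minority-class neighbours.

import random

def nearest_neighbors(points, i, k):
    """Indices of the k nearest points to points[i] (Euclidean, brute force)."""
    d = sorted((sum((a - b) ** 2 for a, b in zip(points[i], p)), j)
               for j, p in enumerate(points) if j != i)
    return [j for _, j in d[:k]]

def smote(minority, n_new, k=2, rng=None):
    rng = rng or random.Random(0)
    synthetic = []
    for _ in range(n_new):
        i = rng.randrange(len(minority))
        j = rng.choice(nearest_neighbors(minority, i, k))
        t = rng.random()                  # interpolation fraction in [0, 1)
        synthetic.append(tuple(a + t * (b - a)
                               for a, b in zip(minority[i], minority[j])))
    return synthetic

# Toy minority class of four 2-feature samples; generate six synthetic ones.
minority = [(1.0, 1.0), (1.2, 0.9), (0.9, 1.1), (1.1, 1.2)]
new_samples = smote(minority, 6)
```

Because each synthetic point is a convex combination of two minority samples, the oversampled class stays inside the minority region rather than merely duplicating points.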

  9. A Portable 3D FFT Package for Distributed-Memory Parallel Architectures

    NASA Technical Reports Server (NTRS)

    Ding, H. Q.; Ferraro, R. D.; Gennery, D. B.

    1995-01-01

    A parallel algorithm for 3D FFTs is implemented as a series of local 1D FFTs combined with data transposes. This allows the use of vendor supplied (often fully optimized) sequential 1D FFTs. The FFTs are carried out in-place by using an in-place data transpose across the processors.
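
The transpose-based scheme can be sketched serially: three rounds of independent 1-D transforms along the fastest axis, each followed by a cyclic axis transpose, which in the parallel code is the only step requiring communication across processors. Plain O(n^2) DFTs stand in for the vendor 1-D FFTs to keep the sketch short; this is illustrative only, not the package's code:

```python
# 3-D DFT as local 1-D transforms plus axis transposes. After three
# transform+transpose passes the axes return to their original order,
# now indexing frequencies instead of samples.

import cmath

def dft1d(v):
    """Direct O(n^2) 1-D DFT; stands in for an optimized vendor 1-D FFT."""
    n = len(v)
    return [sum(v[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
            for j in range(n)]

def transpose3d(a):
    """Cyclic axis transpose: a[x][y][z] -> b[y][z][x]."""
    nx, ny, nz = len(a), len(a[0]), len(a[0][0])
    return [[[a[x][y][z] for x in range(nx)] for z in range(nz)]
            for y in range(ny)]

def fft3d(a):
    for _ in range(3):                    # one pass per axis
        a = [[dft1d(row) for row in plane] for plane in a]   # local 1-D DFTs
        a = transpose3d(a)                # rotate axes (the communication step)
    return a

# 2x2x2 example; F is indexed [x-frequency][y-frequency][z-frequency].
a = [[[1.0, 2.0], [3.0, 4.0]], [[5.0, 6.0], [7.0, 8.0]]]
F = fft3d(a)
```

In the distributed version each processor owns a slab, performs its 1-D transforms locally, and the transpose becomes an all-to-all exchange, exactly the "in-place data transpose across the processors" the abstract describes.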

  10. Chromosomal Distribution of PcG Proteins during Drosophila Development

    PubMed Central

    Nègre, Nicolas; Hennetin, Jérôme; Sun, Ling V; Lavrov, Sergey; Bellis, Michel; White, Kevin P

    2006-01-01

    Polycomb group (PcG) proteins are able to maintain the memory of silent transcriptional states of homeotic genes throughout development. In Drosophila, they form multimeric complexes that bind to specific DNA regulatory elements named PcG response elements (PREs). To date, few PREs have been identified and the chromosomal distribution of PcG proteins during development is unknown. We used chromatin immunoprecipitation (ChIP) with genomic tiling path microarrays to analyze the binding profile of the PcG proteins Polycomb (PC) and Polyhomeotic (PH) across 10 Mb of euchromatin. We also analyzed the distribution of GAGA factor (GAF), a sequence-specific DNA binding protein that is found at most previously identified PREs. Our data show that PC and PH often bind to clustered regions within large loci that encode transcription factors which play multiple roles in developmental patterning and in the regulation of cell proliferation. GAF co-localizes with PC and PH to a limited extent, suggesting that GAF is not a necessary component of chromatin at PREs. Finally, the chromosome-association profile of PC and PH changes during development, suggesting that the function of these proteins in the regulation of some of their target genes might be more dynamic than previously anticipated. PMID:16613483

  11. Polycomb Group (PcG) Proteins and Human Cancers: Multifaceted Functions and Therapeutic Implications

    PubMed Central

    Wang, Wei; Qin, Jiang-Jiang; Voruganti, Sukesh; Nag, Subhasree; Zhou, Jianwei; Zhang, Ruiwen

    2016-01-01

    Polycomb group (PcG) proteins are transcriptional repressors that regulate several crucial developmental and physiological processes in the cell. More recently, they have been found to play important roles in human carcinogenesis and cancer development and progression. The deregulation and dysfunction of PcG proteins often lead to blocking or inappropriate activation of developmental pathways, enhancing cellular proliferation, inhibiting apoptosis, and increasing the cancer stem cell population. Genetic and molecular investigations of PcG proteins have long been focused on their PcG functions. However, PcG proteins have recently been shown to exert non-polycomb functions, contributing to the regulation of diverse cellular functions. We and others have demonstrated that PcG proteins regulate the expression and function of several oncogenes and tumor suppressor genes in a PcG-independent manner, and PcG proteins are associated with the survival of patients with cancer. In this review, we summarize the recent advances in the research on PcG proteins, including both the polycomb-repressive and non-polycomb functions. We specifically focus on the mechanisms by which PcG proteins play roles in cancer initiation, development, and progression. Finally, we discuss the potential value of PcG proteins as molecular biomarkers for the diagnosis and prognosis of cancer, and as molecular targets for cancer therapy. PMID:26227500

  12. Non-conformal and parallel discontinuous Galerkin time domain method for Maxwell's equations: EM analysis of IC packages

    NASA Astrophysics Data System (ADS)

    Dosopoulos, Stylianos; Zhao, Bo; Lee, Jin-Fa

    2013-04-01

    In this article, we present an Interior Penalty Discontinuous Galerkin Time Domain (IPDGTD) method on non-conformal meshes. The motivation for a non-conformal IPDGTD comes from the fact that there are applications with very complicated geometries (for example, IC packages) for which a conformal mesh may be very difficult to obtain; the ability to handle non-conformal meshes is therefore very useful. In the proposed approach, we first decompose the computational domain into non-overlapping subdomains. Afterward, each subdomain is meshed independently, resulting in non-conformal domain interfaces but simultaneously providing great flexibility in the meshing process. The non-conformal triangulations at subdomain interfaces can be naturally supported within the IPDGTD framework. Moreover, an MPI parallelization together with a local time-stepping strategy is applied to significantly increase the efficiency of the method. Furthermore, a general balancing strategy is described. Through a practical example with multi-scale features, it is shown that the proposed balancing strategy leads to better use of the available computational resources and substantially reduces the total simulation time. Finally, numerical results are included to validate the accuracy and demonstrate the flexibility of the proposed non-conformal IPDGTD.

  13. Protein Crystal Growth (PCG) experiment aboard mission STS-66

    NASA Technical Reports Server (NTRS)

    2000-01-01

On the Space Shuttle Orbiter Atlantis' middeck, Astronaut Joseph R. Tanner, mission specialist, works amidst several lockers that support the Protein Crystal Growth (PCG) experiment during the STS-66 mission. This particular section, called the Crystal Observation System, is housed in the Thermal Enclosure System (COS/TES). Together with the Vapor Diffusion Apparatus (VDA), housed in the Single Locker Thermal Enclosure (SLTES), the COS/TES represents the continuing research into the structure of proteins and other macromolecules such as viruses.

  14. Lamin A/C sustains PcG protein architecture, maintaining transcriptional repression at target genes

    PubMed Central

    Cesarini, Elisa; Mozzetta, Chiara; Marullo, Fabrizia; Gregoretti, Francesco; Gargiulo, Annagiusi; Columbaro, Marta; Cortesi, Alice; Antonelli, Laura; Di Pelino, Simona; Squarzoni, Stefano; Palacios, Daniela; Zippo, Alessio; Bodega, Beatrice; Oliva, Gennaro

    2015-01-01

    Beyond its role in providing structure to the nuclear envelope, lamin A/C is involved in transcriptional regulation. However, its cross talk with epigenetic factors—and how this cross talk influences physiological processes—is still unexplored. Key epigenetic regulators of development and differentiation are the Polycomb group (PcG) of proteins, organized in the nucleus as microscopically visible foci. Here, we show that lamin A/C is evolutionarily required for correct PcG protein nuclear compartmentalization. Confocal microscopy supported by new algorithms for image analysis reveals that lamin A/C knock-down leads to PcG protein foci disassembly and PcG protein dispersion. This causes detachment from chromatin and defects in PcG protein–mediated higher-order structures, thereby leading to impaired PcG protein repressive functions. Using myogenic differentiation as a model, we found that reduced levels of lamin A/C at the onset of differentiation led to an anticipation of the myogenic program because of an alteration of PcG protein–mediated transcriptional repression. Collectively, our results indicate that lamin A/C can modulate transcription through the regulation of PcG protein epigenetic factors. PMID:26553927

  15. Lamin A/C sustains PcG protein architecture, maintaining transcriptional repression at target genes.

    PubMed

    Cesarini, Elisa; Mozzetta, Chiara; Marullo, Fabrizia; Gregoretti, Francesco; Gargiulo, Annagiusi; Columbaro, Marta; Cortesi, Alice; Antonelli, Laura; Di Pelino, Simona; Squarzoni, Stefano; Palacios, Daniela; Zippo, Alessio; Bodega, Beatrice; Oliva, Gennaro; Lanzuolo, Chiara

    2015-11-09

    Beyond its role in providing structure to the nuclear envelope, lamin A/C is involved in transcriptional regulation. However, its cross talk with epigenetic factors--and how this cross talk influences physiological processes--is still unexplored. Key epigenetic regulators of development and differentiation are the Polycomb group (PcG) of proteins, organized in the nucleus as microscopically visible foci. Here, we show that lamin A/C is evolutionarily required for correct PcG protein nuclear compartmentalization. Confocal microscopy supported by new algorithms for image analysis reveals that lamin A/C knock-down leads to PcG protein foci disassembly and PcG protein dispersion. This causes detachment from chromatin and defects in PcG protein-mediated higher-order structures, thereby leading to impaired PcG protein repressive functions. Using myogenic differentiation as a model, we found that reduced levels of lamin A/C at the onset of differentiation led to an anticipation of the myogenic program because of an alteration of PcG protein-mediated transcriptional repression. Collectively, our results indicate that lamin A/C can modulate transcription through the regulation of PcG protein epigenetic factors.

  16. Exclusion of primary congenital glaucoma (PCG) from two candidate regions of chromosomes 1 and 6

    SciTech Connect

    Sarfarazi, M.; Akarsu, A.N.; Barsoum-Homsy, M.

    1994-09-01

PCG is a genetically heterogeneous condition in which a significant proportion of families inherit in an autosomal recessive fashion. Although association of PCG with chromosomal abnormalities has been repeatedly reported in the literature, the chromosomal location of this condition is still unknown. Therefore, this study is designed to identify the chromosomal location of the PCG locus by positional mapping. We have identified 80 PCG families with a total of 261 potentially informative meioses. A group of 19 pedigrees, each with a minimum of 2 affected children and with consanguinity in most of the parental generation, was selected as our initial screening panel. This panel consists of a total of 44 affected and 93 unaffected individuals, giving a total of 99 informative meioses, including 5 phase-known. We used polymerase chain reaction (PCR), denaturing polyacrylamide gels and silver staining to genotype our families. We first screened for markers on 1q21-q31, the reported location for juvenile primary open-angle glaucoma, and excluded a region of 30 cM as the likely site for the PCG locus. Association of PCG with both ring chromosome 6 and HLA-B8 has also been reported. Therefore, we genotyped our PCG panel with PCR-applicable markers from 6p21. Significant negative lod scores were obtained for D6S105 (Z = -18.70) and D6S306 (Z = -5.99) at θ = 0.001. The HLA class I region also contains one of the tubulin genes (TUBB), which is an obvious candidate for PCG. Study of this gene revealed a significant negative lod score with PCG (Z = -16.74, θ = 0.001). A multipoint linkage analysis of markers in this and other regions containing the candidate genes will be presented.

  17. Plots, Calculations and Graphics Tools (PCG2). Software Transfer Request Presentation

    NASA Technical Reports Server (NTRS)

    Richardson, Marilou R.

    2010-01-01

    This slide presentation reviews the development of the Plots, Calculations and Graphics Tools (PCG2) system. PCG2 is an easy to use tool that provides a single user interface to view data in a pictorial, tabular or graphical format. It allows the user to view the same display and data in the Control Room, engineering office area, or remote sites. PCG2 supports extensive and regular engineering needs that are both planned and unplanned and it supports the ability to compare, contrast and perform ad hoc data mining over the entire domain of a program's test data.

  18. CtBP Levels Control Intergenic Transcripts, PHO/YY1 DNA Binding, and PcG Recruitment to DNA

    PubMed Central

    Basu, Arindam; Atchison, Michael L.

    2013-01-01

    Carboxy-terminal binding protein (CtBP) is a well-known corepressor of several DNA binding transcription factors in Drosophila as well as in mammals. CtBP is implicated in Polycomb Group (PcG) complex-mediated transcriptional repression because it can bind to some PcG proteins, and mutation of the ctbp gene in flies results in lost PcG protein recruitment to Polycomb Response Elements (PREs) and lost PcG repression. However, the mechanism of reduced PcG DNA binding in CtBP mutant backgrounds is unknown. We show here that in a Drosophila CtBP mutant background, intergenic transcripts are induced across several PRE sequences and this corresponds to reduced DNA binding by PcG proteins Pleiohomeotic (PHO) and Polycomb (Pc), and reduced trimethylation of histone H3 on lysine 27, a hallmark of PcG repression. Restoration of CtBP levels by expression of a CtBP transgene results in repression of intergenic transcripts, restored PcG binding, and elevated trimethylation of H3 on lysine 27. Our results support a model in which CtBP regulates expression of intergenic transcripts that controls DNA binding by PcG proteins and subsequent histone modifications and transcriptional activity. PMID:20082324

  19. PRECONDITIONED CONJUGATE-GRADIENT 2 (PCG2), a computer program for solving ground-water flow equations

    USGS Publications Warehouse

    Hill, Mary C.

    1990-01-01

This report documents PCG2, a numerical code to be used with the U.S. Geological Survey modular three-dimensional, finite-difference, ground-water flow model. PCG2 uses the preconditioned conjugate-gradient method to solve the equations produced by the model for hydraulic head. Linear or nonlinear flow conditions may be simulated. PCG2 includes two preconditioning options: modified incomplete Cholesky preconditioning, which is efficient on scalar computers; and polynomial preconditioning, which requires less computer storage and, with modifications that depend on the computer used, is most efficient on vector computers. Convergence of the solver is determined using both head-change and residual criteria. Nonlinear problems are solved using Picard iterations. This documentation provides a description of the preconditioned conjugate-gradient method and the two preconditioners, detailed instructions for linking PCG2 to the modular model, sample data inputs, a brief description of PCG2, and a FORTRAN listing.
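As a concrete illustration of the iteration that such solvers perform, here is a minimal preconditioned conjugate-gradient loop in Python. It is a sketch of the algorithm, not PCG2's Fortran implementation: a simple Jacobi (diagonal) preconditioner stands in for PCG2's incomplete-Cholesky and polynomial options, and the test matrix is a toy 1-D diffusion stencil rather than a ground-water flow system.

```python
# Sketch of preconditioned conjugate gradient (PCG) for a symmetric
# positive-definite system A x = b, with a Jacobi preconditioner.
# Illustrative only; PCG2 itself uses incomplete-Cholesky or
# polynomial preconditioning and head-change plus residual criteria.

def pcg_solve(A, b, tol=1e-10, max_iter=200):
    """Solve A x = b for an SPD matrix A (list of lists) by Jacobi-PCG."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                                    # residual r = b - A x (x = 0)
    M_inv = [1.0 / A[i][i] for i in range(n)]   # Jacobi preconditioner M^-1
    z = [M_inv[i] * r[i] for i in range(n)]
    p = z[:]
    rz = sum(r[i] * z[i] for i in range(n))
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rz / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:  # residual criterion
            break
        z = [M_inv[i] * r[i] for i in range(n)]
        rz_new = sum(r[i] * z[i] for i in range(n))
        p = [z[i] + (rz_new / rz) * p[i] for i in range(n)]
        rz = rz_new
    return x

# A small SPD system (1-D diffusion stencil); exact solution is [1, 1, 1].
A = [[2.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 2.0]]
b = [1.0, 0.0, 1.0]
x = pcg_solve(A, b)
```

In exact arithmetic CG converges in at most n iterations; the preconditioner's job, as the report explains for the incomplete-Cholesky and polynomial variants, is to cluster the eigenvalues of the preconditioned operator so far fewer iterations are needed on large grids.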

  20. Iterative methods for the WLS state estimation on RISC, vector, and parallel computers

    SciTech Connect

    Nieplocha, J.; Carroll, C.C.

    1993-10-01

    We investigate the suitability and effectiveness of iterative methods for solving the weighted-least-square (WLS) state estimation problem on RISC, vector, and parallel processors. Several of the most popular iterative methods are tested and evaluated. The best performing preconditioned conjugate gradient (PCG) is very well suited for vector and parallel processing as is demonstrated for the WLS state estimation of the IEEE standard test systems. A new sparse matrix format for the gain matrix improves vector performance of the PCG algorithm and makes it competitive to the direct solver. Internal parallelism in RISC processors, used in current multiprocessor systems, can be taken advantage of in an implementation of this algorithm.

  1. BerkeleyGW: A massively parallel computer package for the calculation of the quasiparticle and optical properties of materials and nanostructures

    NASA Astrophysics Data System (ADS)

    Deslippe, Jack; Samsonidze, Georgy; Strubbe, David A.; Jain, Manish; Cohen, Marvin L.; Louie, Steven G.

    2012-06-01

BerkeleyGW is a massively parallel computational package for electron excited-state properties that is based on many-body perturbation theory employing the ab initio GW and GW plus Bethe-Salpeter equation methodology. It can be used in conjunction with many density-functional theory codes for ground-state properties, including PARATEC, PARSEC, Quantum ESPRESSO, SIESTA, and Octopus. The package can be used to compute the electronic and optical properties of a wide variety of material systems from bulk semiconductors and metals to nanostructured materials and molecules. The package scales to 10,000s of CPUs and can be used to study systems containing up to 100s of atoms. Program summary: Program title: BerkeleyGW. Catalogue identifier: AELG_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AELG_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: open source BSD License; see code for licensing details. No. of lines in distributed program, including test data, etc.: 576,540. No. of bytes in distributed program, including test data, etc.: 110,608,809. Distribution format: tar.gz. Programming language: Fortran 90, C, C++, Python, Perl, BASH. Computer: Linux/UNIX workstations or clusters. Operating system: tested on a variety of Linux distributions in parallel and serial, as well as AIX and Mac OSX. RAM: 50-2000 MB per CPU (highly dependent on system size). Classification: 7.2, 7.3, 16.2, 18. External routines: BLAS, LAPACK, FFTW, ScaLAPACK (optional), MPI (optional); all available under open-source licenses. Nature of problem: The excited-state properties of materials involve the addition or subtraction of electrons as well as the optical excitations of electron-hole pairs. The excited particles interact strongly with other electrons in a material system. This interaction affects the electronic energies, wavefunctions and lifetimes. 
It is well known that ground-state theories, such as standard methods

  2. Infrared detection of exposed Carbon Dioxide ice on 67P/CG nucleus surface by Rosetta-VIRTIS

    NASA Astrophysics Data System (ADS)

    Filacchione, Gianrico; Raponi, Andrea; Capaccioni, Fabrizio; Barucci, Maria Antonietta; De Sanctis, Maria Cristina; Fornasier, Sonia; Ciarniello, Mauro; Migliorini, Alessandra; Erard, Stephane; Bockelee-Morvan, Dominique; Leyrat, Cedric; Tosi, Federico; Piccioni, Giuseppe; Palomba, Ernesto; Capria, Maria Teresa; Arnold, Gabriele; Schmitt, Bernard; Quirico, Eric; Taylor, Fred W.; Kappel, David

    2016-04-01

In the period August 2014 to early May 2015 the heliocentric distance of the nucleus of 67P/CG decreased from 3.62 to 1.71 AU and the subsolar point moved towards the southern hemisphere. We investigated the IR spectra obtained by the Rosetta/VIRTIS instrument close to the newly illuminated regions, where colder conditions were present and there were consequently higher chances of observing ices more volatile than water ice. We report the discovery of CO2 ice in a region of the nucleus that had recently passed through the terminator. The quantitative abundance has been determined by means of spectral modeling of H2O-CO2 icy grains mixed with dark terrain, as done in Filacchione et al., Nature, 10.1038/nature16190. The CO2 ice has been identified in an area in Anhur with an abundance reaching up to 1.6% mixed with dark terrain. It is interesting to note that CO2 ice has been observed only for a short transient period of time, possibly demonstrating the seasonal nature of the presence of CO2 at the surface. A parallel study on the water and carbon dioxide gaseous emissions in the coma above this volatile-rich area is reported by Migliorini et al., this conference.

  3. The global surface composition of 67P/CG nucleus by Rosetta/VIRTIS. (I) Prelanding mission phase

    NASA Astrophysics Data System (ADS)

    Filacchione, Gianrico; Capaccioni, Fabrizio; Ciarniello, Mauro; Raponi, Andrea; Tosi, Federico; De Sanctis, Maria Cristina; Erard, Stéphane; Morvan, Dominique Bockelée; Leyrat, Cedric; Arnold, Gabriele; Schmitt, Bernard; Quirico, Eric; Piccioni, Giuseppe; Migliorini, Alessandra; Capria, Maria Teresa; Palomba, Ernesto; Cerroni, Priscilla; Longobardo, Andrea; Barucci, Antonella; Fornasier, Sonia; Carlson, Robert W.; Jaumann, Ralf; Stephan, Katrin; Moroz, Lyuba V.; Kappel, David; Rousseau, Batiste; Fonti, Sergio; Mancarella, Francesca; Despan, Daniela; Faure, Mathilde

    2016-08-01

The parallel coordinates method (Inselberg [1985] Vis. Comput., 1, 69-91) has been used to identify associations between average values of the spectral indicators and the properties of the geomorphological units as defined by Thomas et al. ([2015] Science, 347, 6220) and El-Maarry et al. ([2015] Astron. Astrophys., 583, A26). Three classes have been identified (smooth/active areas, dust-covered areas and depressions), which can be clustered on the basis of the 3.2 μm organic material's band depth, while consolidated terrains show a high variability of spectral properties and are therefore distributed across all three classes. These results show that the spectral variability of the nucleus surface is more variegated than the morphological classes and that 67P/CG surface properties are dynamic, changing with the heliocentric distance and with activity processes.

  4. Cosmochemical implications of CONSERT permittivity characterization of 67P/CG

    NASA Astrophysics Data System (ADS)

    Herique, A.; Kofman, W.; Beck, P.; Bonal, L.; Buttarazzi, I.; Heggy, E.; Lasue, J.; Levasseur-Regourd, A. C.; Quirico, E.; Zine, S.

    2016-11-01

Analysis of the propagation of the Comet Nucleus Sounding Experiment by Radiowave Transmission (CONSERT) signal throughout the small lobe of the 67P/CG nucleus has permitted us to deduce the real part of the permittivity, at a value of 1.27 ± 0.05. The first interpretation of this value, using the dielectric properties of mixtures of ices (H2O, CO2), refractories (i.e. dust) and porosity, led to the conclusion that the comet porosity lies in the range 75-85 per cent. In addition, the dust-to-ice ratio was found to range between 0.4 and 2.6, and the permittivity of dust (including 30 per cent porosity) was determined to be lower than 2.9. This last value corresponds to a permittivity lower than 4 for a material without any porosity. This article is intended to refine the dust permittivity estimate by taking into account updated values of the nucleus densities and dust/ice ratio and to provide further insights into the nature of the constituents of comet 67P/CG. We adopted a systematic approach: determination of the dust permittivity as a function of the volume fractions of ice, dust and vacuum (i.e. porosity) and comparison with the permittivity of meteoritic, mineral and organic materials from the literature and laboratory measurements. Then different composition models of the nucleus corresponding to cosmochemical end members of 67P/CG dust are tested. For each of these models, the location in the ice/dust/vacuum ternary diagram is calculated based on available dielectric measurements and compared with the locus of 67P/CG. The number of compliant models is small, and the cosmochemical implications of each are discussed in order to identify a preferred model.

  5. Cosmochemical implications of CONSERT permittivity characterization of 67P/C-G

    NASA Astrophysics Data System (ADS)

    Levasseur-Regourd, A.; Hérique, Alain; Kofman, Wlodek; Beck, Pierre; Bonal, Lydie; Buttarazzi, Ilaria; Heggy, Essam; Lasue, Jeremie; Quirico, Eric; Zine, Sonia

    2016-10-01

Unique information about the internal structure of the nucleus of comet 67P/C-G was provided by the CONSERT bistatic radar on board Rosetta and Philae [1]. Analysis of the propagation of its signal throughout the small lobe indicated that the real part of the permittivity at 90 MHz is 1.27 ± 0.05. The first interpretation of this value, using the dielectric properties of mixtures of dust and ices (H2O, CO2), led to the conclusion that the comet porosity ranges between 75 and 85%. In addition, the dust/ice ratio was found to range between 0.4 and 2.6, and the permittivity of dust (including 30% porosity) was determined to be lower than 2.9. The dust permittivity estimate is now refined by taking into account the updated values of nucleus density and dust/ice ratio, in order to provide further insights into the nature of the constituents of comet 67P/C-G [2]. We adopt a systematic approach: i) determination of the dust permittivity as a function of the ice (I), dust (D) and vacuum (V) volume fractions; ii) comparison with the permittivity of meteoritic, mineral and organic materials from the literature and laboratory measurements; iii) tests of several composition models of the nucleus, corresponding to cosmochemical end members of 67P/C-G. For each of these models the location in the ternary I/D/V diagram is calculated based on available dielectric measurements and compared with the locus of 67P/C-G. The number of compliant models is small, and the cosmochemical implications of each are discussed [2]. An important fraction of carbonaceous material is required in the dust in order to match the CONSERT permittivity observations, establishing that comets represent a massive carbon reservoir. Support from the Centre National d'Études Spatiales (CNES, France) for this work, based on observations with CONSERT on board Rosetta, is acknowledged. The CONSERT instrument was designed, built and operated by IPAG, LATMOS and MPS and was financially supported by CNES, CNRS, UJF/UGA, DLR and MPS.

  6. PcG Proteins, DNA Methylation, and Gene Repression by Chromatin Looping

    PubMed Central

    Tiwari, Vijay K; McGarvey, Kelly M; Licchesi, Julien D.F; Ohm, Joyce E; Herman, James G; Schübeler, Dirk; Baylin, Stephen B

    2008-01-01

    Many DNA hypermethylated and epigenetically silenced genes in adult cancers are Polycomb group (PcG) marked in embryonic stem (ES) cells. We show that a large region upstream (∼30 kb) of and extending ∼60 kb around one such gene, GATA-4, is organized—in Tera-2 undifferentiated embryonic carcinoma (EC) cells—in a topologically complex multi-loop conformation that is formed by multiple internal long-range contact regions near areas enriched for EZH2, other PcG proteins, and the signature PcG histone mark, H3K27me3. Small interfering RNA (siRNA)–mediated depletion of EZH2 in undifferentiated Tera-2 cells leads to a significant reduction in the frequency of long-range associations at the GATA-4 locus, seemingly dependent on affecting the H3K27me3 enrichments around those chromatin regions, accompanied by a modest increase in GATA-4 transcription. The chromatin loops completely dissolve, accompanied by loss of PcG proteins and H3K27me3 marks, when Tera-2 cells receive differentiation signals which induce a ∼60-fold increase in GATA-4 expression. In colon cancer cells, however, the frequency of the long-range interactions are increased in a setting where GATA-4 has no basal transcription and the loops encompass multiple, abnormally DNA hypermethylated CpG islands, and the methyl-cytosine binding protein MBD2 is localized to these CpG islands, including ones near the gene promoter. Removing DNA methylation through genetic disruption of DNA methyltransferases (DKO cells) leads to loss of MBD2 occupancy and to a decrease in the frequency of long-range contacts, such that these now more resemble those in undifferentiated Tera-2 cells. Our findings reveal unexpected similarities in higher order chromatin conformation between stem/precursor cells and adult cancers. We also provide novel insight that PcG-occupied and H3K27me3-enriched regions can form chromatin loops and physically interact in cis around a single gene in mammalian cells. The loops associate with a

  7. Electronic Packaging Techniques

    NASA Technical Reports Server (NTRS)

    1979-01-01

    A characteristic of aerospace system design is that equipment size and weight must always be kept to a minimum, even in small components such as electronic packages. The dictates of spacecraft design have spawned a number of high-density packaging techniques, among them methods of connecting circuits in printed wiring boards by processes called stitchbond welding and parallel gap welding. These processes help designers compress more components into less space; they also afford weight savings and lower production costs.

  8. Compositional maps of 67P/CG nucleus surface after perihelion passage by Rosetta/VIRTIS

    NASA Astrophysics Data System (ADS)

    Filacchione, Gianrico; Ciarniello, M.; Capaccioni, F.; Raponi, A.; De Sanctis, M. C.; Tosi, F.; Migliorini, Alessandra; Piccioni, G.; Cerroni, P.; Capria, M. T.; Erard, S.; Bockelee-Morvan, D.; Leyrat, C.; Arnold, G.; Barucci, M. A.; Schmitt, B.; Quirico, E.

    2016-11-01

Moving after perihelion passage (August 13th, 2015), VIRTIS-M, the 0.25-5.0 μm imaging spectrometer on board Rosetta, has again mapped the north and equatorial regions of 67P/CG's nucleus with the aim of tracing the color and composition evolution of the surface. With the loss of the IR channel due to the active cryogenic cooler failure that occurred in May 2015, VIRTIS-M has observed only with the VIS channel in the 0.25-1.0 μm spectral range. Despite this limitation, the returned data are valuable for comparing surface properties between pre- and post-perihelion times. Approaching perihelion, 67P/CG's nucleus experienced a general brightening due to the removal of the surficial dust layer caused by the more intense gaseous activity, with the consequent exposure of a larger fraction of water ice. Coma observations by VIRTIS during pre-perihelion have shown a correlation between the areas of the nucleus where gaseous activity by water ice sublimation is more intense and the surface brightening caused by dust removal. After applying data calibration and photometric correction, VIRTIS data are projected on the irregularly shaped digital model of 67P/CG with the aim of deriving visible albedo and color maps rendered with a spatial resolution of 0.5×0.5 deg in latitude-longitude, corresponding to a sampling of about 15 m/pixel. Dedicated mapping sequences, executed at different heliocentric distances, are employed to follow the dynamical evolution of the surface. Direct comparison between compositional maps obtained at the same heliocentric distances along the inbound and outbound orbits makes it possible to identify the changes that occurred in the same areas of the surface. In this context, the first VIRTIS-M maps, obtained in August 2014 at a heliocentric distance of 3.4 AU along the inbound orbit with a solar phase angle of about 30-45°, are compared with the last ones, taken in June 2016 at 3.2 AU from the Sun on the outbound trajectory at solar phases of about

  9. Drosophila melanogaster dHCF interacts with both PcG and TrxG epigenetic regulators.

    PubMed

    Rodriguez-Jato, Sara; Busturia, Ana; Herr, Winship

    2011-01-01

    Repression and activation of gene transcription involves multiprotein complexes that modify chromatin structure. The integration of these complexes at regulatory sites can be assisted by co-factors that link them to DNA-bound transcriptional regulators. In humans, one such co-factor is the herpes simplex virus host-cell factor 1 (HCF-1), which is implicated in both activation and repression of transcription. We show here that disruption of the gene encoding the Drosophila melanogaster homolog of HCF-1, dHCF, leads to a pleiotropic phenotype involving lethality, sterility, small size, apoptosis, and morphological defects. In Drosophila, repressed and activated transcriptional states of cell fate-determining genes are maintained throughout development by Polycomb Group (PcG) and Trithorax Group (TrxG) genes, respectively. dHCF mutant flies display morphological phenotypes typical of TrxG mutants and dHCF interacts genetically with both PcG and TrxG genes. Thus, dHCF inactivation enhances the mutant phenotypes of the Pc PcG as well as brm and mor TrxG genes, suggesting that dHCF possesses Enhancer of TrxG and PcG (ETP) properties. Additionally, dHCF interacts with the previously established ETP gene skd. These pleiotropic phenotypes are consistent with broad roles for dHCF in both activation and repression of transcription during fly development.

  10. A comparison of different feature extraction methods for diagnosis of valvular heart diseases using PCG signals.

    PubMed

    Rouhani, M; Abdoli, R

    2012-01-01

This article presents a novel method for the diagnosis of valvular heart disease (VHD) based on phonocardiography (PCG) signals. The application of pattern classification, feature selection and feature reduction methods to the analysis of normal and pathological heart sounds was investigated. After signal preprocessing using independent component analysis (ICA), 32 features are extracted. These include carefully selected linear and nonlinear time-domain, wavelet and entropy features. By examining different feature selection and feature reduction methods such as principal component analysis (PCA), genetic algorithms (GA), genetic programming (GP) and generalized discriminant analysis (GDA), the four most informative features are extracted. Furthermore, support vector machine (SVM) and neural network classifiers are compared for the diagnosis of pathological heart sounds. Three valvular heart diseases are considered: aortic stenosis (AS), mitral stenosis (MS) and mitral regurgitation (MR). An overall accuracy of 99.47% was achieved by the proposed algorithm.
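To make the feature-extraction stage concrete: the paper's 32 carefully selected features are not reproduced here, but the sketch below computes three simple representatives of the time-domain and entropy families it mentions (signal energy, zero-crossing rate, and Shannon entropy of a coarse amplitude histogram) on a synthetic stand-in for a heart-sound segment.

```python
import math

# Illustrative time-domain/entropy features for a 1-D signal.
# These are generic examples, not the specific 32 features of the paper.

def time_domain_features(signal):
    n = len(signal)
    energy = sum(s * s for s in signal) / n              # mean signal power
    # fraction of adjacent sample pairs that change sign
    zcr = sum(1 for a, b in zip(signal, signal[1:]) if a * b < 0) / (n - 1)
    # Shannon entropy over a coarse 10-bin amplitude histogram
    lo, hi = min(signal), max(signal)
    width = (hi - lo) / 10 or 1.0                        # guard constant signals
    bins = [0] * 10
    for s in signal:
        bins[min(int((s - lo) / width), 9)] += 1
    probs = [c / n for c in bins if c]
    entropy = -sum(p * math.log2(p) for p in probs)
    return {"energy": energy, "zcr": zcr, "entropy": entropy}

# A toy "heart sound": a decaying 50 Hz burst sampled at 2 kHz.
sig = [math.exp(-t / 200) * math.sin(2 * math.pi * 50 * t / 2000)
       for t in range(800)]
feats = time_domain_features(sig)
```

In a pipeline like the paper's, vectors of such features (after ICA preprocessing) would be passed through a reduction step such as PCA or GDA before reaching the SVM or neural-network classifier.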

  11. PCG: A prototype incremental compilation facility for the SAGA environment, appendix F

    NASA Technical Reports Server (NTRS)

    Kimball, Joseph John

    1985-01-01

    A programming environment supports the activity of developing and maintaining software. New environments provide language-oriented tools such as syntax-directed editors, whose usefulness is enhanced because they embody language-specific knowledge. When syntactic and semantic analysis occur early in the cycle of program production, that is, during editing, the use of a standard compiler is inefficient, for it must re-analyze the program before generating code. Likewise, it is inefficient to recompile an entire file, when the editor can determine that only portions of it need updating. The pcg, or Pascal code generation, facility described here generates code directly from the syntax trees produced by the SAGA syntax directed Pascal editor. By preserving the intermediate code used in the previous compilation, it can limit recompilation to the routines actually modified by editing.

  12. Jpetra Kernel Package

    SciTech Connect

    Heroux, Michael A.

    2004-03-01

    A package of classes for constructing and using distributed sparse and dense matrices, vectors and graphs, written in Java. Jpetra is intended to provide the foundation for basic matrix and vector operations for Java developers. Jpetra provides distributed memory operations via an abstract parallel machine interface. The most common implementation of this interface will be Java sockets.
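Jpetra itself is written in Java; as a language-neutral sketch of the core kernel such packages provide, here is a compressed-sparse-row (CSR) matrix-vector product in Python. The CSR layout is a standard storage scheme, not necessarily Jpetra's internal format.

```python
# Sketch of a sparse matrix-vector product y = A x in CSR storage:
# values holds the nonzeros row by row, col_idx their column indices,
# and row_ptr[i]:row_ptr[i+1] delimits row i's slice of both arrays.

def csr_matvec(values, col_idx, row_ptr, x):
    """Return y = A x for a CSR matrix (values, col_idx, row_ptr)."""
    y = []
    for row in range(len(row_ptr) - 1):
        start, end = row_ptr[row], row_ptr[row + 1]
        y.append(sum(values[k] * x[col_idx[k]] for k in range(start, end)))
    return y

# The 3x3 tridiagonal matrix [[2,-1,0],[-1,2,-1],[0,-1,2]] in CSR form:
values  = [2.0, -1.0, -1.0, 2.0, -1.0, -1.0, 2.0]
col_idx = [0, 1, 0, 1, 2, 1, 2]
row_ptr = [0, 2, 5, 7]
y = csr_matvec(values, col_idx, row_ptr, [1.0, 1.0, 1.0])
```

In a distributed-memory setting like Jpetra's abstract parallel machine interface, each process would own a block of rows and exchange the off-process entries of x before performing this same per-row loop locally.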

  13. Disruptive collisions as the origin of 67P/C-G and small bilobate comets

    NASA Astrophysics Data System (ADS)

    Michel, Patrick; Schwartz, Stephen R.; Jutzi, Martin; Marchi, Simone; Richardson, Derek C.; Zhang, Yun

    2016-10-01

Images of comets sent by spacecraft have shown us that bilobate shapes seem to be common in the cometary population. This has been most recently evidenced by the images of comet 67P/C-G obtained by the ESA Rosetta mission, which show a low-density elongated body interpreted as a contact binary. The origin of such bilobate comets has been thought to be primordial because it requires the slow accretion of two bodies that become the two main components of the final object. However, slow accretion does not only occur during the primordial phase of the Solar System, but also later during the reaccumulation processes immediately following collisional disruptions of larger bodies. We perform numerical simulations of disruptions of large bodies. We demonstrate that during the ensuing gravitational phase, in which the generated fragments interact under their mutual gravity, aggregates with bilobate or elongated shapes form by reaccumulation at speeds that are at or below the range of those assumed in primordial accretion scenarios [1]. The same scenario has been demonstrated to occur in the asteroid belt to explain the origin of asteroid families [2] and has provided insight into the shapes of thus-far observed asteroids such as 25143 Itokawa [3]. Here we show that it is also a more general outcome that applies to disruption events in the outer Solar System. Moreover, we show that high-temperature regions are very localized during the impact process, which solves the problem of the survival of organics and volatiles in the collisional process. The advantage of this scenario for the formation of small bilobate shapes, including 67P/C-G, is that it does not necessitate a primordial origin, as such disruptions can occur at later stages of the Solar System. This demonstrates how such comets can be relatively young, consistent with other studies that show that these shapes are unlikely to be formed early on and survive the entire history of the Solar System [4

  14. Growing protein crystals in microgravity - The NASA Microgravity Science and Applications Division (MSAD) Protein Crystal Growth (PCG) program

    NASA Technical Reports Server (NTRS)

    Herren, B.

    1992-01-01

    In collaboration with a medical researcher at the University of Alabama at Birmingham, NASA's Marshall Space Flight Center in Huntsville, Alabama, under the sponsorship of the Microgravity Science and Applications Division (MSAD) at NASA Headquarters, is continuing a series of space experiments in protein crystal growth which could lead to innovative new drugs as well as basic science data on protein molecular structures. From 1985 through 1992, Protein Crystal Growth (PCG) experiments will have been flown on the Space Shuttle a total of 14 times. The first four hand-held experiments were used to test hardware concepts; later flights incorporated these concepts for vapor diffusion protein crystal growth with temperature control. This article provides an overview of the PCG program: its evolution, objectives, and plans for future experiments on NASA's Space Shuttle and Space Station Freedom.

  15. The regulatory role of c-MYC on HDAC2 and PcG expression in human multipotent stem cells.

    PubMed

    Bhandari, Dilli Ram; Seo, Kwang-Won; Jung, Ji-Won; Kim, Hyung-Sik; Yang, Se-Ran; Kang, Kyung-Sun

    2011-07-01

    Myelocytomatosis oncogene (c-MYC) is a well-known nuclear oncoprotein having multiple functions in cell proliferation, apoptosis and cellular transformation. Chromosomal modification is also important to the differentiation and growth of stem cells. Histone deacetylase (HDAC) and polycomb group (PcG) family genes are well-known chromosomal modification genes. The aim of this study was to elucidate the role of c-MYC in the expression of chromosomal modification via the HDAC family genes in human mesenchymal stem cells (hMSCs). To achieve this goal, c-MYC expression was modified by gene knockdown and overexpression via lentivirus vector. Using the modified c-MYC expression, our study was focused on cell proliferation, differentiation and cell cycle. Furthermore, the relationship of c-MYC with HDAC2 and PcG genes was also examined. Cell proliferation and differentiation were dramatically decreased in c-MYC knocked-down human umbilical cord blood-derived MSCs, whereas they were increased in c-MYC overexpressing cells. Similarly, RT-PCR and Western blotting results revealed that HDAC2 expression was decreased in c-MYC knocked-down and increased in c-MYC overexpressing hMSCs. Database analysis indicated the presence of a c-MYC binding motif in the HDAC2 promoter region, which was confirmed by chromatin immunoprecipitation assay. The influence of c-MYC and HDAC2 on PcG expression was confirmed. This might indicate the regulatory role of c-MYC over HDAC2 and PcG genes. c-MYC's regulatory role over HDAC2 was also confirmed in human adipose tissue-derived MSCs and bone marrow-derived MSCs. From these findings, it can be concluded that c-MYC plays a vital role in cell proliferation and differentiation via chromosomal modification.

  16. Long-range repression by multiple polycomb group (PcG) proteins targeted by fusion to a defined DNA-binding domain in Drosophila.

    PubMed Central

    Roseman, R R; Morgan, K; Mallin, D R; Roberson, R; Parnell, T J; Bornemann, D J; Simon, J A; Geyer, P K

    2001-01-01

    A tethering assay was developed to study the effects of Polycomb group (PcG) proteins on gene expression in vivo. This system employed the Su(Hw) DNA-binding domain (ZnF) to direct PcG proteins to transposons that carried the white and yellow reporter genes. These reporters constituted naive sensors of PcG effects, as bona fide PcG response elements (PREs) were absent from the constructs. To assess the effects of different genomic environments, reporter transposons integrated at nearly 40 chromosomal sites were analyzed. Three PcG fusion proteins, ZnF-PC, ZnF-SCM, and ZnF-ESC, were studied, since biochemical analyses place these PcG proteins in distinct complexes. Tethered ZnF-PcG proteins repressed white and yellow expression at the majority of sites tested, with each fusion protein displaying a characteristic degree of silencing. Repression by ZnF-PC was stronger than ZnF-SCM, which was stronger than ZnF-ESC, as judged by the percentage of insertion lines affected and the magnitude of the conferred repression. ZnF-PcG repression was more effective at centric and telomeric reporter insertion sites, as compared to euchromatic sites. ZnF-PcG proteins tethered as far as 3.0 kb away from the target promoter produced silencing, indicating that these effects were long range. Repression by ZnF-SCM required a protein interaction domain, the SPM domain, which suggests that this domain is not primarily used to direct SCM to chromosomal loci. This targeting system is useful for studying protein domains and mechanisms involved in PcG repression in vivo. PMID:11333237

  17. Scoring Package

    National Institute of Standards and Technology Data Gateway

    NIST Scoring Package (PC database for purchase)   The NIST Scoring Package (Special Database 1) is a reference implementation of the draft Standard Method for Evaluating the Performance of Systems Intended to Recognize Hand-printed Characters from Image Data Scanned from Forms.

  18. Block-bordered diagonalization and parallel iterative solvers

    SciTech Connect

    Alvarado, F.; Dag, H.; Bruggencate, M. ten

    1994-12-31

    One of the most common techniques for enhancing parallelism in direct sparse matrix methods is the reorganization of a matrix into a blocked-bordered structure. Incomplete LDU factorization is a very good preconditioner for PCG in serial environments. However, the inherently sequential nature of the preconditioning step makes it less desirable in parallel environments. This paper explores the use of BBD (Blocked Bordered Diagonalization) in connection with ILU preconditioners. The paper shows that BBD-based ILU preconditioners are quite amenable to parallel processing. Neglecting entries from the entire border can result in a blocked diagonal matrix. The result is a great increase in parallelism at the expense of additional iterations. Experiments on the Sequent Symmetry shared-memory machine using (mostly) power system matrices indicate that the method is generally better than conventional ILU preconditioners and in many cases even better than partitioned inverse preconditioners, without the initial setup disadvantages of partitioned inverse preconditioners.
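
    The core idea, neglecting the border couplings so that each diagonal block can be factored and applied independently, can be sketched in a few lines of Python with SciPy. This is a minimal illustration, not the paper's method: it uses exact block factorizations in place of ILU, and the two-block partition and 1-D Laplacian test matrix are invented for the example.

    ```python
    import numpy as np
    from scipy.sparse import csr_matrix
    from scipy.sparse.linalg import LinearOperator, cg, splu

    # Small SPD test system: a 1-D Laplacian standing in for a discretized PDE.
    n = 40
    A = csr_matrix(np.diag(2.0 * np.ones(n))
                   + np.diag(-np.ones(n - 1), 1)
                   + np.diag(-np.ones(n - 1), -1))
    b = np.ones(n)

    # Border-neglecting preconditioner: drop all couplings between the two
    # subdomain blocks and factor each diagonal block on its own; every
    # block solve is independent and could run on a separate processor.
    blocks = [(0, n // 2), (n // 2, n)]
    factors = [splu(A[lo:hi, lo:hi].tocsc()) for lo, hi in blocks]

    def apply_preconditioner(r):
        z = np.empty_like(r)
        for (lo, hi), lu in zip(blocks, factors):
            z[lo:hi] = lu.solve(r[lo:hi])  # independent per-block solve
        return z

    M = LinearOperator((n, n), matvec=apply_preconditioner)
    x, info = cg(A, b, M=M)  # info == 0 on convergence
    ```

    Dropping the border trades accuracy of the preconditioner for parallelism, so the preconditioned iteration count grows, which is exactly the trade-off the abstract describes.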

  19. How primordial is the structure of comet 67P/C-G (and of comets in general)?

    NASA Astrophysics Data System (ADS)

    Morbidelli, Alessandro; Jutzi, Martin; Benz, Willy; Toliou, Anastasia; Rickman, Hans; Bottke, William; Brasser, Ramon

    2016-10-01

    Several properties of the comet 67P-CG suggest that it is a primordial planetesimal. On the other hand, the size-frequency distribution (SFD) of the craters detected by the New Horizons mission at the surface of Pluto and Charon reveals that the SFD of trans-Neptunian objects smaller than 100 km in diameter is very similar to that of the asteroid belt. Because the asteroid belt SFD is at collisional equilibrium, this observation suggests that the SFD of the trans-Neptunian population is at collisional equilibrium as well, implying that comet-size bodies should be the product of collisional fragmentation and not primordial objects. To test whether comet 67P-CG could be a (possibly lucky) survivor of the original population, we conducted a series of numerical impact experiments, where an object with the shape and the density of 67P-CG, and material strength varying from 10 to 1,000 Pa, is hit on the "head" by a 100 m projectile at different speeds. From these experiments we derive the impact energy required to disrupt the body catastrophically, or destroy its bi-lobed shape, as a function of impact speed. Next, we consider a dynamical model where the original trans-Neptunian disk is dispersed during a phase of temporary dynamical instability of the giant planets, which successfully reproduces the scattered disk and Oort cloud populations inferred from the current fluxes of Jupiter-family and long-period comets. We find that, if the dynamical dispersal of the disk occurs late, as in the Late Heavy Bombardment hypothesis, a 67P-CG-like body has a negligible probability of avoiding all catastrophic collisions. During this phase, however, the collisional equilibrium SFD measured by the New Horizons mission can be established. Instead, if the dispersal of the disk occurred as soon as gas was removed, a 67P-CG-like body has about a 20% chance of avoiding catastrophic collisions. Nevertheless it would still undergo tens of reshaping collisions. We estimate that, statistically, the

  20. Monitoring Comet 67P/C-G Micrometer Dust Flux: GIADA onboard Rosetta.

    NASA Astrophysics Data System (ADS)

    Della Corte, Vincenzo; Rotundi, Alessandra; Ivanovski, Stavro; Accolla, Mario; Ferrari, Marco; Sordini, Roberto; Lucarelli, Francesca; Zakharov, Vladimir; Fulle, Marco; Mazzotta Epifani, Elena; López-Moreno, José J.; Rodríguez, Julio; Colangeli, Luigi; Palumbo, Pasquale; Bussoletti, Ezio; Crifo, Jean-Francois; Esposito, Francesca; Green, Simon F.; Grün, Eberhard; Lamy, Philippe L.

    2015-04-01

    The MicroBalance System (MBS) is one of the three measurement subsystems of GIADA, the Grain Impact Analyzer and Dust Accumulator on board the Rosetta/ESA spacecraft (S/C). It consists of five Quartz Crystal Microbalances (QCMs) in roughly orthogonal directions providing the cumulative dust flux of grains smaller than 10 microns. The MBS has been continuously monitoring comet 67P/CG since the beginning of May 2014. During the first 4 months of measurements, before the insertion of the S/C into the bound orbit phase, there was no evidence of dust accumulation on the QCMs. Starting from the beginning of October, three out of five QCMs measured an increase of the deposited dust. The measured fluxes show, as expected, a strong anisotropy. In particular, the dust flux appears to be much higher from the Sun direction with respect to the comet direction. Acknowledgment: GIADA was built by a consortium led by the Univ. Napoli "Parthenope" & INAF-Oss. Astr. Capodimonte, in collaboration with the Inst. de Astrofisica de Andalucia, Selex-ES, FI and SENER. GIADA is presently managed & operated by Ist. di Astrofisica e Planetologia Spaziali-INAF, IT. GIADA was funded and managed by the Agenzia Spaziale Italiana, IT, with the support of the Spanish Ministry of Education and Science MEC, ES. GIADA was developed from a PI proposal from the University of Kent; sci. & tech. contributions were provided by CISAS, IT, Lab. d'Astr. Spat., FR, and institutions from UK, IT, FR, DE and USA. We thank the RSGS/ESAC, RMOC/ESOC & Rosetta Project/ESTEC for their outstanding work. Science support was provided by NASA through the US Rosetta Project managed by the Jet Propulsion Laboratory/California Institute of Technology. GIADA calibrated data will be available through ESA's PSA web site (www.rssd.esa.int/index.php?project=PSA&page=index). We would like to thank Angioletta
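
    The measurement principle behind a quartz crystal microbalance can be illustrated with the standard Sauerbrey relation, which maps a drop in the crystal's resonant frequency to deposited mass per unit area. The sketch below uses the textbook quartz constants; the resonant frequency is an assumed round number, not a GIADA flight value.

    ```python
    import math

    # Sauerbrey relation for a quartz crystal microbalance: a frequency
    # drop delta_f maps linearly to deposited mass per unit area.
    RHO_Q = 2648.0    # kg/m^3, density of quartz
    MU_Q = 2.947e10   # Pa, shear modulus of AT-cut quartz
    f0 = 10.0e6       # Hz, assumed fundamental frequency (illustrative)

    # Mass sensitivity in Hz per (kg/m^2) of deposited material.
    sensitivity = 2.0 * f0 ** 2 / math.sqrt(RHO_Q * MU_Q)

    def deposited_mass_per_area(delta_f):
        """Mass per unit area (kg/m^2) inferred from a frequency drop (Hz)."""
        return delta_f / sensitivity

    dust_load = deposited_mass_per_area(5.0)  # e.g. a 5 Hz drop
    ```

    Because the relation is linear, the cumulative frequency record of each QCM directly tracks the cumulative dust flux onto that face, which is how the MBS anisotropy measurement works.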

  1. Enhanced growth of endothelial precursor cells on PCG-matrix facilitates accelerated, fibrosis-free, wound healing: a diabetic mouse model.

    PubMed

    Kanitkar, Meghana; Jaiswal, Amit; Deshpande, Rucha; Bellare, Jayesh; Kale, Vaijayanti P

    2013-01-01

    Diabetes mellitus (DM)-induced endothelial progenitor cell (EPC) dysfunction causes impaired wound healing, which can be rescued by delivery of large numbers of 'normal' EPCs onto such wounds. The principal challenges herein are (a) the high number of EPCs required and (b) their sustained delivery onto the wounds. Most of the currently available scaffolds either serve as passive devices for cellular delivery or allow adherence and proliferation, but not both. This clearly indicates that matrices possessing both attributes are 'the need of the day' for efficient healing of diabetic wounds. Therefore, we developed a system that not only allows selective enrichment and expansion of EPCs, but also efficiently delivers them onto the wounds. Murine bone marrow-derived mononuclear cells (MNCs) were seeded onto a PolyCaprolactone-Gelatin (PCG) nano-fiber matrix, which offers a combined advantage of strength, biocompatibility and wettability, and were cultured in EGM2 to allow EPC growth. The efficacy of the PCG matrix in supporting EPC growth and delivery was assessed by various in vitro parameters. Its efficacy in diabetic wound healing was assessed by a topical application of the PCG-EPCs onto diabetic wounds. The PCG matrix promoted a high level of attachment of EPCs and enhanced their growth, colony formation, and proliferation without compromising their viability as compared to Poly L-lactic acid (PLLA) and Vitronectin (VN), the matrix and non-matrix controls respectively. The PCG matrix also allowed a sustained chemotactic migration of EPCs in vitro. The matrix-effected sustained delivery of EPCs onto the diabetic wounds resulted in enhanced, fibrosis-free wound healing as compared to the controls. Our data thus highlight the novel therapeutic potential of PCG-EPCs as a combined 'growth and delivery system' to achieve accelerated, fibrosis-free healing of dermal lesions, including diabetic wounds.

  2. The impact of Polycomb group (PcG) and Trithorax group (TrxG) epigenetic factors in plant plasticity.

    PubMed

    de la Paz Sanchez, Maria; Aceves-García, Pamela; Petrone, Emilio; Steckenborn, Stefan; Vega-León, Rosario; Álvarez-Buylla, Elena R; Garay-Arroyo, Adriana; García-Ponce, Berenice

    2015-11-01

    Current advances indicate that epigenetic mechanisms play important roles in the regulatory networks involved in plant developmental responses to environmental conditions. Hence, understanding the role of such components becomes crucial to understanding the mechanisms underlying the plasticity and variability of plant traits, and thus the ecology and evolution of plant development. We now know that important components of phenotypic variation may result from heritable and reversible epigenetic mechanisms without genetic alterations. The epigenetic factors Polycomb group (PcG) and Trithorax group (TrxG) are involved in developmental processes that respond to environmental signals, playing important roles in plant plasticity. In this review, we discuss current knowledge of TrxG and PcG functions in different developmental processes in response to internal and environmental cues and we also integrate the emerging evidence concerning their function in plant plasticity. Many such plastic responses rely on meristematic cell behavior, including stem cell niche maintenance, cellular reprogramming, flowering and dormancy as well as stress memory. This information will help to determine how to integrate the role of epigenetic regulation into models of gene regulatory networks, which have mostly included transcriptional interactions underlying various aspects of plant development and its plastic response to environmental conditions.

  3. Parallel Adaptive Mesh Refinement Library

    NASA Technical Reports Server (NTRS)

    MacNeice, Peter; Olson, Kevin

    2005-01-01

    Parallel Adaptive Mesh Refinement Library (PARAMESH) is a package of Fortran 90 subroutines designed to provide a computer programmer with an easy route to extension of (1) a previously written serial code that uses a logically Cartesian structured mesh into (2) a parallel code with adaptive mesh refinement (AMR). Alternatively, in its simplest use, and with minimal effort, PARAMESH can operate as a domain-decomposition tool for users who want to parallelize their serial codes but who do not wish to utilize adaptivity. The package builds a hierarchy of sub-grids to cover the computational domain of a given application program, with spatial resolution varying to satisfy the demands of the application. The sub-grid blocks form the nodes of a tree data structure (a quad-tree in two or an oct-tree in three dimensions). Each grid block has a logically Cartesian mesh. The package supports one-, two- and three-dimensional models.
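
    The block hierarchy PARAMESH builds can be pictured with a toy quad-tree sketch, written here in Python rather than the package's Fortran 90 and with a made-up refinement criterion: fixed-size blocks subdivide wherever the criterion demands more resolution, and the leaves of the tree tile the computational domain.

    ```python
    # Toy 2-D quad-tree of grid blocks (not the PARAMESH API): each block
    # covers a square patch of the domain and may split into four children.
    from dataclasses import dataclass, field

    @dataclass
    class Block:
        x0: float           # lower-left corner x
        y0: float           # lower-left corner y
        size: float         # edge length of the square patch
        level: int = 0      # refinement level in the tree
        children: list = field(default_factory=list)

        def refine(self, needs_refinement, max_level):
            """Recursively split this block where the criterion demands it."""
            if self.level < max_level and needs_refinement(self):
                h = self.size / 2.0
                self.children = [Block(self.x0 + i * h, self.y0 + j * h,
                                       h, self.level + 1)
                                 for i in (0, 1) for j in (0, 1)]
                for child in self.children:
                    child.refine(needs_refinement, max_level)

        def leaves(self):
            """The leaf blocks, which together tile the domain."""
            if not self.children:
                return [self]
            return [leaf for c in self.children for leaf in c.leaves()]

    # Illustrative criterion: refine any block that touches a "feature"
    # at the origin, so resolution clusters there.
    root = Block(-1.0, -1.0, 2.0)
    root.refine(lambda b: b.x0 <= 0.0 <= b.x0 + b.size
                          and b.y0 <= 0.0 <= b.y0 + b.size,
                max_level=3)
    leaf_count = len(root.leaves())
    ```

    In a real AMR package each leaf would additionally carry a logically Cartesian mesh of cells plus guard-cell layers for communication with neighbouring blocks; in three dimensions the same structure becomes an oct-tree.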

  4. Water and Carbon Dioxide Ices-Rich Areas on Comet 67P/CG Nucleus Surface

    NASA Astrophysics Data System (ADS)

    Filacchione, G.; Capaccioni, F.; Raponi, A.; De Sanctis, M. C.; Ciarniello, M.; Barucci, M. A.; Tosi, F.; Migliorini, A.; Capria, M. T.; Erard, S.; Bockelée-Morvan, D.; Leyrat, C.; Arnold, G.; Kappel, D.; McCord, T. B.

    2017-01-01

    fields ice grains [3]; 3) different combinations of water ice and dark terrain, in intimate mixing with small grains (tens of microns) or in areal mixing with large grains (mm-sized), are seen on the eight bright areas discussed in [4]; 4) the CO2 ice in the Anhur region appears grouped in areal patches made of 50 μm sized grains [5]. While the spectroscopic identification of water and carbon dioxide ices is made by means of diagnostic infrared absorption features, their presence causes significant effects also at visible wavelengths, including an increase of the albedo and a reduction of the spectral slope, which results in a bluer color [9,10]. In summary, the thermodynamic conditions prevailing on the 67P/CG nucleus surface allow the presence of only H2O and CO2 ices. Similar properties are probably common among other Jupiter-family comets.

  5. GIADA on-board Rosetta: comet 67P/C-G dust coma characterization

    NASA Astrophysics Data System (ADS)

    Rotundi, Alessandra; Della Corte, Vincenzo; Fulle, Marco; Sordini, Roberto; Ivanovski, Stavro; Accolla, Mario; Ferrari, Marco; Lucarelli, Francesca; Zakharov, Vladimir; Mazzotta Epifani, Elena; López-Moreno, José J.; Rodríguez, Julio; Colangeli, Luigi; Palumbo, Pasquale; Bussoletti, Ezio; Crifo, Jean-Francois; Esposito, Francesca; Green, Simon F.; Grün, Eberhard; Lamy, Philippe L.

    2015-04-01

    GIADA consists of three subsystems: 1) the Grain Detection System (GDS) to detect dust grains as they pass through a laser curtain, 2) the Impact Sensor (IS) to measure grain momentum derived from the impact on a plate connected to five piezoelectric sensors, and 3) the MicroBalances System (MBS), five quartz crystal microbalances in roughly orthogonal directions providing the cumulative dust flux of grains smaller than 10 microns. GDS provides data on grain speed and its optical cross section. The IS grain momentum measurement, when combined with the GDS detection time, provides a direct measurement of grain speed and mass. These combined measurements characterize single-grain dust dynamics in the coma of 67P/CG. No prior in situ dust dynamical measurements at such close distances from the nucleus, and starting from such high heliocentric distances, are available to date. We present here the results obtained by GIADA, which began operating in continuous mode on 18 July 2014, when the comet was at a heliocentric distance of 3.7 AU. The first grain detection occurred when the spacecraft was 814 km from the nucleus on 1 August 2014. From August 1st up to December 11th, GIADA detected more than 800 grains, for which the 3D spatial distribution was determined. About 700 out of 800 are GDS-only detections: "dust clouds", i.e. slow dust grains (≈ 0.5 m/s) crossing the laser curtain very close in time (e.g. 129 grains in 11 s), probably fluffy grains. IS-only detections number about 70, i.e. ≈ 1/10 of the GDS-only ones. This ratio is quite different from what we obtained for the early detections (August - September), when the ratio was ≈ 3, suggesting the presence of different types of particles (bigger, brighter, less dense). The combined GDS+IS detections, i.e. grains measured by both the GDS and IS detectors, are about 70 and allowed us to extract the
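
    The combined GDS+IS measurement reduces to simple kinematics: the time of flight between the laser curtain and the impact plate gives the grain speed, and the measured momentum divided by that speed gives the mass. A sketch with invented numbers follows; the sensor separation and readings below are illustrative, not GIADA calibration values.

    ```python
    # Illustrative GDS+IS combination (made-up numbers): a grain crosses
    # the GDS laser curtain, then strikes the IS plate a known distance away.
    gds_time = 0.000    # s, grain detected in the laser curtain
    is_time = 0.040     # s, impact registered on the IS plate
    separation = 0.10   # m, assumed curtain-to-plate distance
    momentum = 6.0e-8   # kg m/s, from the piezoelectric sensors

    speed = separation / (is_time - gds_time)  # time-of-flight speed, 2.5 m/s
    mass = momentum / speed                    # 2.4e-8 kg

    print(f"speed = {speed:.2f} m/s, mass = {mass:.2e} kg")
    ```

    A GDS-only or IS-only detection fixes only one of these quantities, which is why the combined detections are the ones from which grain mass can be extracted directly.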

  6. Structural basis of DNA recognition by PCG2 reveals a novel DNA binding mode for winged helix-turn-helix domains

    PubMed Central

    Liu, Junfeng; Huang, Jinguang; Zhao, Yanxiang; Liu, Huaian; Wang, Dawei; Yang, Jun; Zhao, Wensheng; Taylor, Ian A.; Peng, You-Liang

    2015-01-01

    The MBP1 family proteins are the DNA binding subunits of MBF cell-cycle transcription factor complexes and contain an N-terminal winged helix-turn-helix (wHTH) DNA binding domain (DBD). Although the DNA binding mechanism of MBP1 from Saccharomyces cerevisiae has been extensively studied, the structural framework and the DNA binding mode of other MBP1 family proteins remain to be determined. Here, we determined the crystal structure of the DBD of PCG2, the Magnaporthe oryzae orthologue of MBP1, bound to MCB–DNA. The structure revealed that the wing, the 20-loop, helix A and helix B in PCG2–DBD are important elements for DNA binding. Unlike previously characterized wHTH proteins, PCG2–DBD utilizes the wing and helix B to bind the minor groove and the major groove of the MCB–DNA, whilst the 20-loop and helix A interact non-specifically with DNA. Notably, two glutamines, Q89 and Q82, within the wing were found to recognize the MCB core CGCG sequence through hydrogen-bond interactions. Further in vitro assays confirmed essential roles of Q89 and Q82 in DNA binding. These data together indicate that the MBP1 homologue PCG2 employs an unusual mode of binding to target DNA and demonstrate the versatility of wHTH domains. PMID:25550425

  7. Parametric modelling of cardiac system multiple measurement signals: an open-source computer framework for performance evaluation of ECG, PCG and ABP event detectors.

    PubMed

    Homaeinezhad, M R; Sabetian, P; Feizollahi, A; Ghaffari, A; Rahmani, R

    2012-02-01

    The major focus of this study is to present a performance accuracy assessment framework based on mathematical modelling of cardiac system multiple measurement signals. Three mathematical algebraic subroutines with simple structural functions for synthetic generation of the synchronously triggered electrocardiogram (ECG), phonocardiogram (PCG) and arterial blood pressure (ABP) signals are described. In the case of ECG signals, normal and abnormal PQRST cycles in complicated conditions such as fascicular ventricular tachycardia, rate-dependent conduction block and acute Q-wave infarctions of inferior and anterolateral walls can be simulated. Also, a continuous ABP waveform with corresponding individual events such as systolic, diastolic and dicrotic pressures with normal or abnormal morphologies can be generated by another part of the model. In addition, the mathematical synthetic PCG framework is able to generate the S4-S1-S2-S3 cycles in normal conditions and in cardiac disorder conditions such as stenosis, insufficiency, regurgitation and gallop. In the PCG model, the amplitude and frequency content (5-700 Hz) of each sound and its variation patterns can be specified. The three proposed models were implemented to generate artificial signals with various abnormality types and signal-to-noise ratios (SNR), for quantitative detection-delineation performance assessment of several ECG, PCG and ABP individual event detectors designed based on the Hilbert transform, discrete wavelet transform, geometric features such as area curve length (ACLM), the multiple higher order moments (MHOM) metric, and the principal components analysed geometric index (PCAGI). For each method the detection-delineation operating characteristics were obtained automatically in terms of sensitivity, positive predictivity and delineation (segmentation) error rms, and were checked by a cardiologist. The Matlab m-file scripts of the synthetic ECG, ABP and PCG signal generators are available in the Appendix.
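
    The paper's algebraic generator functions are not reproduced here, but the flavour of such a synthetic-signal framework can be sketched with a common Gaussian-bump construction of one PQRST cycle plus additive white noise at a prescribed SNR. The wave centres, widths and amplitudes below are illustrative choices, not the paper's parameters.

    ```python
    import numpy as np

    # One synthetic PQRST cycle built as a sum of Gaussian bumps.
    fs = 500                 # sampling rate, Hz
    t = np.arange(fs) / fs   # one 1-second beat

    # (center [s], width [s], amplitude [mV]) for the P, Q, R, S, T waves.
    waves = [(0.200, 0.025, 0.15),   # P
             (0.345, 0.010, -0.10),  # Q
             (0.370, 0.012, 1.00),   # R
             (0.395, 0.010, -0.25),  # S
             (0.600, 0.040, 0.35)]   # T
    ecg = sum(a * np.exp(-((t - c) ** 2) / (2 * w ** 2)) for c, w, a in waves)

    def add_noise(signal, snr_db, seed=0):
        """Add white Gaussian noise at the requested signal-to-noise ratio."""
        rng = np.random.default_rng(seed)
        p_signal = np.mean(signal ** 2)
        p_noise = p_signal / 10 ** (snr_db / 10)
        return signal + rng.normal(0.0, np.sqrt(p_noise), signal.shape)

    noisy = add_noise(ecg, snr_db=10)
    r_peak = t[np.argmax(ecg)]  # the R wave dominates the clean cycle
    ```

    A detector under test would then be run on `noisy` while the known bump locations serve as ground truth for the sensitivity, positive predictivity and delineation-error statistics the paper describes.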

  8. CH Packaging Operations Manual

    SciTech Connect

    None, None

    2009-05-27

    This document provides the user with instructions for assembling a payload. All the steps in Subsections 1.2, Preparing 55-Gallon Drum Payload Assembly; 1.3, Preparing "Short" 85-Gallon Drum Payload Assembly (TRUPACT-II and HalfPACT); 1.4, Preparing "Tall" 85-Gallon Drum Payload Assembly (HalfPACT only); 1.5, Preparing 100-Gallon Drum Payload Assembly; 1.6, Preparing Shielded Container Payload Assembly; 1.7, Preparing SWB Payload Assembly; and 1.8, Preparing TDOP Payload Assembly, must be completed, but may be performed in any order as long as radiological control steps are not bypassed. Transport trailer operations, package loading and unloading from transport trailers, hoisting and rigging activities such as ACGLF operations, equipment checkout and shutdown, and component inspection activities must be performed, but may be performed in any order and in parallel with other activities as long as radiological control steps are not bypassed. Steps involving OCA/ICV lid removal/installation and payload removal/loading may be performed in parallel if there are multiple operators working on the same packaging. Steps involving removal/installation of OCV/ICV upper and lower main O-rings must be performed in sequence, except as noted.

  9. CH Packaging Operations Manual

    SciTech Connect

    None, None

    2008-09-11

    This document provides the user with instructions for assembling a payload. All the steps in Subsections 1.2, Preparing 55-Gallon Drum Payload Assembly; 1.3, Preparing "Short" 85-Gallon Drum Payload Assembly (TRUPACT-II and HalfPACT); 1.4, Preparing "Tall" 85-Gallon Drum Payload Assembly (HalfPACT only); 1.5, Preparing 100-Gallon Drum Payload Assembly; 1.6, Preparing SWB Payload Assembly; and 1.7, Preparing TDOP Payload Assembly, must be completed, but may be performed in any order as long as radiological control steps are not bypassed. Transport trailer operations, package loading and unloading from transport trailers, hoisting and rigging activities such as ACGLF operations, equipment checkout and shutdown, and component inspection activities must be performed, but may be performed in any order and in parallel with other activities as long as radiological control steps are not bypassed. Steps involving OCA/ICV lid removal/installation and payload removal/loading may be performed in parallel if there are multiple operators working on the same packaging. Steps involving removal/installation of OCV/ICV upper and lower main O-rings must be performed in sequence.

  10. Seafood Packaging

    NASA Technical Reports Server (NTRS)

    1996-01-01

    NASA's Technology Transfer Office at Stennis Space Center worked with a New Orleans seafood packaging company to develop a container to improve the shipping longevity of seafood, primarily frozen and fresh fish, while preserving the taste. A NASA engineer developed metalized heat resistant polybags with thermal foam liners using an enhanced version of the metalized mylar commonly known as 'space blanket material,' which was produced during the Apollo era.

  11. Packaged Food

    NASA Technical Reports Server (NTRS)

    1976-01-01

    After studies found that many elderly persons don't eat adequately because they can't afford to, they have limited mobility, or they just don't bother, Innovated Foods, Inc. and JSC developed shelf-stable foods processed and packaged for home preparation with minimum effort. Various food-processing techniques and delivery systems are under study and freeze dried foods originally used for space flight are being marketed. (See 77N76140)

  12. Reflective Packaging

    NASA Technical Reports Server (NTRS)

    1994-01-01

    The aluminized polymer film used in spacecraft as a radiation barrier to protect both astronauts and delicate instruments has led to a number of spinoff applications. Among them are aluminized shipping bags, food cart covers and medical bags. Radiant Technologies purchases component materials and assembles a barrier made of layers of aluminized foil. The packaging reflects outside heat away from the product inside the container. The company is developing new aluminized lines, express mailers, large shipping bags, gel packs and insulated panels for the building industry.

  13. Parallel rendering

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas W.

    1995-01-01

    This article provides a broad introduction to the subject of parallel rendering, encompassing both hardware and software systems. The focus is on the underlying concepts and the issues which arise in the design of parallel rendering algorithms and systems. We examine the different types of parallelism and how they can be applied in rendering applications. Concepts from parallel computing, such as data decomposition, task granularity, scalability, and load balancing, are considered in relation to the rendering problem. We also explore concepts from computer graphics, such as coherence and projection, which have a significant impact on the structure of parallel rendering algorithms. Our survey covers a number of practical considerations as well, including the choice of architectural platform, communication and memory requirements, and the problem of image assembly and display. We illustrate the discussion with numerous examples from the parallel rendering literature, representing most of the principal rendering methods currently used in computer graphics.
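
    As a concrete, deliberately toy illustration of the image-space data decomposition and image-assembly issues surveyed above, the sketch below splits a frame into tiles, shades each tile as an independent task, and reassembles the framebuffer; the tile size sets the task granularity. The shading function and frame dimensions are invented for the example.

    ```python
    # Toy image-space (tile) decomposition for parallel rendering.
    from concurrent.futures import ThreadPoolExecutor

    WIDTH, HEIGHT, TILE = 64, 64, 16

    def shade(x, y):
        # Stand-in for a real shader: a smooth diagonal gradient in [0, 1].
        return (x + y) / (WIDTH + HEIGHT - 2)

    def render_tile(origin):
        """Shade one TILE x TILE patch; tiles share no state, so they
        can be rendered in any order and on any worker."""
        x0, y0 = origin
        return origin, [[shade(x, y) for x in range(x0, x0 + TILE)]
                        for y in range(y0, y0 + TILE)]

    tiles = [(x, y) for y in range(0, HEIGHT, TILE)
                    for x in range(0, WIDTH, TILE)]

    # Render tiles concurrently, then assemble the final image.
    framebuffer = [[0.0] * WIDTH for _ in range(HEIGHT)]
    with ThreadPoolExecutor(max_workers=4) as pool:
        for (x0, y0), pixels in pool.map(render_tile, tiles):
            for dy, row in enumerate(pixels):
                framebuffer[y0 + dy][x0:x0 + TILE] = row  # image assembly
    ```

    Shrinking `TILE` improves load balance when shading cost varies across the image, at the price of more per-task overhead and more assembly traffic, which is the granularity trade-off the article discusses.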

  14. Dust Impact Monitor DIM Onboard Philae: Measurements at Comet 67P/C-G

    NASA Astrophysics Data System (ADS)

    Krüger, Harald; Albin, Thomas; Apathy, Istvan; Arnold, Walter; Flandes, Alberto; Fischer, Hans-Herbert; Hirn, Attila; Loose, Alexander; Peter, Attila; Seidensticker, Klaus J.; Sperl, Matthias

    2015-04-01

    The Rosetta lander Philae landed successfully on the nucleus surface of comet 67P/Churyumov-Gerasimenko on 12 November 2014. Philae is equipped with the Dust Impact Monitor (DIM) which is part of the SESAME experiment package onboard. DIM employs piezoelectric PZT sensors to detect impacts by sub-millimetre and millimetre-sized ice and dust particles that are emitted from the nucleus and transported into the cometary coma. DIM was operated during Philae's descent to its nominal landing site at 4 different altitudes above the comet surface, and at Philae's final landing site. During descent to the nominal landing site, DIM measured the impact of one rather big particle that probably had a size of a few millimeters. No impacts were detected at the final landing site which may be due to low cometary activity or due to shadowing from obstacles close to Philae, or both. We will present the results from our measurements at the comet and compare them with laboratory calibration experiments with ice/dust particles performed with a DIM flight spare sensor.

  15. Packaging Your Training Materials

    ERIC Educational Resources Information Center

    Espeland, Pamela

    1977-01-01

    The types of packaging and packaging materials to use for training materials should be determined during the planning of the training programs, according to the packaging market. Five steps to follow in shopping for packaging are presented, along with a list of packaging manufacturers. (MF)

  16. Application of Russian Thermo-Electric Devices (TEDS) for the US Microgravity Program Protein Crystal Growth (PCG) Project

    NASA Technical Reports Server (NTRS)

    Aksamentov, Valery

    1996-01-01

    Changes in the former Soviet Union have opened the gate for the exchange of new technology. Interest in this work has been particularly related to Thermal Electric Cooling Devices (TEDs), which have an application in the Thermal Enclosure System (TES) developed by NASA. Preliminary information received by NASA/MSFC indicates that Russian TEDs have higher efficiency. Based on that assumption, NASA/MSFC awarded a contract to the University of Alabama in Huntsville (UAH) to study Russian TED technology. To fulfill this, a few steps should be taken: (1) potential specifications and configurations should be defined for the use of TEDs in Protein Crystal Growth (PCG) thermal control hardware; and (2) work should proceed closely with the identified Russian source to define and identify potential Russian TEDs that exceed the performance of available domestic TEDs. Based on the data from Russia, it is possible to plan further steps such as buying and testing high-performance TEDs. To accomplish this goal two subcontracts have been released: one to Automated Sciences Group (ASG), located in Huntsville, AL, and one to the International Center for Advanced Studies 'Cosmos', located in Moscow, Russia.

  17. Science packages

    NASA Astrophysics Data System (ADS)

    1997-01-01

    Primary science teachers in Scotland have a new updating method at their disposal with the launch of a package of CD-i (Compact Disc-Interactive) materials developed by the BBC and the Scottish Office. These were a response to the claim that many primary teachers felt they had been inadequately trained in science and lacked the confidence to teach it properly. Consequently they felt the need for more in-service training to equip them with the personal understanding required. The pack contains five disks and a printed user's guide divided up as follows: disk 1, Investigations; disk 2, Developing understanding; disks 3-5, Primary Science staff development videos. It was produced by the Scottish Interactive Technology Centre (Moray House Institute) and is available from BBC Education at £149.99 including VAT. Free Internet distribution of science education materials has also begun as part of the Global Schoolhouse (GSH) scheme. The US National Science Teachers' Association (NSTA) and Microsoft Corporation are making available field-tested comprehensive curriculum material including 'Micro-units' on more than 80 topics in biology, chemistry, earth and space science and physics. The latter are the work of the Scope, Sequence and Coordination of High School Science project, which can be found at http://www.gsh.org/NSTA_SSandC/. More information on NSTA can be obtained from its Web site at http://www.nsta.org.

  18. Parallel machines: Parallel machine languages

    SciTech Connect

    Iannucci, R.A.

    1990-01-01

    This book presents a framework for understanding the tradeoffs between the conventional view and the dataflow view, with the objective of discovering the critical hardware structures that must be present in any scalable, general-purpose parallel computer to effectively tolerate latency and synchronization costs. The author presents an approach to scalable general-purpose parallel computation. Linguistic concerns, compiling issues, intermediate language issues, and hardware/technological constraints are presented as a combined approach to architectural development. The book also presents the notion of a parallel machine language.

  19. Global and Spatially Resolved Photometric Properties of the Nucleus of Comet 67P/C-G from OSIRIS Images

    NASA Astrophysics Data System (ADS)

    Lamy, P.

    2014-04-01

    Following the successful wake-up of the ROSETTA spacecraft on 20 January 2014, the OSIRIS imaging system was fully re-commissioned at the end of March 2014, confirming its excellent initial performance. The OSIRIS instrument includes two cameras: the Narrow Angle Camera (NAC) and the Wide Angle Camera (WAC), with respective fields of view of 2.2° and 12°, both equipped with 2K by 2K CCD detectors and dual filter wheels. The NAC filters allow a spectral coverage of 270 to 990 nm tailored to the investigation of the mineralogical composition of the nucleus of comet 67P/Churyumov-Gerasimenko, whereas those of the WAC (245-632 nm) aim at characterizing its coma [1]. The NAC has already secured a set of four complete light curves of the nucleus of 67P/C-G between 3 March and 24 April 2014, with the primary purpose of characterizing its rotational state. A preliminary spin period of 12.4 hours has been obtained, similar to its very first determination from a light curve obtained in 2003 with the Hubble Space Telescope [2]. The NAC and WAC will be recalibrated in the forthcoming weeks using the same stellar calibrator Vega and solar analog 16 Cyg B as for past in-flight calibration campaigns in support of the flybys of asteroids Steins and Lutetia. This will allow comparing the pre- and post-hibernation performances of the cameras and correcting the quantum efficiency response of the two CCDs and the throughput of all channels (i.e., filters) if required. The accurate photometric analysis of the images requires utmost care due to several instrumental problems, the most severe and complex to handle being the presence of optical ghosts, which result from multiple reflections on the two filters inserted in the optical beam and on the thick window which protects the CCD detector from cosmic ray impacts. These ghosts prominently appear as either slightly defocused images offset from the primary images or large round or elliptical halos. We will first present results on the global

  20. Parallel pipelining

    SciTech Connect

    Joseph, D.D.; Bai, R.; Liao, T.Y.; Huang, A.; Hu, H.H.

    1995-09-01

    In this paper the authors introduce the idea of parallel pipelining for water-lubricated transportation of oil (or other viscous material). A parallel system can have major advantages over a single pipe with respect to the cost of maintenance and continuous operation of the system, to the pressure gradients required to restart a stopped system, and to the reduction and even elimination of the fouling of pipe walls in continuous operation. The authors show that the action of capillarity in small pipes is more favorable for restart than in large pipes. In a parallel pipeline system, they estimate the number of small pipes needed to deliver the same oil flux as in one larger pipe as N = (R/r)^α, where r and R are the radii of the small and large pipes, respectively, and α = 4 or 19/7 when the lubricating water flow is laminar or turbulent, respectively.
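The pipe-count estimate above is simple enough to evaluate directly. The sketch below does so for both flow regimes; the specific radii are illustrative values, not taken from the paper:

```python
import math

def n_small_pipes(R, r, laminar=True):
    """Estimate the number of small pipes of radius r needed to deliver
    the same oil flux as one large pipe of radius R: N = (R/r)**alpha,
    with alpha = 4 (laminar lubricating water flow) or 19/7 (turbulent)."""
    alpha = 4.0 if laminar else 19.0 / 7.0
    return math.ceil((R / r) ** alpha)

# Illustrative radii (hypothetical): R = 0.5 m, r = 0.1 m
n_lam = n_small_pipes(0.5, 0.1)                  # 5**4 = 625
n_turb = n_small_pipes(0.5, 0.1, laminar=False)  # 5**(19/7), about 79
```

Note how strongly the laminar exponent penalizes small pipes: halving the small-pipe radius multiplies the required count by 16 in the laminar case but only by about 6.6 in the turbulent case.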

  1. Rosetta/VIRTIS-M spectral data: Comet 67P/CG compared to other primitive small bodies.

    NASA Astrophysics Data System (ADS)

    De Sanctis, M. C.; Capaccioni, F.; Filacchione, G.; Erard, S.; Tosi, F.; Ciarniello, M.; Raponi, A.; Piccioni, G.; Leyrat, C.; Bockelée-Morvan, D.; Drossart, P.; Fornasier, S.

    2014-12-01

    VIRTIS-M, the Visible InfraRed Thermal Imaging Spectrometer, onboard the Rosetta Mission orbiter (Coradini et al., 2007) acquired data of the comet 67P/Churyumov-Gerasimenko in the 0.25-5.1 µm spectral range. The initial data, obtained during the first mission phases at the comet, allow us to derive albedo and global spectral properties of the comet nucleus as well as spectra of different areas on the nucleus. The characterization of cometary nuclei surfaces and their comparison with those of related populations such as extinct comet candidates, Centaurs, near-Earth asteroids (NEAs), trans-Neptunian objects (TNOs), and primitive asteroids is critical to understanding the origin and evolution of small solar system bodies. The acquired VIRTIS data are used to compare the global spectral properties of comet 67P/CG to published spectra of other cometary nuclei observed from the ground or visited by space missions. Moreover, the spectra of 67P/Churyumov-Gerasimenko are also compared to those of primitive asteroids and Centaurs. The comparison can give us clues on the possible common formation and evolutionary environment for primitive asteroids, Centaurs and Jupiter-family comets. Authors acknowledge the funding from the Italian and French Space Agencies. References: Coradini, A., Capaccioni, F., Drossart, P., Arnold, G., Ammannito, E., Angrilli, F., Barucci, A., Bellucci, G., Benkhoff, J., Bianchini, G., Bibring, J. P., Blecka, M., Bockelee-Morvan, D., Capria, M. T., Carlson, R., Carsenty, U., Cerroni, P., Colangeli, L., Combes, M., Combi, M., Crovisier, J., De Sanctis, M. C., Encrenaz, E. T., Erard, S., Federico, C., Filacchione, G., Fink, U., Fonti, S., Formisano, V., Ip, W. H., Jaumann, R., Kuehrt, E., Langevin, Y., Magni, G., McCord, T., Mennella, V., Mottola, S., Neukum, G., Palumbo, P., Piccioni, G., Rauer, H., Saggin, B., Schmitt, B., Tiphene, D., Tozzi, G., Space Science Reviews, Volume 128, Issue 1-4, 529-559, 2007.

  2. Parallel computations and control of adaptive structures

    NASA Technical Reports Server (NTRS)

    Park, K. C.; Alvin, Kenneth F.; Belvin, W. Keith; Chong, K. P. (Editor); Liu, S. C. (Editor); Li, J. C. (Editor)

    1991-01-01

    The equations of motion for structures with adaptive elements for vibration control are presented for parallel computations, to be used as a software package for real-time control of flexible space structures. A brief introduction to the state-of-the-art parallel computational capability is also presented. Time marching strategies are developed for effective use of massively parallel mapping, partitioning, and the necessary arithmetic operations. An example is offered for the simulation of control-structure interaction on a parallel computer, and the impact of the presented approach on applications in disciplines other than the aerospace industry is assessed.

  3. A parallel implementation of an EBE solver for the finite element method

    SciTech Connect

    Silva, R.P.; Las Casas, E.B.; Carvalho, M.L.B.

    1994-12-31

    A parallel implementation, using PVM on a cluster of workstations, of an Element-By-Element (EBE) solver using the Preconditioned Conjugate Gradient (PCG) method is described, along with an application to the solution of the linear systems generated from finite element analysis of a problem in three-dimensional linear elasticity. The PVM (Parallel Virtual Machine) system, developed at Oak Ridge National Laboratory, allows the construction of a parallel MIMD machine by connecting heterogeneous computers linked through a network. In this implementation, version 3.1 of PVM is used, and 11 Sun SLC workstations and a Sun SPARC-2 model are connected through Ethernet. The finite element program is based on SDP, System for Finite Element Based Software Development, developed at the Brazilian National Laboratory for Scientific Computation (LNCC). SDP provides the basic routines for a finite element application program, as well as a standard for programming and documentation, intended to allow exchanges between research groups in different centers.
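The PCG iteration that such solvers parallelize can be sketched serially in a few lines. This is a generic textbook version (NumPy, with a simple Jacobi preconditioner standing in for the solver's preconditioners), not the paper's EBE implementation; in an EBE solver the product A @ p would be accumulated element by element without assembling A:

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-8, max_iter=500):
    """Preconditioned Conjugate Gradient for a symmetric positive
    definite matrix A; M_inv applies the preconditioner inverse."""
    x = np.zeros_like(b)
    r = b - A @ x                  # initial residual
    z = M_inv(r)                   # preconditioned residual
    p = z.copy()                   # search direction
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)      # step length
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p  # update search direction
        rz = rz_new
    return x

# Small SPD test system with a Jacobi (diagonal) preconditioner
rng = np.random.default_rng(0)
B = rng.standard_normal((20, 20))
A = B.T @ B + 20 * np.eye(20)
b = rng.standard_normal(20)
x = pcg(A, b, lambda r: r / np.diag(A))
```

In a distributed setting the dot products become global reductions and the matrix-vector product requires exchanging values along subdomain boundaries, which is where the communication/computation trade-off discussed above arises.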

  4. Reflectance spectroscopy of natural organic solids, iron sulfides and their mixtures as refractory analogues for Rosetta/VIRTIS' surface composition analysis of 67P/CG

    NASA Astrophysics Data System (ADS)

    Moroz, Lyuba V.; Markus, Kathrin; Arnold, Gabriele; Henckel, Daniela; Kappel, David; Schade, Ulrich; Rousseau, Batiste; Quirico, Eric; Schmitt, Bernard; Capaccioni, Fabrizio; Bockelee-Morvan, Dominique; Filacchione, Gianrico; Érard, Stéphane; Leyrat, Cedric; VIRTIS Team

    2016-10-01

    Analysis of 0.25-5 µm reflectance spectra provided by the Visible and InfraRed Thermal Imaging Spectrometer (VIRTIS) onboard Rosetta orbiter revealed that the surface of 67P/CG is dark from the near-UV to the IR and is enriched in refractory phases such as organic and opaque components. The broadness and complexity of the ubiquitous absorption feature around 3.2 µm suggest a variety of cometary organic constituents. For example, complex hydrocarbons (aliphatic and polycyclic aromatic) can contribute to the feature between 3.3 and 3.5 µm and to the low reflectance of the surface in the visible. Here we present the 0.25-5 µm reflectance spectra of well-characterized terrestrial hydrocarbon materials (solid oil bitumens, coals) and discuss their relevance as spectral analogues for a hydrocarbon part of 67P/CG's complex organics. However, the expected low degree of thermal processing of cometary hydrocarbons (high (H+O+N+S)/C ratios and low carbon aromaticities) suggests high IR reflectance, intense 3.3-3.5 µm absorption bands and steep red IR slopes that are not observed in the VIRTIS spectra. Fine-grained opaque refractory phases (e.g., iron sulfides, Fe-Ni alloys) intimately mixed with other surface components are likely responsible for the low IR reflectance and low intensities of absorption bands in the VIRTIS spectra of the 67P/CG surface. In particular, iron sulfides are common constituents of cometary dust, "cometary" chondritic IDPs, and efficient darkening agents in primitive carbonaceous chondrites. Their effect on reflectance spectra of an intimate mixture is strongly affected by grain size. We report and discuss the 0.25-5 µm reflectance spectra of iron sulfides (meteoritic troilite and several terrestrial pyrrhotites) ground and sieved to various particle sizes. In addition, we present reflectance spectra of several intimate mixtures of powdered iron sulfides and solid oil bitumens. Based on the reported laboratory data, we discuss the ability of

  5. Packaging for Food Service

    NASA Technical Reports Server (NTRS)

    Stilwell, E. J.

    1985-01-01

    Most of the key areas of concern in packaging the three principal food forms for the space station were covered. It can generally be concluded that there are no significant voids in packaging materials availability or in current packaging technology. However, it must also be concluded that the process by which packaging decisions are made for the space station feeding program will be very synergistic. Packaging selection will depend heavily on the preparation mechanics, the preferred presentation and the achievable disposal systems. It will be important that packaging be considered as an integral part of each decision as these systems are developed.

  6. Packaging of MEMS microphones

    NASA Astrophysics Data System (ADS)

    Feiertag, Gregor; Winter, Matthias; Leidl, Anton

    2009-05-01

    To miniaturize MEMS microphones we have developed a microphone package using flip-chip technology instead of chip-and-wire bonding. In this new packaging technology, MEMS and ASIC are flip-chip bonded on a ceramic substrate. The package is sealed by a laminated polymer foil and by a metal layer. The sound port is on the bottom side in the ceramic substrate. In this paper the packaging technology is explained in detail, and results of electro-acoustic characterization and reliability testing are presented. We will also explain the path that led us from the packaging of Surface Acoustic Wave (SAW) components to the packaging of MEMS microphones.

  7. Insights gained from Data Measured by the CONSERT Instrument during Philae's Descent onto 67P/C-G's surface

    NASA Astrophysics Data System (ADS)

    Plettemeier, Dirk; Statz, Christoph; Abraham, Jens; Ciarletti, Valerie; Hahnel, Ronny; Hegler, Sebastian; Herique, Alain; Pasquero, Pierre; Rogez, Yves; Zine, Sonia; Kofman, Wlodek

    2015-04-01

    The scientific objective of the Comet Nucleus Sounding Experiment by Radiowave Transmission (CONSERT) aboard the ESA spacecraft Rosetta is to perform a dielectric characterization of comet 67P/Churyumov-Gerasimenko's nucleus. This is done by means of a bi-static sounding between the lander Philae, delivered to the comet's surface, and the orbiter Rosetta. For the sounding, the CONSERT unit aboard the lander receives and processes the radio signal emitted by the orbiter counterpart of the instrument. It then retransmits a signal back to the orbiter to be received by CONSERT. This happens on a time scale of milliseconds. During the descent of lander Philae onto the comet's surface, CONSERT was operated as a bi-static RADAR. A single measurement of the obtained data is composed of the dominant signal from the direct line-of-sight propagation path between lander and orbiter, as well as paths from the lander's signal being reflected by the comet's surface. From peak power measurements of the dominant direct path during the descent, the knowledge of the orbiter and lander positions, and simulations of CONSERT's orbiter and lander antenna characteristics as well as polarization properties, we were able to reconstruct the lander's attitude and estimate the spin rate of the lander along the descent trajectory. Additionally, certain operations and manoeuvres of orbiter and lander, e.g. the deployment of the lander legs and CONSERT antennas or the orbiter's changes of attitude to orient towards the assumed lander position, are also visible in the data. The information gained on the lander's attitude is used in the reconstruction of the dielectric properties of 67P/C-G's surface and near subsurface (metric to decametric scale) and will hopefully prove helpful in supporting the data interpretation of other instruments.
In the CONSERT measurements, the comet's surface is visible during roughly the last third of the descent, enabling a mean permittivity estimation of

  8. Search for regional variations of thermal and electrical properties of comet 67P/CG probed by MIRO/Rosetta

    NASA Astrophysics Data System (ADS)

    Leyrat, Cedric; Blain, Doriann; Lellouch, Emmanuel; von Allmen, Paul; Keihm, Stephen; Choukroun, Matthieu; Schloerb, Pete; Biver, Nicolas; Gulkis, Samuel; Hofstadter, Mark

    2015-11-01

    Since June 2014, the MIRO (Microwave Instrument for Rosetta Orbiter) instrument on board the Rosetta (ESA) spacecraft has observed comet 67P/C-G along its heliocentric orbit from 3.25 AU to 1.24 AU. MIRO operates at millimeter and submillimeter wavelengths, respectively 190 GHz (1.56 mm) and 562 GHz (0.5 mm). While the submillimeter channel is coupled to a Chirp Transform Spectrometer (CTS) for spectroscopic analysis of the coma, both bands provide a broad-band continuum channel for sensing the thermal emission of the nucleus itself. Continuum measurements of the nucleus probe the subsurface thermal emission from two different depths. The first analysis (Schloerb et al., 2015) of data obtained essentially in the Northern hemisphere revealed large temperature variations with latitude, as well as distinct diurnal curves, most prominent in the 0.5 mm channel, indicating that the electrical penetration depth for this channel is comparable to the diurnal thermal skin depth. Initial modelling of these data indicated a low surface thermal inertia, in the range 10-30 J K-1 m-2 s-1/2, and probed depths of order 1-4 cm. We here investigate potential spatial variations of thermal and electrical properties by analysing separately the geomorphological regions described by Thomas et al. (2015). For each region, we select measurements corresponding to those areas, obtained at different local times and effective latitudes. We model the thermal profiles with depth and the outgoing mm and submm radiation for different values of the thermal inertia and of the ratio of the electrical to the thermal skin depth. We will present the best estimates of thermal inertia and electrical/thermal depth ratios for each region selected. Additional information on subsurface temperature gradients may be inferred by using observations at varying emergence angles. The thermal emission from southern regions has been analysed by Choukroun et al. (2015) during the polar night.
Now that the comet has reached

  9. 67P/CG morphological units and VIS-IR spectral classes: a Rosetta/VIRTIS-M perspective

    NASA Astrophysics Data System (ADS)

    Filacchione, Gianrico; Capaccioni, Fabrizio; Ciarniello, Mauro; Raponi, Andrea; De Sanctis, Maria Cristina; Tosi, Federico; Piccioni, Giuseppe; Cerroni, Priscilla; Capria, Maria Teresa; Palomba, Ernesto; Longobardo, Andrea; Migliorini, Alessandra; Erard, Stephane; Arnold, Gabriele; Bockelee-Morvan, Dominique; Leyrat, Cedric; Schmitt, Bernard; Quirico, Eric; Barucci, Antonella; McCord, Thomas B.; Stephan, Katrin; Kappel, David

    2015-11-01

    VIRTIS-M, the 0.25-5.1 µm imaging spectrometer on Rosetta (Coradini et al., 2007), has mapped the surface of the 67P/CG nucleus since July 2014 from a wide range of distances. Spectral analysis of global-scale data indicates that the nucleus presents different terrains uniformly covered by a very dark (Ciarniello et al., 2015) and dehydrated organic-rich material (Capaccioni et al., 2015). The morphological units identified so far (Thomas et al., 2015; El-Maarry et al., 2015) include dust-covered brittle materials regions (like Ash, Ma'at), exposed material regions (Seth), large-scale depressions (like Hatmehit, Aten, Nut), smooth terrains units (like Hapi, Anubis, Imhotep) and consolidated surfaces (like Hathor, Anuket, Aker, Apis, Khepry, Bastet, Maftet). For each of these regions, average VIRTIS-M spectra were derived with the aim of exploring possible connections between morphology and spectral properties. Photometric correction (Ciarniello et al., 2015), thermal emission removal in the 3.5-5 micron range and georeferencing have been applied to I/F data in order to derive spectral indicators, e.g. VIS-IR spectral slopes, their crossing wavelength (CW) and the 3.2 µm organic material band's depth (BD), suitable to identify and map compositional variations. Our analysis shows that smooth terrains have the lowest slopes in the VIS (<1.7E-3 1/µm) and IR (0.4E-3 1/µm), CW=0.75 µm and BD=8-12%. Intermediate VIS slopes of 1.7-1.9E-3 1/µm, and higher BD=10-12.8%, are typical of consolidated surfaces, some dust-covered regions and Seth, where the maximum BD=13% has been observed. Large-scale depressions and Imhotep are redder, with a VIS slope of 1.9-2.1E-3 1/µm, CW at 0.85-0.9 µm and BD=8-11%. The minimum VIS-IR slopes are observed above Hapi, in agreement with the presence of water ice sublimation and recondensation processes observed by VIRTIS in this region (De Sanctis et al., 2015). Authors acknowledge ASI, CNES, DLR and NASA financial support. References: Coradini et al

  10. GIADA On-Board Rosetta: Early Dust Grain Detections and Dust Coma Characterization of Comet 67P/C-G

    NASA Astrophysics Data System (ADS)

    Rotundi, A.; Della Corte, V.; Accolla, M.; Ferrari, M.; Ivanovski, S.; Lucarelli, F.; Mazzotta Epifani, E.; Sordini, R.; Palumbo, P.; Colangeli, L.; Lopez-Moreno, J. J.; Rodriguez, J.; Fulle, M.; Bussoletti, E.; Crifo, J. F.; Esposito, F.; Green, S.; Grün, E.; Lamy, P. L.; McDonnell, T.; Mennella, V.; Molina, A.; Moreno, F.; Ortiz, J. L.; Palomba, E.; Perrin, J. M.; Rodrigo, R.; Weissman, P. R.; Zakharov, V.; Zarnecki, J.

    2014-12-01

    GIADA (Grain Impact Analyzer and Dust Accumulator), flying on-board Rosetta, is devoted to studying the cometary dust environment of 67P/Churyumov-Gerasimenko. GIADA is composed of 3 sub-systems: the GDS (Grain Detection System), based on grain detection through light scattering; an IS (Impact Sensor), giving momentum measurements by detecting impacts on a sensed plate connected with 5 piezoelectric sensors; and the MBS (MicroBalances System), constituted of 5 Quartz Crystal Microbalances (QCMs), giving the cumulative deposited dust mass by measuring the variations of the sensors' frequency. The combination of the measurements performed by these 3 subsystems provides the number, mass, momentum and velocity distribution of dust grains emitted from the cometary nucleus. No prior in situ dust dynamical measurements at these close distances from the nucleus, and starting from such large heliocentric distances, have been available to date. We present here the first results obtained from the beginning of the Rosetta scientific phase. We report the early detection of dust grains at about 800 km from the nucleus in August 2014 and the following measurements that allowed us to characterize the 67P/C-G dust environment at distances of less than 100 km from the nucleus, as well as the dynamical properties of single grains. Acknowledgements. GIADA was built by a consortium led by the Univ. Napoli "Parthenope" & INAF-Oss. Astr. Capodimonte, IT, in collaboration with the Inst. de Astrofisica de Andalucia, ES, Selex-ES s.p.a. and SENER. GIADA is presently managed & operated by Ist. di Astrofisica e Planetologia Spaziali-INAF, IT. GIADA was funded and managed by the Agenzia Spaziale Italiana, IT, with the support of the Spanish Ministry of Education and Science MEC, ES. GIADA was developed from a PI proposal supported by the University of Kent, with sci. & tech. contributions from CISAS, IT, Lab. d'Astr. Spat., FR, and institutions from UK, IT, FR, DE and USA. 
We thank the RSGS/ESAC, RMOC/ESOC & Rosetta Project

  11. A parallel Lanczos method for symmetric generalized eigenvalue problems

    SciTech Connect

    Wu, K.; Simon, H.D.

    1997-12-01

    The Lanczos algorithm is a very effective method for finding extreme eigenvalues of symmetric matrices. It requires fewer arithmetic operations than similar algorithms, such as the Arnoldi method. In this paper, the authors present their parallel version of the Lanczos method for the symmetric generalized eigenvalue problem, PLANSO. PLANSO is based on a sequential package called LANSO, which implements the Lanczos algorithm with partial re-orthogonalization. It is portable to all parallel machines that support MPI and is easy to interface with most parallel computing packages. Through numerical experiments, they demonstrate that it achieves parallel efficiency similar to that of PARPACK, but uses considerably less time.
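For reference, the standard-eigenproblem Lanczos iteration underlying such packages can be sketched as follows. This is a minimal serial illustration (NumPy); full re-orthogonalization is used here as a simple stand-in for LANSO's partial re-orthogonalization, and the generalized problem and MPI parallelism are omitted:

```python
import numpy as np

def lanczos_ritz(A, k, seed=0):
    """Run k Lanczos steps on symmetric A and return the Ritz values
    (eigenvalues of the tridiagonal projection T)."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    Q = np.zeros((n, k))
    alpha = np.zeros(k)
    beta = np.zeros(k - 1)
    q = rng.standard_normal(n)
    Q[:, 0] = q / np.linalg.norm(q)
    for j in range(k):
        w = A @ Q[:, j]
        alpha[j] = Q[:, j] @ w
        # Full re-orthogonalization against all previous Lanczos vectors
        # (removes the alpha/beta components and rounding-error drift)
        w = w - Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)
        if j < k - 1:
            beta[j] = np.linalg.norm(w)
            Q[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return np.linalg.eigvalsh(T)

# Extreme Ritz values converge quickly to the extreme eigenvalues
A = np.diag(np.linspace(1.0, 10.0, 50))
ritz = lanczos_ritz(A, 30)
```

The appeal for parallel machines is that each step needs only one matrix-vector product plus dot products and vector updates, all of which distribute naturally; partial re-orthogonalization (as in LANSO) keeps the orthogonalization cost far below the full scheme used in this sketch.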

  12. CH Packaging Operations Manual

    SciTech Connect

    Washington TRU Solutions LLC

    2005-06-13

    This procedure provides instructions for assembling the CH Packaging Drum payload assembly and the Standard Waste Box (SWB) assembly, for abnormal operations, and for ICV and OCV preshipment leakage rate tests on the packaging seals using a nondestructive helium (He) leak test.

  13. Comparative Packaging Study

    NASA Technical Reports Server (NTRS)

    Perchonok, Michele; Antonini, David

    2008-01-01

    This viewgraph presentation describes a comparative packaging study for use on long duration space missions. The topics include: 1) Purpose; 2) Deliverables; 3) Food Sample Selection; 4) Experimental Design Matrix; 5) Permeation Rate Comparison; and 6) Packaging Material Information.

  14. ADVANCED ELECTRONIC PACKAGING TECHNIQUES

    DTIC Science & Technology

    MICROMINIATURIZATION (ELECTRONICS), *PACKAGED CIRCUITS, CIRCUITS, EXPERIMENTAL DATA, MANUFACTURING, NONDESTRUCTIVE TESTING, RESISTANCE (ELECTRICAL), SEMICONDUCTORS, TESTS, THIN FILMS (STORAGE DEVICES), WELDING.

  15. Tolerant (parallel) Programming

    NASA Technical Reports Server (NTRS)

    DiNucci, David C.; Bailey, David H. (Technical Monitor)

    1997-01-01

    In order to be truly portable, a program must be tolerant of a wide range of development and execution environments, and a parallel program is just one which must be tolerant of a very wide range. This paper first defines the term "tolerant programming", then describes many layers of tools to accomplish it. The primary focus is on F-Nets, a formal model for expressing computation as a folded partial-ordering of operations, thereby providing an architecture-independent expression of tolerant parallel algorithms. For implementing F-Nets, Cooperative Data Sharing (CDS) is a subroutine package that implements communication efficiently in a large number of environments (e.g. shared memory and message passing). Software Cabling (SC), a very-high-level graphical programming language for building large F-Nets, possesses many of the features normally expected from today's computer languages (e.g. data abstraction, array operations). Finally, L2³ is a CASE tool which facilitates the construction, compilation, execution, and debugging of SC programs.

  16. Trends in Food Packaging.

    ERIC Educational Resources Information Center

    Ott, Dana B.

    1988-01-01

    This article discusses developments in food packaging, processing, and preservation techniques in terms of packaging materials, technologies, consumer benefits, and current and potential food product applications. Covers implications due to consumer life-style changes, cost-effectiveness of packaging materials, and the ecological impact of…

  17. Edible packaging materials.

    PubMed

    Janjarasskul, Theeranun; Krochta, John M

    2010-01-01

    Research groups and the food and pharmaceutical industries recognize edible packaging as a useful alternative or addition to conventional packaging to reduce waste and to create novel applications for improving product stability, quality, safety, variety, and convenience for consumers. Recent studies have explored the ability of biopolymer-based food packaging materials to carry and control the release of active compounds. As diverse edible packaging materials derived from various by-products or waste from the food industry are being developed, the dry thermoplastic process is advancing rapidly as a feasible commercial edible packaging manufacturing process. The employment of nanocomposite concepts in edible packaging materials promises to improve barrier and mechanical properties and facilitate effective incorporation of bioactive ingredients and other designed functions. In addition to the need for a more fundamental understanding to enable design to desired specifications, edible packaging has to overcome challenges such as regulatory requirements, consumer acceptance, and scaling up research concepts to commercial applications.

  18. Linked-View Parallel Coordinate Plot Renderer

    SciTech Connect

    2011-06-28

    This software allows multiple linked views for interactive querying via map-based data selection, bar chart analytic overlays, and high dynamic range (HDR) line renderings. The major component of the visualization package is a parallel coordinate renderer with binning, curved layouts, shader-based rendering, and other techniques to allow interactive visualization of multidimensional data.

  19. Large area LED package

    NASA Astrophysics Data System (ADS)

    Goullon, L.; Jordan, R.; Braun, T.; Bauer, J.; Becker, F.; Hutter, M.; Schneider-Ramelow, M.; Lang, K.-D.

    2015-03-01

    Solid-state lighting using LED-dies is a rapidly growing market. LED-dies with the required, ever-increasing luminous flux per chip area produce a lot of heat. Therefore an appropriate thermal management is required for general lighting with LED-dies. One way to avoid overheating and shortened lifetime is the use of many small LED-dies (down to 70 μm edge length) on a large-area heat sink, so that heat can spread into a large area while at the same time light also appears on a larger area. Handling such small LED-dies is very difficult because they are too small to be picked up with common equipment. Therefore a new concept called collective transfer bonding, using a temporary carrier chip, was developed. A further benefit of this new technology is the high-precision assembly as well as the plane-parallel assembly of the LED-dies, which is necessary for wire bonding. It has been shown that one hundred functional LED-dies were transferred and soldered at the same time. After the assembly, a cost-effective established PCB technology was applied to produce a large-area light source consisting of many small LED-dies electrically connected on a PCB substrate. The top contacts of the LED-dies were realized by laminating an adhesive copper sheet followed by LDI structuring, as known from PCB via technology. This assembly can be completed by adding converting and light-forming optical elements. In summary, two technologies based on standard SMD and PCB technology have been developed for panel-level LED packaging up to 610 x 457 mm² area size.

  20. Smart packaging for photonics

    SciTech Connect

    Smith, J.H.; Carson, R.F.; Sullivan, C.T.; McClellan, G.; Palmer, D.W.

    1997-09-01

    Unlike silicon microelectronics, photonics packaging has proven to be low-yield and expensive. One approach to make photonics packaging practical for low-cost applications is the use of "smart" packages. "Smart" in this context means the ability of the package to actuate a mechanical change based on either a measurement taken by the package itself or on an input signal based on an external measurement. One avenue of smart photonics packaging, the use of polysilicon micromechanical devices integrated with photonic waveguides, was investigated in this research (LDRD 3505.340). The integration of optical components with polysilicon surface-micromechanical actuation mechanisms shows significant promise for signal switching, fiber alignment, and optical sensing applications. The optical and stress properties of the oxides and nitrides considered for optical waveguides, and how they are integrated with micromechanical devices, were investigated.

  1. First in-situ detection of the cometary ammonium ion NH_4+ (protonated ammonia NH3) in the coma of 67P/C-G near perihelion

    NASA Astrophysics Data System (ADS)

    Beth, A.; Altwegg, K.; Balsiger, H.; Berthelier, J.-J.; Calmonte, U.; Combi, M. R.; De Keyser, J.; Dhooghe, F.; Fiethe, B.; Fuselier, S. A.; Galand, M.; Gasc, S.; Gombosi, T. I.; Hansen, K. C.; Hässig, M.; Héritier, K. L.; Kopp, E.; Le Roy, L.; Mandt, K. E.; Peroy, S.; Rubin, M.; Sémon, T.; Tzou, C.-Y.; Vigren, E.

    2017-01-01

    In this paper, we report the first in-situ detection of the ammonium ion NH_4+ at 67P/Churyumov-Gerasimenko (67P/C-G) in a cometary coma, using the Rosetta Orbiter Spectrometer for Ion and Neutral Analysis (ROSINA) / Double Focusing Mass Spectrometer (DFMS). Unlike neutral and ion spectrometers onboard previous cometary missions, the ROSINA/DFMS spectrometer, when operated in ion mode, offers the capability to distinguish NH_4+ from H2O+ in a cometary coma. We present here the ion data analysis of mass-to-charge ratios 18 and 19 at high spectral resolution and compare the results with an ionospheric model to put these results into context. The model confirms that the ammonium ion NH_4+ is one of the most abundant ion species, as predicted, in the coma near perihelion.

  2. Parallel Algebraic Multigrid Methods - High Performance Preconditioners

    SciTech Connect

    Yang, U M

    2004-11-11

    The development of high performance, massively parallel computers and the increasing demands of computationally challenging applications have necessitated the development of scalable solvers and preconditioners. One of the most effective ways to achieve scalability is the use of multigrid or multilevel techniques. Algebraic multigrid (AMG) is a very efficient algorithm for solving large problems on unstructured grids. While much of it can be parallelized in a straightforward way, some components of the classical algorithm, particularly the coarsening process and some of the most efficient smoothers, are highly sequential, and require new parallel approaches. This chapter presents the basic principles of AMG and gives an overview of various parallel implementations of AMG, including descriptions of parallel coarsening schemes and smoothers, some numerical results as well as references to existing software packages.
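
The multilevel idea behind AMG can be illustrated with its geometric ancestor. The sketch below (plain Python; function names and parameters are illustrative assumptions, not from the chapter) applies a two-grid cycle to the 1D Poisson problem: smooth with weighted Jacobi, restrict the residual to a coarser grid, approximately solve the coarse error equation, interpolate the correction back, and smooth again. Real AMG builds the coarse grids and transfer operators from the matrix alone; that machinery is omitted here.

```python
def jacobi(u, f, h, sweeps=3, w=2.0 / 3.0):
    """Weighted-Jacobi sweeps for -u'' = f with zero boundary values."""
    for _ in range(sweeps):
        v = u[:]
        for i in range(1, len(u) - 1):
            v[i] = (1 - w) * u[i] + w * 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
        u = v
    return u

def residual(u, f, h):
    r = [0.0] * len(u)
    for i in range(1, len(u) - 1):
        r[i] = f[i] - (2 * u[i] - u[i - 1] - u[i + 1]) / (h * h)
    return r

def two_grid(u, f, h):
    u = jacobi(u, f, h, sweeps=2)                       # pre-smooth
    r = residual(u, f, h)
    rc = [r[2 * i] for i in range((len(u) + 1) // 2)]   # restrict (injection)
    ec = jacobi([0.0] * len(rc), rc, 2 * h, sweeps=50)  # approximate coarse solve
    e = [0.0] * len(u)                                  # prolongate linearly
    for i in range(len(ec)):
        e[2 * i] = ec[i]
    for i in range(1, len(u) - 1, 2):
        e[i] = 0.5 * (e[i - 1] + e[i + 1])
    u = [ui + ei for ui, ei in zip(u, e)]               # coarse-grid correction
    return jacobi(u, f, h, sweeps=2)                    # post-smooth

n, h = 17, 1.0 / 16                                     # -u'' = 1, u(0) = u(1) = 0
f = [1.0] * n
u = [0.0] * n
for _ in range(5):
    u = two_grid(u, f, h)
```

A few cycles drive the residual down far faster than smoothing alone, which is the scalability argument the chapter makes for multilevel methods.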

  3. Parallel hypergraph partitioning for scientific computing.

    SciTech Connect

    Heaphy, Robert; Devine, Karen Dragon; Catalyurek, Umit; Bisseling, Robert; Hendrickson, Bruce Alan; Boman, Erik Gunnar

    2005-07-01

    Graph partitioning is often used for load balancing in parallel computing, but it is known that hypergraph partitioning has several advantages. First, hypergraphs more accurately model communication volume, and second, they are more expressive and can better represent nonsymmetric problems. Hypergraph partitioning is particularly suited to parallel sparse matrix-vector multiplication, a common kernel in scientific computing. We present a parallel software package for hypergraph (and sparse matrix) partitioning developed at Sandia National Labs. The algorithm is a variation on multilevel partitioning. Our parallel implementation is novel in that it uses a two-dimensional data distribution among processors. We present empirical results that show our parallel implementation achieves good speedup on several large problems (up to 33 million nonzeros) with up to 64 processors on a Linux cluster.
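
The claim that hypergraphs model communication volume exactly can be made concrete. In the column-net model for sparse matrix-vector multiplication, the volume incurred by a column equals the number of parts its rows touch minus one, the standard (lambda - 1) metric. A tiny sketch (the matrix and partition are made-up examples, not from the paper):

```python
def lambda_minus_one(col_nets, part_of_row):
    """col_nets: column -> rows with a nonzero; returns total communication volume."""
    volume = 0
    for rows in col_nets.values():
        parts = {part_of_row[r] for r in rows}
        volume += len(parts) - 1          # one word sent per extra part on the net
    return volume

# 4x4 example: rows 0, 1 on part 0; rows 2, 3 on part 1
A = {0: [0, 2], 1: [1], 2: [0, 2, 3], 3: [1, 3]}
part = [0, 0, 1, 1]
volume = lambda_minus_one(A, part)        # columns 0, 2, 3 each span 2 parts -> 3
```

A graph model would approximate this cost by counting cut edges, which can over- or under-count; the hypergraph metric above is exact, which is the first advantage the abstract cites.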

  4. GENERAL PURPOSE ADA PACKAGES

    NASA Technical Reports Server (NTRS)

    Klumpp, A. R.

    1994-01-01

    Ten families of subprograms are bundled together for the General-Purpose Ada Packages. The families bring to Ada many features from HAL/S, PL/I, FORTRAN, and other languages. These families are: string subprograms (INDEX, TRIM, LOAD, etc.); scalar subprograms (MAX, MIN, REM, etc.); array subprograms (MAX, MIN, PROD, SUM, GET, and PUT); numerical subprograms (EXP, CUBIC, etc.); service subprograms (DATE_TIME function, etc.); Linear Algebra II; Runge-Kutta integrators; and three text I/O families of packages. In two cases, a family consists of a single non-generic package. In all other cases, a family comprises a generic package and its instances for a selected group of scalar types. All generic packages are designed to be easily instantiated for the types declared in the user facility. The linear algebra package is LINRAG2. This package includes subprograms supplementing those in NPO-17985, An Ada Linear Algebra Package Modeled After HAL/S (LINRAG). Please note that LINRAG2 cannot be compiled without LINRAG. Most packages have widespread applicability, although some are oriented for avionics applications. All are designed to facilitate writing new software in Ada. Several of the packages use conventions introduced by other programming languages. A package of string subprograms is based on HAL/S (a language designed for the avionics software in the Space Shuttle) and PL/I. Packages of scalar and array subprograms are taken from HAL/S or generalized current Ada subprograms. A package of Runge-Kutta integrators is patterned after a built-in MAC (MIT Algebraic Compiler) integrator. Those packages modeled after HAL/S make it easy to translate existing HAL/S software to Ada. The General-Purpose Ada Packages program source code is available on two 360K 5.25" MS-DOS format diskettes. The software was developed using VAX Ada v1.5 under DEC VMS v4.5. It should be portable to any validated Ada compiler and it should execute either interactively or in batch. 
The largest package

  5. The ZOOM minimization package

    SciTech Connect

    Fischler, Mark S.; Sachs, D.; /Fermilab

    2004-11-01

    A new object-oriented Minimization package is available for distribution in the same manner as CLHEP. This package, designed for use in HEP applications, has all the capabilities of Minuit, but is a re-write from scratch, adhering to modern C++ design principles. A primary goal of this package is extensibility in several directions, so that its capabilities can be kept fresh with as little maintenance effort as possible. This package is distinguished by the priority that was assigned to C++ design issues, and the focus on producing an extensible system that will resist becoming obsolete.
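
ZOOM's actual interface is not reproduced here. As a flavor of what a minimization package provides, below is a minimal 1D golden-section search; the function name and tolerance are illustrative assumptions, not ZOOM's API.

```python
import math

def golden_minimize(f, a, b, tol=1e-8):
    """Narrow [a, b] around the minimum of a unimodal function f."""
    inv_phi = (math.sqrt(5) - 1) / 2          # ~0.618, the golden ratio conjugate
    c = b - inv_phi * (b - a)
    d = a + inv_phi * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):                       # minimum lies in [a, d]
            b, d = d, c
            c = b - inv_phi * (b - a)
        else:                                 # minimum lies in [c, b]
            a, c = c, d
            d = a + inv_phi * (b - a)
    return (a + b) / 2

x = golden_minimize(lambda t: (t - 2.0) ** 2 + 1.0, 0.0, 5.0)
```

Packages like ZOOM or Minuit wrap far more sophisticated multi-dimensional algorithms behind a similar "give me a function and a region, get back a minimizer" interface.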

  6. Parallel pivoting combined with parallel reduction

    NASA Technical Reports Server (NTRS)

    Alaghband, Gita

    1987-01-01

    Parallel algorithms for the triangularization of large, sparse, and unsymmetric matrices are presented. The method combines parallel reduction with a new parallel pivoting technique, control over the generation of fill-ins, and a check for numerical stability, all done in parallel with the work distributed over the active processes. The technique uses the compatibility relation between pivots to identify parallel pivot candidates and uses the Markowitz number of pivots to minimize fill-in. It is not a preordering of the sparse matrix and is applied dynamically as the decomposition proceeds.
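
A simplified, sequential sketch of the two ingredients named above: the Markowitz number (r-1)(c-1) of a pivot, and a compatibility test between pivots, here taken as a[i][l] == 0 and a[k][j] == 0 for candidates (i, j) and (k, l), so that eliminating one does not disturb the other. The greedy selection is illustrative only; the paper constructs the compatible set in parallel and adds numerical stability checks.

```python
def markowitz(a, i, j):
    r = sum(1 for v in a[i] if v != 0)        # nonzeros in row i
    c = sum(1 for row in a if row[j] != 0)    # nonzeros in column j
    return (r - 1) * (c - 1)                  # upper bound on fill-in

def compatible(a, p, q):
    (i, j), (k, l) = p, q
    return i != k and j != l and a[i][l] == 0 and a[k][j] == 0

def compatible_pivot_set(a):
    cands = sorted((markowitz(a, i, j), i, j)
                   for i, row in enumerate(a)
                   for j, v in enumerate(row) if v != 0)
    chosen = []
    for _, i, j in cands:                     # greedily grow a compatible set,
        if all(compatible(a, (i, j), p) for p in chosen):
            chosen.append((i, j))             # preferring low-fill pivots
    return chosen                             # these can be eliminated together

diag = [[1.0, 0, 0], [0, 2.0, 0], [0, 0, 3.0]]   # all pivots mutually compatible
dense = [[1.0, 1.0], [1.0, 1.0]]                 # only one pivot at a time
```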

  7. Parallel distributed computing using Python

    NASA Astrophysics Data System (ADS)

    Dalcin, Lisandro D.; Paz, Rodrigo R.; Kler, Pablo A.; Cosimo, Alejandro

    2011-09-01

    This work presents two software components aimed at relieving the costs of accessing high-performance parallel computing resources within a Python programming environment: MPI for Python and PETSc for Python. MPI for Python is a general-purpose Python package that provides bindings for the Message Passing Interface (MPI) standard using any back-end MPI implementation. Its facilities allow parallel Python programs to easily exploit multiple processors using the message-passing paradigm. PETSc for Python provides access to the Portable, Extensible Toolkit for Scientific Computation (PETSc) libraries. Its facilities allow sequential and parallel Python applications to exploit state-of-the-art algorithms and data structures readily available in PETSc for the solution of large-scale problems in science and engineering. MPI for Python and PETSc for Python are fully integrated with PETSc-FEM, an MPI- and PETSc-based parallel multiphysics finite element code developed at the CIMEC laboratory. This software infrastructure supports research activities related to the simulation of fluid flows, with applications ranging from the design of microfluidic devices for biochemical analysis to the modeling of large-scale stream/aquifer interactions.
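
The message-passing paradigm that MPI for Python exposes (comm.send(obj, dest=...) / comm.recv(source=...)) can be sketched without an MPI installation. Below, two "ranks" run as threads and exchange Python objects through queues; the send/recv helpers only mimic the shape of the MPI calls and are our own names, not mpi4py's API.

```python
import threading
import queue

inbox = [queue.Queue(), queue.Queue()]          # one mailbox per "rank"

def send(obj, dest):
    inbox[dest].put(obj)                        # cf. comm.send(obj, dest=...)

def recv(rank):
    return inbox[rank].get()                    # blocks, cf. comm.recv(source=...)

results = {}

def rank0():
    send({"data": [1, 2, 3]}, dest=1)           # ship a picklable Python object
    results["echo"] = recv(0)                   # wait for the reply

def rank1():
    msg = recv(1)
    send([2 * x for x in msg["data"]], dest=0)  # reply with doubled data

threads = [threading.Thread(target=rank0), threading.Thread(target=rank1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# results["echo"] == [2, 4, 6]
```

In real mpi4py each rank is a separate process launched under mpiexec, and arbitrary Python objects travel between them just as the dict and list do here.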

  8. Developing Large CAI Packages.

    ERIC Educational Resources Information Center

    Reed, Mary Jac M.; Smith, Lynn H.

    1983-01-01

    When developing large computer-assisted instructional (CAI) courseware packages, it is suggested that there be more attentive planning to the overall package design before actual lesson development is begun. This process has been simplified by modifying the systems approach used to develop single CAI lessons, followed by planning for the…

  9. WASTE PACKAGE TRANSPORTER DESIGN

    SciTech Connect

    D.C. Weddle; R. Novotny; J. Cron

    1998-09-23

    The purpose of this Design Analysis is to develop preliminary design of the waste package transporter used for waste package (WP) transport and related functions in the subsurface repository. This analysis refines the conceptual design that was started in Phase I of the Viability Assessment. This analysis supports the development of a reliable emplacement concept and a retrieval concept for license application design. The scope of this analysis includes the following activities: (1) Assess features of the transporter design and evaluate alternative design solutions for mechanical components. (2) Develop mechanical equipment details for the transporter. (3) Prepare a preliminary structural evaluation for the transporter. (4) Identify and recommend the equipment design for waste package transport and related functions. (5) Investigate transport equipment interface tolerances. This analysis supports the development of the waste package transporter for the transport, emplacement, and retrieval of packaged radioactive waste forms in the subsurface repository. Once the waste containers are closed and accepted, the packaged radioactive waste forms are termed waste packages (WP). This terminology was finalized as this analysis neared completion; therefore, the term disposal container is used in several references (i.e., the System Description Document (SDD)) (Ref. 5.6). In this analysis and the applicable reference documents, the term ''disposal container'' is synonymous with ''waste package''.

  10. Project Information Packages: Overview.

    ERIC Educational Resources Information Center

    RMC Research Corp., Mountain View, CA.

    This brochure describes a new series of Project Information Packages, a U.S. Office of Education response to the need for a systematic approach to disseminating exemplary projects. The packages describe procedures for developing the necessary administrative support and management framework, as well as instructional methods and techniques. The six…

  11. Packaging issues: avoiding delamination.

    PubMed

    Hall, R

    2005-10-01

    Manufacturers can minimise delamination occurrence by applying the appropriate packaging design and process features. The end user can minimise the impact of fibre tear and reduce subsequent delamination by careful package opening. The occasional inconvenient delamination is a small price to pay for the high level of sterility assurance that comes with the use of Tyvek.

  12. Parallel Computing in SCALE

    SciTech Connect

    DeHart, Mark D; Williams, Mark L; Bowman, Stephen M

    2010-01-01

    The SCALE computational architecture has remained basically the same since its inception 30 years ago, although constituent modules and capabilities have changed significantly. This SCALE concept was intended to provide a framework whereby independent codes can be linked to provide a more comprehensive capability than possible with the individual programs - allowing flexibility to address a wide variety of applications. However, the current system was designed originally for mainframe computers with a single CPU and with significantly less memory than today's personal computers. It has been recognized that the present SCALE computation system could be restructured to take advantage of modern hardware and software capabilities, while retaining many of the modular features of the present system. Preliminary work is being done to define specifications and capabilities for a more advanced computational architecture. This paper describes the state of current SCALE development activities and plans for future development. With the release of SCALE 6.1 in 2010, a new phase of evolutionary development will be available to SCALE users within the TRITON and NEWT modules. The SCALE (Standardized Computer Analyses for Licensing Evaluation) code system developed by Oak Ridge National Laboratory (ORNL) provides a comprehensive and integrated package of codes and nuclear data for a wide range of applications in criticality safety, reactor physics, shielding, isotopic depletion and decay, and sensitivity/uncertainty (S/U) analysis. Over the last three years, since the release of version 5.1 in 2006, several important new codes have been introduced within SCALE, and significant advances applied to existing codes. Many of these new features became available with the release of SCALE 6.0 in early 2009. However, beginning with SCALE 6.1, a first generation of parallel computing is being introduced. In addition to near-term improvements, a plan for longer term SCALE enhancement

  13. Packaging Concerns/Techniques for Large Devices

    NASA Technical Reports Server (NTRS)

    Sampson, Michael J.

    2009-01-01

    This slide presentation reviews packaging challenges and options for electronic parts. The presentation includes information about non-hermetic packages, space challenges for packaging and complex package variations.

  14. Special parallel processing workshop

    SciTech Connect

    1994-12-01

    This report contains viewgraphs from the Special Parallel Processing Workshop. The viewgraphs cover topics such as parallel processing performance, message passing, queue structure, and other basic concepts of parallel processing.

  15. STRUMPACK -- STRUctured Matrices PACKage

    SciTech Connect

    2014-12-01

    STRUMPACK - STRUctured Matrices PACKage - is a package for computations with sparse and dense structured matrices, i.e., matrices that exhibit some kind of low-rank property, in particular Hierarchically Semi-Separable (HSS) structure. Such matrices appear in many applications, e.g., FEM, BEM, integral equations, etc. Exploiting this structure with certain compression algorithms allows for fast solution of linear systems and/or fast computation of matrix-vector products, which are the two main building blocks of matrix computations. STRUMPACK presently has two main components: a distributed-memory dense matrix computations package and a shared-memory sparse direct solver.
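
The fast matrix-vector products mentioned above come from applying low-rank blocks in factored form: if an m-by-n block equals U V with rank r, then (U V) x computed as U (V x) costs O((m + n) r) operations instead of O(m n). A pure-Python sketch with made-up rank-1 factors (STRUMPACK's own data structures are far more elaborate):

```python
def matvec(M, x):
    """Dense matrix-vector product for a list-of-rows matrix."""
    return [sum(a * b for a, b in zip(row, x)) for row in M]

def lowrank_matvec(U, V, x):
    """Apply A = U V to x without ever forming A explicitly."""
    return matvec(U, matvec(V, x))      # V x first: (m + n) * r work

U = [[1.0], [2.0], [3.0]]               # 3x1 factor -> rank-1 block A = U V
V = [[1.0, 0.0, 1.0, 0.0]]              # 1x4 factor
x = [1.0, 2.0, 3.0, 4.0]
y = lowrank_matvec(U, V, x)             # [4.0, 8.0, 12.0]
```

HSS matrices apply this trick recursively to a hierarchy of off-diagonal blocks, which is what makes the solvers and matrix-vector products fast.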

  16. Packaging for Posterity.

    ERIC Educational Resources Information Center

    Sias, Jim

    1990-01-01

    A project in which students designed environmentally responsible food packaging is described. The problem definition; research on topics such as waste paper, plastic, metal, glass, incineration, recycling, and consumer preferences; and the presentation design are provided. (KR)

  17. The CONSERT Instrument during Philae's Descent onto 67P/C-G's surface: Insights on Philae's Attitude and the Surface Permittivity Measurements at the Agilkia-Landing-Site

    NASA Astrophysics Data System (ADS)

    Plettemeier, D.; Statz, C.; Hahnel, R.; Hegler, S.; Herique, A.; Pasquero, P.; Rogez, Y.; Zine, S.; Ciarletti, V.; Kofman, W. W.

    2015-12-01

    The main scientific objective of the Comet Nucleus Sounding Experiment by Radiowave Transmission (CONSERT) aboard ESA spacecraft Rosetta is the dielectric characterization of comet 67P/Churyumov-Gerasimenko's nucleus. This was done by means of bi-static radio propagation measurements of the CONSERT instrument between the lander Philae, deployed onto the comet's surface, and the orbiter Rosetta. The CONSERT unit aboard the lander received and processed the radio signal emitted by the orbiter counterpart of the instrument, and then retransmitted a signal back to the orbiter. This happened on a time scale of milliseconds. In addition to its operation during the first science sequence, CONSERT was operated during the separation and descent of Philae onto the comet's surface. During the descent phase of Philae, the received CONSERT signal was a superposition of the direct propagation path between Rosetta and Philae and indirect paths caused by reflections off 67P/C-G's surface. From peak-power measurements of the dominant direct path between Rosetta and Philae during the descent, we were able to reconstruct the lander's attitude and estimate the spin rate of the lander along the descent trajectory. Certain operations and manoeuvres of orbiter and lander, e.g. the deployment of the lander legs and CONSERT antennas or the orbiter's change of attitude in order to orient towards the assumed lander position, are also visible in the CONSERT data. The information gained on the lander's attitude is used in the reconstruction of the dielectric properties of 67P/C-G's surface and near subsurface (metric to decametric scale). During roughly the last third of the descent, the comet's surface was visible to the CONSERT instrument, enabling a mean permittivity estimation of the surface and near subsurface covered by the instrument's footprint along the descent path. The comparatively large timespan with surface signatures exhibits good spatial diversity

  18. The Polycomb group (PcG) protein EZH2 supports the survival of PAX3-FOXO1 alveolar rhabdomyosarcoma by repressing FBXO32 (Atrogin1/MAFbx).

    PubMed

    Ciarapica, R; De Salvo, M; Carcarino, E; Bracaglia, G; Adesso, L; Leoncini, P P; Dall'Agnese, A; Walters, Z S; Verginelli, F; De Sio, L; Boldrini, R; Inserra, A; Bisogno, G; Rosolen, A; Alaggio, R; Ferrari, A; Collini, P; Locatelli, M; Stifani, S; Screpanti, I; Rutella, S; Yu, Q; Marquez, V E; Shipley, J; Valente, S; Mai, A; Miele, L; Puri, P L; Locatelli, F; Palacios, D; Rota, R

    2014-08-07

    The Polycomb group (PcG) proteins regulate stem cell differentiation via the repression of gene transcription, and their deregulation has been widely implicated in cancer development. The PcG protein Enhancer of Zeste Homolog 2 (EZH2) works as a catalytic subunit of the Polycomb Repressive Complex 2 (PRC2) by methylating lysine 27 on histone H3 (H3K27me3), a hallmark of PRC2-mediated gene repression. In skeletal muscle progenitors, EZH2 prevents an unscheduled differentiation by repressing muscle-specific gene expression and is downregulated during the course of differentiation. In rhabdomyosarcoma (RMS), a pediatric soft-tissue sarcoma thought to arise from myogenic precursors, EZH2 is abnormally expressed and its downregulation in vitro leads to muscle-like differentiation of RMS cells of the embryonal variant. However, the role of EZH2 in the clinically aggressive subgroup of alveolar RMS, characterized by the expression of PAX3-FOXO1 oncoprotein, remains unknown. We show here that EZH2 depletion in these cells leads to programmed cell death. Transcriptional derepression of F-box protein 32 (FBXO32) (Atrogin1/MAFbx), a gene associated with muscle homeostasis, was evidenced in PAX3-FOXO1 RMS cells silenced for EZH2. This phenomenon was associated with reduced EZH2 occupancy and H3K27me3 levels at the FBXO32 promoter. Simultaneous knockdown of FBXO32 and EZH2 in PAX3-FOXO1 RMS cells impaired the pro-apoptotic response, whereas the overexpression of FBXO32 facilitated programmed cell death in EZH2-depleted cells. Pharmacological inhibition of EZH2 by either 3-Deazaneplanocin A or a catalytic EZH2 inhibitor mirrored the phenotypic and molecular effects of EZH2 knockdown in vitro and prevented tumor growth in vivo. Collectively, these results indicate that EZH2 is a key factor in the proliferation and survival of PAX3-FOXO1 alveolar RMS cells working, at least in part, by repressing FBXO32. 
They also suggest that reducing the activity of EZH2 could represent a novel

  19. Battery packaging - Technology review

    SciTech Connect

    Maiser, Eric

    2014-06-16

    This paper gives a brief overview of battery packaging concepts, their specific advantages and drawbacks, as well as the importance of packaging for performance and cost. Production processes, scaling and automation are discussed in detail to reveal opportunities for cost reduction. Module standardization as an additional path to drive down cost is introduced. A comparison to electronics and photovoltaics production shows 'lessons learned' in those related industries and how they can accelerate learning curves in battery production.

  20. A survey of packages for large linear systems

    SciTech Connect

    Wu, Kesheng; Milne, Brent

    2000-02-11

    This paper evaluates portable software packages for the iterative solution of very large sparse linear systems on parallel architectures. While we cannot hope to tell individual users which package will best suit their needs, we do hope that our systematic evaluation provides essential unbiased information about the packages, and that the evaluation process may serve as an example of how to evaluate such packages. The information contained here includes feature comparisons, usability evaluations, and performance characterizations. This review is primarily focused on self-contained packages that can be easily integrated into an existing program and are capable of computing solutions to very large sparse linear systems of equations. More specifically, it concentrates on portable parallel linear system solution packages that provide iterative solution schemes and related preconditioning schemes, because iterative methods are more frequently used than competing schemes such as direct methods. The eight packages evaluated are: Aztec, BlockSolve, ISIS++, LINSOL, P-SPARSLIB, PARASOL, PETSc, and PINEAPL. Among the eight portable parallel iterative linear system solvers reviewed, we recommend PETSc and Aztec for most application programmers because they have well-designed user interfaces, extensive documentation, and very responsive user support. Both PETSc and Aztec are written in the C language and are callable from Fortran. For those users interested in using Fortran 90, PARASOL is a good alternative. ISIS++ is a good alternative for those who prefer the C++ language. Both PARASOL and ISIS++ are relatively new and are continuously evolving, so their user interfaces may change. In general, packages written in Fortran 77 are more cumbersome to use because the user may need to deal directly with a number of arrays of varying sizes. Languages like C++ and Fortran 90 offer more convenient data encapsulation mechanisms, which make it easier to implement a clean and intuitive user
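
The iterative methods these packages provide are variants of Krylov schemes such as conjugate gradients. For reference, a minimal unpreconditioned CG in plain Python (library versions add preconditioning, parallel data layouts, and convergence monitoring):

```python
def cg(A, b, tol=1e-10, maxit=100):
    """Conjugate gradients for a symmetric positive definite A (list of rows)."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                                    # r = b - A x with x = 0
    p = r[:]
    rr = sum(v * v for v in r)
    for _ in range(maxit):
        Ap = [sum(aij * pj for aij, pj in zip(row, p)) for row in A]
        alpha = rr / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rr_new = sum(v * v for v in r)
        if rr_new < tol * tol:                  # converged: small residual
            break
        p = [ri + (rr_new / rr) * pi for ri, pi in zip(r, p)]
        rr = rr_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]                    # small SPD test system
b = [1.0, 2.0]
x = cg(A, b)                                    # exact answer is (1/11, 7/11)
```

The reviewed packages wrap exactly this kind of loop, replacing the dense matvec with a distributed sparse one and inserting a preconditioner application each iteration.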

  1. Comparative Packaging Study

    NASA Technical Reports Server (NTRS)

    Perchonok, Michele H.; Oziomek, Thomas V.

    2009-01-01

    Future long duration manned space flights beyond low earth orbit will require the food system to remain safe, acceptable and nutritious. Development of high barrier food packaging will enable this requirement by preventing the ingress and egress of gases and moisture. New high barrier food packaging materials have been identified through a trade study. Practical application of this packaging material within a shelf life test will allow for better determination of whether this material will allow the food system to meet given requirements after the package has undergone processing. The reason to conduct shelf life testing, using a variety of packaging materials, stems from the need to preserve food used for mission durations of several years. Chemical reactions that take place during longer durations may decrease food quality to a point where crew physical or psychological well-being is compromised. This can result in a reduction or loss of mission success. The rate of chemical reactions, including oxidative rancidity and staling, can be controlled by limiting the reactants, reducing the amount of energy available to drive the reaction, and minimizing the amount of water available. Water not only acts as a media for microbial growth, but also as a reactant and means by which two reactants may come into contact with each other. The objective of this study is to evaluate three packaging materials for potential use in long duration space exploration missions.

  2. Parallel rendering techniques for massively parallel visualization

    SciTech Connect

    Hansen, C.; Krogh, M.; Painter, J.

    1995-07-01

    As the resolution of simulation models increases, scientific visualization algorithms which take advantage of the large memory and parallelism of Massively Parallel Processors (MPPs) are becoming increasingly important. For large applications, rendering on the MPP tends to be preferable to rendering on a graphics workstation due to the MPP's abundant resources: memory, disk, and numerous processors. The challenge becomes developing algorithms that can exploit these resources while minimizing overhead, typically communication costs. This paper describes recent efforts in parallel rendering for polygonal primitives as well as parallel volumetric techniques, presenting rendering algorithms, developed for massively parallel processors (MPPs), for polygonal, sphere, and volumetric data. The polygon algorithm uses a data-parallel approach, whereas the sphere and volume renderers use a MIMD approach. Implementations of these algorithms are presented for the Thinking Machines Corporation CM-5 MPP.

  3. Modular optimization code package: MOZAIK

    NASA Astrophysics Data System (ADS)

    Bekar, Kursat B.

    This dissertation addresses the development of a modular optimization code package, MOZAIK, for geometric shape optimization problems in nuclear engineering applications. MOZAIK's first mission, determining the optimal shape of the D2O moderator tank for the current and new beam tube configurations for the Penn State Breazeale Reactor's (PSBR) beam port facility, is used to demonstrate its capabilities and test its performance. MOZAIK was designed as a modular optimization sequence including three primary independent modules: the initializer, the physics and the optimizer, each having a specific task. By using fixed interface blocks among the modules, the code attains its two most important characteristics: generic form and modularity. The benefit of this modular structure is that the contents of the modules can be switched depending on the requirements of accuracy, computational efficiency, or compatibility with the other modules. Oak Ridge National Laboratory's discrete ordinates transport code TORT was selected as the transport solver in the physics module of MOZAIK, and two different optimizers, Min-max and Genetic Algorithms (GA), were implemented in the optimizer module of the code package. A distributed memory parallelism was also applied to MOZAIK via MPI (Message Passing Interface) to execute the physics module concurrently on a number of processors for various states in the same search. Moreover, dynamic scheduling was enabled to enhance load balance among the processors while running MOZAIK's physics module thus improving the parallel speedup and efficiency. In this way, the total computation time consumed by the physics module is reduced by a factor close to M, where M is the number of processors. This capability also encourages the use of MOZAIK for shape optimization problems in nuclear applications because many traditional codes related to radiation transport do not have parallel execution capability. A set of computational models based on the
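
The dynamic scheduling described above hands the next pending state to whichever processor frees up first. MOZAIK does this across processes with MPI; as a self-contained stand-in, the thread-based sketch below gets the same load-balancing behavior from a shared task queue (the names are illustrative, not MOZAIK's):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def physics_run(state):                 # stand-in for one transport calculation
    return state, state * state

states = list(range(8))                 # the search states to evaluate
results = {}
with ThreadPoolExecutor(max_workers=3) as pool:
    futures = [pool.submit(physics_run, s) for s in states]
    for fut in as_completed(futures):   # collect in completion order:
        s, value = fut.result()         # idle workers pull the next task
        results[s] = value
```

Because workers pull tasks as they finish rather than receiving a fixed static slice, a few slow states no longer idle the other processors, which is the speedup and efficiency gain the text describes.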

  4. Hypertext-based computer vision teaching packages

    NASA Astrophysics Data System (ADS)

    Marshall, A. David

    1994-10-01

    The World Wide Web Initiative has provided a means for delivering hypertext- and multimedia-based information across the whole INTERNET. Many applications have been developed on such http servers. At Cardiff we have developed an http hypertext-based multimedia server, the Cardiff Information Server, using the widely available Mosaic system. The server provides a variety of information, ranging from the provision of teaching modules, on-line documentation, and timetables for departmental activities to more light-hearted hobby interests. One important and novel development to the server has been the development of courseware facilities. This ranges from the provision of on-line lecture notes, exercises and their solutions to more interactive teaching packages. A variety of disciplines have benefited, notably Computer Vision and Image Processing, but also C programming, X Windows, Computer Graphics and Parallel Computing. This paper addresses the issues of the implementation of the Computer Vision and Image Processing packages, the advantages gained from using a hypertext-based system, and practical experiences of using the packages in a class environment. The paper addresses how best to provide information in such a hypertext-based system and how interactive image processing packages can be developed and integrated into courseware. The suite of tools developed facilitates a flexible and powerful courseware package that has proved popular in the classroom and over the Internet. The paper will also detail many future developments we see as possible. One of the key points raised in the paper is that Mosaic's hypertext language (html) is extremely powerful and yet relatively straightforward to use. It is also possible to link in Unix calls so that programs and shells can be executed. This provides a powerful suite of utilities that can be exploited to develop many packages.

  5. RH Packaging Program Guidance

    SciTech Connect

    Washington TRU Solutions LLC

    2008-01-12

    The purpose of this program guidance document is to provide the technical requirements for use, operation, inspection, and maintenance of the RH-TRU 72-B Waste Shipping Package (also known as the "RH-TRU 72-B cask") and directly related components. This document complies with the requirements as specified in the RH-TRU 72-B Safety Analysis Report for Packaging (SARP), and Nuclear Regulatory Commission (NRC) Certificate of Compliance (C of C) 9212. If there is a conflict between this document and the SARP and/or C of C, the C of C shall govern. The C of C states: "...each package must be prepared for shipment and operated in accordance with the procedures described in Chapter 7.0, Operating Procedures, of the application." It further states: "...each package must be tested and maintained in accordance with the procedures described in Chapter 8.0, Acceptance Tests and Maintenance Program of the Application." Chapter 9.0 of the SARP tasks the Waste Isolation Pilot Plant (WIPP) Management and Operating (M&O) Contractor with assuring the packaging is used in accordance with the requirements of the C of C. Because the packaging is NRC-approved, users need to be familiar with Title 10 Code of Federal Regulations (CFR) §71.8, "Deliberate Misconduct." Any time a user suspects or has indications that the conditions of approval in the C of C were not met, the U.S. Department of Energy (DOE) Carlsbad Field Office (CBFO) shall be notified immediately. The CBFO will evaluate the issue and notify the NRC if required. In accordance with 10 CFR Part 71, "Packaging and Transportation of Radioactive Material," certificate holders, packaging users, and contractors or subcontractors who use, design, fabricate, test, maintain, or modify the packaging shall post copies of (1) 10 CFR Part 21, "Reporting of Defects and Noncompliance," regulations, (2) Section 206 of the Energy Reorganization Act of 1974, and (3) NRC Form 3, Notice to Employees. These documents must be posted in a

  6. CH Packaging Program Guidance

    SciTech Connect

    None, None

    2008-09-11

The purpose of this document is to provide the technical requirements for preparation for use, operation, inspection, and maintenance of a Transuranic Package Transporter Model II (TRUPACT-II), a HalfPACT shipping package, and directly related components. This document complies with the minimum requirements as specified in the TRUPACT-II Safety Analysis Report for Packaging (SARP), HalfPACT SARP, and U.S. Nuclear Regulatory Commission (NRC) Certificates of Compliance (C of C) 9218 and 9279, respectively. In the event of a conflict between this document and the SARP or C of C, the C of C shall govern. The C of Cs state: "each package must be prepared for shipment and operated in accordance with the procedures described in Chapter 7.0, Operating Procedures, of the application." They further state: "each package must be tested and maintained in accordance with the procedures described in Chapter 8.0, Acceptance Tests and Maintenance Program of the Application." Chapter 9.0 of the SARP charges the U.S. Department of Energy (DOE) or the Waste Isolation Pilot Plant (WIPP) management and operating (M&O) contractor with assuring packaging is used in accordance with the requirements of the C of C. Because the packaging is NRC-approved, users need to be familiar with Title 10 Code of Federal Regulations (CFR) §71.8. Any time a user suspects or has indications that the conditions of approval in the C of C were not met, the Carlsbad Field Office (CBFO) shall be notified immediately. The CBFO will evaluate the issue and notify the NRC if required. In accordance with 10 CFR Part 71, certificate holders, packaging users, and contractors or subcontractors who use, design, fabricate, test, maintain, or modify the packaging shall post copies of (1) 10 CFR Part 21 regulations, (2) Section 206 of the Energy Reorganization Act of 1974, and (3) NRC Form 3, Notice to Employees.
These documents must be posted in a conspicuous location where the activities subject to these regulations are

  7. CH Packaging Program Guidance

    SciTech Connect

    None, None

    2009-06-01

    The purpose of this document is to provide the technical requirements for preparation for use, operation, inspection, and maintenance of a Transuranic Package Transporter Model II (TRUPACT-II), a HalfPACT shipping package, and directly related components. This document complies with the minimum requirements as specified in the TRUPACT-II Safety Analysis Report for Packaging (SARP), HalfPACT SARP, and U.S. Nuclear Regulatory Commission (NRC) Certificates of Compliance (C of C) 9218 and 9279, respectively. In the event of a conflict between this document and the SARP or C of C, the C of C shall govern. The C of Cs state: "each package must be prepared for shipment and operated in accordance with the procedures described in Chapter 7.0, Operating Procedures, of the application." They further state: "each package must be tested and maintained in accordance with the procedures described in Chapter 8.0, Acceptance Tests and Maintenance Program of the Application." Chapter 9.0 of the SARP charges the U.S. Department of Energy (DOE) or the Waste Isolation Pilot Plant (WIPP) management and operating (M&O) contractor with assuring packaging is used in accordance with the requirements of the C of C. Because the packaging is NRC-approved, users need to be familiar with Title 10 Code of Federal Regulations (CFR) §71.8. Any time a user suspects or has indications that the conditions of approval in the C of C were not met, the Carlsbad Field Office (CBFO) shall be notified immediately. The CBFO will evaluate the issue and notify the NRC if required. In accordance with 10 CFR Part 71, certificate holders, packaging users, and contractors or subcontractors who use, design, fabricate, test, maintain, or modify the packaging shall post copies of (1) 10 CFR Part 21 regulations, (2) Section 206 of the Energy Reorganization Act of 1974, and (3) NRC Form 3, Notice to Employees. 
These documents must be posted in a conspicuous location where the activities subject to these regulations are

  8. CFD Optimization on Network-Based Parallel Computer System

    NASA Technical Reports Server (NTRS)

    Cheung, Samson H.; Holst, Terry L. (Technical Monitor)

    1994-01-01

Combining multiple engineering workstations into a network-based heterogeneous parallel computer allows application of aerodynamic optimization with advanced computational fluid dynamics codes, which is computationally expensive on mainframe supercomputers. This paper introduces a nonlinear quasi-Newton optimizer designed for this network-based heterogeneous parallel computer, using software called Parallel Virtual Machine. It also describes the methodology behind coupling a Parabolized Navier-Stokes flow solver to the nonlinear optimizer. This parallel optimization package has been applied to reduce the wave drag of a body of revolution and a wing/body configuration, with results of 5% to 6% drag reduction.

  9. Parallel CFD design on network-based computer

    NASA Technical Reports Server (NTRS)

    Cheung, Samson

    1995-01-01

Combining multiple engineering workstations into a network-based heterogeneous parallel computer allows application of aerodynamic optimization with advanced computational fluid dynamics codes, which can be computationally expensive on mainframe supercomputers. This paper introduces a nonlinear quasi-Newton optimizer designed for this network-based heterogeneous parallel computing environment, utilizing software called Parallel Virtual Machine. It also describes the methodology behind coupling a Parabolized Navier-Stokes flow solver to the nonlinear optimizer. This parallel optimization package is applied to reduce the wave drag of a body of revolution and a wing/body configuration, with results of 5% to 6% drag reduction.
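The core quasi-Newton idea behind such an optimizer can be sketched briefly. The sketch below (all names hypothetical, not from either paper) shows the one-dimensional secant form: curvature is estimated from successive gradient differences rather than computed second derivatives, which in the CFD setting would each cost extra flow-solver evaluations.

```python
# Hypothetical 1D sketch of a quasi-Newton (secant) minimizer. A real
# aerodynamic optimizer would obtain each gradient from finite differences
# of the flow solver, distributing those evaluations across workstations.

def grad(x):
    # Stand-in "drag gradient" of a smooth objective with minimum at x = 3.
    return 2.0 * (x - 3.0)

def quasi_newton_1d(x0, x1, tol=1e-10, max_iter=50):
    g0, g1 = grad(x0), grad(x1)
    for _ in range(max_iter):
        if abs(g1) < tol:
            break
        h = (g1 - g0) / (x1 - x0)    # secant estimate of the curvature
        x0, g0 = x1, g1
        x1 = x1 - g1 / h             # Newton-like step with estimated curvature
        g1 = grad(x1)
    return x1

print(quasi_newton_1d(0.0, 1.0))  # converges to the minimizer x = 3.0
```

Multidimensional variants (e.g., BFGS) apply the same secant principle to an approximate Hessian matrix.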

  10. DMA Modulus as a Screening Parameter for Compatibility of Polymeric Containment Materials with Various Solutions for use in Space Shuttle Microgravity Protein Crystal Growth (PCG) Experiments

    NASA Technical Reports Server (NTRS)

    Wingard, Charles Doug; Munafo, Paul M. (Technical Monitor)

    2002-01-01

Protein crystals are grown in microgravity experiments inside the Space Shuttle during orbit. Such crystals are basically grown in a five-component system containing a salt, buffer, polymer, organic and water. During these experiments, a number of different polymeric containment materials must be compatible with up to hundreds of different PCG solutions in various concentrations for durations up to 180 days. When such compatibility experiments are performed at NASA/MSFC (Marshall Space Flight Center) simultaneously on containment material samples immersed in various solutions in vials, the samples are rather small out of necessity. DMA modulus was often used as the primary screening parameter for such small samples as a pass/fail criterion for incompatibility issues. In particular, the TA Instruments DMA 2980 film tension clamp was used to test rubber O-rings as small in I.D. as 0.091 in. by cutting through the cross-section at one place, then clamping the stretched linear cord stock at each end. The film tension clamp was also used to successfully test short length samples of medical/surgical grade tubing with an O.D. of 0.125 in.

  11. CH Packaging Program Guidance

    SciTech Connect

    Washington TRU Solutions LLC

    2005-02-28

    The purpose of this document is to provide the technical requirements for preparation for use, operation, inspection, and maintenance of a Transuranic Package Transporter Model II (TRUPACT-II), a HalfPACT shipping package, and directly related components. This document complies with the minimum requirements as specified in the TRUPACT-II Safety Analysis Report for Packaging (SARP), HalfPACT SARP, and U.S. Nuclear Regulatory Commission (NRC) Certificates of Compliance (C of C) 9218 and 9279, respectively. In the event of a conflict between this document and the SARP or C of C, the C of C shall govern. The C of Cs state: "each package must be prepared for shipment and operated in accordance with the procedures described in Chapter 7.0, Operating Procedures, of the application." They further state: "each package must be tested and maintained in accordance with the procedures described in Chapter 8.0, Acceptance Tests and Maintenance Program of the Application." Chapter 9.0 of the SARP charges the Waste Isolation Pilot Plant (WIPP) management and operating (M&O) contractor with assuring packaging is used in accordance with the requirements of the C of C. Because the packaging is NRC-approved, users need to be familiar with Title 10 Code of Federal Regulations (CFR) §71.8. Any time a user suspects or has indications that the conditions of approval in the C of C were not met, the Carlsbad Field Office (CBFO) shall be notified immediately. The CBFO will evaluate the issue and notify the NRC if required.

  12. Food Packaging Materials

    NASA Technical Reports Server (NTRS)

    1978-01-01

The photos show a few of the food products packaged in Alure, a metallized plastic material developed and manufactured by St. Regis Paper Company's Flexible Packaging Division, Dallas, Texas. The material incorporates a metallized film originally developed for space applications. Among the suppliers of the film to St. Regis is King-Seeley Thermos Company, Winchester, Massachusetts. Initially used by NASA as a signal-bouncing reflective coating for the Echo 1 communications satellite, the film was developed by a company later absorbed by King-Seeley. The metallized film was also used as insulating material for components of a number of other spacecraft. St. Regis developed Alure to meet a multiple packaging material need: good eye appeal, product protection for long periods and the ability to be used successfully on a wide variety of food packaging equipment. When the cost of aluminum foil skyrocketed, packagers sought substitute metallized materials but experiments with a number of them uncovered problems; some were too expensive, some did not adequately protect the product, some were difficult for the machinery to handle. Alure offers a solution. St. Regis created Alure by sandwiching the metallized film between layers of plastics. The resulting laminated metallized material has the superior eye appeal of foil but is less expensive and more easily machined. Alure effectively blocks out light, moisture and oxygen and therefore gives the packaged food long shelf life. A major packaging firm conducted its own tests of the material and confirmed the advantages of machinability and shelf life, adding that it runs faster on machines than materials used in the past and it decreases product waste; the net effect is increased productivity.

  13. Detecting small holes in packages

    DOEpatents

    Kronberg, J.W.; Cadieux, J.R.

    1996-03-19

    A package containing a tracer gas, and a method for determining the presence of a hole in the package by sensing the presence of the gas outside the package are disclosed. The preferred tracer gas, especially for food packaging, is sulfur hexafluoride. A quantity of the gas is added to the package and the package is closed. The concentration of the gas in the atmosphere outside the package is measured and compared to a predetermined value of the concentration of the gas in the absence of the package. A measured concentration greater than the predetermined value indicates the presence of a hole in the package. Measuring may be done in a chamber having a lower pressure than that in the package. 3 figs.
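The decision rule described in the abstract (measured concentration versus a predetermined background value) can be illustrated with a short sketch; the function name and the noise margin are hypothetical, added only for illustration.

```python
# Illustrative decision logic for the tracer-gas leak test: a hole is
# inferred when the SF6 concentration measured outside the package exceeds
# the predetermined background value by more than the measurement noise.

def has_hole(measured_ppb, background_ppb, noise_ppb=0.5):
    """Return True if the tracer-gas reading indicates a leak."""
    return measured_ppb > background_ppb + noise_ppb

print(has_hole(12.0, 0.1))  # True: SF6 well above background -> leaking package
print(has_hole(0.2, 0.1))   # False: reading within measurement noise
```

Performing the measurement in a chamber at lower pressure than the package interior, as the patent describes, increases the outflow of tracer gas and hence the sensitivity of this comparison.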

  14. Detecting small holes in packages

    DOEpatents

    Kronberg, James W.; Cadieux, James R.

    1996-01-01

    A package containing a tracer gas, and a method for determining the presence of a hole in the package by sensing the presence of the gas outside the package. The preferred tracer gas, especially for food packaging, is sulfur hexafluoride. A quantity of the gas is added to the package and the package is closed. The concentration of the gas in the atmosphere outside the package is measured and compared to a predetermined value of the concentration of the gas in the absence of the package. A measured concentration greater than the predetermined value indicates the presence of a hole in the package. Measuring may be done in a chamber having a lower pressure than that in the package.

  15. CH Packaging Program Guidance

    SciTech Connect

    None, None

    2006-04-25

The purpose of this document is to provide the technical requirements for preparation for use, operation, inspection, and maintenance of a Transuranic Package Transporter Model II (TRUPACT-II), a HalfPACT shipping package, and directly related components. This document complies with the minimum requirements as specified in the TRUPACT-II Safety Analysis Report for Packaging (SARP), HalfPACT SARP, and U.S. Nuclear Regulatory Commission (NRC) Certificates of Compliance (C of C) 9218 and 9279, respectively. In the event of a conflict between this document and the SARP or C of C, the C of C shall govern. The C of Cs state: "each package must be prepared for shipment and operated in accordance with the procedures described in Chapter 7.0, Operating Procedures, of the application." They further state: "each package must be tested and maintained in accordance with the procedures described in Chapter 8.0, Acceptance Tests and Maintenance Program of the Application." Chapter 9.0 of the SARP charges the U.S. Department of Energy (DOE) or the Waste Isolation Pilot Plant (WIPP) management and operating (M&O) contractor with assuring packaging is used in accordance with the requirements of the C of C. Because the packaging is NRC-approved, users need to be familiar with Title 10 Code of Federal Regulations (CFR) §71.8. Any time a user suspects or has indications that the conditions of approval in the C of C were not met, the Carlsbad Field Office (CBFO) shall be notified immediately. The CBFO will evaluate the issue and notify the NRC if required. In accordance with 10 CFR Part 71, certificate holders, packaging users, and contractors or subcontractors who use, design, fabricate, test, maintain, or modify the packaging shall post copies of (1) 10 CFR Part 21 regulations, (2) Section 206 of the Energy Reorganization Act of 1974, and (3) NRC Form 3, Notice to Employees. These documents must be posted in a conspicuous location where the activities subject to these regulations are

  16. CH Packaging Program Guidance

    SciTech Connect

    None, None

    2007-12-13

The purpose of this document is to provide the technical requirements for preparation for use, operation, inspection, and maintenance of a Transuranic Package Transporter Model II (TRUPACT-II), a HalfPACT shipping package, and directly related components. This document complies with the minimum requirements as specified in the TRUPACT-II Safety Analysis Report for Packaging (SARP), HalfPACT SARP, and U.S. Nuclear Regulatory Commission (NRC) Certificates of Compliance (C of C) 9218 and 9279, respectively. In the event of a conflict between this document and the SARP or C of C, the C of C shall govern. The C of Cs state: "each package must be prepared for shipment and operated in accordance with the procedures described in Chapter 7.0, Operating Procedures, of the application." They further state: "each package must be tested and maintained in accordance with the procedures described in Chapter 8.0, Acceptance Tests and Maintenance Program of the Application." Chapter 9.0 of the SARP charges the U.S. Department of Energy (DOE) or the Waste Isolation Pilot Plant (WIPP) management and operating (M&O) contractor with assuring packaging is used in accordance with the requirements of the C of C. Because the packaging is NRC-approved, users need to be familiar with Title 10 Code of Federal Regulations (CFR) §71.8. Any time a user suspects or has indications that the conditions of approval in the C of C were not met, the Carlsbad Field Office (CBFO) shall be notified immediately. The CBFO will evaluate the issue and notify the NRC if required. In accordance with 10 CFR Part 71, certificate holders, packaging users, and contractors or subcontractors who use, design, fabricate, test, maintain, or modify the packaging shall post copies of (1) 10 CFR Part 21 regulations, (2) Section 206 of the Energy Reorganization Act of 1974, and (3) NRC Form 3, Notice to Employees.
These documents must be posted in a conspicuous location where the activities subject to these regulations are

  17. MMIC packaging with Waffleline

    NASA Astrophysics Data System (ADS)

    Perry, R. W.; Ellis, T. T.; Schineller, E. R.

    1990-06-01

    The design principle of Waffleline, a patented MMIC packaging technology, is discussed, and several recent applications are described and illustrated with drawings, diagrams, and photographs. Standard Waffleline is a foil-covered waffle-iron-like grid with dielectric-coated signal and power wires running in the channels and foil-removed holes for mounting prepackaged chips or chip carriers. With spacing of 50 mils between center conductors, this material is applicable at frequencies up to 40 GHz; EHF devices require Waffleline with 25-mil spacing. Applications characterized include a subassembly for a man-transportable SHF satellite-communication terminal, a transmitter driver for a high-power TWT, and a 60-GHz receiver front end (including an integrated monolithic microstrip antenna, a low-noise amplifier, a mixer, and an IF amplifier in a 0.25-inch-thick 1.6-inch-diameter package). The high package density and relatively low cost of Waffleline are emphasized.

  18. Ada Namelist Package

    NASA Technical Reports Server (NTRS)

    Klumpp, Allan R.

    1991-01-01

Ada Namelist Package, developed for Ada programming language, enables calling program to read and write FORTRAN-style namelist files. Features are: handling of any combination of types defined by user; ability to read vectors, matrices, and slices of vectors and matrices; handling of mismatches between variables in namelist file and those in programmed list of namelist variables; and ability to avoid searching entire input file for each variable. Principal benefits derived by user: ability to read and write namelist-readable files, ability to detect most file errors in initialization phase, and organization keeping number of instantiated units to few packages rather than to many subprograms.
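For readers unfamiliar with the FORTRAN-style namelist format the package handles, the toy parser below (in Python, not Ada; all names hypothetical) shows its essence: a named group of "variable = value" assignments in arbitrary order, closed by a slash.

```python
import re

# Minimal illustration of the FORTRAN namelist format: a toy parser that
# extracts the group name and flat numeric scalar/list assignments.
# It is a sketch of the file format only, not of the Ada package's API.

def parse_namelist(text):
    m = re.search(r"&(\w+)(.*?)/", text, re.S)
    group, body = m.group(1), m.group(2)
    values = {}
    # Each assignment's right-hand side runs until the next "name =" or the end.
    for name, rhs in re.findall(r"(\w+)\s*=\s*([^=]+?)(?=\s+\w+\s*=|\s*$)",
                                body, re.S):
        nums = [float(tok) for tok in re.split(r"[,\s]+", rhs.strip()) if tok]
        values[name] = nums[0] if len(nums) == 1 else nums
    return group, values

text = """&flow
  mach = 0.8
  alpha = 2.0
  grid  = 65 65 33
/"""
print(parse_namelist(text))
```

Note the assignments may appear in any order, which is why the package matches file entries against the program's declared namelist variables rather than reading positionally.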

  19. SPHINX experimenters information package

    SciTech Connect

    Zarick, T.A.

    1996-08-01

This information package was prepared for both new and experienced users of the SPHINX (Short Pulse High Intensity Nanosecond X-radiator) flash X-ray facility. It was compiled to help facilitate experiment design and preparation for both the experimenter(s) and the SPHINX operational staff. The major areas covered include: Recording Systems Capabilities, Recording System Cable Plant, Physical Dimensions of SPHINX and the SPHINX Test Cell, SPHINX Operating Parameters and Modes, Dose Rate Map, Experiment Safety Approval Form, and a Feedback Questionnaire. This package will be updated as the SPHINX facilities and capabilities are enhanced.

  20. AN ADA NAMELIST PACKAGE

    NASA Technical Reports Server (NTRS)

    Klumpp, A. R.

    1994-01-01

The Ada Namelist Package, developed for the Ada programming language, enables a calling program to read and write FORTRAN-style namelist files. A namelist file consists of any number of assignment statements in any order. Features of the Ada Namelist Package are: the handling of any combination of user-defined types; the ability to read vectors, matrices, and slices of vectors and matrices; the handling of mismatches between variables in the namelist file and those in the programmed list of namelist variables; and the ability to avoid searching the entire input file for each variable. The principal user benefits of this software are the following: the ability to write namelist-readable files, the ability to detect most file errors in the initialization phase, a package organization that reduces the number of instantiated units to a few packages rather than to many subprograms, a reduced number of restrictions, and an increased execution speed. The Ada Namelist Package reads data from an input file into variables declared within a user program. It then writes data from the user program to an output file, printer, or display. The input file contains a sequence of assignment statements in arbitrary order. The output is in namelist-readable form. There is a one-to-one correspondence between namelist I/O statements executed in the user program and variables read or written. Nevertheless, in the input file, mismatches are allowed between assignment statements in the file and the namelist read procedure statements in the user program. The Ada Namelist Package itself is non-generic. However, it has a group of nested generic packages following the non-generic opening portion. The opening portion declares a variety of user-accessible constants, variables, and subprograms. The subprograms include procedures for initializing namelists for reading, and for reading and writing strings, as well as functions for analyzing the content of the current dataset and diagnosing errors.
Two nested

  1. PARAMESH: A Parallel Adaptive Mesh Refinement Community Toolkit

    NASA Technical Reports Server (NTRS)

    MacNeice, Peter; Olson, Kevin M.; Mobarry, Clark; deFainchtein, Rosalinda; Packer, Charles

    1999-01-01

    In this paper, we describe a community toolkit which is designed to provide parallel support with adaptive mesh capability for a large and important class of computational models, those using structured, logically cartesian meshes. The package of Fortran 90 subroutines, called PARAMESH, is designed to provide an application developer with an easy route to extend an existing serial code which uses a logically cartesian structured mesh into a parallel code with adaptive mesh refinement. Alternatively, in its simplest use, and with minimal effort, it can operate as a domain decomposition tool for users who want to parallelize their serial codes, but who do not wish to use adaptivity. The package can provide them with an incremental evolutionary path for their code, converting it first to uniformly refined parallel code, and then later if they so desire, adding adaptivity.
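The simplest use named above, domain decomposition of a uniform logically cartesian mesh, can be illustrated in miniature. The sketch below (a hypothetical toy in Python, not PARAMESH's Fortran 90 API) splits a 1D mesh into per-process blocks and pads each block with guard cells copied from its neighbors, the data motion a stencil update needs at block boundaries.

```python
# Toy 1D domain decomposition with guard cells, illustrating the idea
# (not the PARAMESH interface itself).

def decompose(cells, nprocs):
    """Split global cell values into contiguous blocks, one per process."""
    size, rem = divmod(len(cells), nprocs)
    blocks, start = [], 0
    for p in range(nprocs):
        stop = start + size + (1 if p < rem else 0)  # spread the remainder
        blocks.append(cells[start:stop])
        start = stop
    return blocks

def exchange_guards(blocks, boundary=0.0):
    """Return blocks padded with one guard cell copied from each neighbor."""
    padded = []
    for p, block in enumerate(blocks):
        left = blocks[p - 1][-1] if p > 0 else boundary
        right = blocks[p + 1][0] if p < len(blocks) - 1 else boundary
        padded.append([left] + block + [right])
    return padded

field = [float(i) for i in range(10)]
blocks = decompose(field, 3)
print(blocks)                  # [[0.0, 1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]]
print(exchange_guards(blocks))
```

Adaptive refinement adds a block hierarchy on top of this picture: guard-cell filling then also interpolates between blocks at different refinement levels.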

  2. Some parallel algorithms on the four processor Cray X-MP4 supercomputer

    SciTech Connect

    Kincaid, D.R.; Oppe, T.C.

    1988-05-01

Three numerical studies of parallel algorithms on a four-processor Cray X-MP4 supercomputer are presented. These numerical experiments involve the following: a parallel version of ITPACKV 2C, a package for solving large sparse linear systems; a parallel version of the conjugate gradient method with line Jacobi preconditioning; and several parallel algorithms for computing the LU factorization of dense matrices. 27 refs., 4 tabs.
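The preconditioned conjugate gradient method studied here (and in the PCG packages above) can be sketched compactly. The serial pure-Python version below uses a pointwise Jacobi (diagonal) preconditioner, the simplest analogue of the line Jacobi preconditioning in the paper; the matrix and function names are illustrative.

```python
# Sketch of preconditioned conjugate gradient (PCG) with a Jacobi
# (diagonal) preconditioner M = diag(A), for a symmetric positive
# definite system A x = b. Serial and dense for clarity.

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def matvec(A, x):
    return [dot(row, x) for row in A]

def pcg_jacobi(A, b, tol=1e-10, max_iter=100):
    n = len(b)
    x = [0.0] * n
    r = b[:]                                   # r = b - A x0 with x0 = 0
    z = [r[i] / A[i][i] for i in range(n)]     # apply M^{-1} = diag(A)^{-1}
    p = z[:]
    rz = dot(r, z)
    for _ in range(max_iter):
        Ap = matvec(A, p)
        alpha = rz / dot(p, Ap)
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        if dot(r, r) ** 0.5 < tol:
            break
        z = [r[i] / A[i][i] for i in range(n)]
        rz_new = dot(r, z)
        p = [z[i] + (rz_new / rz) * p[i] for i in range(n)]
        rz = rz_new
    return x

# Small 1D Poisson-like SPD system; exact solution is [1, 1, 1].
A = [[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]]
b = [3.0, 2.0, 3.0]
print(pcg_jacobi(A, b))
```

In the parallel setting discussed throughout this section, the dot products become global reductions and the matrix-vector product requires only neighbor communication, which is where the communication/computation trade-offs arise.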

  3. Parallel flow diffusion battery

    DOEpatents

    Yeh, Hsu-Chi; Cheng, Yung-Sung

    1984-08-07

    A parallel flow diffusion battery for determining the mass distribution of an aerosol has a plurality of diffusion cells mounted in parallel to an aerosol stream, each diffusion cell including a stack of mesh wire screens of different density.

  4. Parallel flow diffusion battery

    DOEpatents

    Yeh, H.C.; Cheng, Y.S.

    1984-01-01

    A parallel flow diffusion battery for determining the mass distribution of an aerosol has a plurality of diffusion cells mounted in parallel to an aerosol stream, each diffusion cell including a stack of mesh wire screens of different density.

  5. Parallel processing ITS

    SciTech Connect

    Fan, W.C.; Halbleib, J.A. Sr.

    1996-09-01

This report provides a users' guide for parallel processing ITS on a UNIX workstation network, a shared-memory multiprocessor or a massively-parallel processor. The parallelized version of ITS is based on a master/slave model with message passing. Parallel issues such as random number generation, load balancing, and communication software are briefly discussed. Timing results for example problems are presented for demonstration purposes.

  6. Waste disposal package

    DOEpatents

    Smith, M.J.

    1985-06-19

This is a claim for a waste disposal package including an inner or primary canister for containing hazardous and/or radioactive wastes. The primary canister is encapsulated by an outer or secondary barrier formed of a porous ceramic material to control ingress of water to the canister and the release rate of wastes upon breach of the canister. 4 figs.

  7. Radioactive waste disposal package

    DOEpatents

    Lampe, Robert F.

    1986-11-04

    A radioactive waste disposal package comprising a canister for containing vitrified radioactive waste material and a sealed outer shell encapsulating the canister. A solid block of filler material is supported in said shell and convertible into a liquid state for flow into the space between the canister and outer shell and subsequently hardened to form a solid, impervious layer occupying such space.

  8. Radioactive waste disposal package

    DOEpatents

    Lampe, Robert F.

    1986-01-01

    A radioactive waste disposal package comprising a canister for containing vitrified radioactive waste material and a sealed outer shell encapsulating the canister. A solid block of filler material is supported in said shell and convertible into a liquid state for flow into the space between the canister and outer shell and subsequently hardened to form a solid, impervious layer occupying such space.

  9. Automatic Differentiation Package

    SciTech Connect

Gay, David M.; Phipps, Eric; Bartlett, Roscoe

    2007-03-01

Sacado is an automatic differentiation package for C++ codes using operator overloading and C++ templating. Sacado provides forward, reverse, and Taylor polynomial automatic differentiation classes and utilities for incorporating these classes into C++ codes. Users can compute derivatives of computations arising in engineering and scientific applications, including nonlinear equation solving, time integration, sensitivity analysis, stability analysis, optimization, and uncertainty quantification.
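The operator-overloading approach to forward-mode automatic differentiation can be shown in a few lines. The sketch below uses Python dual numbers rather than Sacado's C++ template classes; the class and function names are illustrative, but the propagation rules are the standard ones.

```python
# Forward-mode automatic differentiation via operator overloading:
# each Dual carries a value and a derivative, and the overloaded
# operators propagate both through the computation.

class Dual:
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__

    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val,
                    self.der * o.val + self.val * o.der)  # product rule
    __rmul__ = __mul__

def f(x):
    return 3 * x * x + 2 * x + 1   # f'(x) = 6x + 2

x = Dual(2.0, 1.0)   # seed the derivative dx/dx = 1
y = f(x)
print(y.val, y.der)  # 17.0 14.0
```

Sacado's forward AD classes work the same way, with templating letting the same source code run on plain doubles or on derivative-carrying types.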

  10. Learning Activity Package, Algebra.

    ERIC Educational Resources Information Center

    Evans, Diane

    A set of ten teacher-prepared Learning Activity Packages (LAPs) in beginning algebra and nine in intermediate algebra, these units cover sets, properties of operations, number systems, open expressions, solution sets of equations and inequalities in one and two variables, exponents, factoring and polynomials, relations and functions, radicals,…

  11. YWCA Vocational Readiness Package.

    ERIC Educational Resources Information Center

    Scott, Jeanne

    This document outlines, in detail, the Vocational Readiness Package for young girls, which is a week-long program utilizing simulation games and role-playing, while employing peer group counseling techniques to dramatize the realities concerning women in marriage and careers today. After three years of using this program, the authors have compiled…

  12. Radiographic film package

    SciTech Connect

    Muylle, W. E.

    1985-08-27

    A radiographic film package for non-destructive testing, comprising a radiographic film sheet, an intensifying screen with a layer of lead bonded to a paper foil, and a vacuum heat-sealed wrapper with a layer of aluminum and a heat-sealed easy-peelable thermoplastic layer.

  13. Project Information Packages Kit.

    ERIC Educational Resources Information Center

    RMC Research Corp., Mountain View, CA.

    Presented are an overview booklet, a project selection guide, and six Project Information Packages (PIPs) for six exemplary projects serving underachieving students in grades k through 9. The overview booklet outlines the PIP projects and includes a chart of major project features. A project selection guide reviews the PIP history, PIP contents,…

  14. Packaging, transportation of LLW

    SciTech Connect

    Shelton, P.

    1994-12-31

    This presentation is an overview of the regulations and requirements for the packaging and transportation of low-level radioactive wastes. United States Environmental Protection Agency and Department of Transportation regulations governing the classification of wastes and the transport documentation are also described.

  15. Nutrition Learning Packages.

    ERIC Educational Resources Information Center

    World Health Organization, Geneva (Switzerland).

    This book presents nine packages of learning materials for trainers to use in teaching community health workers to carry out the nutrition element of their jobs. Lessons are intended to help health workers acquire skill in presenting to communities the principles and practice of good nutrition. Responding to the most common causes of poor…

  16. Electro-Microfluidic Packaging

    NASA Astrophysics Data System (ADS)

    Benavides, G. L.; Galambos, P. C.

    2002-06-01

    There are many examples of electro-microfluidic products that require cost effective packaging solutions. Industry has responded to a demand for products such as drop ejectors, chemical sensors, and biological sensors. Drop ejectors have consumer applications such as ink jet printing and scientific applications such as patterning self-assembled monolayers or ejecting picoliters of expensive analytes/reagents for chemical analysis. Drop ejectors can be used to perform chemical analysis, combinatorial chemistry, drug manufacture, drug discovery, drug delivery, and DNA sequencing. Chemical and biological micro-sensors can sniff the ambient environment for traces of dangerous materials such as explosives, toxins, or pathogens. Other biological sensors can be used to improve world health by providing timely diagnostics and applying corrective measures to the human body. Electro-microfluidic packaging can easily represent over fifty percent of the product cost and, as with Integrated Circuits (IC), the industry should evolve to standard packaging solutions. Standard packaging schemes will minimize cost and bring products to market sooner.

  17. High Efficiency Integrated Package

    SciTech Connect

    Ibbetson, James

    2013-09-15

    Solid-state lighting based on LEDs has emerged as a superior alternative to inefficient conventional lighting, particularly incandescent. LED lighting can lead to 80 percent energy savings; can last 50,000 hours – 2-50 times longer than most bulbs; and contains no toxic lead or mercury. However, to enable mass adoption, particularly at the consumer level, the cost of LED luminaires must be reduced by an order of magnitude while achieving superior efficiency, light quality and lifetime. To become viable, energy-efficient replacement solutions must deliver system efficacies of ≥ 100 lumens per watt (LPW) with excellent color rendering (CRI > 85) at a cost that enables payback cycles of two years or less for commercial applications. This development will enable significant site energy savings as it targets commercial and retail lighting applications that are most sensitive to the lifetime operating costs with their extended operating hours per day. If costs are reduced substantially, dramatic energy savings can be realized by replacing incandescent lighting in the residential market as well. In light of these challenges, Cree proposed to develop a multi-chip integrated LED package with an output of > 1000 lumens of warm white light operating at an efficacy of at least 128 LPW with a CRI > 85. This product will serve as the light engine for replacement lamps and luminaires. At the end of the proposed program, this integrated package was to be used in a proof-of-concept lamp prototype to demonstrate the component’s viability in a common form factor. During this project Cree SBTC developed an efficient, compact warm-white LED package with an integrated remote color down-converter. Via a combination of intensive optical, electrical, and thermal optimization, a package design was obtained that met nearly all project goals. This package emitted 1295 lm under instant-on, room-temperature testing conditions, with an efficacy of 128.4 lm/W at a color temperature of ~2873

  18. Research in parallel computing

    NASA Technical Reports Server (NTRS)

    Ortega, James M.; Henderson, Charles

    1994-01-01

    This report summarizes work on parallel computations for NASA Grant NAG-1-1529 for the period 1 Jan. - 30 June 1994. Short summaries on highly parallel preconditioners, target-specific parallel reductions, and simulation of delta-cache protocols are provided.

  19. Parallel simulation today

    NASA Technical Reports Server (NTRS)

    Nicol, David; Fujimoto, Richard

    1992-01-01

    This paper surveys topics that presently define the state of the art in parallel simulation. Included in the tutorial are discussions on new protocols, mathematical performance analysis, time parallelism, hardware support for parallel simulation, load balancing algorithms, and dynamic memory management for optimistic synchronization.

  20. An Object-Oriented Serial DSMC Simulation Package

    NASA Astrophysics Data System (ADS)

    Liu, Hongli; Cai, Chunpei

    2011-05-01

    A newly developed three-dimensional direct simulation Monte Carlo (DSMC) simulation package, named GRASP ("Generalized Rarefied gAs Simulation Package"), is reported in this paper. The package utilizes the concept of a simulation engine along with many C++ features and software design patterns, and it has an open architecture that benefits further development and maintenance of the code. To reduce the engineering time for three-dimensional models, a hybrid grid scheme, combined with a flexible C++ data structure, is implemented in this package. The scheme uses a local, cell-based data structure to achieve high performance on workstation processors. This data structure allows the DSMC algorithm to be parallelized very efficiently with domain decomposition, and it provides much flexibility in terms of grid types. The package can use traditional structured, unstructured, or hybrid grids within the framework of a single code to model arbitrarily complex geometries and to simulate rarefied gas flows. Benchmark test cases indicate that the package has satisfactory accuracy for complex rarefied gas flows.

  1. Parallel algorithm development

    SciTech Connect

    Adams, T.F.

    1996-06-01

    Rapid changes in parallel computing technology are causing significant changes in the strategies used for parallel algorithm development. One approach is simply to write computer code in a standard language like FORTRAN 77, with the expectation that the compiler will produce executable code that runs in parallel. The alternatives are: (1) to build explicit message passing directly into the source code; or (2) to write source code without explicit reference to message passing or parallelism, and use a general communications library to provide efficient parallel execution. Application of these strategies is illustrated with examples of codes currently under development.
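
    Strategy (1), explicit message passing written directly into the source, can be sketched with threads and queues standing in for nodes and channels (an illustrative analogy only, not the FORTRAN message-passing systems the report describes):

```python
# Minimal sketch of explicit message passing: each worker "node"
# computes on its own slice of the data and sends its partial result
# to node 0, which performs the final reduction.
import threading
import queue

def worker(rank, chunk, master_inbox):
    local_sum = sum(chunk)                 # local computation on this node
    master_inbox.put((rank, local_sum))    # explicit "send" to node 0

master_inbox = queue.Queue()
data = list(range(100))
chunks = [data[:50], data[50:]]

threads = [threading.Thread(target=worker, args=(r, c, master_inbox))
           for r, c in enumerate(chunks)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# node 0 "receives" one message per worker and reduces
total = sum(master_inbox.get()[1] for _ in threads)
print(total)
```

    The essential point of the strategy is visible even in this toy: the sends and receives are written explicitly into the program rather than being inferred by a compiler or hidden behind a communications library.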

  2. Food packaging history and innovations.

    PubMed

    Risch, Sara J

    2009-09-23

    Food packaging has evolved from simply a container to hold food to something today that can play an active role in food quality. Many packages are still simply containers, but they have properties that have been developed to protect the food. These include barriers to oxygen, moisture, and flavors. Active packaging, or that which plays an active role in food quality, includes some microwave packaging as well as packaging that has absorbers built in to remove oxygen from the atmosphere surrounding the product or to provide antimicrobials to the surface of the food. Packaging has allowed access to many foods year-round that otherwise could not be preserved. It is interesting to note that some packages have actually allowed the creation of new categories in the supermarket. Examples include microwave popcorn and fresh-cut produce, which owe their existence to the unique packaging that has been developed.

  3. Packaging legislation. Objectives and consequences.

    PubMed

    Christmann, H

    1995-05-01

    The recently published Directive on packaging and packaging waste makes new demands on the industry. This article highlights the key areas and raises some of the issues that must be confronted in the future.

  4. Parallel Atomistic Simulations

    SciTech Connect

    HEFFELFINGER,GRANT S.

    2000-01-18

    Algorithms developed to enable the use of atomistic molecular simulation methods with parallel computers are reviewed. Methods appropriate for bonded as well as non-bonded (and charged) interactions are included. While strategies for parallelizing molecular simulations have been developed for the full variety of atomistic simulation methods, molecular dynamics and Monte Carlo have received the most attention. Three main types of parallel molecular dynamics simulations have been developed: the replicated data decomposition, the spatial decomposition, and the force decomposition. For Monte Carlo simulations, the parallel algorithms developed can be divided into two categories: those that require a modified Markov chain and those that do not. Parallel algorithms developed for other simulation methods, such as Gibbs ensemble Monte Carlo, grand canonical molecular dynamics, and Monte Carlo methods for protein structure determination, are also reviewed, and issues such as how to measure parallel efficiency, especially in the case of parallel Monte Carlo algorithms with modified Markov chains, are discussed.
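
    Of the three decompositions named above, the spatial decomposition is the easiest to sketch: each processor owns a region of the simulation box and the atoms currently inside it. A toy 2D version follows (illustrative only; production codes add ghost-atom exchange, neighbor lists, and periodic migration of atoms between subdomains):

```python
# Sketch of a spatial decomposition: map each atom to the rectangular
# subdomain (processor) that owns its current position.
def spatial_decompose(positions, box_len, procs_per_side):
    """Return {(px, py): [atom indices]} for a square processor grid."""
    cell = box_len / procs_per_side
    owners = {}
    for i, (x, y) in enumerate(positions):
        px = min(int(x // cell), procs_per_side - 1)  # clamp atoms on the edge
        py = min(int(y // cell), procs_per_side - 1)
        owners.setdefault((px, py), []).append(i)
    return owners

atoms = [(0.1, 0.2), (0.9, 0.9), (0.4, 0.6), (0.8, 0.1)]
owners = spatial_decompose(atoms, box_len=1.0, procs_per_side=2)
print(owners)
```

    Because forces are short-ranged in many models, each processor then only needs position data from atoms near its subdomain boundary, which is what makes this decomposition communication-efficient at scale.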

  5. Sustainable Library Development Training Package

    ERIC Educational Resources Information Center

    Peace Corps, 2012

    2012-01-01

    This Sustainable Library Development Training Package supports Peace Corps' Focus In/Train Up strategy, which was implemented following the 2010 Comprehensive Agency Assessment. Sustainable Library Development is a technical training package in Peace Corps programming within the Education sector. The training package addresses the Volunteer…

  6. Anticounterfeit packaging technologies

    PubMed Central

    Shah, Ruchir Y.; Prajapati, Prajesh N.; Agrawal, Y. K.

    2010-01-01

    Packaging is the coordinated system that encloses and protects the dosage form. Counterfeit drugs are a major cause of morbidity, mortality, and loss of public trust in the healthcare system. High prices and well-known brands make the pharmaceutical market especially vulnerable; top-priority targets include cardiovascular, obesity, and antihyperlipidemic drugs, as well as drugs like sildenafil. Packaging includes overt and covert technologies such as barcodes, holograms, sealing tapes, and radio frequency identification devices to preserve the integrity of the pharmaceutical product. To date, however, all available techniques are synthetic and, although they provide considerable protection against counterfeiting, have certain limitations that may be overcome by natural approaches and by applying the principles of nanotechnology. PMID:22247875

  7. RH Packaging Program Guidance

    SciTech Connect

    Washington TRU Solutions, LLC

    2003-08-25

    The purpose of this program guidance document is to provide technical requirements for use, operation, inspection, and maintenance of the RH-TRU 72-B Waste Shipping Package and directly related components. This document complies with the requirements specified in the RH-TRU 72-B Safety Analysis Report for Packaging (SARP) and Nuclear Regulatory Commission (NRC) Certificate of Compliance (C of C) 9212. If there is a conflict between this document and the SARP and/or C of C, the SARP and/or C of C shall govern. The C of C states: "...each package must be prepared for shipment and operated in accordance with the procedures described in Chapter 7.0, 'Operating Procedures,' of the application." It further states: "...each package must be tested and maintained in accordance with the procedures described in Chapter 8.0, 'Acceptance Tests and Maintenance Program,' of the application." Chapter 9.0 of the SARP tasks the Waste Isolation Pilot Plant (WIPP) Management and Operating (M&O) contractor with assuring that the packaging is used in accordance with the requirements of the C of C. Because the packaging is NRC approved, users need to be familiar with 10 CFR § 71.11, "Deliberate Misconduct." Any time a user suspects or has indications that the conditions of approval in the C of C were not met, the Carlsbad Field Office (CBFO) shall be notified immediately. CBFO will evaluate the issue and notify the NRC if required. This document details the instructions to be followed to operate, maintain, and test the RH-TRU 72-B packaging, and it standardizes those instructions for all users. Users shall follow these instructions; doing so assures that operations are safe and meet the requirements of the SARP. This document is available on the Internet at: http://www.ws/library/t2omi/t2omi.htm. Users are responsible for ensuring they are using the current revision and change notices. Sites may prepare their own document using the word

  8. TIDEV: Tidal Evolution package

    NASA Astrophysics Data System (ADS)

    Cuartas-Restrepo, P.; Melita, M.; Zuluaga, J.; Portilla, B.; Sucerquia, M.; Miloni, O.

    2016-09-01

    TIDEV (Tidal Evolution package) calculates the evolution of rotation for tidally interacting bodies using Efroimsky-Makarov-Williams (EMW) formalism. The package integrates tidal evolution equations and computes the rotational and dynamical evolution of a planet under tidal and triaxial torques. TIDEV accounts for the perturbative effects due to the presence of the other planets in the system, especially the secular variations of the eccentricity. Bulk parameters include the mass and radius of the planet (and those of the other planets involved in the integration), the size and mass of the host star, the Maxwell time and Andrade's parameter. TIDEV also calculates the time scale that a planet takes to be tidally locked as well as the periods of rotation reached at the end of the spin-orbit evolution.
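
    The locking-timescale calculation mentioned above can be illustrated with a deliberately simplified toy model; this is not the EMW formalism TIDEV integrates, just an assumed exponential relaxation of the spin rate toward the orbital mean motion with a made-up tidal timescale tau.

```python
# Toy despinning model, NOT the EMW formalism: dΩ/dt = -(Ω - n)/tau,
# so the spin rate relaxes exponentially toward synchronous rotation.
import math

def spin_rate(omega0, n_orbital, tau, t):
    """Spin rate at time t under the toy relaxation law."""
    return n_orbital + (omega0 - n_orbital) * math.exp(-t / tau)

def lock_time(omega0, n_orbital, tau, tol=1e-3):
    """Time for the spin to come within tol of the mean motion n."""
    return tau * math.log(abs(omega0 - n_orbital) / tol)

# hypothetical numbers: a fast-spinning planet, tau in arbitrary units
print(round(lock_time(omega0=10.0, n_orbital=1.0, tau=1.0), 3))
```

    Real tidal torques depend on rheology (the Maxwell time and Andrade parameter listed above), so the actual approach to spin-orbit equilibrium is far richer than this single-exponential sketch, including capture into non-synchronous resonances.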

  9. Fair Package Assignment

    NASA Astrophysics Data System (ADS)

    Lahaie, Sébastien; Parkes, David C.

    We consider the problem of fair allocation in the package assignment model, where a set of indivisible items, held by a single seller, must be efficiently allocated to agents with quasi-linear utilities. A fair assignment is one that is efficient and envy-free. We consider a model where bidders have superadditive valuations, meaning that items are pure complements. Our central result is that core outcomes are fair, and even coalition-fair, over this domain, while fair distributions may not even exist for general valuations. Of relevance to auction design, we also establish that the core is equivalent to the set of anonymous-price competitive equilibria, and that superadditive valuations are a maximal domain guaranteeing the existence of anonymous-price competitive equilibrium. Our results are analogs of core equivalence results for linear prices in the standard assignment model, and for nonlinear, non-anonymous prices in the package assignment model with general valuations.
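
    The envy-freeness condition used above has a simple operational form: with quasi-linear utilities, no agent should prefer another agent's bundle-and-payment to its own. A minimal sketch with made-up superadditive toy valuations (the numbers are illustrative, not from the paper):

```python
# Check envy-freeness of a package assignment with payments.
def is_envy_free(valuations, allocation, payments):
    """True iff every agent weakly prefers its own bundle/payment."""
    utility = {a: valuations[a](allocation[a]) - payments[a]
               for a in allocation}
    for a in allocation:
        for b in allocation:
            if valuations[a](allocation[b]) - payments[b] > utility[a] + 1e-9:
                return False
    return True

# Superadditive toy valuations over items {1, 2}: the pair is worth
# more than the sum of the singletons (pure complements).
def val_A(bundle):
    return {(): 0, (1,): 2, (2,): 2, (1, 2): 10}[tuple(sorted(bundle))]
def val_B(bundle):
    return {(): 0, (1,): 1, (2,): 1, (1, 2): 4}[tuple(sorted(bundle))]

alloc = {"A": (1, 2), "B": ()}
pays = {"A": 5, "B": 0}
print(is_envy_free({"A": val_A, "B": val_B}, alloc, pays))
```

    With the price of the full package set at 5, neither agent envies the other; dropping A's payment to 0 would make B envy A's bundle, violating fairness.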

  10. Software packager user's guide

    NASA Technical Reports Server (NTRS)

    Callahan, John R.

    1995-01-01

    Software integration is a growing area of concern for many programmers and software managers because the need to build new programs quickly from existing components is greater than ever. This includes building versions of software products for multiple hardware platforms and operating systems, building programs from components written in different languages, and building systems from components that must execute on different machines in a distributed network. The goal of software integration is to make building new programs from existing components more seamless -- programmers should pay minimal attention to the underlying configuration issues involved. Libraries of reusable components and classes are important tools but only partial solutions to software development problems. Even though software components may have compatible interfaces, there may be other reasons, such as differences between execution environments, why they cannot be integrated. Often, components must be adapted or reimplemented to fit into another application because of implementation differences -- they are implemented in different programming languages, depend on different operating system resources, or must execute on different physical machines. The software packager is a tool that allows programmers to deal with interfaces between software components and ignore complex integration details. The packager takes modular descriptions of the structure of a software system, written in the package specification language, and produces an integration program in the form of a makefile. If complex integration tools, such as remote procedure call stubs, are needed to integrate a set of components, their use is inferred automatically by the packager and the stub generation tools are invoked in the corresponding makefile. The programmer deals only with the components themselves and not the details of how to build the system on any given platform.
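
    The packager's core idea, turning a declarative component description into a makefile, can be sketched as follows; the spec format and helper name here are invented for illustration and are far simpler than the actual package specification language:

```python
# Toy "packager": generate a makefile from a flat list of C sources.
# The real packager also handles cross-language components and stubs.
def make_makefile(target, components):
    objs = " ".join(c.replace(".c", ".o") for c in components)
    lines = [f"{target}: {objs}", f"\tcc -o {target} {objs}", ""]
    for c in components:
        obj = c.replace(".c", ".o")
        lines += [f"{obj}: {c}", f"\tcc -c {c}", ""]
    return "\n".join(lines)

print(make_makefile("app", ["main.c", "net.c"]))
```

    The programmer describes only the components; the link rule, the per-object compile rules, and their dependencies are derived mechanically, which is the same division of labor the packager provides at much larger scale.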

  11. Aquaculture information package

    SciTech Connect

    Boyd, T.; Rafferty, K.

    1998-08-01

    This package of information is intended to provide background information to developers of geothermal aquaculture projects. The material is divided into eight sections and includes information on market and price information for typical species, aquaculture water quality issues, typical species culture information, pond heat loss calculations, an aquaculture glossary, regional and university aquaculture offices and state aquaculture permit requirements. A bibliography containing 68 references is also included.

  12. Trilinos Web Interface Package

    SciTech Connect

    Hu, Jonathan; Phenow, Michael N.; Sala, Marzio; Tuminaro, Ray S.

    2006-09-01

    WebTrilinos is a scientific portal, a web-based environment for using several Trilinos packages through the web. If you are teaching sparse linear algebra, you can use WebTrilinos to present code snippets and simple scripts and let students execute them from their browsers. If you want to test linear algebra solvers, you can use the MatrixPortal module: simply select problems and options, then plot the results in clear graphs.

  13. The GITEWS ocean bottom sensor packages

    NASA Astrophysics Data System (ADS)

    Boebel, O.; Busack, M.; Flueh, E. R.; Gouretski, V.; Rohr, H.; Macrander, A.; Krabbenhoeft, A.; Motz, M.; Radtke, T.

    2010-08-01

    The German-Indonesian Tsunami Early Warning System (GITEWS) aims at reducing the risks posed by events such as the 26 December 2004 Indian Ocean tsunami. To minimize the lead time for tsunami alerts, to avoid false alarms, and to accurately predict tsunami wave heights, real-time observations of ocean bottom pressure from the deep ocean are required. As part of the GITEWS infrastructure, the parallel development of two ocean bottom sensor packages, PACT (Pressure based Acoustically Coupled Tsunameter) and OBU (Ocean Bottom Unit), was initiated. The sensor package requirements included bidirectional acoustic links between the bottom sensor packages and the hosting surface buoys, which are moored nearby. Furthermore, compatibility between these sensor systems and the overall GITEWS data-flow structure and command hierarchy was mandatory. While PACT aims at providing highly reliable, long-term bottom pressure data only, OBU is based on ocean bottom seismometers to concurrently record sea-floor motion, necessitating the highest data rates. This paper presents the technical design of PACT, OBU and the HydroAcoustic Modem (HAM.node) which is used by both systems, along with first results from instrument deployments off Indonesia.

  14. 78 FR 19007 - Certain Products Having Laminated Packaging, Laminated Packaging, and Components Thereof...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-28

    ... COMMISSION Certain Products Having Laminated Packaging, Laminated Packaging, and Components Thereof.... 1337, on behalf of Lamina Packaging Innovations LLC of Longview, Texas. An amended complaint was filed... importation of certain products having laminated packaging, laminated packaging, and components thereof...

  15. 21 CFR 355.20 - Packaging conditions.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... (toothpastes and tooth powders) packages shall not contain more than 276 milligrams (mg) total fluorine per... packages shall not contain more than 120 mg total fluorine per package. (3) Exception. Package...

  16. 21 CFR 355.20 - Packaging conditions.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... (toothpastes and tooth powders) packages shall not contain more than 276 milligrams (mg) total fluorine per... packages shall not contain more than 120 mg total fluorine per package. (3) Exception. Package...

  17. 21 CFR 355.20 - Packaging conditions.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... (toothpastes and tooth powders) packages shall not contain more than 276 milligrams (mg) total fluorine per... packages shall not contain more than 120 mg total fluorine per package. (3) Exception. Package...

  18. 21 CFR 355.20 - Packaging conditions.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... (toothpastes and tooth powders) packages shall not contain more than 276 milligrams (mg) total fluorine per... packages shall not contain more than 120 mg total fluorine per package. (3) Exception. Package...

  19. CALTRANS: A parallel, deterministic, 3D neutronics code

    SciTech Connect

    Carson, L.; Ferguson, J.; Rogers, J.

    1994-04-01

    Our efforts to parallelize the deterministic solution of the neutron transport equation have culminated in a new neutronics code, CALTRANS, which has full 3D capability. In this article, we describe the layout and algorithms of CALTRANS and present performance measurements of the code on a variety of platforms. Explicit implementations of the parallel algorithms of CALTRANS, using both the function calls of the Parallel Virtual Machine software package (PVM 3.2) and the Meiko CS-2 tagged message passing library (based on the Intel NX/2 interface), are provided in appendices.

  20. ISSUES ASSOCIATED WITH SAFE PACKAGING AND TRANSPORT OF NANOPARTICLES

    SciTech Connect

    Gupta, N.; Smith, A.

    2011-02-14

    Nanoparticles have long been recognized as hazardous substances by personnel working in the field. They are not, however, currently listed as a separate, distinct category of dangerous goods. As dangerous goods or hazardous substances, they require packaging and transportation practices that parallel the established practices for hazardous materials transport. Pending establishment of a distinct category for such materials by the Department of Transportation, existing consensus or industrial protocols must be followed. Action by DOT to establish appropriate packaging and transport requirements is recommended.

  1. Parallel digital forensics infrastructure.

    SciTech Connect

    Liebrock, Lorie M.; Duggan, David Patrick

    2009-10-01

    This report documents the architecture and implementation of a Parallel Digital Forensics (PDF) infrastructure. This infrastructure is necessary for supporting the design, implementation, and testing of new classes of parallel digital forensics tools. Digital forensics has become extremely difficult with data sets of one terabyte and larger; the only way to overcome the processing time for these large sets is to identify and develop new parallel algorithms for performing the analysis. To support algorithm research, a flexible base infrastructure is required. A candidate architecture for this base infrastructure was designed, instantiated, and tested by this project, in collaboration with New Mexico Tech. Previous infrastructures were not designed and built specifically for the development and testing of parallel algorithms. With the size of forensics data sets expected to keep increasing significantly, this type of infrastructure support is necessary for continued research in parallel digital forensics.

  2. Optimal segmentation and packaging process

    DOEpatents

    Kostelnik, Kevin M.; Meservey, Richard H.; Landon, Mark D.

    1999-01-01

    A process for improving packaging efficiency uses three-dimensional, computer-simulated models with various optimization algorithms to determine the optimal segmentation process and packaging configurations, subject to constraints including container limitations. The invention is applied to a process for decontaminating, decommissioning (D&D), and remediating a nuclear facility involving the segmentation and packaging of contaminated items in waste containers, in order to minimize the number of cuts, maximize packaging density, and reduce worker radiation exposure. A three-dimensional, computer-simulated facility model of the contaminated items is created and the contaminated items are differentiated. The optimal location, orientation, and sequence of the segmentation and packaging of the contaminated items is determined using the simulated model, the algorithms, and the constraints, including container limitations. The cut locations and orientations are transposed to the simulated model, and the contaminated items are then actually segmented and packaged. The segmentation and packaging may be simulated beforehand. In addition, the contaminated items may be cataloged and recorded.
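
    Although the patent's algorithms are not specified here, the container-packing constraint is closely related to classic bin packing; a first-fit-decreasing heuristic gives the flavor (an illustrative stand-in, not the patented method, and ignoring geometry, cut costs, and dose):

```python
# First-fit-decreasing bin packing: place segmented pieces into waste
# containers of fixed capacity, using few containers (heuristically).
def first_fit_decreasing(sizes, capacity):
    bins = []  # each bin is a list of piece sizes
    for size in sorted(sizes, reverse=True):
        for b in bins:
            if sum(b) + size <= capacity:
                b.append(size)   # first bin with room wins
                break
        else:
            bins.append([size])  # open a new container
    return bins

pieces = [4, 8, 1, 4, 2, 1]
print(first_fit_decreasing(pieces, capacity=10))
```

    The real problem adds 3D geometry, cut-count minimization, and radiation-exposure terms to the objective, which is why the patent uses simulation-driven optimization rather than a one-dimensional heuristic like this.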

  3. Introduction to Parallel Computing

    DTIC Science & Technology

    1992-05-01

    As parallel machines become the wave of the present, tools are increasingly needed to assist programmers in creating parallel tasks and coordinating their activities. Linda was designed to be such a tool, with three important goals in mind: to be portable, efficient, and easy to use.

  4. Parallel Wolff Cluster Algorithms

    NASA Astrophysics Data System (ADS)

    Bae, S.; Ko, S. H.; Coddington, P. D.

    The Wolff single-cluster algorithm is the most efficient method known for Monte Carlo simulation of many spin models. Due to the irregular size, shape and position of the Wolff clusters, this method does not easily lend itself to efficient parallel implementation, so that simulations using this method have thus far been confined to workstations and vector machines. Here we present two parallel implementations of this algorithm, and show that one gives fairly good performance on a MIMD parallel computer.
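
    For reference, the serial single-cluster update that the parallel implementations build on can be sketched as follows (a minimal 2D Ising version; the lattice size, temperature, and dictionary representation are illustrative choices, not the paper's code):

```python
# One Wolff single-cluster update for the 2D Ising model with
# periodic boundaries: grow a cluster of aligned spins from a random
# seed with bond probability p = 1 - exp(-2*beta), then flip it.
import math
import random

def wolff_step(spins, L, beta, rng=random):
    p_add = 1.0 - math.exp(-2.0 * beta)
    seed = (rng.randrange(L), rng.randrange(L))
    s0 = spins[seed]
    cluster = {seed}
    frontier = [seed]
    while frontier:
        x, y = frontier.pop()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nb = (nx % L, ny % L)            # periodic wrap-around
            if nb not in cluster and spins[nb] == s0 and rng.random() < p_add:
                cluster.add(nb)
                frontier.append(nb)
    for site in cluster:                     # flip the whole cluster at once
        spins[site] = -s0
    return len(cluster)

L = 8
spins = {(i, j): 1 for i in range(L) for j in range(L)}
random.seed(0)
size = wolff_step(spins, L, beta=0.44)
print(size)
```

    The irregular shape and size of the cluster grown here is exactly what makes the algorithm hard to parallelize: the set of sites touched is not known until the growth terminates.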

  5. PCLIPS: Parallel CLIPS

    NASA Technical Reports Server (NTRS)

    Hall, Lawrence O.; Bennett, Bonnie H.; Tello, Ivan

    1994-01-01

    A parallel version of CLIPS 5.1 has been developed to run on Intel Hypercubes. The user interface is the same as that of CLIPS, with some added commands to allow for parallel calls. A complete version of CLIPS runs on each node of the hypercube, and the system has been instrumented to display the time spent in the match, recognize, and act cycles on each node. Only rule-level parallelism is supported. Parallel commands enable the assertion and retraction of facts to and from the working memory of remote nodes. Parallel CLIPS was used to implement a knowledge-based command, control, communications, and intelligence (C(sup 3)I) system to demonstrate the fusion of high-level, disparate sources. We discuss the nature of the information fusion problem, our approach, and the implementation. Parallel CLIPS has also been used to run several benchmark parallel knowledge bases, such as one to set up a cafeteria. Results from running Parallel CLIPS with partitioned knowledge bases indicate that significant speed increases, including superlinear speedup in some cases, are possible.

  6. Application Portable Parallel Library

    NASA Technical Reports Server (NTRS)

    Cole, Gary L.; Blech, Richard A.; Quealy, Angela; Townsend, Scott

    1995-01-01

    The Application Portable Parallel Library (APPL) is a subroutine-based message-passing software library intended to provide a consistent interface to the variety of multiprocessor computers on the market today. It minimizes the effort needed to move an application program from one computer to another: the user develops the application once and can then easily move it from the parallel computer on which it was created to another parallel computer ("parallel computer" here also includes a heterogeneous collection of networked computers). APPL is written in C, with one FORTRAN 77 subroutine for UNIX-based computers, and is callable from application programs written in C or FORTRAN 77.

  7. Parallel Algorithms and Patterns

    SciTech Connect

    Robey, Robert W.

    2016-06-16

    This is a PowerPoint presentation on parallel algorithms and patterns. A parallel algorithm is a well-defined, step-by-step computational procedure that emphasizes concurrency to solve a problem; example problems include sorting, searching, optimization, and matrix operations. A parallel pattern is a computational step in a sequence of independent, potentially concurrent operations that occurs in diverse scenarios with some frequency; examples are reductions, prefix scans, and ghost cell updates. We only touch on parallel patterns in this presentation; the topic deserves its own detailed discussion, which Gabe Rockefeller would like to develop.
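
    One of the patterns named above, the prefix scan, can be sketched sequentially in a way that mirrors its parallel structure: each round of the while loop below corresponds to one step in which all element updates could run concurrently (a Hillis-Steele-style scan, written here as an assumption about the pattern, not code from the presentation):

```python
# Hillis-Steele inclusive prefix scan, written sequentially. Each
# "round" snapshots the array and combines elements a fixed shift
# apart; within a round, every update is independent and could run
# in parallel, giving O(log n) rounds.
def inclusive_scan(values, op=lambda a, b: a + b):
    out = list(values)
    shift = 1
    while shift < len(out):
        prev = list(out)                 # snapshot = one parallel round
        for i in range(shift, len(out)):
            out[i] = op(prev[i - shift], prev[i])
        shift *= 2
    return out

print(inclusive_scan([3, 1, 4, 1, 5]))  # [3, 4, 8, 9, 14]
```

    The same skeleton computes max-scans or product-scans by swapping `op`, which is why the scan is treated as a reusable pattern rather than a one-off algorithm.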

  8. Laboratory Measurements of Synthetic Pyroxenes and their Mixtures with Iron Sulfides as Inorganic Refractory Analogues for Rosetta/VIRTIS' Surface Composition Analysis of 67P/CG

    NASA Astrophysics Data System (ADS)

    Markus, Kathrin; Arnold, Gabriele; Moroz, Ljuba; Henckel, Daniela; Kappel, David; Capaccioni, Fabrizio; Filacchione, Gianrico; Schmitt, Bernard; Tosi, Federico; Érard, Stéphane; Bockelee-Morvan, Dominique; Leyrat, Cedric; VIRTIS Team

    2016-10-01

    The Visible and InfraRed Thermal Imaging Spectrometer VIRTIS on board Rosetta provided 0.25-5.1 µm spectra of 67P/CG's surface (Capaccioni et al., 2015). Thermally corrected reflectance spectra display a low albedo of 0.06 at 0.65 µm, different red VIS and IR spectral slopes, and a broad 3.2 µm band. This absorption feature is due to refractory surface constituents attributed to organic components, but other refractory constituents influence albedo and spectral slopes. Possible contributions of inorganic components to spectral characteristics and spectral variations across the surface should be understood based on laboratory studies and spectral modeling. Although a wide range of silicate compositions was found in "cometary" anhydrous IDPs and cometary dust, Mg-rich crystalline mafic minerals are the dominant silicate components. A large fraction of silicate grains are Fe-free enstatites and forsterites that are not found in terrestrial rocks but can be synthesized in order to provide a basis for laboratory studies and comparison with VIRTIS data. We report the results of the synthesis, analyses, and spectral reflectance measurements of Fe-free low-Ca pyroxenes (ortho- and clinoenstatites). These minerals are generally very bright and almost spectrally featureless. However, even trace amounts of Fe-ions produce a significant decrease in the near-UV reflectance and hence can contribute to slope variations. Iron sulfides (troilite, pyrrhotite) are among the most plausible phases responsible for the low reflectance of 67P's surface from the VIS to the NIR. The darkening efficiency of these opaque phases is strongly particle-size dependent. Here we present a series of reflectance spectra of fine-grained synthetic enstatite powders mixed in various proportions with iron sulfide powders. The influence of dark sulfides on reflectance in the near-UV to near-IR spectral ranges is investigated. This study can contribute to understanding the shape of reflectance spectra of 67P

  9. Distribution of H2O and CO2 in the inner coma of 67P/CG as observed by VIRTIS-M onboard Rosetta

    NASA Astrophysics Data System (ADS)

    Capaccioni, F.

    2015-10-01

    VIRTIS (Visible, Infrared and Thermal Imaging Spectrometers) is a dual-channel spectrometer. VIRTIS-M (M for Mapper) is a hyperspectral imager covering a wide spectral range with two detectors: a CCD (VIS) ranging from 0.25 through 1.0 μm and an HgCdTe detector (IR) covering the 1.0 through 5.1 μm region. VIRTIS-M uses a slit and a scan mirror to generate images with a spatial resolution of 250 μrad over a FOV of 64 mrad. The second channel is VIRTIS-H (H for High resolution), a point spectrometer with high spectral resolution (λ/Δλ=3000@3 μm) in the range 2-5 μm [1]. The VIRTIS instrument has been used to investigate the molecular composition of the coma of 67P/CG by observing resonant fluorescent excitation in the 2 to 5 μm spectral region. The spectrum consists of emission bands superimposed on a background continuum; the strongest features are the bands of H2O at 2.7 μm and of CO2 at 4.27 μm [1]. The high spectral resolution of VIRTIS-H provides a detailed description of the fluorescent bands, while the mapping capability of VIRTIS-M extends the coverage in the spatial dimension to map and monitor the abundance of water and carbon dioxide in space and time. We have already reported [2,3,4] some preliminary VIRTIS observations of H2O and CO2 in the coma. In the present work we perform a systematic mapping of the distribution and variability of these molecules using VIRTIS-M measurements of their band areas. All spectra were carefully selected to avoid contamination by nucleus radiance, and a median filter was applied to the spatial dimensions of each data cube to minimize pixel-to-pixel residual variability. This comes at the expense of some reduction in spatial resolution, which nevertheless remains on the order of a few tens of metres and is thus adequate for studying the spatial distribution of the volatiles. Typical spectra are shown in Figure 1

  10. Packaging - Materials review

    SciTech Connect

    Herrmann, Matthias

    2014-06-16

    Nowadays, a large number of different electrochemical energy storage systems are known. In the last two decades their development was strongly driven by a continuously growing market for portable electronic devices (e.g. cellular phones, laptop computers, camcorders, cameras, tools). Intensive efforts are currently under way to develop systems for the automotive industry within the framework of electrically propelled mobility (e.g. hybrid electric vehicles, plug-in hybrid electric vehicles, full electric vehicles) and also for the energy storage market (e.g. electrical grid stability, renewable energies). Besides the different systems (cell chemistries), electrochemical cells and batteries have been developed and are offered in many shapes, sizes and designs, in order to meet the performance and design requirements of widespread applications. Proper packaging is thereby one important technological step in designing optimal, reliable and safe batteries for operation. In this contribution, current packaging approaches for cells and batteries, together with the corresponding materials, are discussed. The focus is on rechargeable systems for industrial applications (i.e. alkaline systems, lithium-ion, lead-acid). In principle, four different cell types (shapes) can be identified - button, cylindrical, prismatic and pouch. Cell size can either be in accordance with international (e.g. International Electrotechnical Commission, IEC) or other standards or can meet application-specific dimensions. Since the cell housing or container, terminals and, if necessary, safety installations are inactive (non-reactive) materials that reduce the energy density of the battery, the development of low-weight packages is a challenging task. In addition, other requirements have to be fulfilled: mechanical stability and durability, sealing (e.g. a high permeation barrier against humidity for lithium-ion technology), high packing efficiency, possible installation of safety devices (current interrupt device

  11. Packaging - Materials review

    NASA Astrophysics Data System (ADS)

    Herrmann, Matthias

    2014-06-01

    Nowadays, a large number of different electrochemical energy storage systems are known. In the last two decades development was driven strongly by the continuously growing market for portable electronic devices (e.g. cellular phones, laptop computers, camcorders, cameras, tools). Intensive efforts are currently under way to develop systems for the automotive industry within the framework of electrically propelled mobility (e.g. hybrid electric vehicles, plug-in hybrid electric vehicles, full electric vehicles) and also for the energy storage market (e.g. electrical grid stability, renewable energies). Besides the different systems (cell chemistries), electrochemical cells and batteries have been developed and are offered in many shapes, sizes and designs in order to meet the performance and design requirements of widespread applications. Proper packaging is thereby one important technological step in designing optimum, reliable and safe batteries for operation. In this contribution, current packaging approaches for cells and batteries, together with the corresponding materials, are discussed. The focus is on rechargeable systems for industrial applications (i.e. alkaline systems, lithium-ion, lead-acid). In principle, four different cell types (shapes) can be identified - button, cylindrical, prismatic and pouch. Cell size can either be in accordance with international (e.g. International Electrotechnical Commission, IEC) or other standards or can meet application-specific dimensions. Since the cell housing or container, terminals and, if necessary, safety installations as inactive (non-reactive) materials reduce the energy density of the battery, the development of low-weight packages is a challenging task. In addition, other requirements have to be fulfilled: mechanical stability and durability, sealing (e.g. a high permeation barrier against humidity for lithium-ion technology), high packing efficiency, possible installation of safety devices (current interrupt device

  12. Components of Adenovirus Genome Packaging

    PubMed Central

    Ahi, Yadvinder S.; Mittal, Suresh K.

    2016-01-01

    Adenoviruses (AdVs) are icosahedral viruses with double-stranded DNA (dsDNA) genomes. Genome packaging in AdV is thought to be similar to that seen in dsDNA-containing icosahedral bacteriophages and herpesviruses. Specific recognition of the AdV genome depends on a packaging domain located close to the left end of the viral genome and is mediated by the viral packaging machinery. Our understanding of the role of various components of the viral packaging machinery in AdV genome packaging has greatly advanced in recent years. Characterization of empty capsids assembled in the absence of one or more components involved in packaging, identification of the unique vertex, and demonstration of the role of IVa2, the putative packaging ATPase, in genome packaging have provided compelling evidence that AdVs follow a sequential assembly pathway. This review provides a detailed discussion of the functions of the various viral and cellular factors involved in AdV genome packaging. We conclude by briefly discussing the roles of the empty capsids, assembly intermediates, scaffolding proteins, portal vertex and DNA encapsidating enzymes in AdV assembly and packaging. PMID:27721809

  13. Aristos Optimization Package

    SciTech Connect

    Ridzal, Danis

    2007-03-01

    Aristos is a Trilinos package for nonlinear continuous optimization, based on full-space sequential quadratic programming (SQP) methods. Aristos is specifically designed for the solution of large-scale constrained optimization problems in which the linearized constraint equations require iterative (i.e. inexact) linear solver techniques. Aristos' unique feature is an efficient handling of inexactness in linear system solves. Aristos currently supports the solution of equality-constrained convex and nonconvex optimization problems. It has been used successfully in the area of PDE-constrained optimization, for the solution of nonlinear optimal control, optimal design, and inverse problems.

  14. Parallel Analog-to-Digital Image Processor

    NASA Technical Reports Server (NTRS)

    Lokerson, D. C.

    1987-01-01

    Proposed integrated-circuit network of many identical units converts analog outputs of imaging arrays of x-ray or infrared detectors to digital outputs. Converter located near imaging detectors, within cryogenic detector package. Because converter output is digital, it lends itself well to multiplexing and to postprocessing for correction of gain and offset errors peculiar to each picture element and its sampling and conversion circuits. Analog-to-digital image processor is massively parallel system for processing data from array of photodetectors. System built as compact integrated circuit located near focal plane. Buffer amplifier for each picture element has different offset.
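    The per-pixel gain and offset correction mentioned above can be illustrated with a two-point calibration. This is a generic sketch, not the scheme used by the proposed processor: the dark/flat calibration frames and `flat_level` target are assumptions for the example.

    ```python
    import numpy as np

    def calibrate(dark, flat, flat_level):
        """Two-point, per-pixel calibration from two reference exposures.
        dark: frame recorded with no illumination (gives the offset);
        flat: frame under uniform illumination (gives the gain);
        flat_level: counts the corrected flat field should read."""
        offset = dark.astype(float)
        gain = flat_level / (flat - dark)   # per-pixel scale factor
        return gain, offset

    def correct_frame(raw, gain, offset):
        """Apply the per-pixel correction to one digitized frame."""
        return (raw - offset) * gain
    ```

    Each picture element gets its own `gain` and `offset` entry, which is exactly the kind of per-element postprocessing a digital converter output makes easy.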

  15. Safety Analysis Report for packaging (onsite) steel waste package

    SciTech Connect

    BOEHNKE, W.M.

    2000-07-13

    The steel waste package is used primarily for the shipment of remote-handled radioactive waste from the 324 Building to the 200 Area for interim storage. The steel waste package is authorized for shipment of transuranic isotopes. The maximum allowable radioactive material that is authorized is 500,000 Ci. This exceeds the highway route controlled quantity (3,000 A{sub 2}s), so the steel waste package is a Type B packaging.

  16. Parallel Lisp simulator

    SciTech Connect

    Weening, J.S.

    1988-05-01

    CSIM is a simulator for parallel Lisp, based on a continuation passing interpreter. It models a shared-memory multiprocessor executing programs written in Common Lisp, extended with several primitives for creating and controlling processes. This paper describes the structure of the simulator, measures its performance, and gives an example of its use with a parallel Lisp program.
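    CSIM itself is written in Lisp and models timing, processes, and shared memory; the following toy Python sketch only illustrates the core idea named in the abstract: a continuation-passing interpreter whose evaluation steps can be interleaved by a scheduler, which is what makes simulating concurrent processes straightforward. The expression language and thunk protocol here are invented for the example.

    ```python
    from collections import deque

    def eval_cps(expr, env, k):
        """Continuation-passing evaluator for a toy expression language
        (integers, variable names, ('+', a, b)). Each call returns a thunk
        (zero-argument callable), so evaluation proceeds in discrete,
        interruptible steps rather than via the host call stack."""
        if isinstance(expr, int):
            return lambda: k(expr)
        if isinstance(expr, str):
            return lambda: k(env[expr])
        op, a, b = expr
        if op == '+':
            return lambda: eval_cps(a, env,
                           lambda va: eval_cps(b, env,
                           lambda vb: k(va + vb)))
        raise ValueError(op)

    def run_interleaved(tasks):
        """Round-robin driver: runs one CPS step of each task per turn,
        the way a simulator interleaves processes on a modeled machine.
        tasks is a list of (expr, env) pairs."""
        results = {}
        live = deque()
        for i, (expr, env) in enumerate(tasks):
            live.append(eval_cps(expr, env,
                        lambda v, i=i: results.__setitem__(i, v)))
        while live:
            nxt = live.popleft()()      # run one step of this task
            if callable(nxt):           # not finished: requeue its rest
                live.append(nxt)
        return [results[i] for i in range(len(tasks))]
    ```

    Because each step returns the rest of the computation as a value, the driver can context-switch between "processes" at any expression boundary; a real simulator attaches costs to each step to model multiprocessor timing.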

  17. Parallel and Distributed Computing.

    DTIC Science & Technology

    1986-12-12

    program was devoted to parallel and distributed computing. Support for this part of the program was obtained from the present Army contract and a...Umesh Vazirani. A workshop on parallel and distributed computing was held from May 19 to May 23, 1986, and drew 141 participants. Keywords: Mathematical programming; Protocols; Randomized algorithms. (Author)

  18. Massively parallel mathematical sieves

    SciTech Connect

    Montry, G.R.

    1989-01-01

    The Sieve of Eratosthenes is a well-known algorithm for finding all prime numbers in a given subset of integers. A parallel version of the Sieve is described that produces computational speedups over 800 on a hypercube with 1,024 processing elements for problems of fixed size. Computational speedups as high as 980 are achieved when the problem size per processor is fixed. The method of parallelization generalizes to other sieves and will be efficient on any ensemble architecture. We investigate two highly parallel sieves using scattered decomposition and compare their performance on a hypercube multiprocessor. A comparison of different parallelization techniques for the sieve illustrates the trade-offs necessary in the design and implementation of massively parallel algorithms for large ensemble computers.
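    The parallelization the abstract describes can be sketched as follows. This is a hedged illustration only: it uses a block (rather than scattered) decomposition and a thread pool for portability, and does not reproduce the paper's hypercube communication or its speedup measurements; real speedups would need processes or MPI rather than Python threads.

    ```python
    import math
    from concurrent.futures import ThreadPoolExecutor

    def base_primes(limit):
        """Serial Sieve of Eratosthenes up to `limit` (inclusive)."""
        is_p = bytearray([1]) * (limit + 1)
        is_p[0:2] = b"\x00\x00"
        for p in range(2, math.isqrt(limit) + 1):
            if is_p[p]:
                is_p[p * p::p] = bytearray(len(range(p * p, limit + 1, p)))
        return [i for i, flag in enumerate(is_p) if flag]

    def sieve_segment(block):
        """Sieve one contiguous block [lo, hi) using shared base primes."""
        lo, hi, primes = block
        is_p = bytearray([1]) * (hi - lo)
        for p in primes:
            start = max(p * p, -(-lo // p) * p)   # first multiple of p >= lo
            is_p[start - lo::p] = bytearray(len(range(start, hi, p)))
        return [lo + i for i, flag in enumerate(is_p) if flag]

    def parallel_sieve(n, workers=4):
        """All primes <= n: sieve independent blocks concurrently, each
        worker needing only the primes up to sqrt(n)."""
        primes = base_primes(math.isqrt(n))
        step = -(-(n - 1) // workers)             # ceil division
        blocks = [(lo, min(lo + step, n + 1), primes)
                  for lo in range(2, n + 1, step)]
        with ThreadPoolExecutor(max_workers=workers) as ex:
            segments = ex.map(sieve_segment, blocks)
        return [p for seg in segments for p in seg]
    ```

    The blocks share no state beyond the small read-only base-prime list, which is why the method generalizes to other sieves and to ensemble architectures, as the abstract notes.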

  19. Totally parallel multilevel algorithms

    NASA Technical Reports Server (NTRS)

    Frederickson, Paul O.

    1988-01-01

    Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, Robust Multigrid (RMG) of Hackbusch, the FFT based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, which is referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.
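    Of the four algorithms named above, the FFT-based spectral method is the easiest to sketch. Below is a dense O(n^2) illustration for the 1D model problem (the tridiagonal discrete Laplacian with Dirichlet boundaries); a production code would apply the sine transform via an FFT in O(n log n), and the normalization chosen here is just one convention.

    ```python
    import numpy as np

    def spectral_poisson_1d(f):
        """Direct solve of A u = f with A = tridiag(-1, 2, -1), the 1D
        discrete Laplacian, by diagonalizing A in the discrete sine basis.
        Every mode decouples, so all n coefficient solves are independent
        -- the 'totally parallel' property the abstract refers to."""
        n = len(f)
        j = np.arange(1, n + 1)
        S = np.sin(np.pi * np.outer(j, j) / (n + 1))    # DST-I matrix (symmetric)
        lam = 2.0 - 2.0 * np.cos(np.pi * j / (n + 1))   # eigenvalues of A
        fhat = (2.0 / (n + 1)) * (S @ f)                # forward sine transform
        return S @ (fhat / lam)                         # scale each mode, invert
    ```

    The spectral radius of this solve is zero (it is direct), which is the limiting case of TPMA mentioned above.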

  20. Parallel computing works

    SciTech Connect

    Not Available

    1991-10-23

    An account of the Caltech Concurrent Computation Program (C{sup 3}P), a five-year project that focused on answering the question: ``Can parallel computers be used to do large-scale scientific computations?'' As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C{sup 3}P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C{sup 3}P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

  1. Parallel State Estimation Assessment with Practical Data

    SciTech Connect

    Chen, Yousu; Jin, Shuangshuang; Rice, Mark J.; Huang, Zhenyu

    2013-07-31

    This paper presents a parallel state estimation (PSE) implementation using a preconditioned gradient algorithm and an orthogonal decomposition-based algorithm. Preliminary tests against a commercial Energy Management System (EMS) State Estimation (SE) tool using real-world data were performed. The results show that while the preconditioned gradient algorithm can solve the SE problem more quickly with the help of parallel computing techniques, it may not be well suited to real-world data because of the large condition number of the gain matrix introduced by the wide range of measurement weights. With the help of the PETSc package, and considering one iteration of the SE process, the orthogonal decomposition-based PSE algorithm can achieve a 5-20 times speedup compared with the commercial EMS tool. It is very promising that the developed PSE can solve the SE problem for large power systems at the SCADA rate, to improve grid reliability.
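    The conditioning issue described above can be seen in a toy version of the problem. This sketch is not the paper's implementation: the measurement model, the Jacobi preconditioner, and the weight spread are assumptions chosen to illustrate how a wide range of measurement weights inflates the condition number of the gain matrix G = H^T W H.

    ```python
    import numpy as np

    def pcg(A, b, M_inv, tol=1e-10, max_iter=500):
        """Preconditioned conjugate gradients for an SPD system A x = b.
        M_inv applies the preconditioner inverse to a residual vector."""
        x = np.zeros_like(b)
        r = b - A @ x
        z = M_inv(r)
        p = z.copy()
        rz = r @ z
        for k in range(max_iter):
            Ap = A @ p
            alpha = rz / (p @ Ap)
            x = x + alpha * p
            r = r - alpha * Ap
            if np.linalg.norm(r) <= tol * np.linalg.norm(b):
                return x, k + 1
            z = M_inv(r)
            rz_next = r @ z
            p = z + (rz_next / rz) * p
            rz = rz_next
        return x, max_iter

    # Hypothetical gain matrix G = H^T W H from a small measurement model.
    rng = np.random.default_rng(0)
    H = rng.standard_normal((20, 5))
    G = H.T @ H                              # unit measurement weights
    b = G @ np.ones(5)                       # known solution: all ones
    d = np.diag(G)
    x, iters = pcg(G, b, lambda r: r / d)    # Jacobi preconditioner

    # A wide spread of measurement weights inflates cond(G) -- the
    # ill-conditioning observed with real-world data in the paper.
    W = np.diag(np.logspace(0, 8, 20))
    G_wide = H.T @ W @ H
    ```

    Iterative gradient methods slow down (or stall in finite precision) as cond(G) grows, which is why the orthogonal decomposition-based approach, which avoids forming G explicitly, fared better on real data.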

  2. Japan's electronic packaging technologies

    NASA Technical Reports Server (NTRS)

    Tummala, Rao R.; Pecht, Michael

    1995-01-01

    The JTEC panel found Japan to have significant leadership over the United States in the strategic area of electronic packaging. Many technologies and products once considered the 'heart and soul' of U.S. industry have been lost over the past decades to Japan and other Asian countries. The loss of consumer electronics technologies and products is the most notable of these losses, because electronics is the United States' largest employment sector and is critical for growth businesses in consumer products, computers, automobiles, aerospace, and telecommunications. In the past there was a distinction between consumer and industrial product technologies. While Japan concentrated on the consumer market, the United States dominated the industrial sector. No such distinction is anticipated in the future; the consumer-oriented technologies Japan has dominated are expected to characterize both domains. The future of U.S. competitiveness will, therefore, depend on the ability of the United States to rebuild its technological capabilities in the area of portable electronic packaging.

  3. Japan's electronic packaging technologies

    NASA Astrophysics Data System (ADS)

    Tummala, Rao R.; Pecht, Michael

    1995-02-01

    The JTEC panel found Japan to have significant leadership over the United States in the strategic area of electronic packaging. Many technologies and products once considered the 'heart and soul' of U.S. industry have been lost over the past decades to Japan and other Asian countries. The loss of consumer electronics technologies and products is the most notable of these losses, because electronics is the United States' largest employment sector and is critical for growth businesses in consumer products, computers, automobiles, aerospace, and telecommunications. In the past there was a distinction between consumer and industrial product technologies. While Japan concentrated on the consumer market, the United States dominated the industrial sector. No such distinction is anticipated in the future; the consumer-oriented technologies Japan has dominated are expected to characterize both domains. The future of U.S. competitiveness will, therefore, depend on the ability of the United States to rebuild its technological capabilities in the area of portable electronic packaging.

  4. Signal processor packaging design

    NASA Astrophysics Data System (ADS)

    McCarley, Paul L.; Phipps, Mickie A.

    1993-10-01

    The Signal Processor Packaging Design (SPPD) program was a technology development effort to demonstrate that a miniaturized, high throughput programmable processor could be fabricated to meet the stringent environment imposed by high speed kinetic energy guided interceptor and missile applications. This successful program culminated with the delivery of two very small processors, each about the size of a large pin grid array package. Rockwell International's Tactical Systems Division in Anaheim, California developed one of the processors, and the other was developed by Texas Instruments' (TI) Defense Systems and Electronics Group (DSEG) of Dallas, Texas. The SPPD program was sponsored by the Guided Interceptor Technology Branch of the Air Force Wright Laboratory's Armament Directorate (WL/MNSI) at Eglin AFB, Florida and funded by SDIO's Interceptor Technology Directorate (SDIO/TNC). These prototype processors were subjected to rigorous tests of their image processing capabilities, and both successfully demonstrated the ability to process 128 X 128 infrared images at a frame rate of over 100 Hz.

  5. Space station power semiconductor package

    NASA Technical Reports Server (NTRS)

    Balodis, Vilnis; Berman, Albert; Devance, Darrell; Ludlow, Gerry; Wagner, Lee

    1987-01-01

    A package of high-power switching semiconductors for the space station has been designed and fabricated. The package includes a high-voltage (600 volts), high-current (50 amps) NPN Fast Switching Power Transistor and a high-voltage (1200 volts), high-current (50 amps) Fast Recovery Diode. The package features an isolated collector for the transistor and an isolated anode for the diode. Beryllia is used as the isolation material, resulting in a thermal resistance for both devices of 0.2 degrees per watt. Additional features include a hermetic seal for long life -- greater than 10 years in a space environment. Also, the package design resulted in low electrical energy loss through the reduction of eddy currents, stray inductances, circuit inductance, and capacitance. The required package design and device parameters have been achieved. Test results for the transistor and diode utilizing the space station package are given.

  6. IN-PACKAGE CHEMISTRY ABSTRACTION

    SciTech Connect

    E. Thomas

    2005-07-14

    This report was developed in accordance with the requirements in ''Technical Work Plan for Postclosure Waste Form Modeling'' (BSC 2005 [DIRS 173246]). The purpose of the in-package chemistry model is to predict the bulk chemistry inside of a breached waste package and to provide simplified expressions of that chemistry as a function of time after breach to Total Systems Performance Assessment for the License Application (TSPA-LA). The scope of this report is to describe the development and validation of the in-package chemistry model. The in-package model is a combination of two models, a batch reactor model, which uses the EQ3/6 geochemistry-modeling tool, and a surface complexation model, which is applied to the results of the batch reactor model. The batch reactor model considers chemical interactions of water with the waste package materials, and the waste form for commercial spent nuclear fuel (CSNF) waste packages and codisposed (CDSP) waste packages containing high-level waste glass (HLWG) and DOE spent fuel. The surface complexation model includes the impact of fluid-surface interactions (i.e., surface complexation) on the resulting fluid composition. The model examines two types of water influx: (1) the condensation of water vapor diffusing into the waste package, and (2) seepage water entering the waste package as a liquid from the drift. (1) Vapor-Influx Case: The condensation of vapor onto the waste package internals is simulated as pure H{sub 2}O and enters at a rate determined by the water vapor pressure for representative temperature and relative humidity conditions. (2) Liquid-Influx Case: The water entering a waste package from the drift is simulated as typical groundwater and enters at a rate determined by the amount of seepage available to flow through openings in a breached waste package.

  7. Performance characteristics of a cosmology package on leading HPC architectures

    SciTech Connect

    Carter, Jonathan; Borrill, Julian; Oliker, Leonid

    2004-01-01

    The Cosmic Microwave Background (CMB) is a snapshot of the Universe some 400,000 years after the Big Bang. The pattern of anisotropies in the CMB carries a wealth of information about the fundamental parameters of cosmology. Extracting this information is an extremely computationally expensive endeavor, requiring massively parallel computers and software packages capable of exploiting them. One such package is the Microwave Anisotropy Dataset Computational Analysis Package (MADCAP) which has been used to analyze data from a number of CMB experiments. In this work, we compare MADCAP performance on the vector-based Earth Simulator (ES) and Cray X1 architectures and two leading superscalar systems, the IBM Power3 and Power4. Our results highlight the complex interplay between the problem size, architectural paradigm, interconnect, and vendor-supplied numerical libraries, while isolating the I/O file system as the key bottleneck across all the platforms.

  8. Effective Parallel Algorithm Animation

    DTIC Science & Technology

    1994-03-01

    a text file that is suitable for a plotting package such as gnuplot. This representation is shown in Figure 21. The horizontal axis in this diagram... [Figure: System Structure Diagram for the Analysis Tool] ...Explorer (75), and Gnuplot (82). Since gnuplot is freely available on workstations at AFIT and the plotting requirements are limited, gnuplot was used as the plotting package. However, users are free to select their own if they

  9. Naval Waste Package Design Report

    SciTech Connect

    M.M. Lewis

    2004-03-15

    A design methodology for the waste packages and ancillary components, viz., the emplacement pallets and drip shields, has been developed to provide designs that satisfy the safety and operational requirements of the Yucca Mountain Project. This methodology is described in the ''Waste Package Design Methodology Report'' (Mecham 2004 [DIRS 166168]). To demonstrate the practicability of this design methodology, four waste package design configurations have been selected to illustrate the application of the methodology. These four design configurations are the 21-pressurized water reactor (PWR) Absorber Plate waste package, the 44-boiling water reactor (BWR) waste package, the 5-defense high-level waste (DHLW)/United States (U.S.) Department of Energy (DOE) spent nuclear fuel (SNF) Co-disposal Short waste package, and the Naval Canistered SNF Long waste package. Also included in this demonstration is the emplacement pallet and continuous drip shield. The purpose of this report is to document how that design methodology has been applied to the waste package design configurations intended to accommodate naval canistered SNF. This demonstrates that the design methodology can be applied successfully to this waste package design configuration and support the License Application for construction of the repository.

  10. Hazardous materials package performance regulations

    SciTech Connect

    Russell, N. A.; Glass, R. E.; McClure, J. D.; Finley, N. C.

    1991-01-01

    This paper discusses the Hazardous Materials Packaging Performance Evaluation (HPPE) project being conducted at Sandia National Laboratories for the US Department of Transportation, Research and Special Programs Administration (DOT-RSPA), to examine the subset of bulk packagings that are larger than 2000 gallons. The objectives of this project are to evaluate current hazmat specification packagings and to develop supporting documentation for determining performance requirements for packagings in excess of 2000 gallons that transport hazardous materials classified as extremely toxic by inhalation (METBI).

  11. About the ZOOM minimization package

    SciTech Connect

    Fischler, M.; Sachs, D.; /Fermilab

    2004-11-01

    A new object-oriented Minimization package is available for distribution in the same manner as CLHEP. This package, designed for use in HEP applications, has all the capabilities of Minuit, but is a re-write from scratch, adhering to modern C++ design principles. A primary goal of this package is extensibility in several directions, so that its capabilities can be kept fresh with as little maintenance effort as possible. This package is distinguished by the priority that was assigned to C++ design issues, and the focus on producing an extensible system that will resist becoming obsolete.

  12. Package Up Your Troubles--An Introduction to Package Libraries

    ERIC Educational Resources Information Center

    Frank, Colin

    1978-01-01

    Discusses a "package deal" library--a prefabricated building including interior furnishing--in terms of costs, fitness for purpose, and interior design, i.e., shelving, flooring, heating, lighting, and humidity. Advantages and disadvantages of the package library are also considered. (Author/MBR)

  13. The Packaging Handbook -- A guide to package design

    SciTech Connect

    Shappert, L.B.

    1995-12-31

    The Packaging Handbook is a compilation of 14 technical chapters and five appendices that address the life cycle of a packaging which is intended to transport radioactive material by any transport mode in normal commerce. Although many topics are discussed in depth, this document focuses on the design aspects of a packaging. The Handbook, which is being prepared under the direction of the US Department of Energy, is intended to provide a wealth of technical guidance that will give designers a better understanding of the regulatory approval process, preferences of regulators in specific aspects of packaging design, and the types of analyses that should be seriously considered when developing the packaging design. Even though the Handbook is concerned with all packagings, most of the emphasis is placed on large packagings that are capable of transporting large radioactive sources that are also fissile (e.g., spent fuel). These are the types of packagings that must address the widest range of technical topics in order to meet domestic and international regulations. Most of the chapters in the Handbook have been drafted and submitted to the Oak Ridge National Laboratory for editing; the majority of these have been edited. This report summarizes the contents.

  14. Anhydrous Ammonia Training Module. Trainer's Package. Participant's Package.

    ERIC Educational Resources Information Center

    Beaudin, Bart; And Others

    This document contains a trainer's and a participant's package for teaching employees on site safe handling procedures for working with anhydrous ammonia, especially on farms. The trainer's package includes the following: a description of the module; a competency; objectives; suggested instructional aids; a training outline (or lesson plan) for…

  15. The NAS parallel benchmarks

    NASA Technical Reports Server (NTRS)

    Bailey, David (Editor); Barton, John (Editor); Lasinski, Thomas (Editor); Simon, Horst (Editor)

    1993-01-01

    A new set of benchmarks was developed for the performance evaluation of highly parallel supercomputers. These benchmarks consist of a set of kernels, the 'Parallel Kernels,' and a simulated application benchmark. Together they mimic the computation and data movement characteristics of large scale computational fluid dynamics (CFD) applications. The principal distinguishing feature of these benchmarks is their 'pencil and paper' specification - all details of these benchmarks are specified only algorithmically. In this way many of the difficulties associated with conventional benchmarking approaches on highly parallel systems are avoided.

  16. Tritium waste package

    DOEpatents

    Rossmassler, Rich; Ciebiera, Lloyd; Tulipano, Francis J.; Vinson, Sylvester; Walters, R. Thomas

    1995-01-01

    A containment and waste package system for processing and shipping tritium oxide waste received from a process gas includes an outer drum and an inner drum containing a disposable molecular sieve bed (DMSB) seated within the outer drum. The DMSB includes an inlet diffuser assembly, an outlet diffuser assembly, and a hydrogen catalytic recombiner. The DMSB absorbs tritium oxide from the process gas and converts it to a solid form so that the tritium is contained during shipment to a disposal site. The DMSB is filled with type 4A molecular sieve pellets capable of adsorbing up to 1000 curies of tritium. The recombiner contains a sufficient amount of catalyst to cause any hydrogen and oxygen present in the process gas to recombine to form water vapor, which is then adsorbed onto the DMSB.

  17. Tritium waste package

    DOEpatents

    Rossmassler, R.; Ciebiera, L.; Tulipano, F.J.; Vinson, S.; Walters, R.T.

    1995-11-07

    A containment and waste package system for processing and shipping tritium oxide waste received from a process gas includes an outer drum and an inner drum containing a disposable molecular sieve bed (DMSB) seated within the outer drum. The DMSB includes an inlet diffuser assembly, an outlet diffuser assembly, and a hydrogen catalytic recombiner. The DMSB absorbs tritium oxide from the process gas and converts it to a solid form so that the tritium is contained during shipment to a disposal site. The DMSB is filled with type 4A molecular sieve pellets capable of adsorbing up to 1000 curies of tritium. The recombiner contains a sufficient amount of catalyst to cause any hydrogen and oxygen present in the process gas to recombine to form water vapor, which is then adsorbed onto the DMSB. 1 fig.

  18. The LISA Technology Package

    NASA Technical Reports Server (NTRS)

    Livas, Jeff

    2009-01-01

    The LISA Technology Package (LTP) is the payload of the European Space Agency's LISA Pathfinder mission. LISA Pathfinder was instigated to test, in a flight environment, the critical technologies required by LISA; namely, the inertial sensing subsystem and associated control laws and micro-Newton thrusters required to place a macroscopic test mass in pure free-fall. The LTP is in the late stages of development -- all subsystems are currently either in the final stages of manufacture or in test. Available flight units are being integrated into the real-time testbeds for system verification tests. This poster will describe the LTP and its subsystems, give the current status of the hardware and test campaign, and outline the future milestones leading to the LTP delivery.

  19. Balloon gondola diagnostics package

    NASA Astrophysics Data System (ADS)

    Cantor, K. M.

    1986-10-01

    In order to define a new gondola structural specification and to quantify the balloon termination environment, NASA developed a balloon gondola diagnostics package (GDP). This addition to the balloon flight train comprises a large array of electronic sensors employed to define the forces and accelerations imposed on a gondola during the termination event. These sensors include the following: a load cell, a three-axis accelerometer, two three-axis rate gyros, two magnetometers, and a two-axis inclinometer. A transceiver couple allows the data to be telemetered across any in-line rotator to the gondola-mounted memory system. The GDP is commanded 'ON' just prior to parachute deployment in order to record the entire event.

  20. Electro-Microfluidic Packaging

    SciTech Connect

    BENAVIDES, GILBERT L.; GALAMBOS, PAUL C.

    2002-06-01

    Electro-microfluidics is experiencing explosive growth in new product developments. There are many commercial applications for electro-microfluidic devices such as chemical sensors, biological sensors, and drop ejectors for both printing and chemical analysis. The number of silicon surface micromachined electro-microfluidic products is likely to increase. Manufacturing efficiency and integration of microfluidics with electronics will become important. Surface micromachined microfluidic devices are manufactured with the same tools as IC's (integrated circuits) and their fabrication can be incorporated into the IC fabrication process. In order to realize applications for surface micromachined electro-microfluidic devices, a practical method for getting fluid into these devices must be developed. An Electro-Microfluidic Dual In-line Package (EMDIP{trademark}) was developed to be a standard solution that allows for both the electrical and the fluidic connections needed to operate a great variety of electro-microfluidic devices. The EMDIP{trademark} includes a fan-out manifold that, on one side, mates directly with the 200 micron diameter Bosch etched holes found on the device, and, on the other side, mates to larger 1 mm diameter holes. To minimize cost, the EMDIP{trademark} can be injection molded in a great variety of thermoplastics, which also serve to optimize fluid compatibility. The EMDIP{trademark} plugs directly into a fluidic printed wiring board, using a standard dual in-line package pattern for the electrical connections and a grid of multiple 1 mm diameter fluidic connections to mate to the underside of the EMDIP{trademark}.

  1. Chip packaging technique

    NASA Technical Reports Server (NTRS)

    Jayaraj, Kumaraswamy (Inventor); Noll, Thomas E. (Inventor); Lockwood, Harry F. (Inventor)

    2001-01-01

    A hermetically sealed package for at least one semiconductor chip is provided which is formed of a substrate having electrical interconnects thereon to which the semiconductor chips are selectively bonded, and a lid which preferably functions as a heat sink, with a hermetic seal being formed around the chips between the substrate and the heat sink. The substrate is either formed of or includes a layer of a thermoplastic material having low moisture permeability which material is preferably a liquid crystal polymer (LCP) and is a multiaxially oriented LCP material for preferred embodiments. Where the lid is a heat sink, the heat sink is formed of a material having high thermal conductivity and preferably a coefficient of thermal expansion which substantially matches that of the chip. A hermetic bond is formed between the side of each chip opposite that connected to the substrate and the heat sink. The thermal bond between the substrate and the lid/heat sink may be a pinched seal or may be provided, for example by an LCP frame which is hermetically bonded or sealed on one side to the substrate and on the other side to the lid/heat sink. The chips may operate in the RF or microwave bands with suitable interconnects on the substrate and the chips may also include optical components with optical fibers being sealed into the substrate and aligned with corresponding optical components to transmit light in at least one direction. A plurality of packages may be physically and electrically connected together in a stack to form a 3D array.

  2. Parallel programming with PCN

    SciTech Connect

    Foster, I.; Tuecke, S.

    1991-12-01

    PCN is a system for developing and executing parallel programs. It comprises a high-level programming language, tools for developing and debugging programs in this language, and interfaces to Fortran and C that allow the reuse of existing code in multilingual parallel programs. Programs developed using PCN are portable across many different workstations, networks, and parallel computers. This document provides all the information required to develop parallel programs with the PCN programming system. It includes both tutorial and reference material. It also presents the basic concepts that underlie PCN, particularly where these are likely to be unfamiliar to the reader, and provides pointers to other documentation on the PCN language, programming techniques, and tools. PCN is in the public domain. The latest version of both the software and this manual can be obtained by anonymous FTP from Argonne National Laboratory in the directory pub/pcn at info.mcs.anl.gov (c.f. Appendix A).

  3. Parallel programming with PCN

    SciTech Connect

    Foster, I.; Tuecke, S.

    1991-09-01

    PCN is a system for developing and executing parallel programs. It comprises a high-level programming language, a set of tools for developing and debugging programs in this language, and interfaces to Fortran and C that allow the reuse of existing code in multilingual parallel programs. Programs developed using PCN are portable across many different workstations, networks, and parallel computers. This document provides all the information required to develop parallel programs with the PCN programming system. It includes both tutorial and reference material. It also presents the basic concepts that underlie PCN, particularly where these are likely to be unfamiliar to the reader, and provides pointers to other documentation on the PCN language, programming techniques, and tools. PCN is in the public domain. The latest version of both the software and this manual can be obtained by anonymous FTP from Argonne National Laboratory at info.mcs.anl.gov.

  4. The Parallel Axiom

    ERIC Educational Resources Information Center

    Rogers, Pat

    1972-01-01

    Criteria for a reasonable axiomatic system are discussed. A discussion of the historical attempts to prove the independence of Euclid's parallel postulate introduces non-Euclidean geometries. Poincare's model for a non-Euclidean geometry is defined and analyzed. (LS)

  5. Scalable parallel communications

    NASA Technical Reports Server (NTRS)

    Maly, K.; Khanna, S.; Overstreet, C. M.; Mukkamala, R.; Zubair, M.; Sekhar, Y. S.; Foudriat, E. C.

    1992-01-01

    Coarse-grain parallelism in networking (that is, the use of multiple protocol processors running replicated software sending over several physical channels) can be used to provide gigabit communications for a single application. Since parallel network performance is highly dependent on real issues such as hardware properties (e.g., memory speeds and cache hit rates), operating system overhead (e.g., interrupt handling), and protocol performance (e.g., effect of timeouts), we have performed detailed simulation studies of both a bus-based multiprocessor workstation node (based on the Sun Galaxy MP multiprocessor) and a distributed-memory parallel computer node (based on the Touchstone DELTA) to evaluate the behavior of coarse-grain parallelism. Our results indicate: (1) coarse-grain parallelism can deliver multiple 100 Mbps with currently available hardware platforms and existing networking protocols (such as Transmission Control Protocol/Internet Protocol (TCP/IP) and parallel Fiber Distributed Data Interface (FDDI) rings); (2) scale-up is near linear in n, the number of protocol processors and channels (for small n and up to a few hundred Mbps); and (3) since these results are based on existing hardware without specialized devices (except perhaps for some simple modifications of the FDDI boards), this is a low cost solution to providing multiple 100 Mbps on current machines. In addition, from both the performance analysis and the properties of these architectures, we conclude: (1) multiple processors providing identical services and the use of space division multiplexing for the physical channels can provide better reliability than monolithic approaches (it also provides graceful degradation and low-cost load balancing); (2) coarse-grain parallelism supports running several transport protocols in parallel to provide different types of service (for example, one TCP handles small messages for many users, other TCP's running in parallel provide high bandwidth

  6. Parallel image compression

    NASA Technical Reports Server (NTRS)

    Reif, John H.

    1987-01-01

    A parallel compression algorithm for the 16,384 processor MPP machine was developed. The serial version of the algorithm can be viewed as a combination of on-line dynamic lossless text compression techniques (which employ simple learning strategies) and vector quantization. These concepts are described. How these concepts are combined to form a new strategy for performing dynamic on-line lossy compression is discussed. Finally, the implementation of this algorithm in a massively parallel fashion on the MPP is discussed.

  7. Revisiting and parallelizing SHAKE

    NASA Astrophysics Data System (ADS)

    Weinbach, Yael; Elber, Ron

    2005-10-01

    An algorithm is presented for running SHAKE in parallel. SHAKE is a widely used approach to compute molecular dynamics trajectories with constraints. An essential step in SHAKE is the solution of a sparse linear problem of the type Ax = b, where x is a vector of unknowns. Conjugate gradient minimization (that can be done in parallel) replaces the widely used iteration process that is inherently serial. Numerical examples present good load balancing and are limited only by communication time.
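
    The serial bottleneck the abstract refers to is the linear solve Ax = b; the conjugate gradient iteration that replaces it can be sketched as follows. This is an illustrative NumPy version for a symmetric positive-definite A, not the SHAKE implementation itself, and the function name is ours:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for symmetric positive-definite A by conjugate gradients."""
    x = np.zeros_like(b)
    r = b - A @ x                      # initial residual
    p = r.copy()                       # initial search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p                     # the matrix-vector product: the step that parallelizes
        alpha = rs_old / (p @ Ap)      # step length along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p  # next direction, conjugate to the previous ones
        rs_old = rs_new
    return x
```

    The appeal for a parallel SHAKE is visible in the loop body: the only operation that touches the full matrix is the product A @ p, which distributes naturally across processors, whereas the classical SHAKE iteration sweeps constraints one after another.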

  8. Solar water heater design package

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Package describes commercial domestic-hot-water heater with roof or rack mounted solar collectors. System is adjustable to pre-existing gas or electric hot-water house units. Design package includes drawings, description of automatic control logic, evaluation measurements, possible design variations, list of materials and installation tools, and trouble-shooting guide and manual.

  9. Individualized Learning Package about Etching.

    ERIC Educational Resources Information Center

    Sauer, Michael J.

    An individualized learning package provides step-by-step instruction in the fundamentals of the etching process. Thirteen specific behavioral objectives are listed. A pretest, consisting of matching 15 etching terms with their definitions, is provided along with an answer key. The remainder of the learning package teaches the 13 steps of the…

  10. The Macro - TIPS Course Package.

    ERIC Educational Resources Information Center

    Heriot-Watt Univ., Edinburgh (Scotland). Esmee Fairbairn Economics Research Centre.

    The TIPS (Teaching Information Processing System) Course Package was designed to be used with the Macro-Games Course Package (SO 011 930) in order to train college students to apply the tools of economic analysis to current problems. TIPS is used to provide feedback and individualized assignments to students, as well as information about the…

  11. Chemical Energy: A Learning Package.

    ERIC Educational Resources Information Center

    Cohen, Ita; Ben-Zvi, Ruth

    1982-01-01

    A comprehensive teaching/learning chemical energy package was developed to overcome conceptual/experimental difficulties and time required for calculation of enthalpy changes. The package consists of five types of activities occurring in repeated cycles: group activities, laboratory experiments, inquiry questionnaires, teacher-led class…

  12. Microelectronics/electronic packaging potential

    NASA Technical Reports Server (NTRS)

    Sandeau, R. F.

    1977-01-01

    The trend toward smaller and lighter electronic packages was examined. It is suggested that electronic packaging engineers and microelectronic designers closely associate and give full attention to optimization of both disciplines on all product lines. Extensive research and development work underway to explore innovative ideas and make new inroads into the technology base is expected to satisfy the demands of the 1980's.

  13. Oral Hygiene. Learning Activity Package.

    ERIC Educational Resources Information Center

    Hime, Kirsten

    This learning activity package on oral hygiene is one of a series of 12 titles developed for use in health occupations education programs. Materials in the package include objectives, a list of materials needed, a list of definitions, information sheets, reviews (self evaluations) of portions of the content, and answers to reviews. These topics…

  14. 49 CFR 173.63 - Packaging exceptions.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 2 2010-10-01 2010-10-01 false Packaging exceptions. 173.63 Section 173.63... SHIPMENTS AND PACKAGINGS Definitions, Classification and Packaging for Class 1 § 173.63 Packaging exceptions... which are used to project fastening devices. (2) Packaging for cartridges, small arms, and...

  15. 49 CFR 173.411 - Industrial packagings.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... record retention applicable to Industrial Packaging Type 1 (IP-1), Industrial Packaging Type 2 (IP-2), and Industrial Packaging Type 3 (IP-3). (b) Industrial packaging certification and tests. (1) Each IP... specified in § 173.412(a) through (j). (4) Tank containers may be used as Industrial package Types 2 or...

  16. 19 CFR 191.13 - Packaging materials.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 19 Customs Duties 2 2010-04-01 2010-04-01 false Packaging materials. 191.13 Section 191.13 Customs... (CONTINUED) DRAWBACK General Provisions § 191.13 Packaging materials. (a) Imported packaging material... packaging material when used to package or repackage merchandise or articles exported or destroyed...

  17. 19 CFR 191.13 - Packaging materials.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 19 Customs Duties 2 2012-04-01 2012-04-01 false Packaging materials. 191.13 Section 191.13 Customs... (CONTINUED) DRAWBACK General Provisions § 191.13 Packaging materials. (a) Imported packaging material... packaging material when used to package or repackage merchandise or articles exported or destroyed...

  18. Nanocomposite Sensors for Food Packaging

    NASA Astrophysics Data System (ADS)

    Avella, Maurizio; Errico, Maria Emanuela; Gentile, Gennaro; Volpe, Maria Grazia

    Nowadays nanotechnologies applied to the food packaging sector are finding ever more applications due to the wide range of benefits they can offer, such as improved barrier properties, improved mechanical performance, antimicrobial properties and so on. Recently much research has addressed the development of new food packaging materials in which polymer nanocomposites incorporate nanosensors, producing the so-called "smart" packaging. Some examples of nanocomposite sensors specifically realised for the food packaging industry are reported. The second part of this work deals with the preparation and characterisation of two new polymer-based nanocomposite systems that can be used as food packaging materials. In particular, the results concerning the following systems are illustrated: isotactic polypropylene (iPP) filled with CaCO3 nanoparticles and polycaprolactone (PCL) filled with SiO2 nanoparticles.

  19. Parallelization of Rocket Engine System Software (Press)

    NASA Technical Reports Server (NTRS)

    Cezzar, Ruknet

    1996-01-01

    The main goal is to assess parallelization requirements for the Rocket Engine Numeric Simulator (RENS) project which, aside from gathering information on liquid-propelled rocket engines and setting forth requirements, involves a large FORTRAN based package at NASA Lewis Research Center and TDK software developed by SUBR/UWF. The ultimate aim is to develop, test, integrate, and suitably deploy a family of software packages on various aspects and facets of rocket engines using liquid propellants. At present, all project efforts by the funding agency, NASA Lewis Research Center, and the HBCU participants are disseminated over the internet using world wide web home pages. Considering the obvious expense of actual field trials, the benefits of software simulators are potentially enormous. When realized, these benefits will be analogous to those provided by numerous CAD/CAM packages and flight-training simulators. According to the overall task assignments, Hampton University's role is to collect all available software, place them in a common format, assess and evaluate, define interfaces, and provide integration. Most importantly, HU's mission is to see to it that real-time performance is assured. This involves source code translations, porting, and distribution. The porting will be done in two phases: first, place all software on a Cray X-MP platform using FORTRAN. After testing and evaluation on the Cray X-MP, the code will be translated to C++ and ported to the parallel nCUBE platform. At present, we are evaluating another option of distributed processing over local area networks using Sun NFS, Ethernet, TCP/IP. Considering the heterogeneous nature of the present software (e.g., first started as an expert system using LISP machines) which now involves FORTRAN code, the effort is expected to be quite challenging.

  20. Evaluation of RDBMS packages for use in astronomy

    NASA Technical Reports Server (NTRS)

    Page, C. G.; Davenhall, A. C.

    1992-01-01

    Tabular data sets arise in many areas of astronomical data analysis, from raw data (such as photon event lists) to final results (such as source catalogs). The Starlink catalog access and reporting package, SCAR, was originally developed to handle IRAS data and it has been the principal relational DBMS in the Starlink software collection for several years. But SCAR has many limitations and is VMS-specific, while Starlink is in transition from VMS to Unix. Rather than attempt a major re-write of SCAR for Unix, it seemed more sensible to see whether any existing database packages are suitable for general astronomical use. The authors first drew up a list of desirable properties for such a system and then used these criteria to evaluate a number of packages, both free ones and those commercially available. It is already clear that most commercial DBMS packages are not very well suited to the requirements; for example, most cannot carry out efficiently even fairly basic operations such as joining two catalogs on an approximate match of celestial positions. This paper reports the results of the evaluation exercise and notes the problems in using a standard DBMS package to process scientific data. In parallel with this the authors have started to develop a simple database engine that can handle tabular data in a range of common formats including simple direct-access files (such as SCAR and Exosat DBMS tables) and FITS tables (both ASCII and binary).

  1. User's Guide for ENSAERO_FE Parallel Finite Element Solver

    NASA Technical Reports Server (NTRS)

    Eldred, Lloyd B.; Guruswamy, Guru P.

    1999-01-01

    A high fidelity parallel static structural analysis capability is created and interfaced to the multidisciplinary analysis package ENSAERO-MPI of Ames Research Center. This new module replaces ENSAERO's lower fidelity simple finite element and modal modules. Full aircraft structures may be more accurately modeled using the new finite element capability. Parallel computation is performed by breaking the full structure into multiple substructures. This approach is conceptually similar to ENSAERO's multizonal fluid analysis capability. The new substructure code is used to solve the structural finite element equations for each substructure in parallel. NASTRAN/COSMIC is utilized as a front end for this code. Its full library of elements can be used to create an accurate and realistic aircraft model. It is used to create the stiffness matrices for each substructure. The new parallel code then uses an iterative preconditioned conjugate gradient method to solve the global structural equations for the substructure boundary nodes.
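
    As a rough illustration of an iterative preconditioned conjugate gradient method of the kind mentioned above, the sketch below uses a simple diagonal (Jacobi) preconditioner; the abstract does not specify which preconditioner the substructure code actually uses, so this choice is an assumption:

```python
import numpy as np

def pcg_jacobi(A, b, tol=1e-10, max_iter=1000):
    """Preconditioned CG for symmetric positive-definite A, with M = diag(A)."""
    M_inv = 1.0 / np.diag(A)    # applying the Jacobi preconditioner is elementwise
    x = np.zeros_like(b)
    r = b - A @ x               # residual
    z = M_inv * r               # preconditioned residual
    p = z.copy()                # search direction
    rz_old = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz_old) * p
        rz_old = rz_new
    return x
```

    Compared with plain conjugate gradients, the extra cost per iteration is one application of the preconditioner; a well-chosen preconditioner repays this by cutting the iteration count, which matters when each iteration requires interprocessor communication.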

  2. 49 CFR 178.602 - Preparation of packagings and packages for testing.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... testing at periodic intervals only (i.e., other than initial design qualification testing), at ambient... 49 Transportation 3 2011-10-01 2011-10-01 false Preparation of packagings and packages for testing...) SPECIFICATIONS FOR PACKAGINGS Testing of Non-bulk Packagings and Packages § 178.602 Preparation of packagings...

  3. 49 CFR 178.602 - Preparation of packagings and packages for testing.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... testing at periodic intervals only (i.e., other than initial design qualification testing), at ambient... 49 Transportation 3 2012-10-01 2012-10-01 false Preparation of packagings and packages for testing...) SPECIFICATIONS FOR PACKAGINGS Testing of Non-bulk Packagings and Packages § 178.602 Preparation of packagings...

  4. 49 CFR 178.602 - Preparation of packagings and packages for testing.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... testing at periodic intervals only (i.e., other than initial design qualification testing), at ambient... 49 Transportation 3 2014-10-01 2014-10-01 false Preparation of packagings and packages for testing...) SPECIFICATIONS FOR PACKAGINGS Testing of Non-bulk Packagings and Packages § 178.602 Preparation of packagings...

  5. 40 CFR 157.27 - Unit packaging.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS PACKAGING REQUIREMENTS FOR PESTICIDES AND DEVICES Child-Resistant Packaging § 157.27 Unit packaging. Pesticide products... for risk reduction....

  6. 40 CFR 157.27 - Unit packaging.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS PACKAGING REQUIREMENTS FOR PESTICIDES AND DEVICES Child-Resistant Packaging § 157.27 Unit packaging. Pesticide products... for risk reduction....

  7. 40 CFR 157.27 - Unit packaging.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS PACKAGING REQUIREMENTS FOR PESTICIDES AND DEVICES Child-Resistant Packaging § 157.27 Unit packaging. Pesticide products... for risk reduction....

  8. 40 CFR 157.27 - Unit packaging.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS PACKAGING REQUIREMENTS FOR PESTICIDES AND DEVICES Child-Resistant Packaging § 157.27 Unit packaging. Pesticide products... for risk reduction....

  9. Parallel architectures for vision

    SciTech Connect

    Maresca, M. ); Lavin, M.A. ); Li, H. )

    1988-08-01

    Vision computing involves the execution of a large number of operations on large sets of structured data. Sequential computers cannot achieve the speed required by most of the current applications and therefore parallel architectural solutions have to be explored. In this paper the authors examine the options that drive the design of a vision oriented computer, starting with the analysis of the basic vision computation and communication requirements. They briefly review the classical taxonomy for parallel computers, based on the multiplicity of the instruction and data stream, and apply a recently proposed criterion, the degree of autonomy of each processor, to further classify fine-grain SIMD massively parallel computers. They identify three types of processor autonomy, namely operation autonomy, addressing autonomy, and connection autonomy. For each type they give the basic definitions and show some examples. They focus on the concept of connection autonomy, which they believe is a key point in the development of massively parallel architectures for vision. They show two examples of parallel computers featuring different types of connection autonomy - the Connection Machine and the Polymorphic-Torus - and compare their cost and benefit.

  10. Sublattice parallel replica dynamics

    NASA Astrophysics Data System (ADS)

    Martínez, Enrique; Uberuaga, Blas P.; Voter, Arthur F.

    2014-06-01

    Exascale computing presents a challenge for the scientific community as new algorithms must be developed to take full advantage of the new computing paradigm. Atomistic simulation methods that offer full fidelity to the underlying potential, i.e., molecular dynamics (MD) and parallel replica dynamics, fail to use the whole machine speedup, leaving a region in time and sample size space that is unattainable with current algorithms. In this paper, we present an extension of the parallel replica dynamics algorithm [A. F. Voter, Phys. Rev. B 57, R13985 (1998), 10.1103/PhysRevB.57.R13985] by combining it with the synchronous sublattice approach of Shim and Amar [Y. Shim and J. G. Amar, Phys. Rev. B 71, 125432 (2005), 10.1103/PhysRevB.71.125432], thereby exploiting event locality to improve the algorithm scalability. This algorithm is based on a domain decomposition in which events happen independently in different regions in the sample. We develop an analytical expression for the speedup given by this sublattice parallel replica dynamics algorithm and compare it with parallel MD and traditional parallel replica dynamics. We demonstrate how this algorithm, which introduces a slight additional approximation of event locality, enables the study of physical systems unreachable with traditional methodologies and promises to better utilize the resources of current high performance and future exascale computers.

  11. In-Package Chemistry Abstraction

    SciTech Connect

    E. Thomas

    2004-11-09

    This report was developed in accordance with the requirements in ''Technical Work Plan for: Regulatory Integration Modeling and Analysis of the Waste Form and Waste Package'' (BSC 2004 [DIRS 171583]). The purpose of the in-package chemistry model is to predict the bulk chemistry inside of a breached waste package and to provide simplified expressions of that chemistry as a function of time after breach to Total Systems Performance Assessment for the License Application (TSPA-LA). The scope of this report is to describe the development and validation of the in-package chemistry model. The in-package model is a combination of two models, a batch reactor model that uses the EQ3/6 geochemistry-modeling tool, and a surface complexation model that is applied to the results of the batch reactor model. The batch reactor model considers chemical interactions of water with the waste package materials and the waste form for commercial spent nuclear fuel (CSNF) waste packages and codisposed waste packages that contain both high-level waste glass (HLWG) and DOE spent fuel. The surface complexation model includes the impact of fluid-surface interactions (i.e., surface complexation) on the resulting fluid composition. The model examines two types of water influx: (1) the condensation of water vapor that diffuses into the waste package, and (2) seepage water that enters the waste package from the drift as a liquid. (1) Vapor Influx Case: The condensation of vapor onto the waste package internals is simulated as pure H2O and enters at a rate determined by the water vapor pressure for representative temperature and relative humidity conditions. (2) Water Influx Case: The water entering a waste package from the drift is simulated as typical groundwater and enters at a rate determined by the amount of seepage available to flow through openings in a breached waste package. TSPA-LA uses the vapor influx case for the nominal scenario for simulations where the waste package has been

  12. Parallelizing AT with MatlabMPI

    SciTech Connect

    Li, Evan Y.; /Brown U. /SLAC

    2011-06-22

    The Accelerator Toolbox (AT) is a high-level collection of tools and scripts specifically oriented toward solving problems dealing with computational accelerator physics. It is integrated into the MATLAB environment, which provides an accessible, intuitive interface for accelerator physicists, allowing researchers to focus the majority of their efforts on simulations and calculations, rather than programming and debugging difficulties. Efforts toward parallelization of AT have been put in place to upgrade its performance to modern standards of computing. We utilized the packages MatlabMPI and pMatlab, which were developed by MIT Lincoln Laboratory, to set up a message-passing environment that could be called within MATLAB, which provided the necessary prerequisites for multithread processing capabilities. On local quad-core CPUs, we were able to demonstrate processor efficiencies of roughly 95% and speed increases of nearly 380%. By exploiting the efficacy of modern-day parallel computing, we were able to demonstrate highly efficient speed increments per processor in AT's beam-tracking functions. Extrapolating from prediction, we can expect to reduce week-long computation runtimes to less than 15 minutes. This is a huge performance improvement and has enormous implications for the future computing power of the accelerator physics group at SSRL. However, one of the drawbacks of parringpass is its current lack of transparency; the pMatlab and MatlabMPI packages must first be well-understood by the user before the system can be configured to run the scripts. In addition, the instantiation of argument parameters requires internal modification of the source code. Thus, parringpass cannot be directly run from the MATLAB command line, which detracts from its flexibility and user-friendliness. Future work in AT's parallelization will focus on development of external functions and scripts that can be called from within MATLAB and configured on multiple nodes, while
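
    The reported figures are mutually consistent under the usual definitions of parallel speedup and per-processor efficiency (speedup = T_serial / T_parallel, efficiency = speedup / n). A minimal sketch of that arithmetic, with illustrative timings of our own choosing:

```python
def parallel_performance(t_serial, t_parallel, n_procs):
    """Return (speedup, per-processor efficiency) for a parallel run."""
    speedup = t_serial / t_parallel
    efficiency = speedup / n_procs
    return speedup, efficiency

# A run that is 3.8x faster on 4 cores (a "380% speed increase")
# corresponds to roughly 95% efficiency per core, matching the abstract.
speedup, eff = parallel_performance(t_serial=380.0, t_parallel=100.0, n_procs=4)
```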

  13. Laser Welding in Electronic Packaging

    NASA Technical Reports Server (NTRS)

    2000-01-01

    The laser has proven its worth in numerous high reliability electronic packaging applications ranging from medical to missile electronics. In particular, the pulsed YAG laser is an extremely flexible and versatile tool capable of hermetically sealing microelectronics packages containing sensitive components without damaging them. This paper presents an overview of details that must be considered for successful use of laser welding when addressing electronic package sealing. These include metallurgical considerations such as alloy and plating selection, weld joint configuration, design of optics, use of protective gases, and control of thermal distortions. The primary limitations on use of laser welding for electronic packaging applications are economic ones. The laser itself is a relatively costly device when compared to competing welding equipment. Further, the cost of consumables and repairs can be significant. These facts have relegated laser welding to use only where it presents distinct quality or reliability advantages over other techniques of electronic package sealing. Because of the unique noncontact and low heat input characteristics of laser welding, it is an ideal candidate for sealing electronic packages containing MEMS devices (microelectromechanical systems). This paper addresses how the unique advantages of the pulsed YAG laser can be used to simplify MEMS packaging and deliver a product of improved quality.

  14. Naval Waste Package Design Sensitivity

    SciTech Connect

    T. Schmitt

    2006-12-13

    The purpose of this calculation is to determine the sensitivity of the structural response of the Naval waste packages to varying inner cavity dimensions when subjected to a corner drop and tip-over from an elevated surface. This calculation will also determine the sensitivity of the structural response of the Naval waste packages to the upper bound of the naval canister masses. The scope of this document is limited to reporting the calculation results in terms of through-wall stress intensities in the outer corrosion barrier. This calculation is intended for use in support of the preliminary design activities for the license application design of the Naval waste package. It examines the effects of small changes in the dimensions between the naval canister and the inner vessel; in these dimensions, the Naval Long waste package and Naval Short waste package are similar. Therefore, only the Naval Long waste package is used in this calculation and is based on the proposed potential designs presented by the drawings and sketches in References 2.1.10 to 2.1.17 and 2.1.20. All conclusions are valid for both the Naval Long and Naval Short waste packages.

  15. Optimal segmentation and packaging process

    DOEpatents

    Kostelnik, K.M.; Meservey, R.H.; Landon, M.D.

    1999-08-10

    A process for improving packaging efficiency uses three dimensional, computer simulated models with various optimization algorithms to determine the optimal segmentation process and packaging configurations based on constraints including container limitations. The present invention is applied to a process for decontaminating, decommissioning (D and D), and remediating a nuclear facility involving the segmentation and packaging of contaminated items in waste containers in order to minimize the number of cuts, maximize packaging density, and reduce worker radiation exposure. A three-dimensional, computer simulated, facility model of the contaminated items are created. The contaminated items are differentiated. The optimal location, orientation and sequence of the segmentation and packaging of the contaminated items is determined using the simulated model, the algorithms, and various constraints including container limitations. The cut locations and orientations are transposed to the simulated model. The contaminated items are actually segmented and packaged. The segmentation and packaging may be simulated beforehand. In addition, the contaminated items may be cataloged and recorded. 3 figs.

  16. Vacuum Packaging for Microelectromechanical Systems (MEMS)

    DTIC Science & Technology

    2002-10-01

    The Vacuum Packaging for MEMS Program focused on the development of an integrated set of packaging technologies which in totality provide a low cost...high volume product-neutral vacuum packaging capability which addresses all MEMS vacuum packaging requirements. The program balanced the need for...near term component and wafer-level vacuum packaging with the development of advanced high density wafer-level packaging solutions. Three vacuum

  17. Safety evaluation for packaging (onsite) concrete-lined waste packaging

    SciTech Connect

    Romano, T.

    1997-09-25

    The Pacific Northwest National Laboratory developed a package to ship Type A, non-transuranic, fissile excepted quantities of liquid or solid radioactive material and radioactive mixed waste to the Central Waste Complex for storage on the Hanford Site.

  18. Packaging of solid state devices

    DOEpatents

    Glidden, Steven C.; Sanders, Howard D.

    2006-01-03

    A package for one or more solid state devices in a single module that allows for operation at high voltage, high current, or both high voltage and high current. Low thermal resistance between the solid state devices and an exterior of the package and matched coefficient of thermal expansion between the solid state devices and the materials used in packaging enables high power operation. The solid state devices are soldered between two layers of ceramic with metal traces that interconnect the devices and external contacts. This approach provides a simple method for assembling and encapsulating high power solid state devices.

  19. Parallel optical sampler

    DOEpatents

    Tauke-Pedretti, Anna; Skogen, Erik J; Vawter, Gregory A

    2014-05-20

    An optical sampler includes first and second 1×n optical beam splitters splitting an input optical sampling signal and an optical analog input signal into n parallel channels, respectively, a plurality of optical delay elements providing n parallel delayed input optical sampling signals, n photodiodes converting the n parallel optical analog input signals into n respective electrical output signals, and n optical modulators modulating the input optical sampling signal or the optical analog input signal by the respective electrical output signals, and providing n successive optical samples of the optical analog input signal. A plurality of output photodiodes and eADCs convert the n successive optical samples to n successive digital samples. The optical modulator may be a photodiode-interconnected Mach-Zehnder modulator. A method of sampling the optical analog input signal is disclosed.

  20. Microelectronics packaging research directions for aerospace applications

    NASA Technical Reports Server (NTRS)

    Galbraith, L.

    2003-01-01

    The Roadmap begins with an assessment of needs from the microelectronics for aerospace applications viewpoint. Needs Assessment is divided into materials, packaging components, and radiation characterization of packaging.

  1. Electrical Performance of a High Temperature 32-I/O HTCC Alumina Package

    NASA Technical Reports Server (NTRS)

    Chen, Liang-Yu; Neudeck, Philip G.; Spry, David J.; Beheim, Glenn M.; Hunter, Gary W.

    2016-01-01

    A high temperature co-fired ceramic (HTCC) alumina material was previously electrically tested at temperatures up to 550 C, and demonstrated improved dielectric performance at high temperatures compared with the 96% alumina substrate that we used before, suggesting its potential use for high temperature packaging applications. This paper introduces a prototype 32-I/O (input/output) HTCC alumina package with platinum conductor for 500 C low-power silicon carbide (SiC) integrated circuits. The design and electrical performance of this package, including parasitic capacitance and parallel conductance of neighboring I/Os from 100 Hz to 1 MHz in a temperature range from room temperature to 550 C, are discussed in detail. The parasitic capacitance and parallel conductance of this package in the entire frequency and temperature ranges measured do not exceed 1.5 pF and 0.05 microsiemens, respectively. SiC integrated circuits using this package and compatible printed circuit board have been successfully tested at 500 C for over 3736 hours continuously, and at 700 C for over 140 hours. Some test examples of SiC integrated circuits with this packaging system are presented. This package is key to prolonged T greater than or equal to 500 C operational testing of the new generation of SiC high temperature integrated circuits and other devices currently under development at NASA Glenn Research Center.

  2. CRUNCH_PARALLEL

    SciTech Connect

    Shumaker, Dana E.; Steefel, Carl I.

    2016-06-21

    The code CRUNCH_PARALLEL is a parallel version of the CRUNCH code. CRUNCH code version 2.0 was previously released by LLNL (UCRL-CODE-200063). Crunch is a general purpose reactive transport code developed by Carl Steefel and Yabusaki (Steefel and Yabusaki, 1996). The code handles non-isothermal transport and reaction in one, two, and three dimensions. The reaction algorithm is generic in form, handling an arbitrary number of aqueous and surface complexation reactions as well as mineral dissolution/precipitation. A standardized database is used containing thermodynamic and kinetic data. The code includes advective, dispersive, and diffusive transport.

  3. The NAS Parallel Benchmarks

    SciTech Connect

    Bailey, David H.

    2009-11-15

    The NAS Parallel Benchmarks (NPB) are a suite of parallel computer performance benchmarks. They were originally developed at the NASA Ames Research Center in 1991 to assess high-end parallel supercomputers. Although they are no longer used as widely as they once were for comparing high-end system performance, they continue to be studied and analyzed a great deal in the high-performance computing community. The acronym 'NAS' originally stood for the Numerical Aerodynamic Simulation Program at NASA Ames. The name of this organization was subsequently changed to the Numerical Aerospace Simulation Program, and more recently to the NASA Advanced Supercomputing Center, although the acronym remains 'NAS.' The developers of the original NPB suite were David H. Bailey, Eric Barszcz, John Barton, David Browning, Russell Carter, Leo Dagum, Rod Fatoohi, Samuel Fineberg, Paul Frederickson, Thomas Lasinski, Rob Schreiber, Horst Simon, V. Venkatakrishnan and Sisira Weeratunga. The original NAS Parallel Benchmarks consisted of eight individual benchmark problems, each of which focused on some aspect of scientific computing. The principal focus was in computational aerophysics, although most of these benchmarks have much broader relevance, since in a much larger sense they are typical of many real-world scientific computing applications. The NPB suite grew out of the need for a more rational procedure to select new supercomputers for acquisition by NASA. The emergence of commercially available highly parallel computer systems in the late 1980s offered an attractive alternative to the parallel vector supercomputers that had been the mainstay of high-end scientific computing. However, the introduction of highly parallel systems was accompanied by a regrettable level of hype, not only on the part of the commercial vendors but even, in some cases, by scientists using the systems. As a result, it was difficult to discern whether the new systems offered any fundamental performance advantage.

  4. Highly parallel computation

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.; Tichy, Walter F.

    1990-01-01

    Among the highly parallel computing architectures required for advanced scientific computation, those designated 'MIMD' and 'SIMD' have yielded the best results to date. The present development-status evaluation shows that neither architecture has attained a decisive advantage in the treatment of most near-homogeneous problems; for problems involving numerous dissimilar parts, however, such currently speculative architectures as 'neural networks' or 'data flow' machines may be required. Data flow computers are the most practical form of MIMD fine-grained parallel computers yet conceived; they automatically solve the problem of assigning virtual processors to the real processors in the machine.

  5. Adaptive parallel logic networks

    NASA Technical Reports Server (NTRS)

    Martinez, Tony R.; Vidal, Jacques J.

    1988-01-01

    Adaptive, self-organizing concurrent systems (ASOCS) that combine self-organization with massive parallelism for such applications as adaptive logic devices, robotics, process control, and system malfunction management, are presently discussed. In ASOCS, an adaptive network composed of many simple computing elements operating in combinational and asynchronous fashion is used and problems are specified by presenting if-then rules to the system in the form of Boolean conjunctions. During data processing, which is a different operational phase from adaptation, the network acts as a parallel hardware circuit.

  6. Recent progress and advances in iterative software (including parallel aspects)

    SciTech Connect

    Carey, G.; Young, D.M.; Kincaid, D.

    1994-12-31

    The purpose of the workshop is to provide a forum for discussion of the current state of iterative software packages. Of particular interest is software for large scale engineering and scientific applications, especially for distributed parallel systems. However, the authors will also review the state of software development for conventional architectures. This workshop will complement the other proposed workshops on iterative BLAS kernels and applications. The format for the workshop is as follows: To provide some structure, there will be brief presentations, each of less than five minutes duration and dealing with specific facets of the subject. These will be designed to focus the discussion and to stimulate an exchange with the participants. Issues to be covered include: The evolution of iterative packages, current state of the art, the parallel computing challenge, applications viewpoint, standards, and future directions and open problems.

  7. Xyce™ Parallel Electronic Simulator

    SciTech Connect

    2013-10-03

    The Xyce Parallel Electronic Simulator simulates electronic circuit behavior in DC, AC, HB, MPDE and transient mode using standard analog (DAE) and/or device (PDE) device models, including several age- and radiation-aware devices. It supports a variety of computing platforms, both serial and parallel. Lastly, it uses a variety of modern solution algorithms, including dynamic parallel load-balancing and iterative solvers. Xyce is primarily used to simulate the voltage and current behavior of a circuit network (a network of electronic devices connected via a conductive network). As a tool, it is mainly used for the design and analysis of electronic circuits. Kirchhoff's conservation laws are enforced over a network using modified nodal analysis. This results in a set of differential algebraic equations (DAEs). The resulting nonlinear problem is solved iteratively using a fully coupled Newton method, which in turn results in a linear system that is solved by either a standard sparse-direct solver or iteratively using Trilinos linear solver packages, also developed at Sandia National Laboratories.
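    The nodal-analysis-to-Newton pipeline described above can be sketched on a toy one-node circuit (a source, a resistor, and a diode). The component values and helper names below are illustrative assumptions, not Xyce's actual API:

    ```python
    import math

    # Toy nodal analysis: a 5 V source through a 1 kOhm resistor feeding a
    # diode to ground. Values are illustrative, not taken from Xyce.
    VS, R = 5.0, 1e3          # source voltage [V], resistance [ohm]
    IS, VT = 1e-12, 0.025     # diode saturation current [A], thermal voltage [V]

    def f(v):
        # Kirchhoff current law at the single node: resistor + diode currents.
        return (v - VS) / R + IS * (math.exp(v / VT) - 1.0)

    def fprime(v):
        # Analytic Jacobian (1x1 here).
        return 1.0 / R + (IS / VT) * math.exp(v / VT)

    def newton(v=0.5, tol=1e-12, max_iter=100):
        # Fully coupled Newton iteration on the nonlinear nodal equation.
        for _ in range(max_iter):
            dv = f(v) / fprime(v)
            v -= dv
            if abs(dv) < tol:
                break
        return v

    v = newton()
    print(f"node voltage = {v:.4f} V, residual = {f(v):.2e}")
    ```

    In a real circuit the scalar derivative becomes a sparse Jacobian matrix, and the linear solve inside each Newton step is where the sparse-direct or iterative solver enters.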

  8. New Packaging for Amplifier Slabs

    SciTech Connect

    Riley, M.; Thorsness, C.; Suratwala, T.; Steele, R.; Rogowski, G.

    2015-03-18

    The following memo provides a discussion and detailed procedure for a new finished amplifier slab shipping and storage container. The new package is designed to maintain an environment of <5% RH to minimize weathering.

  9. Spack: the Supercomputing Package Manager

    SciTech Connect

    Gamblin, T.

    2013-11-09

    The HPC software ecosystem is growing larger and more complex, but software distribution mechanisms have not kept up with this trend. Tools, libraries, and applications need to run on multiple platforms and build with multiple compilers. Increasingly, packages leverage common software components, and building any one component requires building all of its dependencies. In HPC environments, ABI-incompatible interfaces (like MPI), binary-incompatible compilers, and cross-compiled environments converge to make the build process a combinatoric nightmare. This obstacle deters many users from adopting useful tools, and others waste countless hours building and rebuilding tools. Many package managers exist to solve these problems for typical desktop environments, but none suits the unique needs of supercomputing facilities or users. To address these problems, we have developed Spack, a package manager that eases the task of managing software for end-users, across multiple platforms, package versions, compilers, and ABI incompatibilities.
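    The "building any one component requires building all of its dependencies" problem reduces to a topological ordering of the dependency graph. The sketch below uses a hypothetical graph and is not Spack's actual concretizer:

    ```python
    # Hypothetical dependency graph: package -> packages it depends on.
    # (Names are illustrative; this is not Spack's real resolver.)
    deps = {
        "myapp": ["hdf5", "boost"],
        "hdf5":  ["mpi", "zlib"],
        "boost": ["zlib"],
        "mpi":   [],
        "zlib":  [],
    }

    def build_order(pkg, deps, done=None, order=None):
        """Post-order DFS: every dependency is built before its dependents."""
        if done is None:
            done, order = set(), []
        for d in deps[pkg]:
            if d not in done:
                build_order(d, deps, done, order)
        if pkg not in done:
            done.add(pkg)
            order.append(pkg)
        return order

    order = build_order("myapp", deps)
    print(order)  # → ['mpi', 'zlib', 'hdf5', 'boost', 'myapp']
    ```

    Shared dependencies like zlib are built exactly once; the combinatorics Spack additionally manages come from building this same graph per compiler, version, and ABI variant.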

  10. High Frequency Electronic Packaging Technology

    NASA Technical Reports Server (NTRS)

    Herman, M.; Lowry, L.; Lee, K.; Kolawa, E.; Tulintseff, A.; Shalkhauser, K.; Whitaker, J.; Piket-May, M.

    1994-01-01

    Commercial and government communication, radar, and information systems face the challenge of cost and mass reduction via the application of advanced packaging technology. A majority of both government and industry support has been focused on low frequency digital electronics.

  11. Handling difficult materials: Aseptic packaging

    SciTech Connect

    Lieb, K.

    1994-03-01

    Since aseptic packages, or drink boxes, were introduced in the US in the early 1980s, they have been praised for their convenience and berated for their lack of recyclability. As a result, aseptic packaging collection has been linked with that of milk cartons to increase the volume. The intervening years since the introduction of aseptic packaging have seen the drink box industry aggressively trying to create a recycling market for the boxes. Communities and schools have initiated programs, and recycling firms have allocated resources to see whether recycling aseptic packaging can work. Drink boxes are now recycled in 2.3 million homes in 15 states, and in 1,655 schools in 17 states. They are typically collected in school and curbside programs with other polyethylene-coated (laminated) paperboard products such as milk cartons, and then baled and shipped to five major paper companies for recycling at eight facilities.

  12. Packaged bulk micromachined triglyceride biosensor

    NASA Astrophysics Data System (ADS)

    Mohanasundaram, S. V.; Mercy, S.; Harikrishna, P. V.; Rani, Kailash; Bhattacharya, Enakshi; Chadha, Anju

    2010-02-01

    Estimation of triglyceride concentration is important for the health and food industries. Use of solid state biosensors like Electrolyte Insulator Semiconductor Capacitors (EISCAP) ensures ease of operation with good accuracy and sensitivity compared to conventional sensors. In this paper we report on packaging of miniaturized EISCAP sensors on silicon. The packaging involves glass-to-silicon bonding using adhesive. Since this kind of packaging is done at room temperature, it does not damage the thin dielectric layers on the silicon wafer, unlike the high-temperature anodic bonding technique, and can be used for sensors with immobilized enzyme without denaturing the enzyme. The packaging also involves a Teflon capping arrangement which helps in easy handling of the bio-analyte solutions. The capping solves two problems: first, it ensures that enzyme immobilization happens only on one pit, and second, it helps with easy transport of the bio-analyte into the sensor pit for measurements.

  13. A portable implementation of ARPACK for distributed memory parallel architectures

    SciTech Connect

    Maschhoff, K.J.; Sorensen, D.C.

    1996-12-31

    ARPACK is a package of Fortran 77 subroutines which implement the Implicitly Restarted Arnoldi Method used for solving large sparse eigenvalue problems. A parallel implementation of ARPACK is presented which is portable across a wide range of distributed memory platforms and requires minimal changes to the serial code. The communication layers used for message passing are the Basic Linear Algebra Communication Subprograms (BLACS) developed for the ScaLAPACK project and Message Passing Interface(MPI).
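    The Implicitly Restarted Arnoldi Method itself is too involved to reproduce here, but the family of iterative eigensolvers it belongs to can be illustrated by its simplest relative, power iteration, which also builds the solution from repeated matrix-vector products. The matrix below is illustrative:

    ```python
    # Power iteration: the simplest iterative eigensolver, shown only to
    # illustrate the matrix-vector-product style of method that ARPACK's
    # Implicitly Restarted Arnoldi Method generalizes. Matrix is illustrative.
    A = [[4.0, 1.0, 0.0],
         [1.0, 3.0, 1.0],
         [0.0, 1.0, 2.0]]

    def matvec(A, x):
        # The only access to A the method needs -- the same property that
        # makes Arnoldi-type methods attractive for large sparse problems.
        return [sum(a * xi for a, xi in zip(row, x)) for row in A]

    def power_iteration(A, iters=200):
        x = [1.0, 1.0, 1.0]
        lam = 0.0
        for _ in range(iters):
            y = matvec(A, x)
            lam = max(abs(v) for v in y)   # infinity-norm eigenvalue estimate
            x = [v / lam for v in y]
        return lam, x

    lam, x = power_iteration(A)
    print(f"dominant eigenvalue ~ {lam:.4f}")
    ```

    For this matrix the exact dominant eigenvalue is 3 + sqrt(3). Arnoldi methods improve on this by keeping an orthogonal basis of all the iterates, which yields several eigenpairs at once and far faster convergence.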

  14. Packaging Review Guide for Reviewing Safety Analysis Reports for Packagings

    SciTech Connect

    DiSabatino, A; Biswas, D; DeMicco, M; Fisher, L E; Hafner, R; Haslam, J; Mok, G; Patel, C; Russell, E

    2007-04-12

    This Packaging Review Guide (PRG) provides guidance for Department of Energy (DOE) review and approval of packagings to transport fissile and Type B quantities of radioactive material. It fulfills, in part, the requirements of DOE Order 460.1B for the Headquarters Certifying Official to establish standards and to provide guidance for the preparation of Safety Analysis Reports for Packagings (SARPs). This PRG is intended for use by the Headquarters Certifying Official and his or her review staff, DOE Secretarial offices, operations/field offices, and applicants for DOE packaging approval. This PRG is generally organized at the section level in a format similar to that recommended in Regulatory Guide 7.9 (RG 7.9). One notable exception is the addition of Section 9 (Quality Assurance), which is not included as a separate chapter in RG 7.9. Within each section, this PRG addresses the technical and regulatory bases for the review, the manner in which the review is accomplished, and findings that are generally applicable for a package that meets the approval standards. The primary objectives of this PRG are to: (1) Summarize the regulatory requirements for package approval; (2) Describe the technical review procedures by which DOE determines that these requirements have been satisfied; (3) Establish and maintain the quality and uniformity of reviews; (4) Define the base from which to evaluate proposed changes in scope.

  15. Watermarking spot colors in packaging

    NASA Astrophysics Data System (ADS)

    Reed, Alastair; Filler, Tomáš; Falkenstern, Kristyn; Bai, Yang

    2015-03-01

    In January 2014, Digimarc announced Digimarc® Barcode for the packaging industry to improve the check-out efficiency and customer experience for retailers. Digimarc Barcode is a machine readable code that carries the same information as a traditional Universal Product Code (UPC) and is introduced by adding a robust digital watermark to the package design. It is imperceptible to the human eye but can be read by a modern barcode scanner at the Point of Sale (POS) station. Compared to a traditional linear barcode, Digimarc Barcode covers the whole package with minimal impact on the graphic design. This significantly improves the Items per Minute (IPM) metric, which retailers use to track checkout efficiency since it closely relates to their profitability. Increasing IPM by a few percent could lead to potential savings of millions of dollars for retailers, giving them a strong incentive to add the Digimarc Barcode to their packages. Testing performed by Digimarc showed increases in IPM of at least 33% using the Digimarc Barcode, compared to using a traditional barcode. A method of watermarking print-ready image data used in the commercial packaging industry is described. A significant proportion of packages are printed using spot colors; therefore spot colors need to be supported by an embedder for Digimarc Barcode. Digimarc Barcode supports the PANTONE spot color system, which is commonly used in the packaging industry. The Digimarc Barcode embedder allows a user to insert the UPC code in an image while minimizing perceptibility to the Human Visual System (HVS). The Digimarc Barcode is inserted in the printing ink domain, using an Adobe Photoshop plug-in as the last step before printing. Since Photoshop is an industry standard widely used by pre-press shops in the packaging industry, a Digimarc Barcode can be easily inserted and proofed.

  16. Polyhydroxyalkanoates (PHA) Bioplastic Packaging Materials

    DTIC Science & Technology

    2010-05-01

    Final report for SERDP Project WP-1478, Polyhydroxyalkanoates (PHA) Bioplastic Packaging Materials, May 2010, by Dr. Chris Schwier, Metabolix. ... polymers were produced using blends of branched, long chain-length PHA polymers with linear PHA polymers. Subject terms: bioplastic.

  17. Parallel Coordinate Axes.

    ERIC Educational Resources Information Center

    Friedlander, Alex; And Others

    1982-01-01

    Several methods of numerical mappings other than the usual cartesian coordinate system are considered. Some examples using parallel axes representation, which are seen to lead to aesthetically pleasing or interesting configurations, are presented. Exercises with alternative representations can stimulate pupil imagination and exploration in…

  18. Parallel programming with PCN

    SciTech Connect

    Foster, I.; Tuecke, S.

    1993-01-01

    PCN is a system for developing and executing parallel programs. It comprises a high-level programming language, tools for developing and debugging programs in this language, and interfaces to Fortran and C that allow the reuse of existing code in multilingual parallel programs. Programs developed using PCN are portable across many different workstations, networks, and parallel computers. This document provides all the information required to develop parallel programs with the PCN programming system. It includes both tutorial and reference material. It also presents the basic concepts that underlie PCN, particularly where these are likely to be unfamiliar to the reader, and provides pointers to other documentation on the PCN language, programming techniques, and tools. PCN is in the public domain. The latest version of both the software and this manual can be obtained by anonymous ftp from Argonne National Laboratory in the directory pub/pcn at info.mcs.anl.gov (cf. Appendix A). This version of this document describes PCN version 2.0, a major revision of the PCN programming system. It supersedes earlier versions of this report.

  19. Parallel Dislocation Simulator

    SciTech Connect

    2006-10-30

    ParaDiS is software capable of simulating the motion, evolution, and interaction of dislocation networks in single crystals using massively parallel computer architectures. The software is capable of outputting the stress-strain response of a single crystal whose plastic deformation is controlled by the dislocation processes.

  20. Massively parallel processor computer

    NASA Technical Reports Server (NTRS)

    Fung, L. W. (Inventor)

    1983-01-01

    An apparatus for processing multidimensional data with strong spatial characteristics, such as raw image data, characterized by a large number of parallel data streams in an ordered array is described. It comprises a large number (e.g., 16,384 in a 128 x 128 array) of parallel processing elements operating simultaneously and independently on single bit slices of a corresponding array of incoming data streams under control of a single set of instructions. Each of the processing elements comprises a bidirectional data bus in communication with a register for storing single bit slices together with a random access memory unit and associated circuitry, including a binary counter/shift register device, for performing logical and arithmetical computations on the bit slices, and an I/O unit for interfacing the bidirectional data bus with the data stream source. The massively parallel processor architecture enables very high speed processing of large amounts of ordered parallel data, including spatial translation by shifting or sliding of bits vertically or horizontally to neighboring processing elements.
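    The spatial-translation operation described above, in which every processing element simultaneously receives a bit from a neighboring element, can be modeled in a few lines. This is a toy software model of the data movement, not the actual hardware:

    ```python
    # SIMD-style neighbor shift: every 'processing element' in the grid
    # simultaneously takes the bit held by its west neighbor; elements on
    # the west edge receive a fill bit. (Toy model of the MPP's spatial
    # translation, not the real 128 x 128 machine.)
    def shift_east(grid, fill=0):
        return [[fill] + row[:-1] for row in grid]

    g = [[1, 0, 1, 1],
         [0, 1, 0, 0]]
    g2 = shift_east(g)
    print(g2)  # → [[0, 1, 0, 1], [0, 0, 1, 0]]
    ```

    Because every element applies the same rule in lockstep, the whole-array shift costs one step regardless of grid size, which is the source of the architecture's speed on ordered spatial data.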

  1. High performance parallel architectures

    SciTech Connect

    Anderson, R.E.

    1989-09-01

    In this paper the author describes current high performance parallel computer architectures. A taxonomy is presented to show computer architecture from the user programmer's point-of-view. The effects of the taxonomy upon the programming model are described. Some current architectures are described with respect to the taxonomy. Finally, some predictions about future systems are presented. 5 refs., 1 fig.

  2. Parallel fast gauss transform

    SciTech Connect

    Sampath, Rahul S; Sundar, Hari; Veerapaneni, Shravan

    2010-01-01

    We present fast adaptive parallel algorithms to compute the sum of N Gaussians at N points. Direct sequential computation of this sum would take O(N^2) time. The parallel time complexity estimates for our algorithms are O(N/n_p) for uniform point distributions and O((N/n_p) log(N/n_p) + n_p log n_p) for non-uniform distributions using n_p CPUs. We incorporate a plane-wave representation of the Gaussian kernel which permits 'diagonal translation'. We use parallel octrees and a new scheme for translating the plane-waves to efficiently handle non-uniform distributions. Computing the transform to six-digit accuracy at 120 billion points took approximately 140 seconds using 4096 cores on the Jaguar supercomputer. Our implementation is 'kernel-independent' and can handle other 'Gaussian-type' kernels even when an explicit analytic expression for the kernel is not known. These algorithms form a new class of core computational machinery for solving parabolic PDEs on massively parallel architectures.
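    The quantity these algorithms accelerate is the discrete Gauss transform, G(y_j) = sum_i q_i exp(-|y_j - x_i|^2 / h^2). A direct O(N^2) reference evaluation can be sketched as follows (points, weights, and bandwidth are illustrative):

    ```python
    import math

    # Direct O(N^2) discrete Gauss transform in 1-D: for each target y,
    # sum the contributions of every weighted Gaussian source.
    def direct_gauss_transform(sources, weights, targets, h):
        out = []
        for y in targets:
            s = 0.0
            for x, q in zip(sources, weights):
                s += q * math.exp(-((y - x) ** 2) / (h * h))
            out.append(s)
        return out

    xs = [0.0, 0.5, 1.0]   # source points (illustrative)
    qs = [1.0, 2.0, 1.0]   # source weights
    ys = [0.25, 0.75]      # target points
    vals = direct_gauss_transform(xs, qs, ys, h=0.5)
    print(vals)
    ```

    The fast algorithms in the abstract replace the inner loop with truncated plane-wave expansions translated through an octree, trading the quadratic cost for near-linear work at a controlled accuracy.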

  3. Parallel hierarchical radiosity rendering

    SciTech Connect

    Carter, M.

    1993-07-01

    In this dissertation, the step-by-step development of a scalable parallel hierarchical radiosity renderer is documented. First, a new look is taken at the traditional radiosity equation, and a new form is presented in which the matrix of linear system coefficients is transformed into a symmetric matrix, thereby simplifying the problem and enabling a new solution technique to be applied. Next, the state-of-the-art hierarchical radiosity methods are examined for their suitability to parallel implementation, and scalability. Significant enhancements are also discovered which both improve their theoretical foundations and improve the images they generate. The resultant hierarchical radiosity algorithm is then examined for sources of parallelism, and for an architectural mapping. Several architectural mappings are discussed. A few key algorithmic changes are suggested during the process of making the algorithm parallel. Next, the performance, efficiency, and scalability of the algorithm are analyzed. The dissertation closes with a discussion of several ideas which have the potential to further enhance the hierarchical radiosity method, or provide an entirely new forum for the application of hierarchical methods.

  4. Parallel hierarchical global illumination

    SciTech Connect

    Snell, Quinn O.

    1997-10-08

    Solving the global illumination problem is equivalent to determining the intensity of every wavelength of light in all directions at every point in a given scene. The complexity of the problem has led researchers to use approximation methods for solving the problem on serial computers. Rather than using an approximation method, such as backward ray tracing or radiosity, the authors have chosen to solve the Rendering Equation by direct simulation of light transport from the light sources. This paper presents an algorithm that solves the Rendering Equation to any desired accuracy, and can be run in parallel on distributed memory or shared memory computer systems with excellent scaling properties. It appears superior in both speed and physical correctness to recent published methods involving bidirectional ray tracing or hybrid treatments of diffuse and specular surfaces. Like progressive radiosity methods, it dynamically refines the geometry decomposition where required, but does so without the excessive storage requirements for ray histories. The algorithm, called Photon, produces a scene which converges to the global illumination solution. This amounts to a huge task for a 1997-vintage serial computer, but using the power of a parallel supercomputer significantly reduces the time required to generate a solution. Currently, Photon can be run on most parallel environments from a shared memory multiprocessor to a parallel supercomputer, as well as on clusters of heterogeneous workstations.

  5. Parallel Multigrid Equation Solver

    SciTech Connect

    Adams, Mark

    2001-09-07

    Prometheus is a fully parallel multigrid equation solver for matrices that arise in unstructured grid finite element applications. It includes a geometric and an algebraic multigrid method and has solved problems of up to 76 million degrees of freedom, problems in linear elasticity on the ASCI Blue Pacific and ASCI Red machines.
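    The multigrid idea, smooth the error on the fine grid and correct it from a coarser one, can be sketched with a toy geometric two-grid cycle for the 1-D Poisson equation. This is an illustrative sketch under simplified assumptions (uniform grid, Jacobi smoothing, coarse solve approximated by extra smoothing), not Prometheus itself:

    ```python
    # Toy two-grid correction cycle for -u'' = f on (0,1) with zero
    # Dirichlet boundary values. Grid sizes and sweep counts are illustrative.
    def residual(u, f, h):
        """r = f - A u for the standard 3-point Laplacian."""
        n = len(u)
        return [f[i] - (2.0 * u[i]
                        - (u[i - 1] if i > 0 else 0.0)
                        - (u[i + 1] if i < n - 1 else 0.0)) / (h * h)
                for i in range(n)]

    def jacobi(u, f, h, sweeps, w=2.0 / 3.0):
        """Weighted Jacobi smoothing: u <- u + w * D^-1 * r, with D = 2/h^2."""
        for _ in range(sweeps):
            r = residual(u, f, h)
            u = [u[i] + w * (h * h / 2.0) * r[i] for i in range(len(u))]
        return u

    def two_grid(u, f, h):
        m = (len(u) - 1) // 2              # coarse grid: every other fine point
        u = jacobi(u, f, h, 3)             # pre-smooth
        r = residual(u, f, h)
        rc = [(r[2*j] + 2.0 * r[2*j+1] + r[2*j+2]) / 4.0   # full weighting
              for j in range(m)]
        ec = jacobi([0.0] * m, rc, 2.0 * h, 100)  # coarse 'solve' by smoothing
        for j in range(m):                 # prolong (linear interp) and correct
            u[2*j + 1] += ec[j]
            u[2*j] += ((ec[j - 1] if j > 0 else 0.0) + ec[j]) / 2.0
        u[2*m] += ec[m - 1] / 2.0
        return jacobi(u, f, h, 3)          # post-smooth

    n, h = 31, 1.0 / 32.0
    f = [1.0] * n
    u = [0.0] * n
    for _ in range(10):
        u = two_grid(u, f, h)
    res_max = max(abs(v) for v in residual(u, f, h))
    print(f"max residual after 10 cycles: {res_max:.2e}")
    ```

    A production solver like Prometheus recurses this correction over many levels and, for unstructured meshes, builds the coarse operators algebraically rather than from nested grids.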

  6. Packaging food for radiation processing

    NASA Astrophysics Data System (ADS)

    Komolprasert, Vanee

    2016-12-01

    Irradiation can play an important role in reducing pathogens that cause food borne illness. Food processors and food safety experts prefer that food be irradiated after packaging to prevent post-irradiation contamination. Food irradiation has been studied for the last century. However, the implementation of irradiation on prepackaged food still faces challenges on how to assess the suitability and safety of these packaging materials used during irradiation. Irradiation is known to induce chemical changes to the food packaging materials resulting in the formation of breakdown products, so called radiolysis products (RP), which may migrate into foods and affect the safety of the irradiated foods. Therefore, the safety of the food packaging material (both polymers and adjuvants) must be determined to ensure safety of irradiated packaged food. Evaluating the safety of food packaging materials presents technical challenges because of the range of possible chemicals generated by ionizing radiation. These challenges and the U.S. regulations on food irradiation are discussed in this article.

  7. Xyce™ Parallel Electronic Simulator Users' Guide, Version 6.5.

    SciTech Connect

    Keiter, Eric R.; Aadithya, Karthik V.; Mei, Ting; Russo, Thomas V.; Schiek, Richard L.; Sholander, Peter E.; Thornquist, Heidi K.; Verley, Jason C.

    2016-06-01

    This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state-of-the-art in the following areas: Capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors). This includes support for most popular parallel and serial computers. A differential-algebraic-equation (DAE) formulation, which better isolates the device model package from solver algorithms. This allows one to develop new types of analysis without requiring the implementation of analysis-specific device models. Device models that are specifically tailored to meet Sandia's needs, including some radiation-aware devices (for Sandia users only). Object-oriented code design and implementation using modern coding practices. Xyce is a parallel code in the most general sense of the phrase -- a message passing parallel implementation -- which allows it to run efficiently on a wide range of computing platforms. These include serial, shared-memory and distributed-memory parallel platforms. Attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows. The information herein is subject to change without notice. Copyright © 2002-2016 Sandia Corporation. All rights reserved.

  8. Rapid Active Sampling Package

    NASA Technical Reports Server (NTRS)

    Peters, Gregory

    2010-01-01

    A field-deployable, battery-powered Rapid Active Sampling Package (RASP), originally designed for sampling strong materials during lunar and planetary missions, shows strong utility for terrestrial geological use. The technology is proving to be simple and effective for sampling and processing materials of strength. Although it was originally intended for planetary and lunar applications, the RASP is very useful as a powered hand tool for geologists and the mining industry to quickly sample and process rocks in the field on Earth. The RASP allows geologists to surgically acquire samples of rock for later laboratory analysis. This tool, roughly the size of a wrench, allows the user to cut away swaths of weathering rinds, revealing pristine rock surfaces for observation and subsequent sampling with the same tool. RASPing deeper (3.5 cm) exposes single rock strata in-situ. Where a geologist's hammer can only expose unweathered layers of rock, the RASP can do the same, and then has the added ability to capture and process samples into powder with particle sizes less than 150 microns, making them easier to analyze by XRD/XRF (X-ray diffraction/X-ray fluorescence). The tool uses a rotating rasp bit (or two counter-rotating bits) that resides inside or above the catch container. The container has an open slot to allow the bit to extend outside the container and to allow cuttings to enter and be caught. When the slot and rasp bit are in contact with a substrate, the bit is plunged into it in a matter of seconds to reach pristine rock. A user in the field may sample a rock multiple times at multiple depths in minutes, instead of having to cut out huge, heavy rock samples for transport back to a lab for analysis. Because of the speed and accuracy of the RASP, hundreds of samples can be taken in one day. RASP-acquired samples are small and easily carried. A user can characterize more area in less time than by using conventional methods. The field-deployable RASP used a Ni

  9. Prevention policies addressing packaging and packaging waste: Some emerging trends.

    PubMed

    Tencati, Antonio; Pogutz, Stefano; Moda, Beatrice; Brambilla, Matteo; Cacia, Claudia

    2016-10-01

Packaging waste is a major issue in several countries. Representing around 30-35% of the municipal solid waste generated yearly in industrialized countries, this waste stream has grown steadily over the years, even though specific recycling and recovery targets have been set, especially in Europe. Increasing attention is therefore being devoted to prevention measures and interventions. Filling a gap in the current literature, this explorative paper is a first attempt to map the increasingly important phenomenon of prevention policies in the packaging sector. Through theoretical sampling, 11 countries/states (7 in and 4 outside Europe) have been selected and analyzed by gathering and studying primary and secondary data. Results show evidence of three specific trends in packaging waste prevention policies: fostering the adoption of measures directed at improving packaging design and production through an extensive use of life cycle assessment; raising the awareness of final consumers by increasing the accountability of firms; and promoting collaborative efforts along the packaging supply chains.

  10. Experiences with different parallel programming paradigms for Monte Carlo particle transport leads to a portable toolkit for parallel Monte Carlo

    SciTech Connect

Martin, W.R.; Majumdar, A.; Rathkopf, J.A.; Litvin, M.

    1993-04-01

Monte Carlo particle transport is easy to implement on massively parallel computers relative to other methods of transport simulation. This paper describes experiences of implementing a realistic demonstration Monte Carlo code on a variety of parallel architectures. Our "pool of tasks" technique, which allows reproducibility from run to run regardless of the number of processors, is discussed. We present detailed timing studies of simulations performed on the 128 processor BBN-ACI TC2000 and preliminary timing results for the 32 processor Kendall Square Research KSR-1. Given sufficient workload to distribute across many computational nodes, the BBN achieves nearly linear speedup for a large number of nodes. The KSR, with which we have had less experience, performs poorly with more than ten processors. A simple model incorporating known causes of overhead accurately predicts observed behavior. A general-purpose communication and control package to facilitate the implementation of existing Monte Carlo packages is described together with timings on the BBN. This package adds insignificantly to the computational costs of parallel simulations.

  12. Think INSIDE the Box: Package Engineering

    ERIC Educational Resources Information Center

    Snyder, Mark; Painter, Donna

    2014-01-01

Most products people purchase, keep in their homes, and often discard are typically packaged in some way. Packaging is so prevalent in daily life that many of us take it for granted. That is by design: the expectation of good packaging is that it exists for the sake of the product. The primary purposes of any package (to contain, inform, display,…

  13. 7 CFR 58.626 - Packaging equipment.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 3 2010-01-01 2010-01-01 false Packaging equipment. 58.626 Section 58.626 Agriculture....626 Packaging equipment. Packaging equipment designed to mechanically fill and close single service... Standards for Equipment for Packaging Frozen Desserts and Cottage Cheese. Quality Specifications for...

  14. 40 CFR 157.27 - Unit packaging.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 23 2010-07-01 2010-07-01 false Unit packaging. 157.27 Section 157.27 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS PACKAGING REQUIREMENTS FOR PESTICIDES AND DEVICES Child-Resistant Packaging § 157.27 Unit packaging. Pesticide...

  15. 49 CFR 173.29 - Empty packagings.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 2 2010-10-01 2010-10-01 false Empty packagings. 173.29 Section 173.29... SHIPMENTS AND PACKAGINGS Preparation of Hazardous Materials for Transportation § 173.29 Empty packagings. (a) General. Except as otherwise provided in this section, an empty packaging containing only the residue of...

  16. 9 CFR 381.144 - Packaging materials.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 9 Animals and Animal Products 2 2011-01-01 2011-01-01 false Packaging materials. 381.144 Section... Packaging materials. (a) Edible products may not be packaged in a container which is composed in whole or in... to health. All packaging materials must be safe for the intended use within the meaning of...

  17. 9 CFR 381.144 - Packaging materials.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 9 Animals and Animal Products 2 2012-01-01 2012-01-01 false Packaging materials. 381.144 Section... Packaging materials. (a) Edible products may not be packaged in a container which is composed in whole or in... to health. All packaging materials must be safe for the intended use within the meaning of...

  18. 9 CFR 381.144 - Packaging materials.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 9 Animals and Animal Products 2 2014-01-01 2014-01-01 false Packaging materials. 381.144 Section... Packaging materials. (a) Edible products may not be packaged in a container which is composed in whole or in... to health. All packaging materials must be safe for the intended use within the meaning of...

  19. 9 CFR 381.144 - Packaging materials.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 9 Animals and Animal Products 2 2010-01-01 2010-01-01 false Packaging materials. 381.144 Section... Packaging materials. (a) Edible products may not be packaged in a container which is composed in whole or in... to health. All packaging materials must be safe for the intended use within the meaning of...

  20. Green Packaging Management of Logistics Enterprises

    NASA Astrophysics Data System (ADS)

    Zhang, Guirong; Zhao, Zongjian

Starting from the connotation of green logistics management, we discuss the principles of green packaging and, at the two levels of government and enterprises, put forward specific management strategies. The management of green packaging can be promoted directly and indirectly by laws, regulations, taxation, institutional and other measures. The government can also direct new investment toward the development of green packaging materials, and establish specialized institutions to certify new packaging materials; standardization of packaging must also be accomplished through the power of the government. Large-scale business units can reduce the use of packaging materials through standardized, container-based packaging, and can develop and use green and easily recyclable packaging materials for proper packaging.

  1. Method of forming a package for mems-based fuel cell

    DOEpatents

    Morse, Jeffrey D.; Jankowski, Alan F.

    2004-11-23

A MEMS-based fuel cell package and method thereof is disclosed. The fuel cell package comprises seven layers: (1) a sub-package fuel reservoir interface layer, (2) an anode manifold support layer, (3) a fuel/anode manifold and resistive heater layer, (4) a Thick Film Microporous Flow Host Structure layer containing a fuel cell, (5) an air manifold layer, (6) a cathode manifold support structure layer, and (7) a cap. Fuel cell packages with more than one fuel cell are formed by positioning stacks of these layers in series and/or parallel. The fuel cell package materials such as a molded plastic or a ceramic green tape material can be patterned, aligned and stacked to form three dimensional microfluidic channels that provide electrical feedthroughs from various layers which are bonded together and mechanically support a MEMS-based miniature fuel cell. The package incorporates resistive heating elements to control the temperature of the fuel cell stack. The package is fired to form a bond between the layers and one or more microporous flow host structures containing fuel cells are inserted within the Thick Film Microporous Flow Host Structure layer of the package.

  2. Method of forming a package for MEMS-based fuel cell

    DOEpatents

    Morse, Jeffrey D; Jankowski, Alan F

    2013-05-21

    A MEMS-based fuel cell package and method thereof is disclosed. The fuel cell package comprises seven layers: (1) a sub-package fuel reservoir interface layer, (2) an anode manifold support layer, (3) a fuel/anode manifold and resistive heater layer, (4) a Thick Film Microporous Flow Host Structure layer containing a fuel cell, (5) an air manifold layer, (6) a cathode manifold support structure layer, and (7) a cap. Fuel cell packages with more than one fuel cell are formed by positioning stacks of these layers in series and/or parallel. The fuel cell package materials such as a molded plastic or a ceramic green tape material can be patterned, aligned and stacked to form three dimensional microfluidic channels that provide electrical feedthroughs from various layers which are bonded together and mechanically support a MEMS-based miniature fuel cell. The package incorporates resistive heating elements to control the temperature of the fuel cell stack. The package is fired to form a bond between the layers and one or more microporous flow host structures containing fuel cells are inserted within the Thick Film Microporous Flow Host Structure layer of the package.

  3. 75 FR 60333 - Hazardous Material; Miscellaneous Packaging Amendments

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-30

    ... Hazardous Material; Miscellaneous Packaging Amendments AGENCY: Pipeline and Hazardous Materials Safety... materials packages may be considered a bulk packaging. The September 1, 2006 NPRM definition for ``bulk... erroneously stated Large Packagings would contain hazardous materials without an intermediate packaging,...

  4. Parallel grid population

    DOEpatents

    Wald, Ingo; Ize, Santiago

    2015-07-28

    Parallel population of a grid with a plurality of objects using a plurality of processors. One example embodiment is a method for parallel population of a grid with a plurality of objects using a plurality of processors. The method includes a first act of dividing a grid into n distinct grid portions, where n is the number of processors available for populating the grid. The method also includes acts of dividing a plurality of objects into n distinct sets of objects, assigning a distinct set of objects to each processor such that each processor determines by which distinct grid portion(s) each object in its distinct set of objects is at least partially bounded, and assigning a distinct grid portion to each processor such that each processor populates its distinct grid portion with any objects that were previously determined to be at least partially bounded by its distinct grid portion.
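The two-pass scheme described in the patent abstract can be illustrated in a few lines. Below is a minimal, single-process Python sketch under stated assumptions: the function names are hypothetical, the grid is one-dimensional with objects represented as intervals, and the n "processors" are simulated by plain loops rather than real parallel workers.

```python
# Hypothetical sketch of the two-phase parallel grid population:
# phase 1, each worker finds which grid portion(s) bound each object
# in its object set; phase 2, each worker populates only its own
# grid portion with the objects found for it.

def partition(items, n):
    """Split items into n distinct, roughly equal sets."""
    return [items[i::n] for i in range(n)]

def populate_grid(objects, grid_size, n_workers):
    # Divide the 1-D grid [0, grid_size) into n distinct portions.
    bounds = [(p * grid_size // n_workers, (p + 1) * grid_size // n_workers)
              for p in range(n_workers)]
    object_sets = partition(objects, n_workers)

    # Phase 1: each object set is scanned by one worker, which records
    # the portions that at least partially bound each object (lo, hi).
    hits = [[] for _ in range(n_workers)]
    for obj_set in object_sets:
        for lo, hi in obj_set:
            for p, (b0, b1) in enumerate(bounds):
                if lo < b1 and hi > b0:      # object overlaps portion p
                    hits[p].append((lo, hi))

    # Phase 2: each worker populates its own distinct grid portion.
    return {p: sorted(hits[p]) for p in range(n_workers)}
```

An object straddling a portion boundary, such as `(2, 6)` on a size-10 grid split two ways, is correctly recorded in both portions.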

  5. Parallel Anisotropic Tetrahedral Adaptation

    NASA Technical Reports Server (NTRS)

    Park, Michael A.; Darmofal, David L.

    2008-01-01

An adaptive method that robustly produces high aspect ratio tetrahedra to a general 3D metric specification without introducing hybrid semi-structured regions is presented. The elemental operators and higher-level logic are described with their respective domain-decomposed parallelizations. An anisotropic tetrahedral grid adaptation scheme is demonstrated for 1000:1 stretching for a simple cube geometry. This form of adaptation is applicable to more complex domain boundaries via a cut-cell approach as demonstrated by a parallel 3D supersonic simulation of a complex fighter aircraft. To avoid the assumptions and approximations required to form a metric to specify adaptation, an approach is introduced that directly evaluates interpolation error. The grid is adapted to reduce and equidistribute this interpolation error calculation without the use of an intervening anisotropic metric. Direct interpolation error adaptation is illustrated for 1D and 3D domains.

  6. Parallel Subconvolution Filtering Architectures

    NASA Technical Reports Server (NTRS)

    Gray, Andrew A.

    2003-01-01

    These architectures are based on methods of vector processing and the discrete-Fourier-transform/inverse-discrete- Fourier-transform (DFT-IDFT) overlap-and-save method, combined with time-block separation of digital filters into frequency-domain subfilters implemented by use of sub-convolutions. The parallel-processing method implemented in these architectures enables the use of relatively small DFT-IDFT pairs, while filter tap lengths are theoretically unlimited. The size of a DFT-IDFT pair is determined by the desired reduction in processing rate, rather than on the order of the filter that one seeks to implement. The emphasis in this report is on those aspects of the underlying theory and design rules that promote computational efficiency, parallel processing at reduced data rates, and simplification of the designs of very-large-scale integrated (VLSI) circuits needed to implement high-order filters and correlators.
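The DFT-IDFT overlap-and-save method at the heart of these architectures can be sketched with NumPy. This is a serial reference implementation only, without the subfilter partitioning or VLSI considerations of the report; the function name and the `fft_size` parameter (the size of one DFT-IDFT pair) are assumptions for illustration.

```python
import numpy as np

def overlap_save(x, h, fft_size=64):
    """FIR filtering of signal x by taps h via the overlap-and-save method."""
    M = len(h)
    L = fft_size - M + 1                 # new input samples consumed per block
    H = np.fft.fft(h, fft_size)          # filter frequency response
    xp = np.concatenate([np.zeros(M - 1), np.asarray(x, float)])
    y = []
    for start in range(0, len(x), L):
        block = xp[start:start + fft_size]
        block = np.pad(block, (0, fft_size - len(block)))  # zero-pad last block
        yb = np.fft.ifft(np.fft.fft(block) * H).real
        y.append(yb[M - 1:])             # first M-1 outputs are circularly aliased
    return np.concatenate(y)[:len(x)]
```

Each block reuses the last M-1 samples of the previous one, so the circular convolution of a small DFT-IDFT pair reproduces the linear convolution at a processing rate reduced by the block factor.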

  7. Parallel multilevel preconditioners

    SciTech Connect

    Bramble, J.H.; Pasciak, J.E.; Xu, Jinchao.

    1989-01-01

    In this paper, we shall report on some techniques for the development of preconditioners for the discrete systems which arise in the approximation of solutions to elliptic boundary value problems. Here we shall only state the resulting theorems. It has been demonstrated that preconditioned iteration techniques often lead to the most computationally effective algorithms for the solution of the large algebraic systems corresponding to boundary value problems in two and three dimensional Euclidean space. The use of preconditioned iteration will become even more important on computers with parallel architecture. This paper discusses an approach for developing completely parallel multilevel preconditioners. In order to illustrate the resulting algorithms, we shall describe the simplest application of the technique to a model elliptic problem.
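The multilevel preconditioners of the paper are beyond a short snippet, but the preconditioned iteration they plug into can be. Below is a minimal preconditioned conjugate gradient routine in Python/NumPy, with a simple Jacobi (diagonal) preconditioner standing in for the multilevel one; the routine and its signature are illustrative, not from the paper.

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, maxiter=200):
    """Preconditioned conjugate gradients; M_inv applies the preconditioner."""
    x = np.zeros_like(b)
    r = b - A @ x                 # initial residual
    z = M_inv(r)                  # preconditioned residual
    p = z.copy()
    rz = r @ z
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p # update search direction
        rz = rz_new
    return x
```

For a 1-D discrete Laplacian, `M_inv = lambda r: r / np.diag(A)` gives the Jacobi-preconditioned iteration; a multilevel preconditioner would replace only that one callback.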

  8. Ultrascalable petaflop parallel supercomputer

    DOEpatents

    Blumrich, Matthias A.; Chen, Dong; Chiu, George; Cipolla, Thomas M.; Coteus, Paul W.; Gara, Alan G.; Giampapa, Mark E.; Hall, Shawn; Haring, Rudolf A.; Heidelberger, Philip; Kopcsay, Gerard V.; Ohmacht, Martin; Salapura, Valentina; Sugavanam, Krishnan; Takken, Todd

    2010-07-20

    A massively parallel supercomputer of petaOPS-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC) having up to four processing elements. The ASIC nodes are interconnected by multiple independent networks that optimally maximize the throughput of packet communications between nodes with minimal latency. The multiple networks may include three high-speed networks for parallel algorithm message passing including a Torus, collective network, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be collaboratively or independently utilized according to the needs or phases of an algorithm for optimizing algorithm processing performance. The use of a DMA engine is provided to facilitate message passing among the nodes without the expenditure of processing resources at the node.

  9. Homology, convergence and parallelism

    PubMed Central

    Ghiselin, Michael T.

    2016-01-01

    Homology is a relation of correspondence between parts of parts of larger wholes. It is used when tracking objects of interest through space and time and in the context of explanatory historical narratives. Homologues can be traced through a genealogical nexus back to a common ancestral precursor. Homology being a transitive relation, homologues remain homologous however much they may come to differ. Analogy is a relationship of correspondence between parts of members of classes having no relationship of common ancestry. Although homology is often treated as an alternative to convergence, the latter is not a kind of correspondence: rather, it is one of a class of processes that also includes divergence and parallelism. These often give rise to misleading appearances (homoplasies). Parallelism can be particularly hard to detect, especially when not accompanied by divergences in some parts of the body. PMID:26598721

  10. Parallel unstructured grid generation

    NASA Technical Reports Server (NTRS)

    Loehner, Rainald; Camberos, Jose; Merriam, Marshal

    1991-01-01

A parallel unstructured grid generation algorithm is presented and implemented on the Hypercube. Different processor hierarchies are discussed, and the appropriate hierarchies for mesh generation and mesh smoothing are selected. A domain-splitting algorithm for unstructured grids which tries to minimize the surface-to-volume ratio of each subdomain is described. This splitting algorithm is employed both for grid generation and grid smoothing. Results obtained on the Hypercube demonstrate the effectiveness of the algorithms developed.

  11. Development of Parallel GSSHA

    DTIC Science & Technology

    2013-09-01

Paul R. Eller, Jing-Ru C. Cheng, Aaron R. Byrd, Charles W. Downer, and Nawa Pradhan. Development of Parallel GSSHA. ERDC TR-13-8, Information Technology Laboratory, US Army Engineer Research and Development Center, September 2013. Approved for public release.

  12. Xyce parallel electronic simulator.

    SciTech Connect

    Keiter, Eric R; Mei, Ting; Russo, Thomas V.; Rankin, Eric Lamont; Schiek, Richard Louis; Thornquist, Heidi K.; Fixel, Deborah A.; Coffey, Todd S; Pawlowski, Roger P; Santarelli, Keith R.

    2010-05-01

This document is a reference guide to the Xyce Parallel Electronic Simulator and is a companion document to the Xyce Users Guide. The focus of this document is to list, as exhaustively as possible, device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial. Users who are new to circuit simulation are better served by the Xyce Users Guide.

  13. Massively Parallel Genetics.

    PubMed

    Shendure, Jay; Fields, Stanley

    2016-06-01

    Human genetics has historically depended on the identification of individuals whose natural genetic variation underlies an observable trait or disease risk. Here we argue that new technologies now augment this historical approach by allowing the use of massively parallel assays in model systems to measure the functional effects of genetic variation in many human genes. These studies will help establish the disease risk of both observed and potential genetic variants and to overcome the problem of "variants of uncertain significance."

  14. Parallel sphere rendering

    SciTech Connect

    Krogh, M.; Painter, J.; Hansen, C.

    1996-10-01

    Sphere rendering is an important method for visualizing molecular dynamics data. This paper presents a parallel algorithm that is almost 90 times faster than current graphics workstations. To render extremely large data sets and large images, the algorithm uses the MIMD features of the supercomputers to divide up the data, render independent partial images, and then finally composite the multiple partial images using an optimal method. The algorithm and performance results are presented for the CM-5 and the M.

  15. Implementation of Parallel Algorithms

    DTIC Science & Technology

    1993-06-30

    their socia ’ relations or to achieve some goals. For example, we define a pair-wise force law of i epulsion and attraction for a group of identical...quantization based compression schemes. Photo-refractive crystals, which provide high density recording in real time, are used as our holographic media . The...of Parallel Algorithms (J. Reif, ed.). Kluwer Academic Pu’ ishers, 1993. (4) "A Dynamic Separator Algorithm", D. Armon and J. Reif. To appear in

  16. 49 CFR 173.24a - Additional general requirements for non-bulk packagings and packages.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... subchapter. (b) Non-bulk packaging filling limits. (1) A single or composite non-bulk packaging may be filled... gross mass marked on the packaging. (3) A single or composite non-bulk packaging which is tested and... marked on the packaging, or 1.2 if not marked. In addition: (i) A single or composite non-bulk...

  17. 49 CFR 178.602 - Preparation of packagings and packages for testing.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ...) SPECIFICATIONS FOR PACKAGINGS Testing of Non-bulk Packagings and Packages § 178.602 Preparation of packagings and... which they may be used. The material to be transported in the packagings may be replaced by a non... the tests. (c) If the material to be transported is replaced for test purposes by a...

  18. Optimising a parallel conjugate gradient solver

    SciTech Connect

    Field, M.R.

    1996-12-31

This work arises from the introduction of a parallel iterative solver to a large structural analysis finite element code. The code is called FEX and it was developed at Hitachi's Mechanical Engineering Laboratory. The FEX package can deal with a wide range of structural analysis problems using a large number of finite element techniques. FEX can solve either stress or thermal analysis problems of a range of different types, from plane stress to a full three-dimensional model. These problems can consist of a number of different materials, which can be modelled by a range of material models. The structure being modelled can have the load applied at either a point or a surface, or by a pressure, a centrifugal force or just gravity. Alternatively, a thermal load can be applied with a given initial temperature. The displacement of the structure can be constrained by having a fixed boundary or by prescribing the displacement at a boundary.

  19. Trajectory optimization using parallel shooting method on parallel computer

    SciTech Connect

    Wirthman, D.J.; Park, S.Y.; Vadali, S.R.

    1995-03-01

    The efficiency of a parallel shooting method on a parallel computer for solving a variety of optimal control guidance problems is studied. Several examples are considered to demonstrate that a speedup of nearly 7 to 1 is achieved with the use of 16 processors. It is suggested that further improvements in performance can be achieved by parallelizing in the state domain. 10 refs.

  20. QCMPI: A parallel environment for quantum computing

    NASA Astrophysics Data System (ADS)

    Tabakin, Frank; Juliá-Díaz, Bruno

    2009-06-01

QCMPI is a quantum computer (QC) simulation package written in Fortran 90 with parallel processing capabilities. It is an accessible research tool that permits rapid evaluation of quantum algorithms for a large number of qubits and for various "noise" scenarios. The prime motivation for developing QCMPI is to facilitate numerical examination of not only how QC algorithms work, but also to include noise, decoherence, and attenuation effects and to evaluate the efficacy of error correction schemes. The present work builds on an earlier Mathematica code QDENSITY, which is mainly a pedagogic tool. In that earlier work, although the density matrix formulation was featured, the description using state vectors was also provided. In QCMPI, the stress is on state vectors, in order to employ a large number of qubits. The parallel processing feature is implemented by using the Message-Passing Interface (MPI) protocol. A description of how to spread the wave function components over many processors is provided, along with how to efficiently describe the action of general one- and two-qubit operators on these state vectors. These operators include the standard Pauli, Hadamard, CNOT and CPHASE gates and also Quantum Fourier transformation. These operators make up the actions needed in QC. Codes for Grover's search and Shor's factoring algorithms are provided as examples. A major feature of this work is that concurrent versions of the algorithms can be evaluated with each version subject to alternate noise effects, which corresponds to the idea of solving a stochastic Schrödinger equation. The density matrix for the ensemble of such noise cases is constructed using parallel distribution methods to evaluate its eigenvalues and associated entropy. Potential applications of this powerful tool include studies of the stability and correction of QC processes using Hamiltonian based dynamics. Program summary: Program title: QCMPI; Catalogue identifier: AECS_v1_0; Program summary URL
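The core state-vector operation that QCMPI distributes over processors, applying a one-qubit gate to a 2^n-amplitude vector, can be sketched serially with NumPy. The helper below is illustrative only, not QCMPI's actual Fortran/MPI interface; qubit 0 is taken as the most significant index bit.

```python
import numpy as np

def apply_1q(state, gate, target, n_qubits):
    """Apply a 2x2 gate to qubit `target` of an n-qubit state vector."""
    psi = state.reshape([2] * n_qubits)      # one axis per qubit
    psi = np.moveaxis(psi, target, 0)        # bring target qubit to front
    psi = np.tensordot(gate, psi, axes=([1], [0]))
    psi = np.moveaxis(psi, 0, target)        # restore axis order
    return psi.reshape(-1)

# Hadamard gate, one of the standard gates the package provides.
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
```

Applying `H` to qubit 0 of |000> yields the equal superposition (|000> + |100>)/sqrt(2); in the parallel version the same index arithmetic determines which amplitudes must be exchanged between processors.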

  1. Traffic simulations on parallel computers using domain decomposition techniques

    SciTech Connect

    Hanebutte, U.R.; Tentner, A.M.

    1995-12-31

Large scale simulations of Intelligent Transportation Systems (ITS) can only be achieved by using the computing resources offered by parallel computing architectures. Domain decomposition techniques are proposed which allow the performance of traffic simulations with the standard simulation package TRAF-NETSIM on a 128-node IBM SPx parallel supercomputer as well as on a cluster of SUN workstations. Whilst this particular parallel implementation is based on NETSIM, a microscopic traffic simulation model, the presented strategy is applicable to a broad class of traffic simulations. An outer iteration loop must be introduced in order to converge to a global solution. A performance study that utilizes a scalable test network consisting of square grids is presented, which addresses the performance penalty introduced by the additional iteration loop.
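The outer iteration loop described above can be illustrated on a toy problem. The sketch below uses a Schwarz-style alternating iteration between two overlapping 1-D subdomains of a Laplace model, which stands in for the traffic dynamics; all names, sizes, and the overlap choice are assumptions for illustration.

```python
def solve_subdomain(u, left, right, sweeps=200):
    """Relax one subdomain to (near) steady state with fixed boundary values."""
    u = [left] + u[1:-1] + [right]
    for _ in range(sweeps):                  # Jacobi sweeps on interior points
        u = [u[0]] + [(u[i - 1] + u[i + 1]) / 2
                      for i in range(1, len(u) - 1)] + [u[-1]]
    return u

def outer_iteration(n=11, outer_iters=60):
    u = [0.0] * n
    u[-1] = 1.0                              # global boundary conditions 0 and 1
    m = n // 2
    for _ in range(outer_iters):             # outer loop toward a global solution
        # left subdomain owns points 0..m+1; its right boundary value is
        # taken from the current global iterate
        u[:m + 2] = solve_subdomain(u[:m + 2], 0.0, u[m + 1])
        # right subdomain owns points m-1..n-1, overlapping the left one
        u[m - 1:] = solve_subdomain(u[m - 1:], u[m - 1], 1.0)
    return u
```

Each outer pass exchanges interface values between the subdomains, and the iterate converges to the global linear steady state; without the outer loop, each subdomain would only be consistent with stale boundary data.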

  2. A parallel implementation of symmetric band reduction using PLAPACK

    SciTech Connect

    Wu, Yuan-Jye J.; Bischof, C.H.; Alpatov, P.A.

    1996-12-31

    Successive band reduction (SBR) is a two-phase approach for reducing a full symmetric matrix to tridiagonal (or narrow banded) form. In its simplest case, it consists of a full-to-band reduction followed by a band-to-tridiagonal reduction. Its richness in BLAS-3 operations makes it potentially more efficient on high-performance architectures than the traditional tridiagonalization method. However, a scalable, portable, general-purpose parallel implementation of SBR is still not available. In this article, we review some existing parallel tridiagonalization routines and describe the implementation of a full-to-band reduction routine using PLAPACK as a first step toward a parallel SBR toolbox. The PLAPACK-based routine turns out to be simple and efficient and, unlike the other existing packages, does not suffer restrictions on physical data layout or algorithmic block size.

  3. Flexible packaging for PV modules

    NASA Astrophysics Data System (ADS)

    Dhere, Neelkanth G.

    2008-08-01

Economic, flexible packages that provide the needed level of protection to organic and some other PV cells over >25 years have not yet been developed. However, flexible packaging is essential in niche large-scale applications. The typical configuration used in flexible photovoltaic (PV) module packaging is transparent frontsheet/encapsulant/PV cells/flexible substrate. Besides flexibility of the various components, the solder bonds should also be flexible and resistant to fatigue due to cyclic loading. Flexible front sheets should provide optical transparency, mechanical protection, scratch resistance, dielectric isolation, water resistance, UV stability and adhesion to the encapsulant. Examples are Tefzel, Tedlar and silicone. Dirt can get embedded in soft layers such as silicone and obscure light. The water vapor transmission rate (WVTR) of polymer films used in the food packaging industry as moisture barriers is ~0.05 g/(m²·day) under ambient conditions. In comparison, light emitting diodes employ packaging components that have a WVTR of ~10⁻⁶ g/(m²·day). The WVTR of polymer sheets can be improved by coating them with dense inorganic/organic multilayers. Ethylene vinyl acetate, an amorphous copolymer used predominantly by the PV industry, has very high O2 and H2O diffusivity. Quaternary carbon chains (such as acetate) in a polymer lead to cleavage and loss of adhesional strength at relatively low exposures. Reactivity of PV module components increases in the presence of O2 and H2O. Adhesional strength degrades due to the breakdown of the polymer structure by reactive free radicals formed by high-energy radiation. Free radical formation in polymers is reduced when the aromatic rings are attached at regular intervals. This paper will review flexible packaging for PV modules.

  4. The Galley Parallel File System

    NASA Technical Reports Server (NTRS)

    Nieuwejaar, Nils; Kotz, David

    1996-01-01

    As the I/O needs of parallel scientific applications increase, file systems for multiprocessors are being designed to provide applications with parallel access to multiple disks. Many parallel file systems present applications with a conventional Unix-like interface that allows the application to access multiple disks transparently. The interface conceals the parallelism within the file system, which increases the ease of programmability, but makes it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. Furthermore, most current parallel file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic parallel workloads. We discuss Galley's file structure and application interface, as well as an application that has been implemented using that interface.

  5. Asynchronous interpretation of parallel microprograms

    SciTech Connect

    Bandman, O.L.

    1984-03-01

    In this article, the authors demonstrate how to pass from a given synchronous interpretation of a parallel microprogram to an equivalent asynchronous interpretation, and investigate the cost associated with the rejection of external synchronization in parallel microprogram structures.

  6. Status of TRANSP Parallel Services

    NASA Astrophysics Data System (ADS)

    Indireshkumar, K.; Andre, Robert; McCune, Douglas; Randerson, Lewis

    2006-10-01

    The PPPL TRANSP code suite has been used successfully over many years to carry out time-dependent simulations of tokamak plasmas. However, accurately modeling certain phenomena such as RF heating and fast ion behavior using TRANSP requires extensive computational power and will benefit from parallelization. Parallelizing all of TRANSP is not required; some parts will run sequentially while other parts run in parallel. To efficiently use a site's parallel services, the parallelized TRANSP modules are deployed to a shared ``parallel service'' on a separate cluster. The PPPL Monte Carlo fast ion module NUBEAM and the MIT RF module TORIC are the first TRANSP modules to be so deployed. This poster will show the performance scaling of these modules within the parallel server. Communications between the serial client and the parallel server will be described in detail, and measurements of startup and communications overhead will be shown. Physics modeling benefits for TRANSP users will be assessed.

  7. Resistor Combinations for Parallel Circuits.

    ERIC Educational Resources Information Center

    McTernan, James P.

    1978-01-01

    To help simplify both teaching and learning of parallel circuits, a high school electricity/electronics teacher presents and illustrates the use of tables of values for parallel resistive circuits in which total resistances are whole numbers. (MF)
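    To illustrate the idea behind such tables, here is a short Python sketch (not from the article; the function names are ours) that enumerates integer resistor pairs whose parallel combination is a whole number of ohms:

```python
def parallel(r1, r2):
    """Equivalent resistance of two resistors in parallel."""
    return r1 * r2 / (r1 + r2)

def whole_number_pairs(max_ohms):
    """All pairs (r1 <= r2) of integer resistors up to max_ohms whose
    parallel combination is a whole number of ohms."""
    pairs = []
    for r1 in range(1, max_ohms + 1):
        for r2 in range(r1, max_ohms + 1):
            total = parallel(r1, r2)
            if total == int(total):
                pairs.append((r1, r2, int(total)))
    return pairs

# Classic example: 3 ohms in parallel with 6 ohms gives exactly 2 ohms.
print(parallel(3, 6))
```

    Running it with max_ohms=10 turns up pairs like the familiar 3-and-6-ohm combination, the kind of whole-number entries such classroom tables contain.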

  8. Truss Performance and Packaging Metrics

    NASA Technical Reports Server (NTRS)

    Mikulas, Martin M.; Collins, Timothy J.; Doggett, William; Dorsey, John; Watson, Judith

    2006-01-01

    In the present paper a set of performance metrics is derived from first principles to assess the efficiency of competing space truss structural concepts in terms of mass, stiffness, and strength, for designs that are constrained by packaging. The use of these performance metrics provides unique insight into the primary drivers for lowering structural mass and packaging volume, as well as enabling quantitative concept performance evaluation and comparison. To demonstrate the use of these performance metrics, data for existing structural concepts are plotted and discussed. Structural performance data are presented for various mechanical deployable concepts, for erectable structures, and for rigidizable structures.

  9. The role of packaging film permselectivity in modified atmosphere packaging.

    PubMed

    Al-Ati, Tareq; Hotchkiss, Joseph H

    2003-07-02

    Modified atmosphere packaging (MAP) is commercially used to increase the shelf life of packaged produce by reducing the produce respiration rate, delaying senescence, and inhibiting the growth of many spoilage organisms. MAP systems typically optimize O(2) levels to achieve these effects while preventing anaerobic fermentation, but fail to optimize CO(2) concentrations. Altering film permselectivity (i.e., beta, the ratio of CO(2)/O(2) permeation coefficients) could be utilized to concurrently optimize levels of both CO(2) and O(2) in MAP systems. We investigated the effect of modifying film permselectivity on the equilibrium gas composition of a model MAP produce system packaged in containers incorporating modified poly(ethylene) ionomer films with CO(2)/O(2) permselectivities between 4-5 and 0.8-1.3. To compare empirical with calculated data on the effect of permselectivity on the equilibrium gas composition of the MAP produce system, a mathematical model commonly used to optimize MAP of respiring produce was applied. The calculated gas composition agreed with observed values, using empirical respiration data from fresh-cut apples as a test system and permeability data from tested and theoretical films. The results suggest that packaging films with CO(2)/O(2) permselectivities lower than those commercially available (<3) would further optimize O(2) and CO(2) concentrations in MAP of respiring produce, particularly highly respiring and minimally processed produce.
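    The steady-state mass balance underlying models of this kind can be sketched in a few lines of Python (an illustrative simplification, not the authors' actual model; all symbols, parameter names, and units here are our assumptions):

```python
def map_equilibrium(p_o2, beta, area, r_o2, rq, mass,
                    y_o2_atm=0.21, y_co2_atm=0.0):
    """Steady-state package atmosphere for a respiring product.

    At equilibrium, O2 permeating in equals O2 consumed by respiration,
    and CO2 permeating out equals CO2 produced:
        p_o2  * area * (y_o2_atm - y_o2)   = r_o2  * mass
        p_co2 * area * (y_co2 - y_co2_atm) = r_co2 * mass
    where p_co2 = beta * p_o2 (beta is the permselectivity) and
    r_co2 = rq * r_o2 (rq is the respiratory quotient).
    All quantities must be in mutually consistent units.
    """
    p_co2 = beta * p_o2
    y_o2 = y_o2_atm - (r_o2 * mass) / (p_o2 * area)
    y_co2 = y_co2_atm + (rq * r_o2 * mass) / (p_co2 * area)
    return y_o2, y_co2

# Lowering beta at fixed O2 permeability leaves the equilibrium O2
# unchanged but raises the equilibrium CO2 level.
hi = map_equilibrium(p_o2=1.0, beta=4.0, area=0.1, r_o2=0.01, rq=1.0, mass=1.0)
lo = map_equilibrium(p_o2=1.0, beta=1.0, area=0.1, r_o2=0.01, rq=1.0, mass=1.0)
```

    This toy balance reproduces the qualitative conclusion of the abstract: films with lower CO2/O2 permselectivity hold more CO2 in the package at the same O2 level.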

  10. Parallel Debugging Using Graphical Views

    DTIC Science & Technology

    1988-03-01

    Voyeur, a prototype system for creating graphical views of parallel programs, provides a cost-effective way to construct such views for any parallel...programming system. We illustrate Voyeur by discussing four views created for debugging Poker programs. One is a general trace facility for any Poker...Graphical views are essential for debugging parallel programs because of the large quantity of state information contained in parallel programs. Voyeur

  11. Vacuum-Packaging Technology for IRFPAs

    NASA Astrophysics Data System (ADS)

    Matsumura, Takeshi; Tokuda, Takayuki; Tsutinaga, Akinobu; Kimata, Masafumi; Abe, Hideyuki; Tokashiki, Naotaka

    We developed vacuum-packaging equipment and low-cost vacuum packaging technology for IRFPAs. The equipment is versatile and can process packages with various materials and structures. Getters are activated before vacuum packaging, and we can solder caps/ceramic-packages and caps/windows in a high-vacuum condition using this equipment. We also developed a micro-vacuum gauge to measure pressure in vacuum packages. The micro-vacuum gauge uses the principle of thermal conduction of gases. We use a multi-ceramic package that consists of six packages fabricated on a ceramic sheet, and confirm that the pressure in the processed packages is sufficiently low for high-performance IRFPA.

  12. Package for integrated optic circuit and method

    DOEpatents

    Kravitz, S.H.; Hadley, G.R.; Warren, M.E.; Carson, R.F.; Armendariz, M.G.

    1998-08-04

    A structure and method are disclosed for packaging an integrated optic circuit. The package comprises a first wall having a plurality of microlenses formed therein to establish channels of optical communication with an integrated optic circuit within the package. A first registration pattern is provided on an inside surface of one of the walls of the package for alignment and attachment of the integrated optic circuit. The package in one embodiment may further comprise a fiber holder for aligning and attaching a plurality of optical fibers to the package and extending the channels of optical communication to the fibers outside the package. In another embodiment, a fiber holder may be used to hold the fibers and align the fibers to the package. The fiber holder may be detachably connected to the package. 6 figs.

  13. Package for integrated optic circuit and method

    DOEpatents

    Kravitz, Stanley H.; Hadley, G. Ronald; Warren, Mial E.; Carson, Richard F.; Armendariz, Marcelino G.

    1998-01-01

    A structure and method for packaging an integrated optic circuit. The package comprises a first wall having a plurality of microlenses formed therein to establish channels of optical communication with an integrated optic circuit within the package. A first registration pattern is provided on an inside surface of one of the walls of the package for alignment and attachment of the integrated optic circuit. The package in one embodiment may further comprise a fiber holder for aligning and attaching a plurality of optical fibers to the package and extending the channels of optical communication to the fibers outside the package. In another embodiment, a fiber holder may be used to hold the fibers and align the fibers to the package. The fiber holder may be detachably connected to the package.

  14. Parallel Pascal - An extended Pascal for parallel computers

    NASA Technical Reports Server (NTRS)

    Reeves, A. P.

    1984-01-01

    Parallel Pascal is an extended version of the conventional serial Pascal programming language which includes a convenient syntax for specifying array operations. It is upward compatible with standard Pascal and involves only a small number of carefully chosen new features. Parallel Pascal was developed to reduce the semantic gap between standard Pascal and a large range of highly parallel computers. Two important design goals of Parallel Pascal were efficiency and portability. Portability is particularly difficult to achieve since different parallel computers frequently have very different capabilities.

  15. CSM parallel structural methods research

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf O.

    1989-01-01

    Parallel structural methods, research team activities, advanced architecture computers for parallel computational structural mechanics (CSM) research, the FLEX/32 multicomputer, a parallel structural analyses testbed, blade-stiffened aluminum panel with a circular cutout and the dynamic characteristics of a 60 meter, 54-bay, 3-longeron deployable truss beam are among the topics discussed.

  16. Roo: A parallel theorem prover

    SciTech Connect

    Lusk, E.L.; McCune, W.W.; Slaney, J.K.

    1991-11-01

    We describe a parallel theorem prover based on the Argonne theorem-proving system OTTER. The parallel system, called Roo, runs on shared-memory multiprocessors such as the Sequent Symmetry. We explain the parallel algorithm used and give performance results that demonstrate near-linear speedups on large problems.

  17. Parallel Eclipse Project Checkout

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas M.; Joswig, Joseph C.; Shams, Khawaja S.; Powell, Mark W.; Bachmann, Andrew G.

    2011-01-01

    Parallel Eclipse Project Checkout (PEPC) is a program written to leverage parallelism and to automate the checkout process of plug-ins created in Eclipse RCP (Rich Client Platform). Eclipse plug-ins can be aggregated in a feature project. This innovation digests a feature description (XML file) and automatically checks out all of the plug-ins listed in the feature. This resolves the issue of manually checking out each plug-in required to work on the project. To minimize the time needed to check out the plug-ins, the program performs the checkouts in parallel. After parsing the feature, a checkout request is issued for each plug-in in the feature. These requests are handled by a thread pool with a configurable number of threads. By checking out the plug-ins in parallel, the checkout process is streamlined before getting started on the project. For instance, projects that took 30 minutes to check out now take less than 5 minutes. The effect is especially clear on a Mac, which has a network monitor displaying bandwidth use. When running the client from a developer's home, the checkout process now saturates the bandwidth in order to get all the plug-ins checked out as fast as possible. For comparison, a checkout process that ranged from 8-200 Kbps from a developer's home is now able to saturate a pipe of 1.3 Mbps, resulting in significantly faster checkouts. The Eclipse IDE (integrated development environment) tries to build a project as soon as it is downloaded. As part of another optimization, this innovation programmatically tells Eclipse to stop building while checkouts are happening, which dramatically reduces lock contention and enables plug-ins to continue downloading until all of them finish. Furthermore, the software re-enables automatic building, and forces Eclipse to do a clean build once it finishes checking out all of the plug-ins. This software is fully generic and does not contain any NASA-specific code. 
It can be applied to any
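    The parse-then-fan-out strategy described above can be sketched with a standard thread pool (a hypothetical Python illustration; PEPC itself targets Eclipse, and the feature-XML element names and the checkout callable here are invented for the example):

```python
import xml.etree.ElementTree as ET
from concurrent.futures import ThreadPoolExecutor

def plugin_ids(feature_xml):
    """Return the plug-in identifiers listed in a feature description.
    Assumes <plugin id="..."/> elements, a made-up schema for this sketch."""
    root = ET.fromstring(feature_xml)
    return [p.get("id") for p in root.iter("plugin")]

def parallel_checkout(feature_xml, checkout, max_workers=4):
    """Issue one checkout request per plug-in, handled by a bounded
    thread pool, and return the results in plug-in order."""
    ids = plugin_ids(feature_xml)
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(checkout, ids))
```

    In a real tool the `checkout` callable would invoke the version-control client; the pool size plays the role of PEPC's configurable thread count.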

  18. Parallel sphere rendering

    SciTech Connect

    Krogh, M.; Hansen, C.; Painter, J.; de Verdiere, G.C.

    1995-05-01

    Sphere rendering is an important method for visualizing molecular dynamics data. This paper presents a parallel divide-and-conquer algorithm that is almost 90 times faster than current graphics workstations. To render extremely large data sets and large images, the algorithm uses the MIMD features of the supercomputers to divide up the data, render independent partial images, and then finally composite the multiple partial images using an optimal method. The algorithm and performance results are presented for the CM-5 and the T3D.

  19. Parallelized direct execution simulation of message-passing parallel programs

    NASA Technical Reports Server (NTRS)

    Dickens, Phillip M.; Heidelberger, Philip; Nicol, David M.

    1994-01-01

    As massively parallel computers proliferate, there is growing interest in finding ways by which the performance of massively parallel codes can be efficiently predicted. This problem arises in diverse contexts such as parallelizing compilers, parallel performance monitoring, and parallel algorithm development. In this paper we describe one solution in which one directly executes the application code but uses a discrete-event simulator to model details of the presumed parallel machine, such as operating system and communication network behavior. Because this approach is computationally expensive, we are interested in its own parallelization, specifically the parallelization of the discrete-event simulator. We describe methods suitable for parallelized direct execution simulation of message-passing parallel programs, and report on the performance of such a system, the Large Application Parallel Simulation Environment (LAPSE), which we have built on the Intel Paragon. On all codes measured to date, LAPSE predicts performance well, typically within 10 percent relative error. Depending on the nature of the application code, we have observed low slowdowns (relative to natively executing code) and high relative speedups using up to 64 processors.

  20. Determination of activation energy of pyrolysis of carton packaging wastes and its pure components using thermogravimetry.

    PubMed

    Alvarenga, Larissa M; Xavier, Thiago P; Barrozo, Marcos Antonio S; Bacelos, Marcelo S; Lira, Taisa S

    2016-07-01

    Many processes have been used for recycling carton packaging wastes. Pyrolysis stands out as a promising technology for recovering the aluminum from polyethylene and generating products with high heating value. In this paper, a study of the pyrolysis reactions of carton packaging wastes and their pure components was performed in order to estimate the kinetic parameters of these reactions. For this, dynamic thermogravimetric analyses were carried out and two different kinds of kinetic models were used: isoconversional models and the Independent Parallel Reactions (IPR) model. Isoconversional models allowed calculation of the overall activation energy of the pyrolysis reaction as a function of conversion. The IPR model, in turn, allowed calculation of the kinetic parameters of each of the carton packaging and paperboard subcomponents. Carton packaging pyrolysis follows three separate stages of devolatilization. The first step is moisture loss. The second stage is perfectly correlated to the devolatilization of cardboard. The third step is correlated to the devolatilization of polyethylene.
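    The isoconversional approach can be illustrated with a short sketch of Friedman's differential method, one member of the isoconversional family the abstract refers to (our own illustration with synthetic data, not the authors' code or data):

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def friedman_activation_energy(temps_K, rates):
    """Isoconversional (Friedman) estimate of the activation energy.

    At a fixed conversion, ln(dα/dt) = const - Ea/(R*T), so Ea follows
    from the least-squares slope of ln(rate) against 1/T across runs
    at different heating rates.
    """
    x = [1.0 / t for t in temps_K]
    y = [math.log(r) for r in rates]
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    slope = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
             / sum((xi - xbar) ** 2 for xi in x))
    return -slope * R  # J/mol

# Synthetic check: reaction rates generated from an Arrhenius law with
# Ea = 150 kJ/mol should be recovered by the fit.
Ea_true = 150e3
temps = [600.0, 620.0, 640.0]
rates = [math.exp(20.0 - Ea_true / (R * T)) for T in temps]
```

    Repeating this fit at each conversion level gives the conversion-dependent activation energy profile that isoconversional analyses report.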

  1. Hanford Site radioactive hazardous materials packaging directory

    SciTech Connect

    McCarthy, T.L.

    1995-12-01

    The Hanford Site Radioactive Hazardous Materials Packaging Directory (RHMPD) provides information concerning packagings owned or routinely leased by Westinghouse Hanford Company (WHC) for offsite shipments or onsite transfers of hazardous materials. Specific information is provided for selected packagings including the following: general description; approval documents/specifications (Certificates of Compliance and Safety Analysis Reports for Packaging); technical information (drawing numbers and dimensions); approved contents; areas of operation; and general information. Packaging Operations & Development (PO&D) maintains the RHMPD and may be contacted for additional information or assistance in obtaining referenced documentation or assistance concerning packaging selection, availability, and usage.

  2. Efficient Parallel Kernel Solvers for Computational Fluid Dynamics Applications

    NASA Technical Reports Server (NTRS)

    Sun, Xian-He

    1997-01-01

    Distributed-memory parallel computers dominate today's parallel computing arena. These machines, such as the Intel Paragon, IBM SP2, and Cray Origin2000, have successfully delivered high-performance computing power for solving some of the so-called "grand-challenge" problems. Despite initial success, parallel machines have not been widely accepted in production engineering environments due to the complexity of parallel programming. On a parallel computing system, a task has to be partitioned and distributed appropriately among processors to reduce communication cost and to attain load balance. More importantly, even with careful partitioning and mapping, the performance of an algorithm may still be unsatisfactory, since conventional sequential algorithms may be serial in nature and may not be implemented efficiently on parallel machines. In many cases, new algorithms have to be introduced to increase parallel performance. In order to achieve optimal performance, in addition to partitioning and mapping, a careful performance study should be conducted for a given application to find a good algorithm-machine combination. This process, however, is usually painful and elusive. The goal of this project is to design and develop efficient parallel algorithms for highly accurate Computational Fluid Dynamics (CFD) simulations and other engineering applications. The work plan is 1) developing highly accurate parallel numerical algorithms, 2) conducting preliminary testing to verify the effectiveness and potential of these algorithms, and 3) incorporating newly developed algorithms into actual simulation packages. The work plan has been well achieved. Two highly accurate, efficient Poisson solvers have been developed and tested based on two different approaches: (1) adopting a mathematical geometry which has a better capacity to describe the fluid, and (2) using a compact scheme to gain high-order accuracy in numerical discretization. 
The previously developed Parallel Diagonal Dominant (PDD) algorithm

  3. Parallelizing quantum circuit synthesis

    NASA Astrophysics Data System (ADS)

    Di Matteo, Olivia; Mosca, Michele

    2016-03-01

    Quantum circuit synthesis is the process in which an arbitrary unitary operation is decomposed into a sequence of gates from a universal set, typically one which a quantum computer can implement both efficiently and fault-tolerantly. As physical implementations of quantum computers improve, the need is growing for tools that can effectively synthesize components of the circuits and algorithms they will run. Existing algorithms for exact, multi-qubit circuit synthesis scale exponentially in the number of qubits and circuit depth, leaving synthesis intractable for circuits on more than a handful of qubits. Even modest improvements in circuit synthesis procedures may lead to significant advances, pushing forward the boundaries of not only the size of solvable circuit synthesis problems, but also in what can be realized physically as a result of having more efficient circuits. We present a method for quantum circuit synthesis using deterministic walks. Also termed pseudorandom walks, these are walks in which once a starting point is chosen, its path is completely determined. We apply our method to construct a parallel framework for circuit synthesis, and implement one such version performing optimal T-count synthesis over the Clifford+T gate set. We use our software to present examples where parallelization offers a significant speedup on the runtime, as well as directly confirm that the 4-qubit 1-bit full adder has optimal T-count 7 and T-depth 3.

  4. Parallel ptychographic reconstruction

    PubMed Central

    Nashed, Youssef S. G.; Vine, David J.; Peterka, Tom; Deng, Junjing; Ross, Rob; Jacobsen, Chris

    2014-01-01

    Ptychography is an imaging method whereby a coherent beam is scanned across an object, and an image is obtained by iterative phasing of the set of diffraction patterns. It can be used to image extended objects at a resolution limited by the scattering strength of the object and the detector geometry, rather than at an optics-imposed limit. As technical advances allow larger fields to be imaged, computational challenges arise for reconstructing the correspondingly larger data volumes, yet at the same time there is also a need to deliver reconstructed images immediately so that one can evaluate the next steps to take in an experiment. Here we present a parallel method for real-time ptychographic phase retrieval. It uses a hybrid parallel strategy to divide the computation between multiple graphics processing units (GPUs) and then employs novel techniques to merge sub-datasets into a single complex phase and amplitude image. Results are shown on a simulated specimen and a real dataset from an X-ray experiment conducted at a synchrotron light source. PMID:25607174

  5. Applied Parallel Metadata Indexing

    SciTech Connect

    Jacobi, Michael R

    2012-08-01

    The GPFS Archive is a parallel archive used by hundreds of users in the Turquoise collaboration network. It houses 4+ petabytes of data in more than 170 million files. Currently, users must navigate the file system to retrieve their data, requiring them to remember file paths and names. A better solution might allow users to tag data with meaningful labels and search the archive using standard and user-defined metadata, while maintaining security. Last summer, the author developed the backend to a tool that adheres to these design goals. The backend works by importing GPFS metadata into a MongoDB cluster, which is then indexed on each attribute. This summer, the author implemented security and developed the user interface for the search tool. To meet security requirements, each database table is associated with a single user, only stores records that the user may read, and requires a set of credentials to access. The interface to the search tool is implemented using FUSE (Filesystem in USErspace). FUSE is an intermediate layer that intercepts file system calls and allows the developer to redefine how those calls behave. In the case of this tool, FUSE interfaces with MongoDB to issue queries and populate output. A FUSE implementation is desirable because it allows users to interact with the search tool using commands they are already familiar with. These security and interface additions are essential for a usable product.

  6. ULFEM time series analysis package

    USGS Publications Warehouse

    Karl, Susan M.; McPhee, Darcy K.; Glen, Jonathan M. G.; Klemperer, Simon L.

    2013-01-01

    This manual describes how to use the Ultra-Low-Frequency ElectroMagnetic (ULFEM) software package. Casual users can read the quick-start guide and will probably not need any more information than this. For users who may wish to modify the code, we provide further description of the routines.

  7. The Macro - Games Course Package.

    ERIC Educational Resources Information Center

    Heriot-Watt Univ., Edinburgh (Scotland). Esmee Fairbairn Economics Research Centre.

    Part of an Economic Education Series, the course package is designed to teach basic concepts and fundamental principles of macroeconomics and how they can be applied to various world problems. For use with college students, learning is gained through lectures, discussion, simulation games, programmed learning, and text. Time allotment is a 15-week…

  8. Food Nanotechnology: Food Packaging Applications

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Astonishing growth in the market for nanofoods is predicted in the future, from the current market of $2.6 billion to $20.4 billion in 2010. The market for nanotechnology in food packaging alone is expected to reach $360 million in 2008. In large part the impetus for this predicted growth is the e...

  9. Food Nanotechnology - Food Packaging Applications

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Astonishing growth in the market for nanofoods is predicted in the future, from the current market of $2.6 billion to $20.4 billion in 2010. The market for nanotechnology in food packaging alone is expected to reach $360 million in 2008. In large part, the impetus for this predicted growth is the ...

  10. RAGG - R EPISODIC AGGREGATION PACKAGE

    EPA Science Inventory

    The RAGG package is an R implementation of the CMAQ episodic model aggregation method developed by Constella Group and the Environmental Protection Agency. RAGG is a tool to provide climatological seasonal and annual deposition of sulphur and nitrogen for multimedia management. ...

  11. COLDMON -- Cold File Analysis Package

    NASA Astrophysics Data System (ADS)

    Rawlinson, D. J.

    The COLDMON package has been written to allow system managers to identify those items of software that are not used (or used infrequently) on their systems. It consists of a few command procedures and a Fortran program to analyze the results. It makes use of the AUDIT facility and security ACLs in VMS.

  12. The Canon package: a fast kernel for tensor manipulators

    NASA Astrophysics Data System (ADS)

    Manssur, L. R. U.; Portugal, R.

    2004-02-01

    This paper describes the Canon package written in the Maple programming language. Canon's purpose is to work as a kernel for complete Maple tensor packages or any Maple package for manipulating indexed objects obeying generic permutation symmetries and possibly having dummy indices. Canon uses Computational Group Theory algorithms to efficiently simplify or manipulate generic tensor expressions. We describe the main command to access the package, give examples, and estimate typical computation timings.
    Program summary:
    Title of program: Canon.
    Catalogue identifier: ADSP.
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADSP
    Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland.
    Computers: any machine running Maple versions 6 to 9.
    Operating systems under which the program has been tested: Microsoft Windows, Linux.
    Programming language used: Maple.
    Memory required to execute with typical data: up to 10 Mb.
    No. of bits in a word: 32 or 64.
    No. of processors used: 1.
    Has the code been vectorized or parallelized?: No.
    No. of bytes in distributed program, including test data, etc.: 45 910.
    Distribution format: tar gzip file.
    Nature of physical problem: Manipulation and simplification of tensor expressions (or any expression in terms of indexed objects) in explicit index notation, where the indices obey generic permutation symmetries and there may exist dummy (summed over) indices.
    Method of solution: Computational Group Theory algorithms have been used, especially algorithms for finding canonical representations of single and double cosets, and algorithms for creating strong generating sets.
    Restriction on the complexity of the problem: Computer memory. With current equipment, expressions with hundreds of indices have been manipulated successfully.
    Typical running time: Simplification of expressions with 15 Riemann tensors was done in less than one minute on a personal computer.
    Unusual features: The use of Computational Group Theory algorithms

  13. A systolic array parallelizing compiler

    SciTech Connect

    Tseng, P.S.

    1990-01-01

    This book presents a completely new approach to the problem of systolic array parallelizing compilation. It describes the AL parallelizing compiler for the Warp systolic array, the first working systolic array parallelizing compiler that can generate efficient parallel code for complete LINPACK routines. The book begins by analyzing the architectural strengths of the Warp systolic array. It proposes a model for mapping programs onto the machine and introduces the notion of data relations for optimizing the program mapping. Also presented are successful applications of the AL compiler in matrix computation and image processing. A complete listing of the source program and compiler-generated parallel code is given to clarify the overall picture of the compiler. The book concludes that a systolic array parallelizing compiler can produce efficient parallel code, almost identical to what the user would have written by hand.

  14. Automated packaging employing real-time vision

    NASA Astrophysics Data System (ADS)

    Chang, Wen-Chung; Wu, Chia-Hung

    2016-07-01

    Existing packaging systems rely on human operation to position a box in the packaging device and perform the packaging task. Current facilities are not capable of handling boxes of different sizes in a flexible way. In order to improve on the above-mentioned problems, an eye-to-hand visual servo automated packaging approach is proposed in this paper. The system employs two cameras to observe the box and the gripper mounted on the robotic manipulator, and precisely controls the manipulator to complete the packaging task. The system first employs two-camera vision to determine the box pose. With appropriate task encoding, a closed-loop visual servoing controller is designed to drive the manipulator to accomplish packaging tasks. The proposed approach can be used to complete automated packaging tasks in the case of uncertain location and size of the box. The system has been successfully validated by experiments with an industrial robotic manipulator for postal box packaging.

  15. Sensory impacts of food-packaging interactions.

    PubMed

    Duncan, Susan E; Webster, Janet B

    2009-01-01

    Sensory changes in food products result from intentional or unintentional interactions with packaging materials and from failure of materials to protect product integrity or quality. Resolving sensory issues related to plastic food packaging involves knowledge provided by sensory scientists, materials scientists, packaging manufacturers, food processors, and consumers. Effective communication among scientists and engineers from different disciplines and industries can help scientists understand package-product interactions. Very limited published literature describes sensory perceptions associated with food-package interactions. This article discusses sensory impacts, with emphasis on oxidation reactions, associated with the interaction of food and materials, including taints, scalping, changes in food quality as a function of packaging, and examples of material innovations for smart packaging that can improve sensory quality of foods and beverages. Sensory evaluation is an important tool for improved package selection and development of new materials.

  16. Toward Parallel Document Clustering

    SciTech Connect

    Mogill, Jace A.; Haglin, David J.

    2011-09-01

    A key challenge to automated clustering of documents in large text corpora is the high cost of comparing documents in a multimillion-dimensional document space. The Anchors Hierarchy is a fast data structure and algorithm for localizing data based on a distance metric that obeys the triangle inequality; the algorithm strives to minimize the number of distance calculations needed to cluster the documents into “anchors” around reference documents called “pivots”. We extend the original algorithm to increase the amount of available parallelism and consider two implementations: a complex data structure which affords efficient searching, and a simple data structure which requires repeated sorting. The sorting implementation is integrated with a text corpora “Bag of Words” program, and initial performance results of an end-to-end document processing workflow are reported.
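    The kind of triangle-inequality pruning such pivot-based methods rely on can be sketched as follows (a simplified illustration of the general technique, not the authors' implementation; all names are ours):

```python
def assign_with_pruning(points, pivots, dist):
    """Assign each point to its nearest pivot, using the triangle
    inequality to skip provably losing distance computations.

    If d(p_best, p_j) >= 2 * d(x, p_best), then
        d(x, p_j) >= d(p_best, p_j) - d(x, p_best) >= d(x, p_best),
    so pivot p_j cannot be closer and its distance is never computed.
    Returns the assignments and the number of distances computed.
    """
    # Pivot-to-pivot distances, computed once up front.
    pp = [[dist(a, b) for b in pivots] for a in pivots]
    assignments, computed = [], 0
    for x in points:
        best, best_d = 0, dist(x, pivots[0])
        computed += 1
        for j in range(1, len(pivots)):
            if pp[best][j] >= 2 * best_d:
                continue  # pruned: p_j is provably no closer
            d = dist(x, pivots[j])
            computed += 1
            if d < best_d:
                best, best_d = j, d
        assignments.append(best)
    return assignments, computed
```

    For well-separated clusters the counter comes out well below the brute-force points-times-pivots total, which is the saving the Anchors Hierarchy is built around.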

  17. Parallel Polarization State Generation

    NASA Astrophysics Data System (ADS)

    She, Alan; Capasso, Federico

    2016-05-01

    The control of polarization, an essential property of light, is of wide scientific and technological interest. The general problem of generating arbitrary time-varying states of polarization (SOP) has always been mathematically formulated by a series of linear transformations, i.e. a product of matrices, imposing a serial architecture. Here we show a parallel architecture described by a sum of matrices. The theory is experimentally demonstrated by modulating spatially-separated polarization components of a laser using a digital micromirror device that are subsequently beam combined. This method greatly expands the parameter space for engineering devices that control polarization. Consequently, performance characteristics, such as speed, stability, and spectral range, are entirely dictated by the technologies of optical intensity modulation, including absorption, reflection, emission, and scattering. This opens up important prospects for polarization state generation (PSG) with unique performance characteristics with applications in spectroscopic ellipsometry, spectropolarimetry, communications, imaging, and security.
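    The contrast between the serial (product of matrices) and parallel (sum of components) formulations can be sketched with toy Jones-calculus code in NumPy (our illustration of the stated architecture, not the authors' device model):

```python
import numpy as np

def serial_sop(jones_in, elements):
    """Serial architecture: a product of Jones matrices applied in sequence."""
    out = jones_in
    for m in elements:
        out = m @ out
    return out

def parallel_sop(components, weights):
    """Parallel architecture: spatially separated polarization components
    are independently intensity-modulated (weights) and then beam-combined,
    i.e. a weighted sum rather than a matrix product."""
    return sum(w * c for w, c in zip(weights, components))

# Basis components: horizontal and vertical linear polarization.
H = np.array([1.0, 0.0])
V = np.array([0.0, 1.0])

# Equal modulation of the two branches combines to 45-degree linear
# polarization (up to normalization).
out = parallel_sop([H, V], [1.0, 1.0])
```

    In the serial picture the output SOP is changed by inserting optical elements into one path; in the parallel picture it is changed purely by re-weighting the branches, which is why the generation speed is set by the intensity modulators alone.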

  18. Parallel Polarization State Generation

    PubMed Central

    She, Alan; Capasso, Federico

    2016-01-01

    The control of polarization, an essential property of light, is of wide scientific and technological interest. The general problem of generating arbitrary time-varying states of polarization (SOP) has always been mathematically formulated by a series of linear transformations, i.e. a product of matrices, imposing a serial architecture. Here we show a parallel architecture described by a sum of matrices. The theory is experimentally demonstrated by using a digital micromirror device to modulate spatially separated polarization components of a laser, which are subsequently beam combined. This method greatly expands the parameter space for engineering devices that control polarization. Consequently, performance characteristics, such as speed, stability, and spectral range, are entirely dictated by the technologies of optical intensity modulation, including absorption, reflection, emission, and scattering. This opens up important prospects for polarization state generation (PSG) with unique performance characteristics with applications in spectroscopic ellipsometry, spectropolarimetry, communications, imaging, and security. PMID:27184813

  19. A parallel programming environment supporting multiple data-parallel modules

    SciTech Connect

    Seevers, B.K.; Quinn, M.J. ); Hatcher, P.J. )

    1992-10-01

    We describe a system that allows programmers to take advantage of both control and data parallelism through multiple intercommunicating data-parallel modules. This programming environment extends C-type stream I/O to include intermodule communication channels. The programmer writes each module as a separate data-parallel program, then develops a channel linker specification describing how to connect the modules together. A channel linker we have developed loads the separate modules on the parallel machine and binds the communication channels together as specified. We present performance data demonstrating that a mixed control- and data-parallel solution can yield better performance than a strictly data-parallel solution. The system described currently runs on the Intel iWarp multicomputer.
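    The module-plus-channel model can be illustrated in miniature (a sketch only: Python threads and a queue stand in for the iWarp modules and channels, and every name here is our own, not the system's API).

```python
import queue
import threading

def producer(chan):
    """First 'module': emits a stream of items on its output channel."""
    for i in range(5):
        chan.put(i * i)
    chan.put(None)  # end-of-stream marker

def consumer(chan, result):
    """Second 'module': reduces the stream arriving on its input channel."""
    total = 0
    while (item := chan.get()) is not None:
        total += item
    result.append(total)

def run_linked_modules():
    """Toy 'channel linker': create a channel, bind one end to each
    module, launch both concurrently, and collect the result."""
    chan, result = queue.Queue(), []
    mods = [threading.Thread(target=producer, args=(chan,)),
            threading.Thread(target=consumer, args=(chan, result))]
    for m in mods:
        m.start()
    for m in mods:
        m.join()
    return result[0]  # 0 + 1 + 4 + 9 + 16 = 30
```

    The point of the design is the same as in the paper: each module is written independently, and only the linker specification knows how the channels connect them.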

  20. Parallel imaging microfluidic cytometer.

    PubMed

    Ehrlich, Daniel J; McKenna, Brian K; Evans, James G; Belkina, Anna C; Denis, Gerald V; Sherr, David H; Cheung, Man Ching

    2011-01-01

    By adding an additional degree of freedom from multichannel flow, the parallel microfluidic cytometer (PMC) combines some of the best features of fluorescence-activated flow cytometry (FCM) and microscope-based high-content screening (HCS). The PMC (i) lends itself to fast processing of large numbers of samples, (ii) adds a 1D imaging capability for intracellular localization assays (HCS), (iii) has a high rare-cell sensitivity, and (iv) has an unusual capability for time-synchronized sampling. An inability to practically handle large sample numbers has restricted applications of conventional flow cytometers and microscopes in combinatorial cell assays, network biology, and drug discovery. The PMC promises to relieve a bottleneck in these previously constrained applications. The PMC may also be a powerful tool for finding rare primary cells in the clinic. The multichannel architecture of current PMC prototypes allows 384 unique samples for a cell-based screen to be read out in ∼6-10 min, about 30 times the speed of most current FCM systems. In 1D intracellular imaging, the PMC can obtain protein localization using HCS marker strategies at many times the sample throughput of charge-coupled device (CCD)-based microscopes or CCD-based single-channel flow cytometers. The PMC also permits the signal integration time to be varied over a larger range than is practical in conventional flow cytometers. The signal-to-noise advantages are useful, for example, in counting rare positive cells in the most difficult early stages of genome-wide screening. We review the status of parallel microfluidic cytometry and discuss some of the directions the new technology may take.
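    The quoted throughput figures imply the following back-of-envelope rates (midpoint of the 6-10 min range; the 30x factor is the abstract's own claim):

```python
# 384 samples read out in ~6-10 minutes; take the 8-minute midpoint.
pmc_samples, pmc_minutes = 384, 8.0
pmc_rate = pmc_samples / pmc_minutes   # 48.0 samples per minute
fcm_rate = pmc_rate / 30               # implied conventional FCM rate, ~1.6/min
```

    At ~1.6 samples/min, a single 384-well screen on a conventional instrument would take on the order of four hours, which is the bottleneck the multichannel architecture is meant to relieve.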

  1. English III. Teacher's Guide [and Student Workbook]. Revised. Parallel Alternative Strategies for Students (PASS).

    ERIC Educational Resources Information Center

    Atkinson, Missy; Fresen, Sue; Goldstein, Jeren; Harrell, Stephanie; MacEnulty, Patricia; McLain, Janice

    This teacher's guide and student workbook are part of a series of content-centered supplementary curriculum packages of alternative methods and activities designed to help secondary students who have disabilities and those with diverse learning needs succeed in regular education content courses. The content of Parallel Alternative Strategies for…

  2. High Performance Parallel Processing Project: Industrial computing initiative. Progress reports for fiscal year 1995

    SciTech Connect

    Koniges, A.

    1996-02-09

    This project is a package of 11 individual CRADAs plus hardware. This innovative project established a three-year multi-party collaboration that is significantly accelerating the availability of commercial massively parallel processing computing software technology to U.S. government, academic, and industrial end-users. This report contains individual presentations from nine principal investigators along with overall program information.

  3. Introduction to Computers: Parallel Alternative Strategies for Students. Course No. 0200000.

    ERIC Educational Resources Information Center

    Chauvenne, Sherry; And Others

    Parallel Alternative Strategies for Students (PASS) is a content-centered package of alternative methods and materials designed to assist secondary teachers to meet the needs of mainstreamed learning-disabled and emotionally-handicapped students of various achievement levels in the basic education content courses. This supplementary text and…

  4. 21 CFR 820.130 - Device packaging.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Device packaging. 820.130 Section 820.130 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES QUALITY SYSTEM REGULATION Labeling and Packaging Control § 820.130 Device packaging. Each...

  5. 27 CFR 19.186 - Package scales.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2014-04-01 2014-04-01 false Package scales. 19.186... Package Scale and Pipeline Requirements § 19.186 Package scales. Proprietors must ensure that scales used.... However, if a scale is not used during a 6-month period, it is only necessary to test the scale prior...

  6. 27 CFR 19.186 - Package scales.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2012-04-01 2012-04-01 false Package scales. 19.186... Package Scale and Pipeline Requirements § 19.186 Package scales. Proprietors must ensure that scales used.... However, if a scale is not used during a 6-month period, it is only necessary to test the scale prior...

  7. 27 CFR 19.186 - Package scales.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2013-04-01 2013-04-01 false Package scales. 19.186... Package Scale and Pipeline Requirements § 19.186 Package scales. Proprietors must ensure that scales used.... However, if a scale is not used during a 6-month period, it is only necessary to test the scale prior...

  8. 27 CFR 19.276 - Package scales.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2010-04-01 2010-04-01 false Package scales. 19.276... Package scales. Proprietors shall ensure the accuracy of scales used for weighing packages of spirits through tests conducted at intervals of not more than 6 months or whenever scales are adjusted or...

  9. 27 CFR 19.186 - Package scales.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2011-04-01 2011-04-01 false Package scales. 19.186... Package Scale and Pipeline Requirements § 19.186 Package scales. Proprietors must ensure that scales used.... However, if a scale is not used during a 6-month period, it is only necessary to test the scale prior...

  10. 27 CFR 6.93 - Combination packaging.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2010-04-01 2010-04-01 false Combination packaging. 6..., DEPARTMENT OF THE TREASURY LIQUORS “TIED-HOUSE” Exceptions § 6.93 Combination packaging. The act by an industry member of packaging and distributing distilled spirits, wine, or malt beverages in...

  11. 9 CFR 354.72 - Packaging.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 9 Animals and Animal Products 2 2010-01-01 2010-01-01 false Packaging. 354.72 Section 354.72... CERTIFICATION VOLUNTARY INSPECTION OF RABBITS AND EDIBLE PRODUCTS THEREOF Supervision of Marking and Packaging § 354.72 Packaging. No container which bears or may bear any official identification or any...

  12. 7 CFR 58.640 - Packaging.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 3 2010-01-01 2010-01-01 false Packaging. 58.640 Section 58.640 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards... Procedures § 58.640 Packaging. The packaging of the semifrozen product shall be done by means which will...

  13. 21 CFR 355.20 - Packaging conditions.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 5 2010-04-01 2010-04-01 false Packaging conditions. 355.20 Section 355.20 Food... HUMAN USE ANTICARIES DRUG PRODUCTS FOR OVER-THE-COUNTER HUMAN USE Active Ingredients § 355.20 Packaging... accord with § 355.60. (b) Tight container packaging. To minimize moisture contamination, all...

  14. 49 CFR 172.514 - Bulk packagings.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 2 2010-10-01 2010-10-01 false Bulk packagings. 172.514 Section 172.514... SECURITY PLANS Placarding § 172.514 Bulk packagings. (a) Except as provided in paragraph (c) of this section, each person who offers for transportation a bulk packaging which contains a hazardous...

  15. 76 FR 30551 - Specifications for Packagings

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-26

    ... Pipeline and Hazardous Materials Safety Administration 49 CFR Part 178 Specifications for Packagings CFR... on a packaging, a test report must be prepared. The test report must be maintained at each location where the packaging is manufactured and each location where the design qualification tests are...

  16. YUCCA MOUNTAIN WASTE PACKAGE CLOSURE SYSTEM

    SciTech Connect

    G. Housley; C. Shelton-Davis; K. Skinner

    2005-08-26

    The method selected for dealing with spent nuclear fuel in the US is to seal the fuel in waste packages and then to place them in an underground repository at the Yucca Mountain Site in Nevada. This article describes the Waste Package Closure System (WPCS) currently being designed for sealing the waste packages.

  17. 16 CFR 1702.12 - Packaging specifications.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 16 Commercial Practices 2 2014-01-01 2014-01-01 false Packaging specifications. 1702.12 Section 1702.12 Commercial Practices CONSUMER PRODUCT SAFETY COMMISSION POISON PREVENTION PACKAGING ACT OF 1970 REGULATIONS PETITIONS FOR EXEMPTIONS FROM POISON PREVENTION PACKAGING ACT REQUIREMENTS; PETITION...

  18. 16 CFR 1702.12 - Packaging specifications.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 16 Commercial Practices 2 2013-01-01 2013-01-01 false Packaging specifications. 1702.12 Section 1702.12 Commercial Practices CONSUMER PRODUCT SAFETY COMMISSION POISON PREVENTION PACKAGING ACT OF 1970 REGULATIONS PETITIONS FOR EXEMPTIONS FROM POISON PREVENTION PACKAGING ACT REQUIREMENTS; PETITION...

  19. 16 CFR 1702.12 - Packaging specifications.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 16 Commercial Practices 2 2012-01-01 2012-01-01 false Packaging specifications. 1702.12 Section 1702.12 Commercial Practices CONSUMER PRODUCT SAFETY COMMISSION POISON PREVENTION PACKAGING ACT OF 1970 REGULATIONS PETITIONS FOR EXEMPTIONS FROM POISON PREVENTION PACKAGING ACT REQUIREMENTS; PETITION...

  20. 16 CFR 1702.12 - Packaging specifications.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 16 Commercial Practices 2 2011-01-01 2011-01-01 false Packaging specifications. 1702.12 Section 1702.12 Commercial Practices CONSUMER PRODUCT SAFETY COMMISSION POISON PREVENTION PACKAGING ACT OF 1970 REGULATIONS PETITIONS FOR EXEMPTIONS FROM POISON PREVENTION PACKAGING ACT REQUIREMENTS; PETITION...

  1. 16 CFR 1702.12 - Packaging specifications.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 16 Commercial Practices 2 2010-01-01 2010-01-01 false Packaging specifications. 1702.12 Section 1702.12 Commercial Practices CONSUMER PRODUCT SAFETY COMMISSION POISON PREVENTION PACKAGING ACT OF 1970 REGULATIONS PETITIONS FOR EXEMPTIONS FROM POISON PREVENTION PACKAGING ACT REQUIREMENTS; PETITION...

  2. 9 CFR 317.24 - Packaging materials.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... supplier under whose brand name and firm name the material is marketed to the official establishment. The... packaging materials must be traceable to the applicable guaranty. (c) The guaranty by the packaging supplier.... Official establishments and packaging suppliers providing written guaranties to those...

  3. 9 CFR 381.144 - Packaging materials.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ..., from the packaging supplier under whose brand name and firm name the material is marketed to the... packaging supplier will be accepted by Program inspectors to establish that the use of material complies.... Official establishments and packaging suppliers providing written guaranties to those...

  4. 7 CFR 993.22 - Consumer package.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 8 2011-01-01 2011-01-01 false Consumer package. 993.22 Section 993.22 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Order Regulating Handling Definitions § 993.22 Consumer package. Consumer package means: (a)...

  5. 7 CFR 65.130 - Consumer package.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 3 2011-01-01 2011-01-01 false Consumer package. 65.130 Section 65.130 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards..., PEANUTS, AND GINSENG General Provisions Definitions § 65.130 Consumer package. Consumer package means...

  6. 10 CFR 71.35 - Package evaluation.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... REGULATORY COMMISSION (CONTINUED) PACKAGING AND TRANSPORTATION OF RADIOACTIVE MATERIAL Application for... fissile material package, the allowable number of packages that may be transported in the same vehicle in accordance with § 71.59; and (c) For a fissile material shipment, any proposed special controls...

  7. 21 CFR 820.130 - Device packaging.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Device packaging. 820.130 Section 820.130 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES QUALITY SYSTEM REGULATION Labeling and Packaging Control § 820.130 Device packaging. Each...

  8. 7 CFR 932.9 - Packaged olives.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 8 2013-01-01 2013-01-01 false Packaged olives. 932.9 Section 932.9 Agriculture... AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE OLIVES GROWN IN CALIFORNIA Order Regulating Handling Definitions § 932.9 Packaged olives. Packaged olives means (a) processed olives...

  9. 7 CFR 932.9 - Packaged olives.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 8 2014-01-01 2014-01-01 false Packaged olives. 932.9 Section 932.9 Agriculture... AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE OLIVES GROWN IN CALIFORNIA Order Regulating Handling Definitions § 932.9 Packaged olives. Packaged olives means (a) processed olives...

  10. 7 CFR 932.9 - Packaged olives.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Packaged olives. 932.9 Section 932.9 Agriculture... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE OLIVES GROWN IN CALIFORNIA Order Regulating Handling Definitions § 932.9 Packaged olives. Packaged olives means (a) processed olives...

  11. 7 CFR 932.9 - Packaged olives.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 8 2012-01-01 2012-01-01 false Packaged olives. 932.9 Section 932.9 Agriculture... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE OLIVES GROWN IN CALIFORNIA Order Regulating Handling Definitions § 932.9 Packaged olives. Packaged olives means (a) processed olives...

  12. 7 CFR 932.9 - Packaged olives.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 8 2011-01-01 2011-01-01 false Packaged olives. 932.9 Section 932.9 Agriculture... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE OLIVES GROWN IN CALIFORNIA Order Regulating Handling Definitions § 932.9 Packaged olives. Packaged olives means (a) processed olives...

  13. 49 CFR 173.63 - Packaging exceptions.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... SHIPMENTS AND PACKAGINGS Definitions, Classification and Packaging for Class 1 § 173.63 Packaging exceptions...-shore supply vessel; (3) Cargo compartment of a cargo vessel; or (4) Passenger-carrying aircraft used to...) criteria for reclassification as limited quantity material for transportation by highway, rail or...

  14. 49 CFR 173.63 - Packaging exceptions.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... waste, marine pollutant, or is offered for transportation and transported by aircraft or vessel... SHIPMENTS AND PACKAGINGS Definitions, Classification and Packaging for Class 1 § 173.63 Packaging exceptions...-shore supply vessel; (3) Cargo compartment of a cargo vessel; or (4) Passenger-carrying aircraft used...

  15. 49 CFR 173.63 - Packaging exceptions.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... SHIPMENTS AND PACKAGINGS Definitions, Classification and Packaging for Class 1 § 173.63 Packaging exceptions...-shore supply vessel; (3) Cargo compartment of a cargo vessel; or (4) Passenger-carrying aircraft used to... hazardous substance, hazardous waste, marine pollutant, or is offered for transportation and transported...

  16. 49 CFR 173.63 - Packaging exceptions.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... Transportation Other Regulations Relating to Transportation PIPELINE AND HAZARDOUS MATERIALS SAFETY... SHIPMENTS AND PACKAGINGS Definitions, Classification and Packaging for Class 1 § 173.63 Packaging exceptions...-shore supply vessel; (3) Cargo compartment of a cargo vessel; or (4) Passenger-carrying aircraft used...

  17. EDExpress Packaging Training, 2001-2002.

    ERIC Educational Resources Information Center

    Office of Student Financial Assistance (ED), Washington, DC.

    Packaging is the process of finding the best combination of aid to meet a student's financial need for college, given limited resources and the institutional constraints that vary from school to school. This guide to packaging under the EDExpress software system outlines three steps to packaging. The first is determining the student's need for…

  18. 9 CFR 317.24 - Packaging materials.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 9 Animals and Animal Products 2 2014-01-01 2014-01-01 false Packaging materials. 317.24 Section... INSPECTION AND CERTIFICATION LABELING, MARKING DEVICES, AND CONTAINERS General § 317.24 Packaging materials... packaging materials must be safe for their intended use within the meaning of section 409 of the...

  19. 9 CFR 317.24 - Packaging materials.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 9 Animals and Animal Products 2 2012-01-01 2012-01-01 false Packaging materials. 317.24 Section... INSPECTION AND CERTIFICATION LABELING, MARKING DEVICES, AND CONTAINERS General § 317.24 Packaging materials... packaging materials must be safe for their intended use within the meaning of section 409 of the...

  20. 9 CFR 317.24 - Packaging materials.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 9 Animals and Animal Products 2 2011-01-01 2011-01-01 false Packaging materials. 317.24 Section... INSPECTION AND CERTIFICATION LABELING, MARKING DEVICES, AND CONTAINERS General § 317.24 Packaging materials... packaging materials must be safe for their intended use within the meaning of section 409 of the...

  1. 9 CFR 317.24 - Packaging materials.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 9 Animals and Animal Products 2 2010-01-01 2010-01-01 false Packaging materials. 317.24 Section... INSPECTION AND CERTIFICATION LABELING, MARKING DEVICES, AND CONTAINERS General § 317.24 Packaging materials... packaging materials must be safe for their intended use within the meaning of section 409 of the...

  2. 7 CFR 993.22 - Consumer package.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Consumer package. 993.22 Section 993.22 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Order Regulating Handling Definitions § 993.22 Consumer package. Consumer package means: (a)...

  3. 7 CFR 65.130 - Consumer package.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 3 2010-01-01 2010-01-01 false Consumer package. 65.130 Section 65.130 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards..., PEANUTS, AND GINSENG General Provisions Definitions § 65.130 Consumer package. Consumer package means...

  4. Efficient parallel simulation of CO2 geologic sequestration in saline aquifers

    SciTech Connect

    Zhang, Keni; Doughty, Christine; Wu, Yu-Shu; Pruess, Karsten

    2007-01-01

    An efficient parallel simulator for large-scale, long-term CO2 geologic sequestration in saline aquifers has been developed. The parallel simulator is a three-dimensional, fully implicit model that solves large, sparse linear systems arising from discretization of the partial differential equations for mass and energy balance in porous and fractured media. The simulator is based on the ECO2N module of the TOUGH2 code and inherits all the process capabilities of the single-CPU TOUGH2 code, including a comprehensive description of the thermodynamics and thermophysical properties of H2O-NaCl-CO2 mixtures, modeling single and/or two-phase isothermal or non-isothermal flow processes, two-phase mixtures, fluid phases appearing or disappearing, as well as salt precipitation or dissolution. The new parallel simulator uses MPI for parallel implementation, the METIS software package for simulation domain partitioning, and the iterative parallel linear solver package Aztec for solving linear equations by multiple processors. In addition, the parallel simulator has been implemented with an efficient communication scheme. Test examples show that a linear or super-linear speedup can be obtained on Linux clusters as well as on supercomputers. Because of the significant improvement in both simulation time and memory requirement, the new simulator provides a powerful tool for tackling larger scale and more complex problems than can be solved by single-CPU codes. A high-resolution simulation example is presented that models buoyant convection, induced by a small increase in brine density caused by dissolution of CO2.
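    The domain-partitioning step can be illustrated with a toy stand-in for what METIS does on the full connectivity graph (this is our own 1-D sketch, not the simulator's code): split the cells into per-processor blocks whose sizes differ by at most one, which in one dimension also minimizes the interface count.

```python
def partition_1d(n_cells, n_procs):
    """Toy 1-D domain partitioner: n_procs contiguous blocks, balanced
    to within one cell. METIS solves the analogous problem on a general
    unstructured grid graph, balancing load while minimizing the edge
    cut (i.e., interprocessor communication)."""
    base, extra = divmod(n_cells, n_procs)
    blocks, start = [], 0
    for r in range(n_procs):
        size = base + (1 if r < extra else 0)  # spread the remainder
        blocks.append(range(start, start + size))
        start += size
    return blocks

blocks = partition_1d(10, 3)
# block sizes 4, 3, 3: balanced to within one cell
```

    In the real simulator each block becomes the set of grid cells owned by one MPI rank, and only the cells on block boundaries require communication during the linear solve.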

  5. Parallel Implementation of a Frozen Flow Based Wavefront Reconstructor

    NASA Astrophysics Data System (ADS)

    Nagy, J.; Kelly, K.

    2013-09-01

    Obtaining high resolution images of space objects from ground based telescopes is challenging, often requiring the use of a multi-frame blind deconvolution (MFBD) algorithm to remove blur caused by atmospheric turbulence. In order for an MFBD algorithm to be effective, it is necessary to obtain a good initial estimate of the wavefront phase. Although wavefront sensors work well in low turbulence situations, they are less effective in high turbulence, such as when imaging in daylight, or when imaging objects that are close to the Earth's horizon. One promising approach, which has been shown to work very well in high turbulence settings, uses a frozen flow assumption on the atmosphere to capture the inherent temporal correlations present in consecutive frames of wavefront data. Exploiting these correlations can lead to more accurate estimation of the wavefront phase, and the associated PSF, which leads to more effective MFBD algorithms. However, with the current serial implementation, the approach can be prohibitively expensive in situations when it is necessary to use a large number of frames. In this poster we describe a parallel implementation that overcomes this constraint. The parallel implementation exploits sparse matrix computations, and uses the Trilinos package developed at Sandia National Laboratories. Trilinos provides a variety of core mathematical software for parallel architectures that has been designed using high-quality software engineering practices. The package is open source, and portable to a variety of high-performance computing architectures.
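    The frozen flow assumption has a simple linear-algebra form that explains why sparse matrices appear: consecutive wavefront frames are translates of one frozen phase screen, so frame t+1 is (approximately) a shift matrix applied to frame t. A minimal 1-D sketch under our own assumptions (integer shift, dense matrix for clarity; the real reconstructor stores these operators sparse and handles 2-D fractional shifts):

```python
import numpy as np

def shift_operator(n, dx):
    """Matrix S with (S @ phi)[i] = phi[i + dx]: advancing the frozen
    screen by dx samples. Almost all entries are zero, which is why the
    parallel implementation uses sparse matrix computations."""
    return np.eye(n, k=dx)

phi = np.arange(6.0)          # a frozen 1-D phase screen sample
S = shift_operator(6, 2)      # wind moves the screen 2 samples per frame
frame_next = S @ phi          # [2, 3, 4, 5, 0, 0]; zeros where new
                              # turbulence blows into the aperture
```

    Stacking such shift operators for many frames couples all the measurements to a single screen estimate, which is the temporal correlation the reconstructor exploits.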

  6. Chip Scale Package Implementation Challenges

    NASA Technical Reports Server (NTRS)

    Ghaffarian, Reza

    1998-01-01

    The JPL-led MicrotypeBGA Consortium of enterprises representing government agencies and private companies has joined together to pool in-kind resources for developing the quality and reliability of chip scale packages (CSPs) for a variety of projects. In the process of building the Consortium CSP test vehicles, many challenges were identified regarding various aspects of technology implementation. This paper will present our experience in the areas of technology implementation challenges, including design and building both standard and microvia boards, and assembly of two types of test vehicles. We also discuss the most recent package isothermal aging to 2,000 hours at 100 C and 125 C and thermal cycling test results to 1,700 cycles in the range of -30 to 100 C.

  7. Parallel processing of atmospheric chemistry calculations: Preliminary considerations

    SciTech Connect

    Elliott, S.; Jones, P.

    1995-01-01

    Global climate calculations are already saturating the class of modern vector supercomputers with only a few central processing units. Increased resolution and inclusion of routines to deal with biogeochemical portions of the terrestrial climate system will soon demand massively parallel approaches. The atmospheric photochemistry ensemble is intimately linked to climate through the trace greenhouse gases ozone and methane, and modules for representing it are being attached to global three-dimensional transport and GCM frameworks. Atmospheric kinetics involve dozens of highly interactive tracers and so will accentuate the need for parallel processing of earth system simulations. In the present text we lay some of the groundwork for addition of atmospheric kinetics packages to GCM and global scale atmospheric models on multiply parallel computers. The discussion is tailored for consumption by the photochemical modelling community. After a review of numerical atmospheric chemistry methods, we examine how kinetics can be implemented on a parallel computer. We concentrate especially on data layout and flexibility and how these can be implemented in various programming models. We conclude that chemistry can be implemented rather easily within existing frameworks of several parallel atmospheric models. However, memory limitations may preclude high resolution studies of global chemistry.
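    The key data-layout observation is that, within an operator-split chemistry step, each grid cell's kinetics depends only on its own concentrations, so the spatial dimension can be distributed with no communication. A minimal sketch (our own toy A -> B reaction and names, not any model's kinetics package):

```python
import numpy as np

def kinetics_step(c, k, dt):
    """One explicit Euler chemistry step for a single cell: A -> B at
    rate k. Mass-conserving: what A loses, B gains."""
    a, b = c
    da = -k * a * dt
    return np.array([a + da, b - da])

def update_domain(conc, k, dt):
    """Per-cell loop over a (n_cells, n_species) array. On a parallel
    machine each processor owns a block of cells and runs exactly this
    loop on its block; no interprocessor communication is needed during
    the chemistry step (transport is where communication happens)."""
    return np.array([kinetics_step(c, k, dt) for c in conc])

conc = np.array([[1.0, 0.0], [2.0, 0.0]])   # two cells, species A and B
out = update_domain(conc, k=0.1, dt=1.0)
# cell 0 -> A: 0.9, B: 0.1 ; cell 1 -> A: 1.8, B: 0.2
```

    Real mechanisms use stiff implicit solvers per cell rather than explicit Euler, but the independence across cells, and hence the data layout, is the same.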

  8. Retargeting of existing FORTRAN program and development of parallel compilers

    NASA Technical Reports Server (NTRS)

    Agrawal, Dharma P.

    1988-01-01

    The software models used in implementing the parallelizing compiler for the B-HIVE multiprocessor system are described. The various models and strategies used in the compiler development are: flexible granularity model, which allows a compromise between two extreme granularity models; communication model, which is capable of precisely describing the interprocessor communication timings and patterns; loop type detection strategy, which identifies different types of loops; critical path with coloring scheme, which is a versatile scheduling strategy for any multicomputer with some associated communication costs; and loop allocation strategy, which realizes optimum overlapped operations between computation and communication of the system. Using these models, several sample routines of the AIR3D package are examined and tested. It may be noted that automatically generated codes are highly parallelized to provide the maximum degree of parallelism, obtaining speedup on up to a 28- to 32-processor system. A comparison of parallel codes for both the existing and proposed communication models is performed and the corresponding expected speedup factors are obtained. The experimentation shows that the B-HIVE compiler produces more efficient codes than existing techniques. Work is progressing well in completing the final phase of the compiler. Numerous enhancements are needed to improve the capabilities of the parallelizing compiler.
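    Critical-path scheduling of the kind mentioned above starts by computing each task's "bottom level": the longest compute path from the task to a sink, which ranks tasks for list scheduling. A minimal sketch with a hypothetical 4-task DAG (names and costs are ours; communication costs, which the B-HIVE scheme folds in, are omitted for brevity):

```python
def bottom_level(tasks, succ):
    """Bottom-level priority for each task in a DAG: its own cost plus
    the longest bottom level among its successors. Tasks with the
    largest bottom level lie on the critical path and are scheduled
    first."""
    bl = {}
    def rec(t):
        if t not in bl:
            bl[t] = tasks[t] + max((rec(s) for s in succ.get(t, [])),
                                   default=0)
        return bl[t]
    for t in tasks:
        rec(t)
    return bl

# Hypothetical DAG: a feeds b and c, both feed d. Values are task costs.
tasks = {"a": 2, "b": 3, "c": 1, "d": 2}
succ = {"a": ["b", "c"], "b": ["d"], "c": ["d"]}
prio = bottom_level(tasks, succ)
# prio["a"] = 2 + max(3 + 2, 1 + 2) = 7, i.e. the critical path a-b-d
```

    A list scheduler then repeatedly assigns the ready task with the highest priority to the processor that can start it earliest, adding communication delay when a predecessor ran on a different processor.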

  9. Transportation and packaging resource guide

    SciTech Connect

    Arendt, J.W.; Gove, R.M.; Welch, M.J.

    1994-12-01

    The purpose of this resource guide is to provide a convenient reference document of information that may be useful to the U.S. Department of Energy (DOE) and DOE contractor personnel involved in packaging and transportation activities. An attempt has been made to present the terminology of DOE community usage as it currently exists. DOE's mission is changing with emphasis on environmental cleanup. The terminology or nomenclature that has resulted from this expanded mission is included for the packaging and transportation user for reference purposes. Older terms still in use during the transition have been maintained. The Packaging and Transportation Resource Guide consists of four sections: Sect. 1, Introduction; Sect. 2, Abbreviations and Acronyms; Sect. 3, Definitions; and Sect. 4, References for packaging and transportation of hazardous materials and related activities, and Appendices A and B. Information has been collected from DOE Orders and DOE documents; U.S. Department of Transportation (DOT), U.S. Environmental Protection Agency (EPA), and U.S. Nuclear Regulatory Commission (NRC) regulations; and International Atomic Energy Agency (IAEA) standards and other international documents. The definitions included in this guide may not always be a regulatory definition but are the more common DOE usage. In addition, the definitions vary among regulatory agencies. It is, therefore, suggested that if a definition is to be used in a regulatory or a legal compliance issue, the definition should be verified with the appropriate regulation. To assist in locating definitions in the regulations, a listing of all definition sections in the regulations is included in Appendix B. In many instances, the appropriate regulatory reference is indicated in the right-hand margin.

  10. Microwave thawing package and method

    DOEpatents

    Fathi, Zakaryae; Lauf, Robert J.

    2004-03-16

    A package for containing frozen liquids during an electromagnetic thawing process includes: a first section adapted for containing a frozen material and exposing the frozen material to electromagnetic energy; a second section adapted for receiving thawed liquid material and shielding the thawed liquid material from further exposure to electromagnetic energy; and a fluid communication means for allowing fluid flow between the first section and the second section.

  11. Superfund overview: Fact sheet package

    SciTech Connect

    Not Available

    1990-11-01

    The package consists of a series of one-page, concise, public-oriented discussions of the various Superfund issues. They are: The Challenge of Superfund, History of Superfund, The Superfund Cleanup Process, Superfund: Fact vs. Fiction, Progress in Cleanup: FY 1980 - FY 1990, FY '90 Superfund Successes, Who Pays for Superfund, Superfund Enforcement - Making Polluters Pay, Superfund Blueprint, Superfund Contracts, Superfund Technology, and Superfund: Future Strategy and Directions.

  12. Hydraulics Graphics Package. Users Manual

    DTIC Science & Technology

    1985-11-01

    [OCR-garbled extract.] The manual credits the Engineering Center, Corps of Engineers, Department of the Army (Davis, California 95616; (916) 551-1748, FTS 460-1748) as the origin of the program(s). Recoverable table-of-contents fragments from the HGP (Hydraulics Graphics Package) Users Manual include: Chapter 1, Introduction; Section 2.4, Use of Disk Files; Chapter 3, HGP Free Format User Input; and Section 3.1, Command Language Syntax.

  13. Small Cold Temperature Instrument Packages

    NASA Astrophysics Data System (ADS)

    Clark, P. E.; Millar, P. S.; Yeh, P. S.; Feng, S.; Brigham, D.; Beaman, B.

    We are developing a small cold-temperature instrument package concept that integrates a cold-temperature power system with ultra-low-temperature, ultra-low-power electronics components and power supplies now under development into a 'cold temperature surface operational' version of a planetary surface instrument package. We are already developing a lower-power, lower-temperature version of an instrument of mutual interest to SMD and ESMD to support the search for volatiles (the mass spectrometer VAPoR, Volatile Analysis by Pyrolysis of Regolith), both as a stand-alone instrument and as part of an environmental monitoring package. We build on our previous work to develop strategies for incorporating Ultra Low Temperature/Ultra Low Power (ULT/ULP) electronics, lower-voltage power supplies, and innovative thermal design concepts for instrument packages. Cryotesting has indicated that our small Si RHBD CMOS chips can deliver >80% of room-temperature performance at 40 K (the nominal minimum lunar surface temperature). We leverage collaborations, past and current, with the JPL battery development program to increase power system efficiency in extreme environments. We harness advances in MOSFET technology that provide lower voltage thresholds for power switching circuits incorporated into our low-voltage power supply concept. Conventional power conversion is less efficient. Our low-power circuit concept based on 'synchronous rectification' could produce stable voltages as low as 0.6 V with 85% efficiency. Our distributed micro-battery-based power supply concept incorporates cold-temperature power supplies operating from a 4 V or 8 V battery. This work will allow us to provide guidelines for applying the low-temperature, low-power system approaches generically to the widest range of surface instruments.

  14. The `TTIME' Package: Performance Evaluation in a Cluster Computing Environment

    NASA Astrophysics Data System (ADS)

    Howe, Marico; Berleant, Daniel; Everett, Albert

    2011-06-01

    The objective of translating developmental event time across mammalian species is to gain an understanding of the timing of human developmental events based on the known timing of those events in animals. The potential benefits include improvements to diagnostic and intervention capabilities. The CRAN `ttime' package provides the functionality to infer unknown event timings and to investigate phylogenetic proximity using hierarchical clustering of both known and predicted event timings. The original generic mammalian model included nine eutherian mammals: Felis domestica (cat), Mustela putorius furo (ferret), Mesocricetus auratus (hamster), Macaca mulatta (monkey), Homo sapiens (human), Mus musculus (mouse), Oryctolagus cuniculus (rabbit), Rattus norvegicus (rat), and Acomys cahirinus (spiny mouse). However, the data for this model are expected to grow as more developmental-event data are identified and incorporated into the analysis. Evaluating the performance of the `ttime' package in a cluster computing environment, versus a comparative analysis in a serial computing environment, provides an important computational performance assessment. A theoretical analysis is the first stage of a process whose second stage, if justified by the theoretical analysis, is to investigate an actual implementation of the `ttime' package in a cluster computing environment and to understand the parallelization process that underlies the implementation.
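
    The abstract does not show `ttime`'s R interface; as a language-neutral illustration of the hierarchical clustering it describes, here is a toy single-linkage sketch in Python. The species names and event timings below are hypothetical stand-ins, not data from the package.

    ```python
    def single_linkage(profiles, n_clusters):
        """Toy agglomerative (single-linkage) clustering, in the spirit of
        `ttime`'s grouping of species by developmental-event timings.
        `profiles` maps a species name to a tuple of event times."""
        def dist(a, b):
            # single linkage: distance between clusters = closest member pair
            return min(
                sum((x - y) ** 2 for x, y in zip(profiles[i], profiles[j])) ** 0.5
                for i in a for j in b
            )
        clusters = [{name} for name in profiles]
        while len(clusters) > n_clusters:
            # merge the closest pair of clusters
            i, j = min(
                ((i, j) for i in range(len(clusters)) for j in range(i + 1, len(clusters))),
                key=lambda ij: dist(clusters[ij[0]], clusters[ij[1]]),
            )
            clusters[i] |= clusters.pop(j)
        return clusters

    # Hypothetical event-time profiles (days) for three species:
    data = {"mouse": (9.5, 11.0), "rat": (10.5, 12.0), "human": (28.0, 33.0)}
    groups = single_linkage(data, 2)
    ```

    With these made-up timings the two rodents cluster together, leaving the human profile in its own group, mirroring the phylogenetic-proximity idea above.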

  15. Generalized waste package containment model

    SciTech Connect

    Liebetrau, A.M.; Apted, M.J.

    1985-02-01

    The US Department of Energy (DOE) is developing a performance assessment strategy to demonstrate compliance with standards and technical requirements of the Environmental Protection Agency (EPA) and the Nuclear Regulatory Commission (NRC) for the permanent disposal of high-level nuclear wastes in geologic repositories. One aspect of this strategy is the development of a unified performance model of the entire geologic repository system. Details of a generalized waste package containment (WPC) model and its relationship with other components of an overall repository model are presented in this paper. The WPC model provides stochastically determined estimates of the distributions of times-to-failure of the barriers of a waste package by various corrosion mechanisms and degradation processes. The model consists of a series of modules which employ various combinations of stochastic (probabilistic) and mechanistic process models, and which are individually designed to reflect the current state of knowledge. The WPC model is designed not only to take account of various site-specific conditions and processes, but also to deal with a wide range of site, repository, and waste package configurations. 11 refs., 3 figs., 2 tabs.
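
    The WPC model's stochastic treatment of barrier times-to-failure can be illustrated with a minimal Monte Carlo sketch. The barrier names, distributions and parameters below are hypothetical stand-ins for illustration only, not values from the paper.

    ```python
    import random

    random.seed(0)  # reproducible sketch

    def sample_package_failure_times(n_trials=10000):
        """Monte Carlo sketch: containment is lost once every barrier has
        failed in sequence (inner barrier exposed only after the outer fails).
        Distributions and parameters are hypothetical."""
        times = []
        for _ in range(n_trials):
            overpack = random.expovariate(1.0 / 300.0)   # mean 300 years (assumed)
            canister = random.lognormvariate(6.0, 0.5)   # median ~403 years (assumed)
            times.append(overpack + canister)
        return times

    times = sample_package_failure_times()
    median = sorted(times)[len(times) // 2]  # a point estimate from the distribution
    ```

    A real WPC module would replace these generic lifetimes with mechanistic corrosion-process models, but the sampling structure is the same.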

  16. Active packaging with antifungal activities.

    PubMed

    Nguyen Van Long, N; Joly, Catherine; Dantigny, Philippe

    2016-03-02

    There have been many reviews concerned with antimicrobial food packaging and with the use of antifungal compounds, but none has provided an exhaustive picture of the applications of active packaging to control fungal spoilage. Many studies have recently been done in these fields, so it is timely to review this topic. This article examines the effects of essential oils, preservatives, natural products, chemical fungicides, nanoparticles coated onto different films, and chitosan, both in vitro on the growth of moulds and in vivo on the mould-free shelf-life of bread, cheese, and fresh fruits and vegetables. A short section is also dedicated to yeasts. All the applications are described from a microbiological point of view and are sorted by species name. Methods and results are discussed. Essential oils and preservatives were ranked by increasing efficacy against mould growth. For all the tested molecules, Penicillium species were shown to be more sensitive than Aspergillus species. However, comparison between results was difficult because the efficiency of active packaging appeared to depend greatly on environmental factors of the food, such as water activity, pH, temperature, NaCl concentration, and the nature, size, and mode of application of the films, in addition to the fact that the amount of released antifungal compound was not constant over time.

  17. Waste Package Design Methodology Report

    SciTech Connect

    D.A. Brownson

    2001-09-28

    The objective of this report is to describe the analytical methods and processes used by the Waste Package Design Section to establish the integrity of the various waste package designs, the emplacement pallet, and the drip shield. The scope of this report shall be the methodology used in criticality, risk-informed, shielding, source term, structural, and thermal analyses. The basic features and appropriateness of the methods are illustrated, and the processes are defined whereby input values and assumptions flow through the application of those methods to obtain designs that ensure defense-in-depth as well as satisfy requirements on system performance. Such requirements include those imposed by federal regulation, from both the U.S. Department of Energy (DOE) and U.S. Nuclear Regulatory Commission (NRC), and those imposed by the Yucca Mountain Project to meet repository performance goals. The report is to be used, in part, to describe the waste package design methods and techniques to be used for producing input to the License Application Report.

  18. Hazardous Material Packaging and Transportation

    SciTech Connect

    Hypes, Philip A.

    2016-02-04

    This is a student training course. Some course objectives are to: recognize and use standard international and US customary units to describe activities and exposure rates associated with radioactive material; determine whether a quantity of a single radionuclide meets the definition of a class 7 (radioactive) material; determine, for a given single radionuclide, the shipping quantity activity limits per 49 Code of Federal Regulations (CFR) 173.435; determine the appropriate radioactive material hazard class proper shipping name for a given material; determine when a single radionuclide meets the DOT definition of a hazardous substance; determine the appropriate packaging required for a given radioactive material; identify the markings to be placed on a package of radioactive material; determine the label(s) to apply to a given radioactive material package; identify the entry requirements for radioactive material labels; determine the proper placement for radioactive material label(s); identify the shipping paper entry requirements for radioactive material; select the appropriate placards for a given radioactive material shipment or vehicle load; and identify allowable transport limits and unacceptable transport conditions for radioactive material.

  19. Radioactive material package seal tests

    SciTech Connect

    Madsen, M.M.; Humphreys, D.L.; Edwards, K.R.

    1990-01-01

    General design or test performance requirements for radioactive materials (RAM) packages are specified in Title 10 of the US Code of Federal Regulations Part 71 (US Nuclear Regulatory Commission, 1983). The requirements for Type B packages provide a broad range of environments under which the system must contain the RAM without posing a threat to health or property. Seals that provide the containment system interface between the packaging body and the closure must function in both high- and low-temperature environments under dynamic and static conditions. A seal technology program, jointly funded by the US Department of Energy Office of Environmental Restoration and Waste Management (EM) and the Office of Civilian Radioactive Waste Management (OCRWM), was initiated at Sandia National Laboratories. Experiments were performed in this program to characterize the behavior of several static seal materials at low temperatures. Helium leak tests on face seals were used to compare the materials. Materials tested include butyl, neoprene, ethylene propylene, fluorosilicone, silicone, Eypel, Kalrez, Teflon, fluorocarbon, and Teflon/silicone composites. Because most elastomer O-ring applications are for hydraulic systems, manufacturer low-temperature ratings are based on methods that simulate this use. With the exception of silicone S613-60, the seal materials tested in this program with a fixture similar to a RAM cask closure are not leak tight (1.0 × 10⁻⁷ std cm³/s) at manufacturer low-temperature ratings. 8 refs., 3 figs., 1 tab.

  20. Parallel processor engine model program

    NASA Technical Reports Server (NTRS)

    Mclaughlin, P.

    1984-01-01

    The Parallel Processor Engine Model Program is a generalized engineering tool intended to aid in the design of parallel-processing real-time simulations of turbofan engines. It is written in the FORTRAN programming language and executes as a subset of the SOAPP simulation system. Input/output and execution control are provided by SOAPP; however, the analysis, emulation and simulation functions are completely self-contained. The program provides a framework in which a wide variety of parallel processing architectures can be evaluated, as well as tools with which the parallel implementation of a real-time simulation technique can be assessed.

  1. Parallel processing and expert systems

    NASA Technical Reports Server (NTRS)

    Yan, Jerry C.; Lau, Sonie

    1991-01-01

    Whether it be monitoring the thermal subsystem of Space Station Freedom or controlling the navigation of an autonomous rover on Mars, NASA missions in the 1990s cannot enjoy an increased level of autonomy without the efficient use of expert systems. Merely increasing the computational speed of uniprocessors may not guarantee that real-time demands are met for large expert systems. Speed-up via parallel processing must be pursued alongside the optimization of sequential implementations. Prototypes of parallel expert systems have been built at universities and industrial labs in the U.S. and Japan. The state-of-the-art research in progress related to parallel execution of expert systems was surveyed. The survey is divided into three major sections: (1) multiprocessors for parallel expert systems; (2) parallel languages for symbolic computations; and (3) measurements of parallelism of expert systems. Results to date indicate that the parallelism achieved for these systems is small. In order to obtain greater speed-ups, data parallelism and application parallelism must be exploited.

  2. ARPREC: An arbitrary precision computation package

    SciTech Connect

    Bailey, David H.; Yozo, Hida; Li, Xiaoye S.; Thompson, Brandon

    2002-09-01

    This paper describes a new software package for performing arithmetic with an arbitrarily high level of numeric precision. It is based on the earlier MPFUN package, enhanced with special IEEE floating-point numerical techniques and several new functions. This package is written in C++ code for high performance and broad portability and includes both C++ and Fortran-90 translation modules, so that conventional C++ and Fortran-90 programs can utilize the package with only very minor changes. This paper includes a survey of some of the interesting applications of this package and its predecessors.
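
    ARPREC itself is a C++/Fortran-90 library and its API is not reproduced in the abstract; as a stand-in illustration of the arbitrary-precision idea, Python's standard `decimal` module can carry out the same kind of computation at a user-chosen precision.

    ```python
    from decimal import Decimal, getcontext

    # Stand-in for an arbitrary-precision package (not ARPREC's API):
    # compute sqrt(2) to 50 significant digits.
    getcontext().prec = 50
    root2 = Decimal(2).sqrt()

    # Squaring the result recovers 2 to within the working precision.
    error = abs(root2 * root2 - 2)
    ```

    Raising `getcontext().prec` extends the result to any desired number of digits, which is the essential capability packages like ARPREC provide for compiled languages.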

  3. The Model 9977 Radioactive Material Packaging Primer

    SciTech Connect

    Abramczyk, G.

    2015-10-09

    The Model 9977 Packaging is a single-containment, drum-style radioactive material (RAM) shipping container designed, tested and analyzed to meet the performance requirements of Title 10 of the Code of Federal Regulations Part 71. A radioactive material shipping package, in combination with its contents, must perform three functions (note that the performance criteria specified in the Code of Federal Regulations have alternate limits for normal operations and for accident conditions): Containment, in that the package must contain the radioactive material within it; Shielding, in that the packaging must limit its users and the public to radiation doses within specified limits; and Subcriticality, in that the package must maintain its radioactive material as subcritical.

  4. Examination of SR101 shipping packages

    SciTech Connect

    Daugherty, W. L.

    2015-03-01

    Four SR101 shipping packages were removed from service and provided for disassembly and examination of the internal fiberboard assemblies. These packages were 20 years old, and had experienced varying levels of degradation. Two of the packages were successfully disassembled and fiberboard samples were removed from these packages and tested. Mechanical and thermal property values are generally comparable to or higher than baseline values measured on fiberboard from 9975 packages, which differs primarily in the specified density range. While baseline data for the SR101 material is not available, this comparison with 9975 material suggests that the material properties of the SR101 fiberboard have not significantly degraded.

  5. UWV (Unmanned Water Vehicle) - Umbra Package v. 1.0

    SciTech Connect

    Fred Oppel, SNL 06134

    2012-09-13

    This package contains modules that model the mobility of systems moving in the water. The package currently models first-order physics (basically a velocity integrator). It depends on interface classes (typically base classes) that reside in the Mobility package.
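
    A first-order velocity integrator of the kind described can be sketched in a few lines. The Umbra package itself is not shown here (its modules are C++ classes); this Python function is only a generic illustration of the physics, not its API.

    ```python
    def integrate_position(pos, vel, dt, steps):
        """First-order (Euler) velocity integration: position advances by
        velocity * dt at each time step; velocity itself is held constant."""
        x, y, z = pos
        vx, vy, vz = vel
        for _ in range(steps):
            x += vx * dt
            y += vy * dt
            z += vz * dt
        return (x, y, z)

    # A vehicle moving at 2 m/s along x for 10 s, in 0.1 s steps:
    final = integrate_position((0.0, 0.0, 0.0), (2.0, 0.0, 0.0), 0.1, 100)
    ```

    Higher-fidelity mobility models would add acceleration, drag and buoyancy terms; a pure velocity integrator is the "first order physics" baseline.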

  6. 49 CFR 178.915 - General Large Packaging standards.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Large Packaging. (d) A Large Packaging consisting of packagings within a framework must be so constructed that the packaging is not damaged by the framework and is retained within the framework at...

  7. Parallel Programming in the Age of Ubiquitous Parallelism

    NASA Astrophysics Data System (ADS)

    Pingali, Keshav

    2014-04-01

    Multicore and manycore processors are now ubiquitous, but parallel programming remains as difficult as it was 30-40 years ago. During this time, our community has explored many promising approaches, including functional and dataflow languages, logic programming, and automatic parallelization using program analysis and restructuring, but none of these approaches has succeeded except in a few niche application areas. In this talk, I will argue that these problems arise largely from the computation-centric foundations and abstractions that we currently use to think about parallelism. In their place, I will propose a novel data-centric foundation for parallel programming called the operator formulation, in which algorithms are described in terms of actions on data. The operator formulation shows that a generalized form of data-parallelism called amorphous data-parallelism is ubiquitous even in complex, irregular graph applications such as mesh generation/refinement/partitioning and SAT solvers. Regular algorithms emerge as a special case of irregular ones, and many application-specific optimization techniques can be generalized to a broader context. The operator formulation also leads to a structural analysis of algorithms called TAO-analysis that provides implementation guidelines for exploiting parallelism efficiently. Finally, I will describe a system called Galois based on these ideas for exploiting amorphous data-parallelism on multicores and GPUs.

  8. Trajectories in parallel optics.

    PubMed

    Klapp, Iftach; Sochen, Nir; Mendlovic, David

    2011-10-01

    In our previous work we showed the ability to improve an optical system's matrix condition by optical design, thereby improving its robustness to noise. It was shown that by using singular value decomposition, a target point-spread function (PSF) matrix can be defined for an auxiliary optical system that works in parallel with the original system to achieve such an improvement. In this paper, after briefly introducing the all-optics implementation of the auxiliary system, we show a method to decompose the target PSF matrix. This is done through a series of shifted responses of the auxiliary optics (named trajectories), where a complicated hardware filter is replaced by postprocessing. This process manipulates the pixel-confined PSF response of simple auxiliary optics, which in turn creates an auxiliary system with the required PSF matrix. The method is simulated on two space-variant systems and reduces their condition numbers from 18,598 to 197 and from 87,640 to 5.75, respectively. We study the latter result and show significant improvement in image restoration performance compared to a system without auxiliary optics and to other previously suggested hybrid solutions. Image restoration results show that at low signal-to-noise ratios, the trajectories method gives a significant advantage over alternative approaches. A third, space-invariant study case is explored briefly, showing a significant improvement in matrix condition number from 1.9160 × 10¹³ to 34,526.

  9. High Performance Parallel Architectures

    NASA Technical Reports Server (NTRS)

    El-Ghazawi, Tarek; Kaewpijit, Sinthop

    1998-01-01

    Traditional remote sensing instruments are multispectral, collecting observations at a few different spectral bands. Recently, many hyperspectral instruments, which can collect observations at hundreds of bands, have become operational. Furthermore, there have been ongoing research efforts on ultraspectral instruments that can produce observations at thousands of spectral bands. While these remote sensing technology developments hold great promise for new findings in Earth and space science, they present many challenges, including the need for faster processing of the increased data volumes and for methods of data reduction. Dimension reduction is a spectral transformation aimed at concentrating the vital information and discarding redundant data. One such transformation, widely used in remote sensing, is Principal Components Analysis (PCA). This report summarizes our progress on the development of a parallel PCA and its implementation on two Beowulf cluster configurations: one with a fast Ethernet switch and the other with a Myrinet interconnect. Details of the implementation and performance results, for typical sets of multispectral and hyperspectral NASA remote sensing data, are presented and analyzed based on the algorithm requirements and the underlying machine configuration. It is shown that the PCA application is quite challenging and hard to scale on Ethernet-based clusters. However, the measurements also show that a high-performance interconnection network such as Myrinet better matches the high communication demand of PCA and can lead to more efficient PCA execution.
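
    The PCA transformation at the heart of this work reduces to an eigendecomposition of the data's covariance matrix. A minimal pure-Python sketch for 2-D data (using the closed-form eigenvalues of a symmetric 2x2 matrix) illustrates the idea; it is not the parallel, high-dimensional implementation described above.

    ```python
    import math

    def covariance_eigenvalues_2d(points):
        """PCA sketch for 2-D data: return the eigenvalues of the sample
        covariance matrix, largest first. The leading eigenvector spans the
        direction of maximum variance (the first principal component)."""
        n = len(points)
        mx = sum(p[0] for p in points) / n
        my = sum(p[1] for p in points) / n
        sxx = sum((p[0] - mx) ** 2 for p in points) / (n - 1)
        syy = sum((p[1] - my) ** 2 for p in points) / (n - 1)
        sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / (n - 1)
        # closed-form eigenvalues of the symmetric 2x2 covariance matrix
        tr, det = sxx + syy, sxx * syy - sxy * sxy
        disc = math.sqrt(max(tr * tr / 4 - det, 0.0))
        return (tr / 2 + disc, tr / 2 - disc)

    # Points along the line y = x carry all their variance in one component:
    lam1, lam2 = covariance_eigenvalues_2d([(0, 0), (1, 1), (2, 2), (3, 3)])
    ```

    For hyperspectral cubes with hundreds of bands the same covariance-plus-eigendecomposition structure applies, and it is the covariance accumulation that the parallel implementation distributes across processors.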

  10. NEUREC - a program package for 3D-reconstruction from serial sections using a microcomputer.

    PubMed

    Gras, H; Killmann, F

    1983-01-01

    A software package is described that reconstructs three-dimensional pictures in true perspective from a series of parallel sections using a low-cost computer system (Apple II plus). Data sampling via a graphics tablet and graphical output on the monitor screen or a digital plotter are assigned to different programs under the control of a menu program. The number of data points representing the object under study is unlimited. Originally written in BASIC, the programs were translated to machine language. As an application of the package, reconstructions of an identified large interneuron of the locust brain are presented.
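
    The "true perspective" projection underlying such serial-section reconstructions can be sketched simply: each digitized contour point is scaled according to its section depth. This is a generic pinhole model for illustration, not NEUREC's actual code.

    ```python
    def project_perspective(points, viewer_distance):
        """Perspective projection sketch for serial-section reconstruction:
        each digitized point (x, y, z), with z the depth of its section,
        is scaled by viewer_distance / (viewer_distance + z)."""
        projected = []
        for x, y, z in points:
            s = viewer_distance / (viewer_distance + z)  # nearer sections appear larger
            projected.append((x * s, y * s))
        return projected

    # Two identical contour points, on sections at depths 0 and 100:
    img = project_perspective([(10.0, 10.0, 0.0), (10.0, 10.0, 100.0)], 200.0)
    ```

    Drawing the projected contours section by section yields the true-perspective wireframe view described in the abstract.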

  11. Examination of shipping package 9975-02403

    SciTech Connect

    Daugherty, W. L.

    2016-03-01

    SRNL examined shipping package 9975-02403 following storage of nuclear material in K-Area Complex (KAC). As a result of field surveillance activities in KAC, this package was identified to contain several non-conforming and other conditions. Further examination of this package in SRNL confirmed significant moisture and mold in the bottom layers of the lower fiberboard assembly, and identified additional corrosion along the seam weld and on the bottom of the drum. It was recently recommended that checking for corrosion along the bottom edge of the drum be implemented for packages that are removed from storage, as well as high wattage packages remaining in storage. The appearance of such corrosion on 9975-02403 further indicates that such corrosion may provide an indication of significant moisture concentration and related degradation within the package. This condition is more likely to develop in packages with higher internal heat loads.

  12. EXAMINATION OF SHIPPING PACKAGE 9975-05050

    SciTech Connect

    Daugherty, W.

    2014-11-06

    Shipping package 9975-05050 was examined in K-Area following its identification as a high wattage package. Elevated temperature and fiberboard moisture content are key parameters that impact the degradation rate of fiberboard within 9975 packages in a storage environment. The high wattage of this package contributes significantly to component temperatures. After examination in K-Area, the package was provided to SRNL for further examination of the fiberboard assembly. The moisture content of the fiberboard was relatively low (compared to packages examined previously), but the moisture gradient (between fiberboard ID and OD surfaces) was relatively high, as would be expected for the high heat load. The cane fiberboard appeared intact and displayed no apparent change in integrity relative to a new package.

  13. Lunar Dust Analysis Package - LDAP

    NASA Astrophysics Data System (ADS)

    Chalkley, S. A.; Richter, L.; Goepel, M.; Sovago, M.; Pike, W. T.; Yang, S.; Rodenburg, J.; Claus, D.

    2012-09-01

    The Lunar Dust Analysis Package (LDAP) is a suite of payloads designed to operate in synergy with each other on the lunar surface. Combining these payloads in a single package allows very precise measurements of a particular regolith sample. At the same time, the integration saves mass, since common resources are shared, and significantly simplifies interfaces with the lander, benefiting the integration and development of the overall mission. Lunar dust represents a real hazard for lunar exploration due to its invasive, fine microscopic structure and toxic properties. However, it is also a valuable resource that could be exploited for future exploration if its characteristics and chemical composition are well known. Scientifically, the regolith provides insight into the Moon's formation process, and there are areas on the Moon that have never been explored before. For example, the lunar South Pole-Aitken Basin is the oldest and largest on the Moon, providing excavated deep crust not sampled by previous lunar landing missions. The SEA-led team has been designing a compact package, known as LDAP, which will provide key data on lunar dust properties. The intention is for this package to be part of the payload suite deployed on the ESA Lunar Lander Mission in 2018. The LDAP has centralised power and data electronics, including front-end electronics for the detectors, as well as a sample handling subsystem for the following set of internal instruments: • Optical Microscope, with 1 μm resolution, to provide context for the regolith samples • Raman and LIBS spectrographic instrumentation providing quantification of the mineral and elemental composition of the soil at close to grain scale, including the capability to detect (and measure the abundance of) crystalline and adsorbed volatile phases from their Raman signature. The LIBS equipment will also allow chemical

  14. Packaging.

    ERIC Educational Resources Information Center

    Fenninger, Peter L.

    1991-01-01

    Explores whether or not colleges and universities accurately present themselves to potential applicants and whether or not students honestly portray themselves to the schools to which they apply. Examines issues involved in recruiting and applying to college. Concludes that the issues of the college admission process are very complex, and that…

  15. Molecular Dynamics Simulations from SNL's Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS)

    DOE Data Explorer

    Plimpton, Steve; Thompson, Aidan; Crozier, Paul

    LAMMPS (http://lammps.sandia.gov/index.html) stands for Large-scale Atomic/Molecular Massively Parallel Simulator and is a code that can be used to model atoms or, as the LAMMPS website says, as a parallel particle simulator at the atomic, meso, or continuum scale. This Sandia-based website provides a long list of animations from large simulations. These were created using different visualization packages to read LAMMPS output, and each one provides the name of the PI and a brief description of the work done or visualization package used. See also the static images produced from simulations at http://lammps.sandia.gov/pictures.html The foundation paper for LAMMPS is: S. Plimpton, Fast Parallel Algorithms for Short-Range Molecular Dynamics, J Comp Phys, 117, 1-19 (1995), but the website also lists other papers describing contributions to LAMMPS over the years.

  16. A new parallel-vector finite element analysis software on distributed-memory computers

    NASA Technical Reports Server (NTRS)

    Qin, Jiangning; Nguyen, Duc T.

    1993-01-01

    A new parallel-vector finite element analysis software package MPFEA (Massively Parallel-vector Finite Element Analysis) is developed for large-scale structural analysis on massively parallel computers with distributed-memory. MPFEA is designed for parallel generation and assembly of the global finite element stiffness matrices as well as parallel solution of the simultaneous linear equations, since these are often the major time-consuming parts of a finite element analysis. Block-skyline storage scheme along with vector-unrolling techniques are used to enhance the vector performance. Communications among processors are carried out concurrently with arithmetic operations to reduce the total execution time. Numerical results on the Intel iPSC/860 computers (such as the Intel Gamma with 128 processors and the Intel Touchstone Delta with 512 processors) are presented, including an aircraft structure and some very large truss structures, to demonstrate the efficiency and accuracy of MPFEA.

  17. Radiation-hard/high-speed parallel optical links

    NASA Astrophysics Data System (ADS)

    Gan, K. K.; Buchholz, P.; Heidbrink, S.; Kagan, H. P.; Kass, R. D.; Moore, J.; Smith, D. S.; Vogt, M.; Ziolkowski, M.

    2016-09-01

    We have designed and fabricated a compact parallel optical engine for transmitting data at 5 Gb/s. The device consists of a 4-channel ASIC driving a VCSEL (Vertical Cavity Surface Emitting Laser) array in an optical package. The ASIC is designed using only core transistors in a 65 nm CMOS process to enhance radiation-hardness. The ASIC contains an 8-bit DAC to control the bias and modulation currents of the individual channels in the VCSEL array. The performance of the optical engine at 5 Gb/s is satisfactory.

  18. Parallel optics technology assessment for the versatile link project

    SciTech Connect

    Chramowicz, J.; Kwan, S.; Rivera, R.; Prosser, A.; /Fermilab

    2011-01-01

    This poster describes the assessment of commercially available and prototype parallel optics modules for possible use as back end components for the Versatile Link common project. The assessment covers SNAP12 transmitter and receiver modules as well as optical engine technologies in dense packaging options. Tests were performed using vendor evaluation boards (SNAP12) as well as custom evaluation boards (optical engines). The measurements obtained were used to compare the performance of these components with single channel SFP+ components operating at a transmission wavelength of 850 nm over multimode fibers.

  19. Parallelization of Rocket Engine Simulator Software (PRESS)

    NASA Technical Reports Server (NTRS)

    Cezzar, Ruknet

    1997-01-01

    The Parallelization of Rocket Engine System Software (PRESS) project is part of a collaborative effort with Southern University at Baton Rouge (SUBR), University of West Florida (UWF), and Jackson State University (JSU). The second-year funding, which supports two graduate students enrolled in our new Master's program in Computer Science at Hampton University and the principal investigator, has been obtained for the period from October 19, 1996 through October 18, 1997. The key part of the interim report was new directions for the second-year funding. This came about from discussions during the Rocket Engine Numeric Simulator (RENS) project meeting in Pensacola on January 17-18, 1997. At that time, a software agreement between Hampton University and NASA Lewis Research Center had already been concluded. That agreement concerns off-NASA-site experimentation with the PUMPDES/TURBDES software. Before this agreement, during the first year of the project, another large-scale FORTRAN-based software package, Two-Dimensional Kinetics (TDK), was being used for translation to an object-oriented language and parallelization experiments. However, that package proved to be too complex and lacking sufficient documentation for an effective translation effort to object-oriented C++ source code. The focus, this time with the better documented and more manageable PUMPDES/TURBDES package, was still on translation to C++ with design improvements. At the RENS meeting, however, the new impetus for the RENS projects in general, and PRESS in particular, shifted in two important ways. One was closer alignment with the work on the Numerical Propulsion System Simulator (NPSS) through cooperation and collaboration with the LERC ACLU organization. The other was to see whether and how NASA's various rocket design software can be run over local and intranets without any radical efforts for redesign and translation into object-oriented source code. There were also suggestions that the Fortran-based code be

  20. Parallel Computational Protein Design

    PubMed Central

    Zhou, Yichao; Donald, Bruce R.; Zeng, Jianyang

    2016-01-01

    Computational structure-based protein design (CSPD) is an important problem in computational biology, which aims to design or improve a prescribed protein function based on a protein structure template. It provides a practical tool for real-world protein engineering applications. A popular CSPD method that is guaranteed to find the global minimum energy conformation (GMEC) is to combine dead-end elimination (DEE) and A* tree search algorithms. However, in this framework, the A* search algorithm can run in exponential time in the worst case, which may become the computational bottleneck of a large-scale computational protein design process. To address this issue, we extend and add a new module to the OSPREY program that was previously developed in the Donald lab [1] to implement a GPU-based massively parallel A* algorithm for improving the protein design pipeline. By exploiting the modern GPU computational framework and optimizing the computation of the heuristic function for A* search, our new program, called gOSPREY, can provide up to four orders of magnitude speedup in large protein design cases with a small memory overhead compared to the traditional A* search algorithm implementation, while still guaranteeing optimality. In addition, gOSPREY can be configured to run in a bounded-memory mode to tackle problems in which the conformation space is too large and the global optimal solution could not be computed previously. Furthermore, the GPU-based A* algorithm implemented in the gOSPREY program can be combined with state-of-the-art rotamer pruning algorithms such as iMinDEE [2] and DEEPer [3] to also consider continuous backbone and side-chain flexibility. PMID:27914056

  1. Parallel Adaptive Mesh Refinement

    SciTech Connect

    Diachin, L; Hornung, R; Plassmann, P; WIssink, A

    2005-03-04

    As large-scale, parallel computers have become more widely available and numerical models and algorithms have advanced, the range of physical phenomena that can be simulated has expanded dramatically. Many important science and engineering problems exhibit solutions with localized behavior where highly-detailed salient features or large gradients appear in certain regions which are separated by much larger regions where the solution is smooth. Examples include chemically-reacting flows with radiative heat transfer, high Reynolds number flows interacting with solid objects, and combustion problems where the flame front is essentially a two-dimensional sheet occupying a small part of a three-dimensional domain. Modeling such problems numerically requires approximating the governing partial differential equations on a discrete domain, or grid. Grid spacing is an important factor in determining the accuracy and cost of a computation. A fine grid may be needed to resolve key local features while a much coarser grid may suffice elsewhere. Employing a fine grid everywhere may be inefficient at best and, at worst, may make an adequately resolved simulation impractical. Moreover, the location and resolution of the fine grid required for an accurate solution is a dynamic property of a problem's transient features and may not be known a priori. Adaptive mesh refinement (AMR) is a technique that can be used with both structured and unstructured meshes to adjust local grid spacing dynamically to capture solution features with an appropriate degree of resolution. Thus, computational resources can be focused where and when they are needed most to efficiently achieve an accurate solution without incurring the cost of a globally-fine grid. Figure 1.1 shows two example computations using AMR; on the left is a structured mesh calculation of an impulsively-sheared contact surface and on the right is the fuselage and volume discretization of an RAH-66 Comanche helicopter [35]. Note the
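    The flag-and-refine idea can be illustrated with a deliberately tiny 1-D sketch (not from the report): any cell whose solution jump exceeds a threshold is split, so resolution concentrates at the steep feature while smooth regions stay coarse. Names and the step function are illustrative.

    ```python
    def refine(cells, f, threshold):
        # One refinement pass on a 1-D grid: split any cell whose endpoint
        # values differ by more than the threshold.
        out = []
        for a, b in cells:
            if abs(f(b) - f(a)) > threshold:
                m = 0.5 * (a + b)
                out.extend([(a, m), (m, b)])
            else:
                out.append((a, b))
        return out

    # A steep front at x = 0.5 attracts refinement; smooth regions stay coarse.
    step = lambda x: 0.0 if x < 0.5 else 1.0
    grid = [(i / 4, (i + 1) / 4) for i in range(4)]
    for _ in range(3):
        grid = refine(grid, step, 0.5)
    # After three passes only the cells touching x = 0.5 have been subdivided.
    ```

    Production AMR codes add the dynamic part the abstract emphasizes: the flagging is repeated as the solution evolves, so refined patches follow the transient features.
    
    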

  2. A Parallel Particle Swarm Optimizer

    DTIC Science & Technology

    2003-01-01

    Motivated by a computationally demanding biomechanical system identification problem, we introduce a parallel implementation of a stochastic population-based...concurrent computation. The parallelization of the Particle Swarm Optimization (PSO) algorithm is detailed and its performance and characteristics demonstrated for the biomechanical system identification problem as an example.
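    As a rough illustration of the approach (not the authors' implementation), the expensive fitness evaluations of each PSO generation can be dispatched to a worker pool while the velocity and position updates stay serial. The fitness function, pool size, and coefficient values below are illustrative.

    ```python
    import random
    from concurrent.futures import ThreadPoolExecutor

    def sphere(x):
        # Benchmark fitness standing in for an expensive biomechanical model:
        # sum of squares, minimized at the origin.
        return sum(xi * xi for xi in x)

    def pso(fitness, dim=2, n_particles=20, iters=100, workers=4, seed=0):
        rng = random.Random(seed)
        pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
        vel = [[0.0] * dim for _ in range(n_particles)]
        pbest = [p[:] for p in pos]
        with ThreadPoolExecutor(max_workers=workers) as pool:
            pbest_f = list(pool.map(fitness, pbest))
            gbest = pbest[min(range(n_particles), key=lambda i: pbest_f[i])][:]
            for _ in range(iters):
                for i in range(n_particles):
                    for d in range(dim):
                        r1, r2 = rng.random(), rng.random()
                        vel[i][d] = (0.7 * vel[i][d]
                                     + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                                     + 1.5 * r2 * (gbest[d] - pos[i][d]))
                        pos[i][d] += vel[i][d]
                # The costly fitness evaluations of a generation are
                # dispatched to the worker pool concurrently.
                f = list(pool.map(fitness, pos))
                for i in range(n_particles):
                    if f[i] < pbest_f[i]:
                        pbest_f[i], pbest[i] = f[i], pos[i][:]
                gbest = pbest[min(range(n_particles), key=lambda i: pbest_f[i])][:]
        return gbest, min(pbest_f)

    best, best_f = pso(sphere)
    ```

    Because each particle's fitness is independent within a generation, this pattern scales with the number of workers whenever a single evaluation dominates the update cost.
    
    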

  3. Introducing data parallelism into climate model post-processing through a parallel version of the NCAR Command Language (NCL)

    NASA Astrophysics Data System (ADS)

    Jacob, R. L.; Xu, X.; Krishna, J.; Tautges, T.

    2011-12-01

    The relationship between the needs of post-processing climate model output and the capability of the available tools has reached a crisis point. The large volume of data currently produced by climate models is overwhelming the current, decades-old analysis workflow. The tools used to implement that workflow are now a bottleneck in the climate science discovery processes. This crisis will only worsen as ultra-high resolution global climate models with horizontal scales of 4 km or smaller, running on leadership computing facilities, begin to produce tens to hundreds of terabytes for a single, hundred-year climate simulation. While climate models have used parallelism for several years, the post-processing tools are still mostly single-threaded applications. We have created a Parallel Climate Analysis Library (ParCAL) which implements many common climate analysis operations in a data-parallel fashion using the Message Passing Interface. ParCAL has in turn been built on sophisticated packages for describing grids in parallel (the Mesh Oriented database, MOAB) and for performing vector operations on arbitrary grids (Intrepid). ParCAL also uses parallel I/O through the PnetCDF library. ParCAL has been used to implement a parallel version of the NCAR Command Language (NCL). ParNCL/ParCAL not only speeds up analysis of large datasets but also allows operations to be performed on native grids, eliminating the need to transform everything to latitude-longitude grids. In most cases, users' NCL scripts can run unaltered in parallel using ParNCL.

  4. The Galley Parallel File System

    NASA Technical Reports Server (NTRS)

    Nieuwejaar, Nils; Kotz, David

    1996-01-01

    Most current multiprocessor file systems are designed to use multiple disks in parallel, using the high aggregate bandwidth to meet the growing I/O requirements of parallel scientific applications. Many multiprocessor file systems provide applications with a conventional Unix-like interface, allowing the application to access multiple disks transparently. This interface conceals the parallelism within the file system, increasing the ease of programmability, but making it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. In addition to providing an insufficient interface, most current multiprocessor file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic scientific multiprocessor workloads. We discuss Galley's file structure and application interface, as well as the performance advantages offered by that interface.

  5. Parallel contingency statistics with Titan.

    SciTech Connect

    Thompson, David C.; Pebay, Philippe Pierre

    2009-09-01

    This report summarizes existing statistical engines in VTK/Titan and presents the recently parallelized contingency statistics engine. It is a sequel to [PT08] and [BPRT09], which studied the parallel descriptive, correlative, multi-correlative, and principal component analysis engines. The ease of use of this new parallel engine is illustrated by means of C++ code snippets. Furthermore, this report justifies the design of these engines with parallel scalability in mind; however, the very nature of contingency tables prevents this new engine from exhibiting the optimal parallel speed-up that the aforementioned engines do. This report therefore discusses the design trade-offs we made and studies performance with up to 200 processors.
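    The parallel pattern the report describes can be sketched in Python (rather than the report's C++, and with hypothetical names): each processor tallies a local contingency table over its slice of the data, and a reduction merges the tables. Unlike descriptive statistics, whose reduced state is a few moments, the merged table grows with the number of distinct category pairs, which is the scalability limit the report discusses.

    ```python
    from collections import Counter

    def local_contingency(pairs):
        # Each processor tallies (x, y) co-occurrences over its slice of the data.
        return Counter(pairs)

    def merge_tables(tables):
        # Reduction step: contingency tables merge by summing counts, but the
        # merged table grows with the number of distinct pairs observed,
        # which is what limits the parallel speed-up of this engine.
        total = Counter()
        for t in tables:
            total.update(t)
        return total

    data = [("a", 1), ("a", 2), ("b", 1), ("a", 1), ("b", 2), ("b", 1)]
    chunks = [data[:3], data[3:]]   # simulate distribution over two processors
    table = merge_tables(local_contingency(c) for c in chunks)
    ```
    
    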

  6. Parallel segmentation and rendering using clusters of PCs.

    PubMed

    Blanquer, I; Hernández, V; Ramírez, F J; Vidal, A

    2000-01-01

    Clinics currently have to deal with hundreds of 3D images a day. 3D medical images contain a huge amount of data, and thus very expensive and powerful systems are required in order to process them. The present work shows the features of a parallel computing software package developed at the Universidad Politécnica de Valencia under the European Project HIPERCIR (http://hiperttn.upv.es/hipercir). Project HIPERCIR is aimed at reducing the time and requirements for processing and visualising 3D images with low-cost solutions, such as networks of PCs running standard operating systems (Windows 95/98/NT). This project is being developed by a consortium formed by medical image processing and parallel computing experts from the Universidad Politécnica de Valencia (UPV), experts on biomedical software, and radiology clinic experts.

  7. PORTABLE ACOUSTIC MONITORING PACKAGE (PAMP)

    SciTech Connect

    John l. Loth; Gary J. Morris; George M. Palmer; Richard Guiler; Deepak Mehra

    2003-07-01

    The 1st generation acoustic monitoring package was designed to detect and analyze weak acoustic signals inside natural gas transmission lines. Besides a microphone, it housed a three-inch diameter aerodynamic acoustic signal amplifier to maximize sensitivity to leak-induced Δp type signals. The theory and test results of this aerodynamic signal amplifier were described in the master's degree thesis of our Research Assistant Deepak Mehra, who is about to graduate. Housing such a large three-inch diameter sensor required the use of a steel 300-psi rated 4-inch weld neck flange, which itself already weighed 29 pounds. The completed 1st generation Acoustic Monitoring Package weighed almost 100 pounds. This was too cumbersome to mount in the field on an access port at a pipeline shut-off valve. Therefore a 2nd generation, truly portable acoustic monitor was built. It incorporated a fully self-contained Δp type signal sensor, rated for line pressures up to 1000 psi, with a base weight of only 6 pounds. This is the Rosemount Inc. Model 3051CD-Range 0, a software-driven sensor, which is believed to have the industry's best total performance. Its most sensitive unit was purchased with a Δp range from 0 to 3 inches of water. This resulted in the herein described 2nd generation Portable Acoustic Monitoring Package (PAMP) for pipelines up to 1000 psi. Its 32-pound total weight includes an 18-volt battery. Together with a 3-pound laptop and its 4-channel data acquisition card, this completes the equipment needed for field acoustic monitoring of natural gas transmission pipelines.

  8. MMIC Package for Millimeter Wave Frequency

    NASA Technical Reports Server (NTRS)

    Bharj, Sarjit Singh; Yuan, Steve

    1997-01-01

    Princeton Microwave Technology has successfully demonstrated the transfer of technology for the MMIC package. During this contract the package design was licensed from Hughes Aircraft Company for manufacture within the U.S. A major effort was directed towards characterization of the ceramic material for its dielectric constant and loss tangent properties. After selection of a ceramic tape, the high-temperature co-fired ceramic package was manufactured in the U.S. by Microcircuit Packaging of America, Inc. Microwave measurements of the MMIC package were conducted using an Inter-Continental Microwave test fixture. The package demonstrated a typical insertion loss of 0.5 dB per transition up to 32 GHz and a return loss of better than 15 dB. The performance of the package has been demonstrated from 2 to 30 GHz by assembling three different MMIC amplifiers. Two of the MMIC amplifiers were designed for 26 GHz to 30 GHz operation, while the third MMIC was a distributed amplifier covering 2 to 26.5 GHz. The measured gain of the amplifiers is consistent with the device data. The package costs are substantially lower than comparable packages available commercially; typically the price difference is greater than a factor of three. The package cost is well under $5.00 for a quantity of 10,000 pieces.

  9. Challenges in the Packaging of MEMS

    SciTech Connect

    BROWN, WILLIAM D.; EATON, WILLIAM P.; MALSHE, AJAY P.; MILLER, WILLIAM M.; O'NEAL, CHAD; SINGH, SUSHILA B.

    1999-09-24

    Microelectromechanical Systems (MEMS) packaging is much different from conventional integrated circuit (IC) packaging. Many MEMS devices must interface to the environment in order to perform their intended function, and the package must be able to facilitate access with the environment while protecting the device. The package must also not interfere with or impede the operation of the MEMS device. The die attachment material should be low stress and low outgassing, while also minimizing stress relaxation over time, which can lead to scale factor shifts in sensor devices. The fabrication processes used in creating the devices must be compatible with each other, and not result in damage to the devices. Many devices are application specific, requiring custom packages that are not commercially available. Devices may also need media-compatible packages that can protect the devices from harsh environments in which the MEMS device may operate. Techniques are being developed to handle, process, and package the devices such that high yields of functional packaged parts will result. Currently, many of the processing steps are potentially harmful to MEMS devices and negatively affect yield. It is the objective of this paper to review and discuss packaging challenges that exist for MEMS systems and to expose these issues to new audiences from the integrated circuit packaging community.

  10. Natural biopolymers in organic food packaging

    NASA Astrophysics Data System (ADS)

    Wieczynska, Justyna; Cavoski, Ivana; Chami, Ziad Al; Mondelli, Donato; Di Donato, Paola; Di Terlizzi, Biagio

    2014-05-01

    Concerns about environmental and waste problems caused by the use of non-biodegradable, non-renewable plastic packaging have led to increased interest in developing biodegradable packaging from renewable natural biopolymers. Recently, different types of biopolymers such as starch, cellulose, chitosan, casein, whey protein, collagen, egg white, soybean protein, corn zein, gelatin and wheat gluten have attracted considerable attention as potential food packaging materials. Recyclable or biodegradable packaging material is preferable where possible under organic processing standards, but specific principles of packaging are not precisely defined and standards have to be assessed. There is evidence that consumers of organic products have specific expectations not only with respect to quality characteristics of processed food but also regarding social and environmental aspects of food production. Growing consumer sophistication is leading to a proliferation of food eco-labels such as the carbon footprint. Biopolymer-based packaging for organic products can help to create a green industry. Moreover, biopolymers can be appropriate materials for the development of active surfaces designed to deliver incorporated natural antimicrobials into the environment surrounding packaged food. Active packaging is an innovative mode of packaging in which the product and the environment interact to prolong shelf life or enhance safety or sensory properties, while maintaining the quality of the product. The work will discuss the various techniques that have been used for the development of active antimicrobial biodegradable packaging materials, focusing on recent findings in research studies, with a current focus on exploring a new generation of biopolymer-based food packaging materials with possible applications in organic food packaging. Keywords: organic food, active packaging, biopolymers, green technology

  11. Imagecube: an astropy affiliated package

    NASA Astrophysics Data System (ADS)

    Lianou, S.; Barmby, P.; Taylor, J.

    2013-09-01

    Astropy is a community Python library for astronomy. Imagecube has been developed as an astropy affiliated package for processing multiwavelength (spectro-)imaging. This module automates tedious steps of image processing and analysis and delivers a science-ready image datacube. The included steps involve converting to common flux units, image registration to a common WCS, and convolution to a common resolution. Individual steps can be performed separately. We test the module using the dwarf galaxy NGC 1569 by producing its observed spectral energy distribution on a pixel-by-pixel basis.

  12. PIV Data Validation Software Package

    NASA Technical Reports Server (NTRS)

    Blackshire, James L.

    1997-01-01

    A PIV data validation and post-processing software package was developed to provide semi-automated data validation and data reduction capabilities for Particle Image Velocimetry data sets. The software provides three primary capabilities including (1) removal of spurious vector data, (2) filtering, smoothing, and interpolating of PIV data, and (3) calculations of out-of-plane vorticity, ensemble statistics, and turbulence statistics information. The software runs on an IBM PC/AT host computer working either under Microsoft Windows 3.1 or Windows 95 operating systems.

  13. Parallel NPARC: Implementation and Performance

    NASA Technical Reports Server (NTRS)

    Townsend, S. E.

    1996-01-01

    Version 3 of the NPARC Navier-Stokes code includes support for large-grain (block level) parallelism using explicit message passing between a heterogeneous collection of computers. This capability has the potential for significant performance gains, depending upon the block data distribution. The parallel implementation uses a master/worker arrangement of processes. The master process assigns blocks to workers, controls worker actions, and provides remote file access for the workers. The processes communicate via explicit message passing using an interface library which provides portability to a number of message passing libraries, such as PVM (Parallel Virtual Machine). A Bourne shell script is used to simplify the task of selecting hosts, starting processes, retrieving remote files, and terminating a computation. This script also provides a simple form of fault tolerance. An analysis of the computational performance of NPARC is presented, using data sets from an F/A-18 inlet study and a Rocket Based Combined Cycle Engine analysis. Parallel speedup and overall computational efficiency were obtained for various NPARC run parameters on a cluster of IBM RS6000 workstations. The data show that although NPARC performance compares favorably with the estimated potential parallelism, typical data sets used with previous versions of NPARC will often need to be reblocked for optimum parallel performance. In one of the cases studied, reblocking increased peak parallel speedup from 3.2 to 11.8.
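    The master/worker arrangement can be sketched with a shared task queue, a simplified stand-in for NPARC's explicit message passing (block sizes, names, and the per-block "solve" are illustrative):

    ```python
    import queue
    import threading

    def solve_block(block_id, grid_size):
        # Stand-in for the per-block flow solve; returns (id, work done).
        return block_id, grid_size * grid_size

    def worker(tasks, results):
        while True:
            task = tasks.get()
            if task is None:          # "poison pill": master tells worker to stop
                break
            results.put(solve_block(*task))

    # Master: assign blocks of uneven size to a small pool of workers.
    blocks = [(0, 10), (1, 40), (2, 20), (3, 30)]
    tasks, results = queue.Queue(), queue.Queue()
    pool = [threading.Thread(target=worker, args=(tasks, results)) for _ in range(2)]
    for w in pool:
        w.start()
    for b in blocks:
        tasks.put(b)
    for _ in pool:
        tasks.put(None)
    for w in pool:
        w.join()
    out = dict(results.get() for _ in blocks)
    ```

    The dynamic assignment tolerates uneven block sizes up to a point, but as the abstract notes, when one block dominates the work no schedule can balance the load, which is why reblocking the data set improves parallel speedup.
    
    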

  14. Parallel processing and expert systems

    NASA Technical Reports Server (NTRS)

    Lau, Sonie; Yan, Jerry C.

    1991-01-01

    Whether it be monitoring the thermal subsystem of Space Station Freedom, or controlling the navigation of the autonomous rover on Mars, NASA missions in the 1990s cannot enjoy an increased level of autonomy without the efficient implementation of expert systems. Merely increasing the computational speed of uniprocessors may not be able to guarantee that real-time demands are met for larger systems. Speedup via parallel processing must be pursued alongside the optimization of sequential implementations. Prototypes of parallel expert systems have been built at universities and industrial laboratories in the U.S. and Japan. The state-of-the-art research in progress related to parallel execution of expert systems is surveyed. The survey discusses multiprocessors for expert systems, parallel languages for symbolic computations, and mapping expert systems to multiprocessors. Results to date indicate that the parallelism achieved for these systems is small. The main reasons are (1) the body of knowledge applicable in any given situation and the amount of computation executed by each rule firing are small, (2) dividing the problem solving process into relatively independent partitions is difficult, and (3) implementation decisions that enable expert systems to be incrementally refined hamper compile-time optimization. In order to obtain greater speedups, data parallelism and application parallelism must be exploited.

  15. Parallel integer sorting with medium and fine-scale parallelism

    NASA Technical Reports Server (NTRS)

    Dagum, Leonardo

    1993-01-01

    Two new parallel integer sorting algorithms, queue-sort and barrel-sort, are presented and analyzed in detail. These algorithms do not have optimal parallel complexity, yet they show very good performance in practice. Queue-sort is designed for fine-scale parallel architectures which allow the queueing of multiple messages to the same destination. Barrel-sort is designed for medium-scale parallel architectures with a high message passing overhead. Performance results from the implementation of queue-sort on a Connection Machine CM-2 and barrel-sort on a 128-processor iPSC/860 are given. The two implementations are found to be comparable in performance but not as good as a fully vectorized bucket sort on the Cray YMP.
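    A serial sketch of the barrel-sort idea, assuming uniformly wide key ranges (the routing loop stands in for the message-passing phase; names are illustrative):

    ```python
    def barrel_sort(keys, n_procs, key_max):
        # Each "processor" owns a contiguous key range (a barrel).
        width = (key_max + n_procs) // n_procs
        barrels = [[] for _ in range(n_procs)]
        # Routing phase: on a real machine this is message passing, with keys
        # batched per destination to amortize the high per-message overhead.
        for k in keys:
            barrels[min(k // width, n_procs - 1)].append(k)
        # Local phase: each processor sorts its own barrel independently;
        # concatenating the barrels in order yields the sorted sequence.
        out = []
        for b in barrels:
            out.extend(sorted(b))
        return out

    data = [42, 7, 99, 0, 63, 7, 21]
    result = barrel_sort(data, 4, 100)
    ```

    Batching keys per destination barrel is what suits medium-scale machines with high message-passing overhead: few large messages rather than many small ones.
    
    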

  16. EFFICIENT SCHEDULING OF PARALLEL JOBS ON MASSIVELY PARALLEL SYSTEMS

    SciTech Connect

    F. PETRINI; W. FENG

    1999-09-01

    We present buffered coscheduling, a new methodology to multitask parallel jobs in a message-passing environment and to develop parallel programs that can pave the way to the efficient implementation of a distributed operating system. Buffered coscheduling is based on three innovative techniques: communication buffering, strobing, and non-blocking communication. By leveraging these techniques, we can perform effective optimizations based on the global status of the parallel machine rather than on the limited knowledge available locally to each processor. The advantages of buffered coscheduling include higher resource utilization, reduced communication overhead, efficient implementation of flow-control strategies and fault-tolerant protocols, accurate performance modeling, and a simplified yet still expressive parallel programming model. Preliminary experimental results show that buffered coscheduling is very effective in increasing the overall performance in the presence of load imbalance and communication-intensive workloads.

  17. Template based parallel checkpointing in a massively parallel computer system

    DOEpatents

    Archer, Charles Jens; Inglett, Todd Alan

    2009-01-13

    A method and apparatus for a template-based parallel checkpoint save for a massively parallel supercomputer system using a parallel variation of the rsync protocol and network broadcast. In preferred embodiments, the checkpoint data for each node is compared to a template checkpoint file that resides in the storage and that was previously produced. Embodiments herein greatly decrease the amount of data that must be transmitted and stored, for faster checkpointing and increased efficiency of the computer system. Embodiments are directed to a parallel computer system with nodes arranged in a cluster with a high-speed interconnect that can perform broadcast communication. The checkpoint contains a set of actual small data blocks with their corresponding checksums from all nodes in the system. The data blocks may be compressed using conventional non-lossy data compression algorithms to further reduce the overall checkpoint size.
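    The rsync-style comparison can be sketched as follows (an illustrative block size and MD5 checksums, not the patent's exact scheme): only blocks whose checksum differs from the corresponding block of the broadcast template are kept in a node's delta.

    ```python
    import hashlib

    BLOCK = 4  # bytes per block; real systems use much larger blocks

    def block_checksums(data):
        # Split the data into fixed-size blocks and checksum each one.
        blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
        return [(b, hashlib.md5(b).hexdigest()) for b in blocks]

    def delta_against_template(node_data, template_data):
        # rsync-style comparison: keep only blocks whose checksum differs
        # from the corresponding block of the template checkpoint.
        template = [c for _, c in block_checksums(template_data)]
        delta = []
        for i, (blk, csum) in enumerate(block_checksums(node_data)):
            if i >= len(template) or csum != template[i]:
                delta.append((i, blk))
        return delta

    template = b"AAAABBBBCCCCDDDD"   # previously stored template checkpoint
    node = b"AAAAXXXXCCCCDDDD"       # this node's state: one block changed
    delta = delta_against_template(node, template)
    ```

    Because all nodes compare against the same broadcast template, each node transmits only its differing blocks, which is the source of the claimed reduction in checkpoint traffic and storage.
    
    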

  18. WASTE PACKAGE DESIGN SENSITIVITY REPORT

    SciTech Connect

    P. Mecharet

    2001-03-09

    The purpose of this technical report is to present the current designs for waste packages and determine which designs will be evaluated for the Site Recommendation (SR) or License Application (LA), and to demonstrate how the design will be shown to comply with the applicable design criteria. The evaluations to support SR or LA are based on system description document criteria. The objective is to determine those system description document criteria for which compliance is to be demonstrated for SR; and, having identified the criteria, to refer to the documents that show compliance. In addition, those system description document criteria for which compliance will be addressed for LA are identified, with a distinction made between two steps of the LA process: the LA-Construction Authorization (LA-CA) phase on one hand, and the LA-Receive and Possess (LA-R&P) phase on the other hand. The scope of this work encompasses the Waste Package Project disciplines for criticality, shielding, structural, and thermal analysis.

  19. Stress among package truck drivers.

    PubMed

    Orris, P; Hartman, D E; Strauss, P; Anderson, R J; Collins, J; Knopp, C; Xu, Y; Melius, J

    1997-02-01

    In 1992, a cross-sectional questionnaire study of package truck drivers in one company was conducted at four widely scattered sites throughout the US; 317 drivers participated, representing 82% of those eligible. The package truck drivers scored significantly above the US working population comparison norm on all summary and individual scales derived from the SCL 90-R, indicating a substantial increase in psychological distress for this group. The Global Severity Index, the best single summary measure of psychological distress in the SCL 90-R, revealed a mean T score for the drivers of 64.20, the 91st percentile of the normative population. The group perceived significantly more daily stressful events than the average working adult, and their sensitivity to these events was also increased. Role overload, a component of the Occupational Stress Inventory, was the most consistent factor associated with symptoms of psychological distress on multiple regression analysis. This study suggests that job stress is a psychological health hazard for these drivers.

  20. Apparatus and method for skin packaging articles

    NASA Technical Reports Server (NTRS)

    Madsen, B.; Pozsony, E. R.; Collin, E. E. (Inventor)

    1973-01-01

    A system for skin packaging articles including a loading zone for positioning articles to be packaged upon a substrate, a thermoplastic film heating and vacuum operated skin packaging zone for covering the articles with film laminated to the substrate, and a slitting zone for separating and trimming the individual skin packaged articles. The articles are passed to the successive zones. The loading zone may be adapted for conveyorized instead of hand loading. In some cases, where only transverse cutting of the film web is necessary, it may be desirable to eliminate the slitting zone and remove the skin packaged article or articles directly from the packaging zone. A conveniently located operating panel contains controls for effecting automatic, semiautomatic or manual operation of the entire system or any portions in any manner desired.

  1. Packaging of electro-microfluidic devices

    DOEpatents

    Benavides, Gilbert L.; Galambos, Paul C.; Emerson, John A.; Peterson, Kenneth A.; Giunta, Rachel K.; Watson, Robert D.

    2002-01-01

    A new architecture for packaging surface micromachined electro-microfluidic devices is presented. This architecture relies on two scales of packaging to bring fluid to the device scale (picoliters) from the macro-scale (microliters). The architecture emulates and utilizes electronics packaging technology. The larger package consists of a circuit board with embedded fluidic channels and standard fluidic connectors (e.g. Fluidic Printed Wiring Board). The embedded channels connect to the smaller package, an Electro-Microfluidic Dual-Inline-Package (EMDIP) that takes fluid to the microfluidic integrated circuit (MIC). The fluidic connection is made to the back of the MIC through Bosch-etched holes that take fluid to surface micromachined channels on the front of the MIC. Electrical connection is made to bond pads on the front of the MIC.

  2. Packaging of electro-microfluidic devices

    DOEpatents

    Benavides, Gilbert L.; Galambos, Paul C.; Emerson, John A.; Peterson, Kenneth A.; Giunta, Rachel K.; Zamora, David Lee; Watson, Robert D.

    2003-04-15

    A new architecture for packaging surface micromachined electro-microfluidic devices is presented. This architecture relies on two scales of packaging to bring fluid to the device scale (picoliters) from the macro-scale (microliters). The architecture emulates and utilizes electronics packaging technology. The larger package consists of a circuit board with embedded fluidic channels and standard fluidic connectors (e.g. Fluidic Printed Wiring Board). The embedded channels connect to the smaller package, an Electro-Microfluidic Dual-Inline-Package (EMDIP) that takes fluid to the microfluidic integrated circuit (MIC). The fluidic connection is made to the back of the MIC through Bosch-etched holes that take fluid to surface micromachined channels on the front of the MIC. Electrical connection is made to bond pads on the front of the MIC.

  3. Lessons learned during Type A Packaging testing

    SciTech Connect

    O'Brien, J.H.; Kelly, D.L.

    1995-11-01

    For the past 6 years, the US Department of Energy (DOE) Office of Facility Safety Analysis (EH-32) has contracted Westinghouse Hanford Company (WHC) to conduct compliance testing on DOE Type A packagings. The packagings are tested for compliance with the U.S. Department of Transportation (DOT) Specification 7A, general packaging, Type A requirements. The DOE has shared the Type A packaging information throughout the nuclear materials transportation community. During testing, there have been recurring areas of packaging design that resulted in testing delays and/or initial failure. The lessons learned during the testing are considered a valuable resource, and DOE requested that WHC share it. Sharing what has been and can be encountered during packaging testing will hopefully help individuals avoid past mistakes.

  4. Type A radioactive liquid sample packaging family

    SciTech Connect

    Edwards, W.S.

    1995-11-01

    Westinghouse Hanford Company (WHC) has developed two packagings that can be used to ship Type A quantities of radioactive liquids. WHC designed these packagings to take advantage of commercially available items where feasible to reduce the overall packaging cost. The Hedgehog packaging can ship up to one liter of Type A radioactive liquid with no shielding and 15 cm of distance between the liquid and the package exterior, or 30 ml of liquid with 3.8 cm of stainless steel shielding and 19 cm of distance between the liquid and the package exterior. The One Liter Shipper can ship up to one liter of Type A radioactive liquid that does not require shielding.

  5. Parallel Architecture For Robotics Computation

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Bejczy, Antal K.

    1990-01-01

    Universal Real-Time Robotic Controller and Simulator (URRCS) is highly parallel computing architecture for control and simulation of robot motion. Result of extensive algorithmic study of different kinematic and dynamic computational problems arising in control and simulation of robot motion. Study led to development of class of efficient parallel algorithms for these problems. Represents algorithmically specialized architecture, in sense capable of exploiting common properties of this class of parallel algorithms. System with both MIMD and SIMD capabilities. Regarded as processor attached to bus of external host processor, as part of bus memory.

  6. Multigrid on massively parallel architectures

    SciTech Connect

    Falgout, R D; Jones, J E

    1999-09-17

    The scalable implementation of multigrid methods for machines with several thousands of processors is investigated. Parallel performance models are presented for three different structured-grid multigrid algorithms, and a description is given of how these models can be used to guide implementation. Potential pitfalls are illustrated when moving from moderate-sized parallelism to large-scale parallelism, and results are given from existing multigrid codes to support the discussion. Finally, the use of mixed programming models is investigated for multigrid codes on clusters of SMPs.
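    A performance model of the kind the abstract mentions can be sketched as a per-level sum of a compute term and a message-latency term. The constants, coarsening ratio, and message counts below are illustrative assumptions, not the paper's actual models; the point is only that on coarse levels the fixed latency term dominates, which is the large-scale-parallelism pitfall the abstract refers to.

```python
def vcycle_time(n_local, levels, t_flop=1e-9, flops_per_pt=50,
                t_latency=5e-6, msgs_per_level=6):
    """Illustrative cost model for one V-cycle on a 3-D structured grid.

    Each level contributes a compute term proportional to its local grid
    points plus a fixed per-message latency term; the grid coarsens by a
    factor of 2 in each of 3 dimensions per level.
    """
    total = 0.0
    for level in range(levels):
        pts = max(n_local // (2 ** (3 * level)), 1)
        total += pts * flops_per_pt * t_flop + msgs_per_level * t_latency
    return total

# With a million local points, the fine level is compute-bound; by level 4
# only a few hundred points remain and latency dominates the level cost.
t = vcycle_time(10**6, levels=5)
```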

  7. FDCHQHP: A Fortran package for heavy quarkonium hadroproduction

    NASA Astrophysics Data System (ADS)

    Wan, Lu-Ping; Wang, Jian-Xiong

    2014-11-01

    FDCHQHP is a Fortran package that calculates the transverse momentum (pt) distribution of yield and polarization for heavy quarkonium hadroproduction at next-to-leading order (NLO) in the non-relativistic QCD (NRQCD) framework. It contains the complete color-singlet and color-octet intermediate states at the present theoretical level, and can calculate different polarization parameters in different frames. With the LHC running now and in the future, it supplies a very useful tool to obtain theoretical predictions on heavy quarkonium hadroproduction. Catalogue identifier: AETT_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AETT_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 12020165 No. of bytes in distributed program, including test data, etc.: 103178384 Distribution format: tar.gz Programming language: Fortran 77. Computer: Any computer with Linux operating system, Intel Fortran Compiler and MPI library. Operating system: Linux. Has the code been vectorized or parallelized?: Parallelized with MPI. Classification: 11.1. External routines: MPI Library Nature of problem: This package calculates heavy quarkonium hadroproduction at NLO in NRQCD. Solution method: The Fortran codes of this package are generated automatically by the FDC system [1]. Additional comments: It is better to run the package on supercomputers or multi-core computers. The distribution file for this program is over 100 MB and therefore is not delivered directly when download or Email is requested. Instead a html file giving details of how the program can be obtained is sent. Running time: For an independent sub-process, it may take several seconds to several hours depending on the number of sample points if one CPU core is used. For a complete prompt

  8. Light Barrier for Non-Foil Packaging

    DTIC Science & Technology

    2010-12-16

    contain more oxidizable substrate (Hong et al 1995ab). Shredded and sliced cheese products have a larger surface area available for light exposure, and...exposing olive oil and yoghurt packaged in them to light abuse and measuring photodegradation products . 5. To produce the optimum material from No 2...accelerated shelf life testing and taste panel evaluations will compare the food products packaged in No. 5 to identical products packaged in control

  9. The challenges of packaging combination devices.

    PubMed

    Mankel, George

    2008-01-01

    This article focuses on the development of a packaging format for drug eluting stents where the package not only has to meet the needs of the stent, but also the needs of the drug incorporated into its polymer coating. The package has to allow the transfer of ethylene oxide gas for sterilisation, but when in storage, must provide a barrier to keep out moisture and oxygen. A pouch and commercial scale manufacturing process were developed to incorporate this dual function into one item.

  10. NRF TRIGA package design review report

    SciTech Connect

    Clements, M.D.

    1994-08-26

    The purpose of this document is to compile, present and document the formal design review of the NRF TRIGA packaging. The contents of this document include: the briefing meeting presentations, package description, design calculations, package review drawings, meeting minutes, action item lists, review comment records, final resolutions, and released drawings. This design review required more than two meetings to resolve comments. Therefore, there are three sets of meeting minutes and two action item lists.

  11. Design considerations for automated packaging operations

    SciTech Connect

    Fahrenholtz, J.; Jones, J.; Kincy, M.

    1993-12-31

    The paper is based on work performed at Sandia National Laboratories to automate DOE packaging operations. It is a general summary of work from several projects which may be applicable to other packaging operations. Examples are provided of robotic operations which have been demonstrated as well as operations that are currently being developed. General design considerations for packages and for automated handling systems are described.

  12. Materials in electronic packaging at APL

    NASA Astrophysics Data System (ADS)

    Charles, Harry K., Jr.

    1993-03-01

    The use of modern electronic packaging materials is examined with reference to general classes of materials, such as epoxies, silicones, metals, alloys, and ceramics. Specific examples are presented to illustrate how the proper choice of materials has enhanced circuit performance and long-term reliability. Applications discussed include single-chip and multichip packaging, board and substrate structures, die (substrate) attachment and electrical interconnection, circuit and board passivation, and encapsulation or package sealing.

  13. IOPA: I/O-aware parallelism adaption for parallel programs

    PubMed Central

    Liu, Tao; Liu, Yi; Qian, Chen; Qian, Depei

    2017-01-01

    With the development of multi-/many-core processors, applications need to be written as parallel programs to improve execution efficiency. For data-intensive applications that use multiple threads to read/write files simultaneously, an I/O sub-system can easily become a bottleneck when too many of these types of threads exist; on the contrary, too few threads will cause insufficient resource utilization and hurt performance. Therefore, programmers must pay much attention to parallelism control to find the appropriate number of I/O threads for an application. This paper proposes a parallelism control mechanism named IOPA that can adjust the parallelism of applications to adapt to the I/O capability of a system and balance computing resources and I/O bandwidth. The programming interface of IOPA is also provided to programmers to simplify parallel programming. IOPA is evaluated using multiple applications with both solid state and hard disk drives. The results show that the parallel applications using IOPA can achieve higher efficiency than those with a fixed number of threads. PMID:28278236
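    The control loop the abstract describes can be sketched as follows. This is an illustrative reconstruction, not IOPA's actual interface: `measure_throughput`, `find_parallelism`, and their parameters are invented for the sketch. The idea is simply to measure aggregate throughput at the current thread count and add workers only while throughput keeps improving.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def measure_throughput(task, n_threads, n_items):
    """Run n_items copies of task on n_threads workers; return items/second."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        list(pool.map(lambda _: task(), range(n_items)))
    return n_items / (time.perf_counter() - start)

def find_parallelism(task, max_threads=16, n_items=64, gain=1.05):
    """Grow the thread count while throughput improves by at least `gain`x."""
    best_n, best_tp = 1, measure_throughput(task, 1, n_items)
    for n in range(2, max_threads + 1):
        tp = measure_throughput(task, n, n_items)
        if tp < best_tp * gain:
            break  # I/O sub-system saturated: more threads stop paying off
        best_n, best_tp = n, tp
    return best_n

# Usage: the sleep stands in for a blocking file read.
n = find_parallelism(lambda: time.sleep(0.001), max_threads=4, n_items=8)
```

    A real system would re-run this calibration as the workload or storage device changes; the paper's evaluation on both solid state and hard disk drives suggests the chosen parallelism differs substantially between the two.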

  14. Portable, parallel, reusable Krylov space codes

    SciTech Connect

    Smith, B.; Gropp, W.

    1994-12-31

    Krylov space accelerators are an important component of many algorithms for the iterative solution of linear systems. Each Krylov space method has its own particular advantages and disadvantages, so it is desirable to have a variety of them available, all with an identical, easy-to-use interface. A common complaint application programmers have with available software libraries for the iterative solution of linear systems is that they require the programmer to use the data structures provided by the library; the library is not able to work with the data structures of the application code. Hence, application programmers find themselves constantly recoding the Krylov space algorithms. The Krylov space package (KSP) is a data-structure-neutral implementation of a variety of Krylov space methods including preconditioned conjugate gradient, GMRES, BiCG-Stab, transpose-free QMR and CGS. Unlike all other software libraries for linear systems that the authors are aware of, KSP will work with any application code's data structures, in Fortran or C. Due to its data-structure-neutral design, KSP runs unchanged on both sequential and parallel machines. KSP has been tested on workstations, the Intel i860 and Paragon, Thinking Machines CM-5 and the IBM SP1.
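    The data-structure-neutral idea is easy to illustrate: a Krylov method only needs the *action* of the matrix on a vector, never its storage format. A minimal conjugate-gradient sketch in that style follows (illustrative only, not KSP's actual C/Fortran interface); the caller supplies a `matvec` callback, so any application data structure works.

```python
import math

def cg(matvec, b, x0, tol=1e-10, max_iter=200):
    """Conjugate gradients for SPD A, where matvec(v) returns A @ v."""
    n = len(b)
    x = list(x0)
    r = [b[i] - av for i, av in enumerate(matvec(x))]  # initial residual
    p = list(r)
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        ap = matvec(p)
        alpha = rs / sum(p[i] * ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if math.sqrt(rs_new) < tol:
            break
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x

# The "matrix" is just a closure over a diagonal: no library-imposed storage.
diag = [2.0, 3.0, 4.0]
x = cg(lambda v: [d * vi for d, vi in zip(diag, v)],
       b=[2.0, 6.0, 12.0], x0=[0.0, 0.0, 0.0])  # solution is [1.0, 2.0, 3.0]
```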

  15. Multi-petascale highly efficient parallel supercomputer

    DOEpatents

    Asaad, Sameh; Bellofatto, Ralph E.; Blocksome, Michael A.; Blumrich, Matthias A.; Boyle, Peter; Brunheroto, Jose R.; Chen, Dong; Cher, Chen -Yong; Chiu, George L.; Christ, Norman; Coteus, Paul W.; Davis, Kristan D.; Dozsa, Gabor J.; Eichenberger, Alexandre E.; Eisley, Noel A.; Ellavsky, Matthew R.; Evans, Kahn C.; Fleischer, Bruce M.; Fox, Thomas W.; Gara, Alan; Giampapa, Mark E.; Gooding, Thomas M.; Gschwind, Michael K.; Gunnels, John A.; Hall, Shawn A.; Haring, Rudolf A.; Heidelberger, Philip; Inglett, Todd A.; Knudson, Brant L.; Kopcsay, Gerard V.; Kumar, Sameer; Mamidala, Amith R.; Marcella, James A.; Megerian, Mark G.; Miller, Douglas R.; Miller, Samuel J.; Muff, Adam J.; Mundy, Michael B.; O'Brien, John K.; O'Brien, Kathryn M.; Ohmacht, Martin; Parker, Jeffrey J.; Poole, Ruth J.; Ratterman, Joseph D.; Salapura, Valentina; Satterfield, David L.; Senger, Robert M.; Smith, Brian; Steinmacher-Burow, Burkhard; Stockdell, William M.; Stunkel, Craig B.; Sugavanam, Krishnan; Sugawara, Yutaka; Takken, Todd E.; Trager, Barry M.; Van Oosten, James L.; Wait, Charles D.; Walkup, Robert E.; Watson, Alfred T.; Wisniewski, Robert W.; Wu, Peng

    2015-07-14

    A Multi-Petascale Highly Efficient Parallel Supercomputer of 100 petaOPS-scale computing, at decreased cost, power and footprint, that allows for a maximum packaging density of processing nodes from an interconnect point of view. The Supercomputer exploits technological advances in VLSI that enable a computing model where many processors can be integrated into a single Application Specific Integrated Circuit (ASIC). Each ASIC computing node comprises a system-on-chip ASIC utilizing four or more processors integrated into one die, each having full access to all system resources, enabling adaptive partitioning of the processors to functions such as compute or messaging I/O on an application-by-application basis and, preferably, adaptive partitioning of functions in accordance with various algorithmic phases within an application; if I/O or other processors are underutilized, they can participate in computation or communication. Nodes are interconnected by a five-dimensional torus network with DMA that maximizes the throughput of packet communications between nodes and minimizes latency.
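    The five-dimensional torus interconnect can be illustrated with a small sketch (the coordinates and axis lengths here are invented for illustration, not the patent's actual machine dimensions): each node has one neighbour in each direction along each of the five axes, with coordinates wrapping modulo the axis length.

```python
def torus_neighbors(coord, shape):
    """All +/-1 neighbours of `coord` in a torus of the given shape."""
    neighbors = []
    for axis in range(len(shape)):
        for step in (-1, 1):
            c = list(coord)
            c[axis] = (c[axis] + step) % shape[axis]  # wrap-around link
            neighbors.append(tuple(c))
    return neighbors

# A node in a 4x4x4x4x4 torus has 10 distinct neighbours (2 per dimension);
# the corner node wraps to the far edge on every axis.
ns = torus_neighbors((0, 0, 0, 0, 0), (4, 4, 4, 4, 4))
```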

  16. Microbial control by packaging: a review.

    PubMed

    Cutter, Catherine Nettles

    2002-03-01

    Since early man first used a variety of natural containers to store and eat foods, significant developments in food packaging materials have provided the means to suppress microbial growth as well as protect foods from external microbial contamination. Throughout this progression, packaging materials have been developed specifically to prevent the deterioration of foods resulting from exposure to air, moisture, or pH changes associated with the food or the surrounding atmosphere. Both flexible and rigid packaging materials, alone or in combination with other preservation methods, have been developed to offer the necessary barrier, inactivation, and containment properties required for successful food packaging. Examples of flexible packaging used to inactivate microorganisms associated with foods include controlled atmosphere, vacuum, modified atmosphere, active, and edible packaging. Additionally, the combination of rigid packaging materials made from metal, glass, or plastic with heat provides the most effective and widely used method for inactivating microorganisms. As with all food products, it is necessary to integrate a HACCP-based program to assure quality throughout the packaging operation. In addition to packaging improvements, other novel technologies include the development of detectors for oxygen levels, bacterial toxins, and microbial growth, or the integration of time-temperature indicators for detection of improper handling or storage.

  17. Polymer Composites for Intelligent Food Packaging

    NASA Astrophysics Data System (ADS)

    He, Jiating; Yap, Ray Chin Chong; Wong, Siew Yee; Li, Xu

    2015-09-01

    Over the last 50 years, remarkable improvements in mechanical and barrier properties of polymer composites have been realized. Their improved properties have been widely studied and employed for food packaging to keep food fresh, clean and suitable for consumption over a sufficiently long storage period. In this paper, the current progress of science and technology development of polymer composites for intelligent food packaging will be highlighted. Future directions and perspectives for exploring polymer composites for intelligent food packaging to reveal the freshness and quality of the packaged food will also be put forward.

  18. Hermetic Packages For Millimeter-Wave Circuits

    NASA Technical Reports Server (NTRS)

    Herman, Martin I.; Lee, Karen A.; Lowry, Lynn E.; Carpenter, Alain; Wamhof, Paul

    1994-01-01

    Advanced hermetic packages developed to house electronic circuits operating at frequencies from 1 to 100 gigahertz and beyond. Signals coupled into and out of packages electromagnetically. Provides circuit packages small, lightweight, rugged, and inexpensive in mass production. Packages embedded in planar microstrip and coplanar waveguide circuits, in waveguide-to-planar and planar-to-waveguide circuitry, in waveguide-to-waveguide circuitry, between radiating (antenna) elements, and between planar transmission lines and radiating elements. Other applications in automotive, communication, radar, remote sensing, and biomedical electronic systems foreseen.

  19. Hermetic packaging for microwave modules. Final report

    SciTech Connect

    Hollar, D.L.

    1996-10-01

    Microwave assemblies, such as radar modules, require hermetically sealed packaging. Since most of these assemblies are used for airborne applications, the packages must be lightweight. The aluminum alloy A-40 provides the needed characteristics for these applications. This project developed packaging techniques using the A-40 alloy as a housing material and laser welding processes to install connectors, a purge tube, and covers on the housings. The completed package successfully passed the hermetic leak requirements and environmental testing. Optimum laser welding parameters were established in addition to all of the related tooling for assembly.

  20. Yucca Mountain Waste Package Closure System

    SciTech Connect

    Herschel Smartt; Arthur Watkins; David Pace; Rodney Bitsoi; Eric Larsen; Timothy McJunkin; Charles Tolle

    2006-04-01

    The current disposal path for high-level waste is to place the material into secure waste packages that are inserted into a repository. The Idaho National Laboratory has been tasked with the development, design, and demonstration of the waste package closure system for the repository project. The closure system design includes welding three lids and a purge port cap, four methods of nondestructive examination, and evacuation and backfill of the waste package, all performed in a remote environment. A demonstration of the closure system will be performed with a full-scale waste package.

  1. Yucca Mountain Waste Package Closure System

    SciTech Connect

    Colleen Shelton-Davis; Greg Housley

    2005-10-01

    The current disposal path for high-level waste is to place the material into secure waste packages that are inserted into a repository. The Idaho National Laboratory has been tasked with the development, design, and demonstration of the waste package closure system for the repository project. The closure system design includes welding three lids and a purge port cap, four methods of nondestructive examination, and evacuation and backfill of the waste package, all performed in a remote environment. A demonstration of the closure system will be performed with a full-scale waste package.

  2. Microelectronic device package with an integral window

    DOEpatents

    Peterson, Kenneth A.; Watson, Robert D.

    2002-01-01

    An apparatus for packaging of microelectronic devices, including an integral window. The microelectronic device can be a semiconductor chip, a CCD chip, a CMOS chip, a VCSEL chip, a laser diode, a MEMS device, or an IMEMS device. The package can include a cofired ceramic frame or body. The package can have an internal stepped structure made of one or more plates, with apertures, which are patterned with metallized conductive circuit traces. The microelectronic device can be flip-chip bonded on the plate to these traces, and oriented so that the light-sensitive side is optically accessible through the window. A cover lid can be attached to the opposite side of the package. The result is a compact, low-profile package, having an integral window that can be hermetically sealed. The package body can be formed by low-temperature cofired ceramic (LTCC) or high-temperature cofired ceramic (HTCC) multilayer processes with the window being simultaneously joined (e.g. cofired) to the package body during LTCC or HTCC processing. Multiple chips can be located within a single package. The cover lid can include a window. The apparatus is particularly suited for packaging of MEMS devices, since the number of handling steps is greatly reduced, thereby reducing the potential for contamination.

  3. Appendix E: Parallel Pascal development system

    NASA Technical Reports Server (NTRS)

    1985-01-01

    The Parallel Pascal Development System enables Parallel Pascal programs to be developed and tested on a conventional computer. It consists of several system programs, including a Parallel Pascal to standard Pascal translator, and a library of Parallel Pascal subprograms. The library includes subprograms for using Parallel Pascal on a parallel system with a fixed degree of parallelism, such as the Massively Parallel Processor, to conveniently manipulate arrays which have larger dimensions than the hardware supports. Programs can be conveniently tested with small-sized arrays on the conventional computer before attempting to run on a parallel system.

  4. Packaging and Embedded Electronics for the Next Generation

    NASA Technical Reports Server (NTRS)

    Sampson, Michael J.

    2010-01-01

    This viewgraph presentation describes examples of electronic packaging that protects an electronic element from handling, contamination, shock, vibration and light penetration. The use of Hermetic and non-hermetic packaging is also discussed. The topics include: 1) What is Electronic Packaging? 2) Why Package Electronic Parts? 3) Evolution of Packaging; 4) General Packaging Discussion; 5) Advanced non-hermetic packages; 6) Discussion of Hermeticity; 7) The Class Y Concept and Possible Extensions; 8) Embedded Technologies; and 9) NEPP Activities.

  5. 78 FR 48903 - Certain Products Having Laminated Packaging, Laminated Packaging, and Components Thereof...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-12

    ... From the Federal Register Online via the Government Publishing Office INTERNATIONAL TRADE COMMISSION Certain Products Having Laminated Packaging, Laminated Packaging, and Components Thereof... & Spirits Group of Cognac, France (``Camus''). Camus, Sidney Frank, and L'Oreal have since been...

  6. 49 CFR 173.24 - General requirements for packagings and packages.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... HAZARDOUS MATERIALS SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION HAZARDOUS MATERIALS REGULATIONS SHIPPERS-GENERAL REQUIREMENTS FOR SHIPMENTS AND PACKAGINGS Preparation of Hazardous Materials for.... (b) Each package used for the shipment of hazardous materials under this subchapter shall be...

  7. 49 CFR 173.24 - General requirements for packagings and packages.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... HAZARDOUS MATERIALS SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION HAZARDOUS MATERIALS REGULATIONS SHIPPERS-GENERAL REQUIREMENTS FOR SHIPMENTS AND PACKAGINGS Preparation of Hazardous Materials for.... (b) Each package used for the shipment of hazardous materials under this subchapter shall be...

  8. PRIDE Surveillance Projects Data Packaging Project Information Package Specification Version 1.1

    SciTech Connect

    Kelleher, D. M.; Shipp, R. L.; Mason, J. D.

    2010-08-31

    Information Package Specification version 1.1 describes an XML document format called an information package that can be used to store information in information management systems and other information archives. An information package consists of package information, the context required to understand and use that information, package metadata that describes the information, and XML signatures that protect the information. The information package described in this specification was designed to store Department of Energy (DOE) and National Nuclear Security Administration (NNSA) information and includes the metadata required for that information: a unique package identifier, information marking that conforms to DOE and NNSA requirements, and access control metadata. It is an implementation of the Open Archival Information System (OAIS) Reference Model archival information package tailored to meet NNSA information storage requirements and designed to be used in the computing environments at the Y-12 National Security Complex and at other NNSA sites.
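    A skeleton of the parts the abstract lists might look as follows. The element names and values here are invented for illustration only; the actual specification defines its own XML schema, markings, and signature format.

```python
import xml.etree.ElementTree as ET

# Build the four parts the abstract describes: package information, the
# context/metadata needed to use it, and a slot for an XML signature.
pkg = ET.Element("InformationPackage")
ET.SubElement(pkg, "PackageInformation").text = "payload or a reference to it"
meta = ET.SubElement(pkg, "PackageMetadata")
ET.SubElement(meta, "Identifier").text = "urn:example:pkg:0001"  # unique package identifier
ET.SubElement(meta, "Marking").text = "EXAMPLE-MARKING"          # information marking
ET.SubElement(meta, "AccessControl").text = "group:example"      # access control metadata
ET.SubElement(pkg, "Signature")  # an XML signature would protect the above

xml_text = ET.tostring(pkg, encoding="unicode")
```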

  9. Parallel hierarchical method in networks

    NASA Astrophysics Data System (ADS)

    Malinochka, Olha; Tymchenko, Leonid

    2007-09-01

    This method of parallel-hierarchical Q-transformation offers a new approach to the creation of a computing medium: parallel-hierarchical (PH) networks, investigated in the form of a model of a neurolike scheme of data processing [1-5]. The approach has a number of advantages compared with other methods of forming neurolike media (for example, the already known methods of forming artificial neural networks). Its main advantage is the use of multilevel parallel interaction dynamics of information signals at different hierarchy levels of computer networks, which makes it possible to use such known natural features of the organization of computation as: the topographic nature of mapping, simultaneity (parallelism) of signal operation, inlaid cortex structure, rough hierarchy of the cortex, and a mechanism of perception and training that is spatially correlated in time [5].

  10. New NAS Parallel Benchmarks Results

    NASA Technical Reports Server (NTRS)

    Yarrow, Maurice; Saphir, William; VanderWijngaart, Rob; Woo, Alex; Kutler, Paul (Technical Monitor)

    1997-01-01

    NPB2 (NAS (NASA Advanced Supercomputing) Parallel Benchmarks 2) is an implementation, based on Fortran and the MPI (message passing interface) message passing standard, of the original NAS Parallel Benchmark specifications. NPB2 programs are run with little or no tuning, in contrast to NPB vendor implementations, which are highly optimized for specific architectures. NPB2 results complement, rather than replace, NPB results. Because they have not been optimized by vendors, NPB2 implementations approximate the performance a typical user can expect for a portable parallel program on distributed memory parallel computers. Together these results provide an insightful comparison of the real-world performance of high-performance computers. New NPB2 features: New implementation (CG), new workstation class problem sizes, new serial sample versions, more performance statistics.

  11. "Feeling" Series and Parallel Resistances.

    ERIC Educational Resources Information Center

    Morse, Robert A.

    1993-01-01

    Equipped with drinking straws and stirring straws, a teacher can help students understand how resistances in electric circuits combine in series and in parallel. Follow-up suggestions are provided. (ZWH)
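    The combination rules the demonstration teaches are simple enough to state as code; this is a straightforward restatement of the standard formulas, not anything from the article itself.

```python
def series(*rs):
    """Equivalent resistance of resistors in series: resistances add."""
    return sum(rs)

def parallel(*rs):
    """Equivalent resistance in parallel: reciprocals of resistances add."""
    return 1.0 / sum(1.0 / r for r in rs)

# Two 100-ohm resistors: 200 ohms in series, 50 ohms in parallel -- the
# wide-vs-narrow straw analogy in miniature.
r_series = series(100, 100)      # 200
r_parallel = parallel(100, 100)  # 50.0 (up to float rounding)
```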

  12. Demonstrating Forces between Parallel Wires.

    ERIC Educational Resources Information Center

    Baker, Blane

    2000-01-01

    Describes a physics demonstration that dramatically illustrates the mutual repulsion (attraction) between parallel conductors using insulated copper wire, wooden dowels, a high direct current power supply, electrical tape, and an overhead projector. (WRM)
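    The effect the demonstration shows follows from the standard force law for parallel conductors: the force per unit length is F/L = mu0 * I1 * I2 / (2 * pi * d), attractive when the currents flow the same way. A quick calculation with illustrative values (not figures from the article):

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, T*m/A

def force_per_length(i1, i2, d):
    """Force per metre (N/m) between two long parallel wires at distance d."""
    return MU0 * i1 * i2 / (2 * math.pi * d)

# Two wires 1 cm apart carrying 10 A each: about 2e-3 N/m -- small, which is
# why the demonstration needs a high-current supply and light, flexible wire.
f = force_per_length(10.0, 10.0, 0.01)
```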

  13. Parallel programming of industrial applications

    SciTech Connect

    Heroux, M; Koniges, A; Simon, H

    1998-07-21

    In the introductory material, we overview the typical MPP environment for real application computing and the special tools available such as parallel debuggers and performance analyzers. Next, we draw from a series of real application codes and discuss the specific challenges and problems that are encountered in parallelizing these individual applications. The application areas drawn from include biomedical sciences, materials processing and design, plasma and fluid dynamics, and others. We show how it was possible to get a particular application to run efficiently and what steps were necessary. Finally we end with a summary of the lessons learned from these applications and predictions for the future of industrial parallel computing. This tutorial is based on material from a forthcoming book entitled: "Industrial Strength Parallel Computing" to be published by Morgan Kaufmann Publishers (ISBN 1-55860-54).

  14. Distinguishing serial and parallel parsing.

    PubMed

    Gibson, E; Pearlmutter, N J

    2000-03-01

    This paper discusses ways of determining whether the human parser is serial, maintaining at most one structural interpretation at each parse state, or whether it is parallel, maintaining more than one structural interpretation in at least some circumstances. We make four points. The first two counter claims made by Lewis (2000): (1) that the availability of alternative structures should not vary as a function of the disambiguating material in some ranked parallel models; and (2) that parallel models predict a slowdown during the ambiguous region for more syntactically ambiguous structures. Our other points concern potential methods for seeking experimental evidence relevant to the serial/parallel question. We discuss effects of the plausibility of a secondary structure in the ambiguous region (Pearlmutter & Mendelsohn, 1999) and suggest examining the distribution of reaction times in the disambiguating region.

  15. Review and analysis of dense linear system solver package for distributed memory machines

    NASA Technical Reports Server (NTRS)

    Narang, H. N.

    1993-01-01

    A dense linear system solver package recently developed at the University of Texas at Austin for distributed memory machines (e.g. Intel Paragon) has been reviewed and analyzed. The package contains about 45 software routines, some written in FORTRAN, and some in C-language, and forms the basis for parallel/distributed solutions of systems of linear equations encountered in many problems of scientific and engineering nature. The package, being studied by the Computer Applications Branch of the Analysis and Computation Division, may provide a significant computational resource for NASA scientists and engineers in parallel/distributed computing. Since the package is new and not well tested or documented, many of its underlying concepts and implementations were unclear; our task was to review, analyze, and critique the package as a step in the process that will enable scientists and engineers to apply it to the solution of their problems. All routines in the package were reviewed and analyzed. Underlying theory or concepts which exist in the form of published papers or technical reports, or memos, were either obtained from the author, or from the scientific literature; and general algorithms, explanations, examples, and critiques have been provided to explain the workings of these programs. Wherever things were still unclear, communications were made with the developer (author), either by telephone or by electronic mail, to understand the workings of the routines. Whenever possible, tests were made to verify the concepts and logic employed in their implementations. A detailed report is being separately documented to explain the workings of these routines.

  16. Packaging performance evaluation and performance oriented packaging standards for large packages for poison inhalation hazard materials

    SciTech Connect

    Griego, N.R.; Mills, G.S.; McClure, J.D.

    1997-07-01

    The U.S. Department of Transportation Research & Special Programs Administration (DOT-RSPA) has sponsored a project at Sandia National Laboratories to evaluate the protection provided by current packagings used for truck and rail transport of materials that have been classified as Poison Inhalation Hazards (PIH) and to recommend performance standards for these PIH packagings. Hazardous materials span a wide range of toxicity and there are many parameters used to characterize toxicity; for any given hazardous material, data are not available for all of the possible toxicity parameters. Therefore, it was necessary to select a toxicity criterion to characterize all of the PIH compounds (a value of the criterion was derived from other parameters in many cases) and to calculate their dispersion in the event of a release resulting from a transportation accident. Methodologies which account for material toxicity and dispersal characteristics were developed as a major portion of this project and applied to 72 PIH materials. This report presents details of the PIH material toxicity comparisons, calculation of their dispersion, and their classification into five severity categories. 16 refs., 5 figs., 7 tabs.

  17. Address tracing for parallel machines

    NASA Technical Reports Server (NTRS)

    Stunkel, Craig B.; Janssens, Bob; Fuchs, W. Kent

    1991-01-01

    Recently implemented parallel system address-tracing methods based on several metrics are surveyed. The issues specific to collection of traces for both shared and distributed memory parallel computers are highlighted. Five general categories of address-trace collection methods are examined: hardware-captured, interrupt-based, simulation-based, altered microcode-based, and instrumented program-based traces. The problems unique to shared memory and distributed memory multiprocessors are examined separately.
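
    As a rough, high-level analogue of the instrumented-program-based category (real tracers instrument loads and stores at the machine level; this Python sketch, with invented names, only records accesses to a container):

```python
# Sketch of instrumented-program-based trace collection, the last of
# the five categories surveyed. Each access to the wrapped container
# appends an (operation, index) record to a shared trace buffer,
# analogous to recording memory addresses in an instrumented binary.

class TracedArray:
    def __init__(self, data, trace):
        self._data = list(data)
        self._trace = trace  # shared trace buffer

    def __getitem__(self, i):
        self._trace.append(("load", i))
        return self._data[i]

    def __setitem__(self, i, value):
        self._trace.append(("store", i))
        self._data[i] = value

trace = []
a = TracedArray([1, 2, 3], trace)
a[2] = a[0] + a[1]  # two loads, then one store
print(trace)        # [('load', 0), ('load', 1), ('store', 2)]
```

    In a multiprocessor setting each processor would keep its own buffer, and the survey's central issues (perturbation of timing, trace ordering across processors) arise precisely because this instrumentation slows the traced program down.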

  18. Parallel Algorithms for Image Analysis.

    DTIC Science & Technology

    1982-06-01

    [Report documentation page residue; recoverable fields:] Title: Parallel Algorithms for Image Analysis; Type of report: Technical; Report number: TR-1180; Author: Azriel Rosenfeld; Grant: AFOSR-77-3271; Keywords: image processing, image analysis, parallel processing, cellular computers.

  19. Debugging in a parallel environment

    SciTech Connect

    Wasserman, H.J.; Griffin, J.H.

    1985-01-01

    This paper describes the preliminary results of a project investigating approaches to dynamic debugging in parallel processing systems. Debugging programs in a multiprocessing environment is particularly difficult because of potential errors in the synchronization of tasks, data dependencies, the sharing of data among tasks, and the irreproducibility of specific machine instruction sequences from one job to the next. The basic methodology of predicate-based debuggers is presented, along with other desirable features of dynamic parallel debugging. 13 refs.
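
    A minimal sketch of the predicate-based idea, with invented names and no claim to match the paper's design: a user-supplied predicate over shared state is evaluated at each synchronization point, and violations are recorded rather than going unnoticed:

```python
import threading

# Hedged sketch: a debugging monitor that checks a user predicate
# (an invariant over shared state) under the lock at every update.
# Real predicate-based debuggers evaluate such predicates at chosen
# events; this only illustrates the core check.

class PredicateMonitor:
    def __init__(self, predicate):
        self.predicate = predicate
        self.lock = threading.Lock()
        self.violations = []

    def update(self, state, key, value):
        with self.lock:  # synchronization point: mutate, then check
            state[key] = value
            if not self.predicate(state):
                self.violations.append(dict(state))

# Invariant: the two counters must never differ by more than 1.
state = {"produced": 0, "consumed": 0}
monitor = PredicateMonitor(lambda s: abs(s["produced"] - s["consumed"]) <= 1)

monitor.update(state, "produced", 1)  # ok
monitor.update(state, "produced", 2)  # violation: 2 - 0 = 2
monitor.update(state, "consumed", 2)  # ok again
print(len(monitor.violations))        # 1
```

    Checking inside the lock makes the violation reproducible at the moment it occurs, which is exactly what the irreproducibility of parallel instruction interleavings otherwise prevents.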

  20. Xyce Parallel Electronic Simulator : users' guide, version 2.0.

    SciTech Connect

    Hoekstra, Robert John; Waters, Lon J.; Rankin, Eric Lamont; Fixel, Deborah A.; Russo, Thomas V.; Keiter, Eric Richard; Hutchinson, Scott Alan; Pawlowski, Roger Patrick; Wix, Steven D.

    2004-06-01

    This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator capable of simulating electrical circuits at a variety of abstraction levels. Primarily, Xyce has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving upon the current state-of-the-art in the following areas: • the capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors), including support for most popular parallel and serial computers; • improved performance of all numerical kernels (e.g., time integrator, nonlinear and linear solvers) through state-of-the-art algorithms and novel techniques; • device models specifically tailored to meet Sandia's needs, including many radiation-aware devices; • a client-server or multi-tiered operating model wherein the numerical kernel can operate independently of the graphical user interface (GUI); • object-oriented code design and implementation using modern coding practices that ensure that the Xyce Parallel Electronic Simulator will be maintainable and extensible far into the future. Xyce is a parallel code in the most general sense of the phrase - a message-passing parallel implementation - which allows it to run efficiently on the widest possible number of computing platforms. These include serial, shared-memory and distributed-memory parallel as well as heterogeneous platforms. Careful attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows. One feature required by designers is the ability to add device models, many specific to the needs of Sandia, to the code. To this end, the device package in the Xyce