Science.gov

Sample records for parallel pcg package

  1. Improving matrix-vector product performance and multi-level preconditioning for the parallel PCG package

    SciTech Connect

    McLay, R.T.; Carey, G.F.

    1996-12-31

    In this study we consider the parallel solution of sparse linear systems arising from discretized PDEs. As part of our continuing work on our parallel PCG solver package, we have made improvements in two areas. The first is improving the performance of the matrix-vector product. On regular finite-difference grids, we are able to use cache memory more efficiently for smaller domains or where there are multiple degrees of freedom. The second problem of interest in the present work is the construction of preconditioners in the context of the parallel PCG solver we are developing. Here the problem is partitioned into subdomains over a set of processors, and the matrix-vector product for PCG is carried out in parallel for overlapping grid subblocks. For problems of scaled speedup, the actual rate of convergence of the unpreconditioned system deteriorates as the mesh is refined. Multigrid and subdomain strategies provide a logical approach to resolving the problem. We consider the parallel trade-offs between communication and computation and provide a complexity analysis of a representative algorithm. Some preliminary calculations using the parallel package and comparisons with other preconditioners are provided together with parallel performance results.
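
    As a generic illustration of the kind of matrix-vector product discussed in this record, the following sketch applies the standard 5-point finite-difference Laplacian to a regular grid using contiguous array slices, the cache-friendly access pattern alluded to above. It is a hypothetical NumPy example, not the kernel of the PCG package itself.

        import numpy as np

        def laplacian_matvec(u, h=1.0):
            # Apply the 5-point finite-difference Laplacian to a 2-D grid u.
            # Values outside the array are treated as zero (Dirichlet boundary).
            # Vectorized slices keep memory access contiguous and cache-friendly.
            v = np.zeros_like(u)
            v[1:-1, 1:-1] = (4.0 * u[1:-1, 1:-1]
                             - u[:-2, 1:-1] - u[2:, 1:-1]
                             - u[1:-1, :-2] - u[1:-1, 2:]) / h**2
            return v

        if __name__ == "__main__":
            u = np.random.rand(256, 256)
            print(laplacian_matvec(u).shape)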

  2. PCG: A software package for the iterative solution of linear systems on scalar, vector and parallel computers

    SciTech Connect

    Joubert, W.; Carey, G.F.

    1994-12-31

    A great need exists for high performance numerical software libraries transportable across parallel machines. This talk concerns the PCG package, which solves systems of linear equations by iterative methods on parallel computers. The features of the package are discussed, as well as the techniques used to obtain high performance and transportability across architectures. Representative numerical results are presented for several machines including the Connection Machine CM-5, Intel Paragon and Cray T3D parallel computers.

  3. PCG reference manual: A package for the iterative solution of large sparse linear systems on parallel computers. Version 1.0

    SciTech Connect

    Joubert, W.D.; Carey, G.F.; Kohli, H.; Lorber, A.; McLay, R.T.; Shen, Y.; Berner, N.A. |; Kalhan, A. |

    1995-01-01

    PCG (Preconditioned Conjugate Gradient package) is a system for solving linear equations of the form Au = b, for A a given matrix and b and u vectors. PCG, employing various gradient-type iterative methods coupled with preconditioners, is designed for general linear systems, with emphasis on sparse systems such as those arising from the discretization of partial differential equations in physical applications. It can be used to solve linear equations efficiently on parallel computer architectures. Much of the code is reusable across architectures and the package is portable across different systems; the machines that are currently supported are listed. This manual is intended to be the general-purpose reference describing all features of the package accessible to the user; suggestions are also given regarding which methods to use for a given problem.
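
    For orientation, a minimal preconditioned conjugate gradient loop for Au = b with a simple Jacobi (diagonal) preconditioner is sketched below in NumPy. It is a textbook illustration of the class of methods the package implements, not code from the PCG package itself.

        import numpy as np

        def pcg(A, b, M_inv, tol=1e-8, max_iter=1000):
            # Textbook preconditioned conjugate gradients for a symmetric
            # positive-definite A; M_inv applies the preconditioner's inverse.
            x = np.zeros_like(b)
            r = b - A @ x
            z = M_inv(r)
            p = z.copy()
            rz = r @ z
            for _ in range(max_iter):
                Ap = A @ p
                alpha = rz / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                if np.linalg.norm(r) < tol * np.linalg.norm(b):
                    break
                z = M_inv(r)
                rz_new = r @ z
                p = z + (rz_new / rz) * p
                rz = rz_new
            return x

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            n = 200
            Q = rng.standard_normal((n, n))
            A = Q @ Q.T + n * np.eye(n)            # SPD test matrix
            b = rng.standard_normal(n)
            jacobi = lambda v: v / np.diag(A)      # diagonal (Jacobi) preconditioner
            x = pcg(A, b, jacobi)
            print(np.linalg.norm(A @ x - b))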

  4. A parallel PCG solver for MODFLOW.

    PubMed

    Dong, Yanhui; Li, Guomin

    2009-01-01

    In order to simulate large-scale ground water flow problems more efficiently with MODFLOW, the OpenMP programming paradigm was used in this study to parallelize the preconditioned conjugate-gradient (PCG) solver. Incremental parallelization, the significant advantage supported by OpenMP on a shared-memory computer, allowed the solver to be turned into a parallel program smoothly, one block of code at a time. The parallel PCG solver, suitable for both MODFLOW-2000 and MODFLOW-2005, is verified using an 8-processor computer. Both the impact of compilers and different model domain sizes were considered in the numerical experiments. Based on the timing results, execution times using the parallel PCG solver are typically about 1.40 to 5.31 times faster than those using the serial one. In addition, the simulation results are exactly the same as those of the original PCG solver, because the majority of the serial code was not changed. It is worth noting that this parallelizing approach reduces cost in terms of software maintenance because only a single source PCG solver code needs to be maintained in the MODFLOW source tree. PMID:19563427
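
    The incremental-parallelization idea, parallelizing one block of code at a time while the rest of the solver stays serial, can be pictured with the hypothetical sketch below, a Python thread-pool analogue of the OpenMP approach in which only the matrix-vector product is parallelized. It is not the MODFLOW source.

        import numpy as np
        from concurrent.futures import ThreadPoolExecutor

        def parallel_matvec(A, x, pool, n_blocks=4):
            # Split the rows of A into blocks and compute each block's product
            # in a worker thread; everything around this call stays serial.
            blocks = np.array_split(np.arange(A.shape[0]), n_blocks)
            parts = pool.map(lambda idx: A[idx] @ x, blocks)
            return np.concatenate(list(parts))

        if __name__ == "__main__":
            rng = np.random.default_rng(1)
            A = rng.standard_normal((2000, 2000))
            x = rng.standard_normal(2000)
            with ThreadPoolExecutor(max_workers=4) as pool:
                y = parallel_matvec(A, x, pool)
            print(np.allclose(y, A @ x))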

  5. HOPSPACK: Hybrid Optimization Parallel Search Package.

    SciTech Connect

    Gray, Genetha A.; Kolda, Tamara G.; Griffin, Joshua; Taddy, Matt; Martinez-Canales, Monica

    2008-12-01

    In this paper, we describe the technical details of HOPSPACK (Hybrid Optimization Parallel Search Package), a new software platform which facilitates combining multiple optimization routines into a single, tightly-coupled, hybrid algorithm that supports parallel function evaluations. The framework is designed such that existing optimization source code can be easily incorporated with minimal code modification. By maintaining the integrity of each individual solver, the strengths and code sophistication of the original optimization package are retained and exploited.

  6. Hybrid Optimization Parallel Search PACKage

    Energy Science and Technology Software Center (ESTSC)

    2009-11-10

    HOPSPACK is open source software for solving optimization problems without derivatives. Application problems may have a fully nonlinear objective function, bound constraints, and linear and nonlinear constraints. Problem variables may be continuous, integer-valued, or a mixture of both. The software provides a framework that supports any derivative-free type of solver algorithm. Through the framework, solvers request parallel function evaluation, which may use MPI (multiple machines) or multithreading (multiple processors/cores on one machine). The framework provides a Cache and Pending Cache of saved evaluations that reduces execution time and facilitates restarts. Solvers can dynamically create other algorithms to solve subproblems, a useful technique for handling multiple start points and integer-valued variables. HOPSPACK ships with the Generating Set Search (GSS) algorithm, developed at Sandia as part of the APPSPACK open source software project.
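
    The cache-of-evaluations idea can be illustrated with a small hypothetical sketch: an evaluator that skips trial points it has already seen and sends new ones to a process pool. The names and structure below are illustrative only and do not reflect HOPSPACK's actual C++ API.

        from concurrent.futures import ProcessPoolExecutor

        def objective(x):
            # Example derivative-free objective (hypothetical).
            return sum((xi - 1.0) ** 2 for xi in x)

        class CachedEvaluator:
            # Evaluate trial points in parallel, reusing cached results.
            def __init__(self, func, workers=4):
                self.func = func
                self.cache = {}
                self.workers = workers

            def evaluate(self, points):
                new = [p for p in points if p not in self.cache]
                if new:
                    with ProcessPoolExecutor(max_workers=self.workers) as pool:
                        for p, val in zip(new, pool.map(self.func, new)):
                            self.cache[p] = val
                return [self.cache[p] for p in points]

        if __name__ == "__main__":
            ev = CachedEvaluator(objective)
            pts = [(0.0, 0.0), (1.0, 1.0), (0.0, 0.0)]   # the duplicate hits the cache
            print(ev.evaluate(pts))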

  7. Parallel Climate Data Assimilation PSAS Package

    NASA Technical Reports Server (NTRS)

    Ding, Hong Q.; Chan, Clara; Gennery, Donald B.; Ferraro, Robert D.

    1996-01-01

    We have designed and implemented a set of highly efficient and highly scalable algorithms for an unstructured computational package, the PSAS data assimilation package, as demonstrated by detailed performance analysis of systematic runs on up to a 512-node Intel Paragon. The equation solver achieves a sustained 18 Gflops performance. As a result, we achieved an unprecedented 100-fold solution time reduction on the Intel Paragon parallel platform over the Cray C90. This not only meets and exceeds the DAO time requirements, but also significantly enlarges the window of exploration in climate data assimilation.

  8. On the performance of a simple parallel implementation of the ILU-PCG for the Poisson equation on irregular domains

    NASA Astrophysics Data System (ADS)

    Gibou, Frédéric; Min, Chohong

    2012-05-01

    We report on the performance of a parallel algorithm for solving the Poisson equation on irregular domains. We use the spatial discretization of Gibou et al. (2002) [6] for the Poisson equation with Dirichlet boundary conditions, while we use a finite volume discretization for imposing Neumann boundary conditions (Ng et al., 2009; Purvis and Burkhalter, 1979) [8,10]. The parallelization algorithm is based on the Cuthill-McKee ordering. Its implementation is straightforward, especially in the case of shared-memory machines, and produces significant speedup: about three times on a standard quad-core desktop computer and about seven times on an octa-core shared-memory cluster. The implementation code is posted on the authors' web pages for reference.
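
    A serial analogue of ILU-preconditioned CG with (reverse) Cuthill-McKee reordering can be assembled from standard SciPy components, as sketched below for a 2-D Poisson problem. This is only a sketch of the general technique, not the authors' parallel implementation.

        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.csgraph import reverse_cuthill_mckee
        from scipy.sparse.linalg import spilu, cg, LinearOperator

        # 2-D Poisson matrix on an n x n grid (5-point stencil, Dirichlet BCs)
        n = 50
        I = sp.identity(n)
        T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
        A = (sp.kron(I, T) + sp.kron(T, I)).tocsc()
        b = np.ones(A.shape[0])

        # Reverse Cuthill-McKee reordering to reduce bandwidth and ILU fill-in
        perm = reverse_cuthill_mckee(A, symmetric_mode=True)
        Ap = A[perm, :][:, perm]
        bp = b[perm]

        # Incomplete LU factorization used as the preconditioner for CG
        ilu = spilu(Ap.tocsc(), drop_tol=1e-4)
        M = LinearOperator(Ap.shape, matvec=ilu.solve)

        x_perm, info = cg(Ap, bp, M=M)
        x = np.empty_like(x_perm)
        x[perm] = x_perm                      # undo the reordering
        print(info, np.linalg.norm(A @ x - b))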

  9. High density packaging and interconnect of massively parallel image processors

    NASA Technical Reports Server (NTRS)

    Carson, John C.; Indin, Ronald J.

    1991-01-01

    This paper presents conceptual designs for high density packaging of parallel processing systems. The systems fall into two categories: global memory systems where many processors are packaged into a stack, and distributed memory systems where a single processor and many memory chips are packaged into a stack. Thermal behavior and performance are discussed.

  10. AZTEC: A parallel iterative package for solving linear systems

    SciTech Connect

    Hutchinson, S.A.; Shadid, J.N.; Tuminaro, R.S.

    1996-12-31

    We describe a parallel linear system package, AZTEC. The package incorporates a number of parallel iterative methods (e.g. GMRES, biCGSTAB, CGS, TFQMR) and preconditioners (e.g. Jacobi, Gauss-Seidel, polynomial, domain decomposition with LU or ILU within subdomains). Additionally, AZTEC allows for the reuse of previous preconditioning factorizations within Newton schemes for nonlinear methods. Currently, a number of different users are using this package to solve a variety of PDE applications.

  11. Shift: A Massively Parallel Monte Carlo Radiation Transport Package

    SciTech Connect

    Pandya, Tara M; Johnson, Seth R; Davidson, Gregory G; Evans, Thomas M; Hamilton, Steven P

    2015-01-01

    This paper discusses the massively-parallel Monte Carlo radiation transport package, Shift, developed at Oak Ridge National Laboratory. It reviews the capabilities, implementation, and parallel performance of this code package. Scaling results demonstrate very good strong and weak scaling behavior of the implemented algorithms. Benchmark results from various reactor problems show that Shift results compare well to other contemporary Monte Carlo codes and experimental results.

  12. Merging parallel optics packaging and surface mount technologies

    NASA Astrophysics Data System (ADS)

    Kopp, Christophe; Volpert, Marion; Routin, Julien; Bernabé, Stéphane; Rossat, Cyrille; Tournaire, Myriam; Hamelin, Régis

    2008-02-01

    Optical links are well known to present significant advantages over electrical links for very high-speed data rates at 10 Gbps and above per channel. However, the transition towards optical interconnect solutions for short and very short reach applications requires the development of innovative packaging solutions that can deal with very high volume production and very low cost per unit. Moreover, the optoelectronic transceiver components must be able to move from the edge to anywhere on the printed circuit board, for instance close to integrated circuits with high-speed IO. In this paper, we present an original packaging design to manufacture parallel optic transceivers that are surface mount devices. The package combines a highly integrated multi-chip module on glass with conventional IC ceramic packaging. The use of ceramic and the development of sealing technologies meet hermeticity requirements. Moreover, thanks to a chip-scale package approach the final device exhibits a much reduced footprint. One of the main advantages of the package is its flexibility to be soldered or plugged anywhere on the printed circuit board like any other electronic device. As a demonstrator we present a 2 by 4 10 Gbps transceiver operating at 850 nm.

  13. JPARSS: A Java Parallel Network Package for Grid Computing

    SciTech Connect

    Chen, Jie; Akers, Walter; Chen, Ying; Watson, William

    2002-03-01

    The emergence of high speed wide area networks makes grid computing a reality. However, grid applications that need reliable data transfer still have difficulty achieving optimal TCP performance because the TCP window size must be tuned to improve bandwidth and reduce latency on a high speed wide area network. This paper presents a Java package called JPARSS (Java Parallel Secure Stream (Socket)) that divides data into partitions that are sent over several parallel Java streams simultaneously, allowing Java or Web applications to achieve optimal TCP performance in a grid environment without the necessity of tuning the TCP window size. This package enables single sign-on, certificate delegation and secure or plain-text data transfer using several security components based on X.509 certificates and SSL. Several experiments will be presented to show that using Java parallel streams is more effective than tuning the TCP window size. In addition a simple architecture using Web services

  14. P-SPARSLIB: A parallel sparse iterative solution package

    SciTech Connect

    Saad, Y.

    1994-12-31

    Iterative methods are gaining popularity in engineering and the sciences at a time when the computational environment is changing rapidly. P-SPARSLIB is a project to build a software library for sparse matrix computations on parallel computers. The emphasis is on iterative methods and the use of distributed sparse matrices, an extension of the domain decomposition approach to general sparse matrices. One of the goals of this project is to develop a software package geared towards specific applications. For example, the author will test the performance and usefulness of P-SPARSLIB modules on linear systems arising from CFD applications. Equally important is the goal of portability. In the long run, the author wishes to ensure that this package is portable across a variety of platforms, including SIMD environments and shared memory environments.

  15. (PCG) Protein Crystal Growth Canavalin

    NASA Technical Reports Server (NTRS)

    1989-01-01

    (PCG) Protein Crystal Growth Canavalin. The major storage protein of leguminous plants and a major source of dietary protein for humans and domestic animals. It is studied in efforts to enhance the nutritional value of proteins through protein engineering. It is isolated from Jack Bean because of its potential as a nutritional substance. Principal Investigator on STS-26 was Alex McPherson.

  16. (PCG) Protein Crystal Growth on STS-26

    NASA Technical Reports Server (NTRS)

    1988-01-01

    Mission Specialist George (Pinky) D. Nelson uses a 35 mm camera to photograph a protein crystal grown during the STS-26 Protein Crystal Growth (PCG-II-01) experiment. The protein crystal growth (PCG) carrier is shown deployed from the PCG Refrigerator/Incubator Module (R/IM) located in the middeck forward locker. The R/IM contained three Vapor Diffusion Apparatus (VDA) trays (one of which is shown). A total of sixty protein crystal samples were processed during the STS-26 mission.

  17. (PCG) Protein Crystal Growth Porcine Elastase

    NASA Technical Reports Server (NTRS)

    1989-01-01

    (PCG) Protein Crystal Growth Porcine Elastase. This enzyme is associated with the degradation of lung tissue in people suffering from emphysema. It is useful in studying causes of this disease. Principal Investigator on STS-26 was Charles Bugg.

  18. penORNL: a parallel Monte Carlo photon and electron transport package using PENELOPE

    SciTech Connect

    Bekar, Kursat B.; Miller, Thomas Martin; Patton, Bruce W.; Weber, Charles F.

    2015-01-01

    The parallel Monte Carlo photon and electron transport code package penORNL was developed at Oak Ridge National Laboratory to enable advanced scanning electron microscope (SEM) simulations on high-performance computing systems. This paper discusses the implementations, capabilities and parallel performance of the new code package. penORNL uses PENELOPE for its physics calculations and provides all available PENELOPE features to the users, as well as some new features including source definitions specifically developed for SEM simulations, a pulse-height tally capability for detailed simulations of gamma and x-ray detectors, and a modified interaction forcing mechanism to enable accurate energy deposition calculations. The parallel performance of penORNL was extensively tested with several model problems, and very good linear parallel scaling was observed with up to 512 processors. penORNL, along with its new features, will be available for SEM simulations upon completion of the new pulse-height tally implementation.

  19. (PCG) Protein Crystal Growth Isocitrate Lyase

    NASA Technical Reports Server (NTRS)

    1989-01-01

    (PCG) Protein Crystal Growth Isocitrate Lyase. Target enzyme for fungicides. A better understanding of this enzyme should lead to the discovery of more potent fungicides to treat serious crop diseases such as rice blast. It regulates the flow of metabolic intermediates required for cell growth. Principal Investigator for STS-26 was Charles Bugg.

  20. (PCG) Protein Crystal Growth Isocitrate Lyase

    NASA Technical Reports Server (NTRS)

    1989-01-01

    (PCG) Protein Crystal Growth Isocitrate Lyase. Target enzyme for fungicides. A better understanding of this enzyme should lead to the discovery of more potent fungicides to treat serious crop diseases such as rice blast. It regulates the flow of metabolic intermediates required for cell growth. Principal Investigator on STS-26 was Charles Bugg.

  1. JTpack90: A parallel, object-based, Fortran 90 linear algebra package

    SciTech Connect

    Turner, J.A.; Kothe, D.B.; Ferrell, R.C.

    1997-03-01

    The authors have developed an object-based linear algebra package, currently with emphasis on sparse Krylov methods, driven primarily by the needs of the Los Alamos National Laboratory parallel unstructured-mesh casting simulation tool Telluride. Support for a number of sparse storage formats, methods, and preconditioners has been implemented, driven primarily by application needs. They describe the object-based Fortran 90 approach, which enhances maintainability, performance, and extensibility; the parallelization approach, which uses a new portable gather/scatter library (PGSLib); and current capabilities and future plans, and they present preliminary performance results on a variety of platforms.

  2. Optimization of a parallel permutation testing function for the SPRINT R package

    PubMed Central

    Petrou, Savvas; Sloan, Terence M; Mewissen, Muriel; Forster, Thorsten; Piotrowski, Michal; Dobrzelecki, Bartosz; Ghazal, Peter; Trew, Arthur; Hill, Jon

    2011-01-01

    The statistical language R and its Bioconductor package are favoured by many biostatisticians for processing microarray data. The amount of data produced by some analyses has reached the limits of many common bioinformatics computing infrastructures. High Performance Computing systems offer a solution to this issue. The Simple Parallel R Interface (SPRINT) is a package that provides biostatisticians with easy access to High Performance Computing systems and allows the addition of parallelized functions to R. Previous work has established that the SPRINT implementation of an R permutation testing function has close to optimal scaling on up to 512 processors on a supercomputer. Access to supercomputers, however, is not always possible, and so the work presented here compares the performance of the SPRINT implementation on a supercomputer with benchmarks on a range of platforms including cloud resources and a common desktop machine with multiprocessing capabilities. Copyright © 2011 John Wiley & Sons, Ltd. PMID:23335858

  3. Optimization of a parallel permutation testing function for the SPRINT R package.

    PubMed

    Petrou, Savvas; Sloan, Terence M; Mewissen, Muriel; Forster, Thorsten; Piotrowski, Michal; Dobrzelecki, Bartosz; Ghazal, Peter; Trew, Arthur; Hill, Jon

    2011-12-10

    The statistical language R and its Bioconductor package are favoured by many biostatisticians for processing microarray data. The amount of data produced by some analyses has reached the limits of many common bioinformatics computing infrastructures. High Performance Computing systems offer a solution to this issue. The Simple Parallel R Interface (SPRINT) is a package that provides biostatisticians with easy access to High Performance Computing systems and allows the addition of parallelized functions to R. Previous work has established that the SPRINT implementation of an R permutation testing function has close to optimal scaling on up to 512 processors on a supercomputer. Access to supercomputers, however, is not always possible, and so the work presented here compares the performance of the SPRINT implementation on a supercomputer with benchmarks on a range of platforms including cloud resources and a common desktop machine with multiprocessing capabilities. Copyright © 2011 John Wiley & Sons, Ltd. PMID:23335858

  4. (PCG) Protein Crystal Growth Gamma-Interferon

    NASA Technical Reports Server (NTRS)

    1989-01-01

    (PCG) Protein Crystal Growth Gamma-Interferon. Stimulates the body's immune system and is used clinically in the treatment of cancer. Potential as an anti-tumor agent against solid tumors as well as leukemias and lymphomas. It has additional utility as an anti-infective agent, including antiviral, anti-bacterial, and anti-parasitic activities. Principal Investigator on STS-26 was Charles Bugg.

  5. (PCG) Protein Crystal Growth Human Serum Albumin

    NASA Technical Reports Server (NTRS)

    1989-01-01

    (PCG) Protein Crystal Growth Human Serum Albumin. Contributes to many transport and regulatory processes and has multifunctional binding properties which range from various metals to fatty acids, hormones, and a wide spectrum of therapeutic drugs. The most abundant protein of the circulatory system. It binds and transports an incredible variety of biological and pharmaceutical ligands throughout the blood stream. Principal Investigator on STS-26 was Larry DeLucas.

  6. Low-cost package of 30Gbps pluggable parallel optical transmitter module

    NASA Astrophysics Data System (ADS)

    Chen, Yi-Ming; Cheng, Yao-Ling; Chen, Ying-Chin; Wu, Cherng-Shiun; Chu, Mu-Tao

    2005-01-01

    This paper describes a 12-channel parallel optical transmitter module with an MPO connector designed for very-short-reach OC-192 and SNAP 12 specifications. It is important to design the micro array lens for better coupling efficiency of the array optical transmitter module. The authors design a high-accuracy micro array lens for injection molding to reduce cost and suit further mass production. In this 12-channel parallel optical transmitter module, it is very difficult to position the chip correctly with respect to the guide pin or guide pin hole. Therefore, the authors develop a two-step flip-chip bonding method to ease chip alignment on a ceramic substrate without two guide pin holes. The performance of the module is demonstrated to fulfill the requirements of SNAP 12 [1]. The extinction ratio of each channel of the 12-channel array transmitter module is tested to be above 6 dB. Optical shift caused by heat is an important factor affecting the performance of the array module. Thermal analysis of the 12-channel parallel optical transmitter module is used in this paper to reduce the effect of thermally induced optical shift, and the case temperature of the transmitter module is greatly reduced from 52.7 degrees to 31.9 degrees. In this paper, a 12-channel array transmitter module package and its thermal simulation are discussed and tested. This is a low-cost package design suitable for mass production.

  7. Induction signatures at 67P/CG

    NASA Astrophysics Data System (ADS)

    Constantinescu, Dragos; Heinisch, Philip; Auster, Uli; Richter, Ingo; Przyklenk, Anita; Glassmeier, Karl-Heinz

    2016-04-01

    The Philae landing on the nucleus of Churyumov-Gerasimenko (67P/CG) opens up the opportunity to derive the electrical properties of the comet nucleus by taking advantage of simultaneous measurements done by Philae on the surface and by Rosetta away from the nucleus. This allows the separation of the induced part of the electromagnetic field, which carries information about the electrical conductivity distribution inside the cometary nucleus. Using the transfer function and the phase difference between the magnetic field at the nucleus surface and the magnetic field measured on orbit, we give a lower bound estimate for the mean electrical conductivity of the Churyumov-Gerasimenko nucleus.

  8. 3-D readout-electronics packaging for high-bandwidth massively paralleled imager

    DOEpatents

    Kwiatkowski, Kris; Lyke, James

    2007-12-18

    Dense, massively parallel signal processing electronics are co-packaged behind associated sensor pixels. Microchips containing a linear or bilinear arrangement of photo-sensors, together with associated complex electronics, are integrated into a simple 3-D structure (a "mirror cube"). An array of photo-sensitive cells is disposed on a stacked CMOS chip's surface at a 45-degree angle from light-reflecting mirror surfaces formed on a neighboring CMOS chip surface. Image processing electronics are held within the stacked CMOS chip layers. Electrical connections couple each of the stacked CMOS chip layers to a distribution grid, the connections distributing power and signals to components associated with each stacked CMOS chip layer.

  9. Parallel distributed free-space optoelectronic computer engine using flat plug-on-top optics package

    NASA Astrophysics Data System (ADS)

    Berger, Christoph; Ekman, Jeremy T.; Wang, Xiaoqing; Marchand, Philippe J.; Spaanenburg, Henk; Kiamilev, Fouad E.; Esener, Sadik C.

    2000-05-01

    We report on ongoing work on a free-space optical interconnect system, which will demonstrate a Fast Fourier Transform calculation distributed among six processor chips. Logically, the processors are arranged in two linear chains, where each element communicates optically with its nearest neighbors. Physically, the setup consists of a large motherboard, several multi-chip carrier modules, which hold the processor/driver chips and the optoelectronic chips (arrays of lasers and detectors), and several plug-on-top optics modules, which provide the optical links between the chip carrier modules. The system design tries to satisfy numerous constraints, such as compact size, potential for mass production, suitability for large arrays (up to 1024 parallel channels), compatibility with standard electronics fabrication and packaging technology, potential for active misalignment compensation by integrating MEMS technology, and suitability for testing different imaging topologies. We present the system architecture together with details of key components and modules, and report on first experiences with prototype modules of the setup.

  10. A package of Linux scripts for the parallelization of Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Badal, Andreu; Sempau, Josep

    2006-09-01

    also provided. Typical running time: The execution time of each script largely depends on the number of computers that are used, the actions that are to be performed and, to a lesser extent, on the network connection bandwidth. Unusual features of the program: Any computer on the Internet with a Secure Shell client/server program installed can be used as a node of a virtual computer cluster for parallel calculations with the sequential source code. The simplicity of the parallelization scheme makes the use of this package a straightforward task, which does not require installing any additional libraries. Program summary 2: Title of program: seedsMLCG. Catalogue identifier: ADYE_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADYE_v1_0. Program obtainable from: CPC Program Library, Queen's University of Belfast, Northern Ireland. Computer for which the program is designed and others on which it is operable: Any computer with a FORTRAN compiler. Operating systems under which the program has been tested: Linux (RedHat 8.0, SuSe 8.1, Debian Woody 3.1), MS Windows (2000, XP). Compilers: GNU FORTRAN g77 (Linux and Windows); g95 (Linux); Intel Fortran Compiler 7.1 (Linux); Compaq Visual Fortran 6.1 (Windows). Programming language used: FORTRAN 77. No. of bits in a word: 32. Memory required to execute with typical data: 500 kilobytes. No. of lines in distributed program, including test data, etc.: 492. No. of bytes in distributed program, including test data, etc.: 5582. Distribution format: tar.gz. Nature of the physical problem: Statistically independent results from different runs of a Monte Carlo code can be obtained using uncorrelated sequences of random numbers on each execution. Multiplicative linear congruential generators (MLCG), or other generators that are based on them such as RANECU, can be adapted to produce these sequences. Method of solution: For a given MLCG, the presented program calculates initialization values that produce disjoint, consecutive sequences of pseudorandom numbers.
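
    The seed-spacing idea behind seedsMLCG can be shown with a short sketch: for a multiplicative linear congruential generator x_{n+1} = a*x_n mod m, the state N steps ahead is a^N*x_0 mod m, so each parallel run can be given a seed jumped ahead by a fixed stride. The constants below are the two component generators combined by RANECU (L'Ecuyer, 1988); the base seeds and stride are arbitrary illustrative values.

        def jump_seed(seed, a, m, steps):
            # State of the MLCG x_{n+1} = a*x_n mod m after `steps` iterations,
            # computed with modular exponentiation instead of stepping one by one.
            return (pow(a, steps, m) * seed) % m

        # Multiplier and modulus of the two MLCGs combined by RANECU
        GENERATORS = [(40014, 2147483563), (40692, 2147483399)]

        def seeds_for_run(run_index, base_seeds=(12345, 67890), stride=10**12):
            # Initial seed pair for parallel run `run_index`, spaced `stride` draws
            # apart so the sequences used by different runs are disjoint.
            return tuple(jump_seed(s, a, m, run_index * stride)
                         for s, (a, m) in zip(base_seeds, GENERATORS))

        if __name__ == "__main__":
            for k in range(4):
                print(k, seeds_for_run(k))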

  11. Improving the performance of cardiac abnormality detection from PCG signal

    NASA Astrophysics Data System (ADS)

    Sujit, N. R.; Kumar, C. Santhosh; Rajesh, C. B.

    2016-03-01

    The Phonocardiogram (PCG) signal contains important information about the condition of the heart. Using PCG signal analysis, coronary illness can be recognized early. In this work, we developed a biomedical system for the detection of heart abnormality, and methods to enhance the performance of the system using the SMOTE and AdaBoost techniques are presented. Time and frequency domain features extracted from the PCG signal are input to the system. The back-end classifier of the system is a decision tree built using CART (Classification and Regression Trees), with an overall classification accuracy of 78.33% and a sensitivity (alarm accuracy) of 40%. Here sensitivity refers to the precision obtained in classifying the abnormal heart sound, which is an essential parameter for such a system. We further improve the performance of the baseline system using the SMOTE and AdaBoost algorithms. The proposed approach outperforms the baseline system by an absolute improvement of 5% in overall accuracy and 44.92% in sensitivity.
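
    A generic sketch of the class-imbalance pipeline named above (SMOTE oversampling followed by an AdaBoost-boosted CART tree) is given below using scikit-learn and imbalanced-learn on synthetic features standing in for the PCG features. It illustrates the technique only and is not the authors' system.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.model_selection import train_test_split
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.ensemble import AdaBoostClassifier
        from sklearn.metrics import accuracy_score, recall_score
        from imblearn.over_sampling import SMOTE   # from the imbalanced-learn package

        # Synthetic stand-in for time- and frequency-domain PCG features
        X, y = make_classification(n_samples=600, n_features=12,
                                   weights=[0.8, 0.2], random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

        # Oversample the minority (abnormal) class, then boost a shallow CART tree
        X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)
        clf = AdaBoostClassifier(estimator=DecisionTreeClassifier(max_depth=3),
                                 n_estimators=100, random_state=0)
        clf.fit(X_bal, y_bal)

        pred = clf.predict(X_te)
        print("accuracy   :", accuracy_score(y_te, pred))
        print("sensitivity:", recall_score(y_te, pred))   # recall on the abnormal class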

  12. A Note on the Relationship Between Adaptive AMG and PCG

    SciTech Connect

    Falgout, R D

    2004-08-06

    In this note, we will show that preconditioned conjugate gradients (PCG) can be viewed as a particular adaptive algebraic multi-grid algorithm (adaptive AMG). The relationship between these two methods provides important insight into the construction of effective adaptive AMG algorithms.

  13. Polycomb Group (PcG) Proteins and Human Cancers: Multifaceted Functions and Therapeutic Implications

    PubMed Central

    Wang, Wei; Qin, Jiang-Jiang; Voruganti, Sukesh; Nag, Subhasree; Zhou, Jianwei; Zhang, Ruiwen

    2016-01-01

    Polycomb group (PcG) proteins are transcriptional repressors that regulate several crucial developmental and physiological processes in the cell. More recently, they have been found to play important roles in human carcinogenesis and cancer development and progression. The deregulation and dysfunction of PcG proteins often lead to blocking or inappropriate activation of developmental pathways, enhancing cellular proliferation, inhibiting apoptosis, and increasing the cancer stem cell population. Genetic and molecular investigations of PcG proteins have long been focused on their PcG functions. However, PcG proteins have recently been shown to exert non-polycomb functions, contributing to the regulation of diverse cellular functions. We and others have demonstrated that PcG proteins regulate the expression and function of several oncogenes and tumor suppressor genes in a PcG-independent manner, and PcG proteins are associated with the survival of patients with cancer. In this review, we summarize the recent advances in the research on PcG proteins, including both the polycomb-repressive and non-polycomb functions. We specifically focus on the mechanisms by which PcG proteins play roles in cancer initiation, development, and progression. Finally, we discuss the potential value of PcG proteins as molecular biomarkers for the diagnosis and prognosis of cancer, and as molecular targets for cancer therapy. PMID:26227500

  14. Protein Crystal Growth (PCG) experiment aboard mission STS-66

    NASA Technical Reports Server (NTRS)

    2000-01-01

    On the Space Shuttle Orbiter Atlantis' middeck, Astronaut Joseph R. Tanner, mission specialist, works at an area amidst several lockers which support the Protein Crystal Growth (PCG) experiment during the STS-66 mission. This particular section is called the Crystal Observation System, housed in the Thermal Enclosure System (COS/TES). Together with the Vapor Diffusion Apparatus (VDA), housed in the Single Locker Thermal Enclosure (SLTES), the COS/TES represents the continuing research into the structure of proteins and other macromolecules such as viruses.

  15. Lamin A/C sustains PcG protein architecture, maintaining transcriptional repression at target genes

    PubMed Central

    Cesarini, Elisa; Mozzetta, Chiara; Marullo, Fabrizia; Gregoretti, Francesco; Gargiulo, Annagiusi; Columbaro, Marta; Cortesi, Alice; Antonelli, Laura; Di Pelino, Simona; Squarzoni, Stefano; Palacios, Daniela; Zippo, Alessio; Bodega, Beatrice; Oliva, Gennaro

    2015-01-01

    Beyond its role in providing structure to the nuclear envelope, lamin A/C is involved in transcriptional regulation. However, its cross talk with epigenetic factors—and how this cross talk influences physiological processes—is still unexplored. Key epigenetic regulators of development and differentiation are the Polycomb group (PcG) of proteins, organized in the nucleus as microscopically visible foci. Here, we show that lamin A/C is evolutionarily required for correct PcG protein nuclear compartmentalization. Confocal microscopy supported by new algorithms for image analysis reveals that lamin A/C knock-down leads to PcG protein foci disassembly and PcG protein dispersion. This causes detachment from chromatin and defects in PcG protein–mediated higher-order structures, thereby leading to impaired PcG protein repressive functions. Using myogenic differentiation as a model, we found that reduced levels of lamin A/C at the onset of differentiation led to an anticipation of the myogenic program because of an alteration of PcG protein–mediated transcriptional repression. Collectively, our results indicate that lamin A/C can modulate transcription through the regulation of PcG protein epigenetic factors. PMID:26553927

  16. Transcription factor YY1 functions as a PcG protein in vivo.

    PubMed

    Atchison, Lakshmi; Ghias, Ayesha; Wilkinson, Frank; Bonini, Nancy; Atchison, Michael L

    2003-03-17

    Polycomb group (PcG) proteins function as high molecular weight complexes that maintain transcriptional repression patterns during embryogenesis. The vertebrate DNA binding protein and transcriptional repressor, YY1, shows sequence homology with the Drosophila PcG protein, pleiohomeotic (PHO). YY1 might therefore be a vertebrate PcG protein. We used Drosophila embryo and larval/imaginal disc transcriptional repression systems to determine whether YY1 repressed transcription in a manner consistent with PcG function in vivo. YY1 repressed transcription in Drosophila, and this repression was stable on a PcG-responsive promoter, but not on a PcG-non-responsive promoter. PcG mutants ablated YY1 repression, and YY1 could substitute for PHO in repressing transcription in wing imaginal discs. YY1 functionally compensated for loss of PHO in pho mutant flies and partially corrected mutant phenotypes. Taken together, these results indicate that YY1 functions as a PcG protein. Finally, we found that YY1, as well as Polycomb, required the co-repressor protein CtBP for repression in vivo. These results provide a mechanism for recruitment of vertebrate PcG complexes to DNA and demonstrate new functions for YY1. PMID:12628927

  17. Exclusion of primary congenital glaucoma (PCG) from two candidate regions of chromosomes 1 and 6

    SciTech Connect

    Sarfarazi, M.; Akarsu, A.N.; Barsoum-Homsy, M.

    1994-09-01

    PCG is a genetically heterogeneous condition in which a significant proportion of families inherit in an autosomal recessive fashion. Although association of PCG with chromosomal abnormalities has been repeatedly reported in the literature, the chromosomal location of this condition is still unknown. Therefore, this study is designed to identify the chromosomal location of the PCG locus by positional mapping. We have identified 80 PCG families with a total of 261 potentially informative meioses. A group of 19 pedigrees with a minimum of 2 affected children in each pedigree and consanguinity in most of the parental generation were selected as our initial screening panel. This panel consists of a total of 44 affected and 93 unaffected individuals, giving a total of 99 informative meioses, including 5 phase-known. We used polymerase chain reaction (PCR), denaturing polyacrylamide gels and silver staining to genotype our families. We first screened for markers on 1q21-q31, the reported location for juvenile primary open-angle glaucoma, and excluded a region of 30 cM as the likely site for the PCG locus. Association of PCG with both ring chromosome 6 and HLA-B8 has also been reported. Therefore, we genotyped our PCG panel with PCR-applicable markers from 6p21. Significant negative lod scores were obtained for D6S105 (Z = -18.70) and D6S306 (Z = -5.99) at θ = 0.001. The HLA class I region also contains one of the tubulin genes (TUBB), which is an obvious candidate for PCG. Study of this gene revealed a significant negative lod score with PCG (Z = -16.74, θ = 0.001). A multipoint linkage analysis of markers in this and other regions containing the candidate genes will be presented.

  18. Plots, Calculations and Graphics Tools (PCG2). Software Transfer Request Presentation

    NASA Technical Reports Server (NTRS)

    Richardson, Marilou R.

    2010-01-01

    This slide presentation reviews the development of the Plots, Calculations and Graphics Tools (PCG2) system. PCG2 is an easy to use tool that provides a single user interface to view data in a pictorial, tabular or graphical format. It allows the user to view the same display and data in the Control Room, engineering office area, or remote sites. PCG2 supports extensive and regular engineering needs that are both planned and unplanned and it supports the ability to compare, contrast and perform ad hoc data mining over the entire domain of a program's test data.

  19. Isolation and characterization of hypertoxinogenic (htx) mutants of Escherichia coli KL320(pCG86).

    PubMed Central

    Bramucci, M G; Twiddy, E M; Baine, W B; Holmes, R K

    1981-01-01

    The structural genes for heat-labile enterotoxin (LT) are present on plasmid pCG86. In Escherichia coli KL320(pCG86), LT was found to be cell associated. LT was present as a soluble protein in sonic lysates of KL320(pCG86). Thirty-one mutants of KL320(pCG86) that produced increased amounts of extracellular LT were isolated. These hypertoxinogenic (htx) mutants were assigned to four phenotypically distinct classes based on the amounts of cell-associated and extracellular LT in early-stationary-phase cultures. Type 1 and type 2 htx mutants produced significantly increased amounts of cell-associated LT. Type 3 and type 4 htx mutants produced normal or decreased amounts of cell-associated LT, similar to that of the wild type. In the mutants of types 1, 3, and 4, the ratios of extracellular to cell-associated LT were higher than that of the wild type and were characteristic for each strain. Cell lysis or leakage of macromolecular cytoplasmic constituents appeared to be significant for release of LT by mutants of types 1, 3, and 4, because supernatants from cultures of these mutants also contained increased amounts of protein and of the cytoplasmic enzyme glucose 6-phosphate dehydrogenase. In all four representative htx mutants, the hypertoxinogenic phenotypes were dependent on chromosomal mutations. The resident pCG86 plasmids were eliminated from the htx mutants of types 2 and 3. After wild-type plasmid pCG86 was introduced into the cured strains by conjugation, their hypertoxinogenic phenotypes were restored. We conclude that chromosomal loci in E. coli KL320 are important in regulating expression of the LT structural genes of plasmid pCG86. PMID:7019086

  20. PRECONDITIONED CONJUGATE-GRADIENT 2 (PCG2), a computer program for solving ground-water flow equations

    USGS Publications Warehouse

    Hill, Mary C.

    1990-01-01

    This report documents PCG2: a numerical code to be used with the U.S. Geological Survey modular three-dimensional, finite-difference, ground-water flow model. PCG2 uses the preconditioned conjugate-gradient method to solve the equations produced by the model for hydraulic head. Linear or nonlinear flow conditions may be simulated. PCG2 includes two preconditioning options: modified incomplete Cholesky preconditioning, which is efficient on scalar computers; and polynomial preconditioning, which requires less computer storage and, with modifications that depend on the computer used, is most efficient on vector computers. Convergence of the solver is determined using both head-change and residual criteria. Nonlinear problems are solved using Picard iterations. This documentation provides a description of the preconditioned conjugate-gradient method and the two preconditioners, detailed instructions for linking PCG2 to the modular model, sample data inputs, a brief description of PCG2, and a FORTRAN listing.
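
    The outer Picard loop and the dual convergence test can be pictured with a small hypothetical sketch: the direct linear solve below stands in for the inner PCG iteration, and convergence requires both the maximum head change and the maximum residual to drop below their closure criteria (hclose and rclose echo, but do not reproduce, the PCG2 input parameters).

        import numpy as np

        def assemble(h):
            # Hypothetical stand-in: build (A, b) for the current head estimate h.
            # In a nonlinear problem the coefficients depend on h.
            n = h.size
            A = (np.diag(2.0 + 0.1 * np.abs(h))
                 + np.diag(-np.ones(n - 1), 1)
                 + np.diag(-np.ones(n - 1), -1))
            b = np.ones(n)
            return A, b

        def picard_solve(n=20, hclose=1e-4, rclose=1e-4, max_outer=50):
            # Outer Picard iterations; stop only when BOTH criteria are satisfied.
            h = np.zeros(n)
            for _ in range(max_outer):
                A, b = assemble(h)
                h_new = np.linalg.solve(A, b)   # stands in for the inner PCG solve
                max_dh = np.max(np.abs(h_new - h))
                max_res = np.max(np.abs(b - A @ h_new))
                h = h_new
                if max_dh < hclose and max_res < rclose:
                    break
            return h

        print(picard_solve()[:5])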

  1. Iterative methods for the WLS state estimation on RISC, vector, and parallel computers

    SciTech Connect

    Nieplocha, J.; Carroll, C.C.

    1993-10-01

    We investigate the suitability and effectiveness of iterative methods for solving the weighted-least-square (WLS) state estimation problem on RISC, vector, and parallel processors. Several of the most popular iterative methods are tested and evaluated. The best-performing method, preconditioned conjugate gradient (PCG), is very well suited for vector and parallel processing, as is demonstrated for the WLS state estimation of the IEEE standard test systems. A new sparse matrix format for the gain matrix improves the vector performance of the PCG algorithm and makes it competitive with the direct solver. Internal parallelism in RISC processors, used in current multiprocessor systems, can be taken advantage of in an implementation of this algorithm.

  2. Parallelization of Four-Component Calculations. I. Integral Generation, SCF, and Four-Index Transformation in the Dirac-Fock Package MOLFDIR.

    SciTech Connect

    Pernpointner, M.; Visscher, Lucas; De Jong, Wibe A.; Broer, R.

    2000-10-01

    The treatment of relativity and electron correlation on an equal footing is essential for the computation of systems containing heavy elements. Correlation treatments that are based on four-component Dirac-Hartree-Fock calculations presently provide the most accurate, albeit costly, way of taking relativity into account. The requirement of having two expansion basis sets for the molecular wave function puts a high demand on computer resources. The treatment of larger systems is thereby often prohibited by the very large run times and files that arise in a conventional Dirac-Hartree-Fock approach. A possible solution for this bottleneck is a parallel approach that not only reduces the turnaround time but also spreads out the large files over a number of local disks. Here, we present a distributed-memory parallelization of the program package MOLFDIR for the integral generation, Dirac-Hartree-Fock and four-index MO transformation steps. This implementation scales best for large AO spaces and moderately sized active spaces.

  3. Infrared detection of exposed Carbon Dioxide ice on 67P/CG nucleus surface by Rosetta-VIRTIS

    NASA Astrophysics Data System (ADS)

    Filacchione, Gianrico; Raponi, Andrea; Capaccioni, Fabrizio; Barucci, Maria Antonietta; De Sanctis, Maria Cristina; Fornasier, Sonia; Ciarniello, Mauro; Migliorini, Alessandra; Erard, Stephane; Bockelee-Morvan, Dominique; Leyrat, Cedric; Tosi, Federico; Piccioni, Giuseppe; Palomba, Ernesto; Capria, Maria Teresa; Arnold, Gabriele; Schmitt, Bernard; Quirico, Eric; Taylor, Fred W.; Kappel, David

    2016-04-01

    In the period August 2014 - early May 2015 the heliocentric distance of the nucleus of 67P/CG decreased from 3.62 to 1.71 AU and the subsolar point moved towards the southern hemisphere. We investigated the IR spectra obtained by the Rosetta/VIRTIS instrument close to the newly illuminated regions, where colder conditions were present and consequently there were higher chances of observing ices of higher volatility than water. We report on the discovery of CO2 ice identified in a region of the nucleus that recently passed through the terminator. The quantitative abundance has been determined by means of spectral modeling of H2O-CO2 icy grains mixed with dark terrains, as done in Filacchione et al., Nature, 10.1038/nature16190. The CO2 ice has been identified in an area in Anhur with an abundance reaching up to 1.6%, mixed with dark terrain. It is interesting to note that CO2 ice has been observed only for a short transient period of time, possibly demonstrating the seasonal nature of the presence of CO2 at the surface. A parallel study on the water and carbon dioxide gaseous emissions in the coma above this volatile-rich area is reported by Migliorini et al., this conference.

  4. The global surface composition of 67P/CG nucleus by Rosetta/VIRTIS. (I) Prelanding mission phase

    NASA Astrophysics Data System (ADS)

    Filacchione, Gianrico; Capaccioni, Fabrizio; Ciarniello, Mauro; Raponi, Andrea; Tosi, Federico; De Sanctis, Maria Cristina; Erard, Stéphane; Morvan, Dominique Bockelée; Leyrat, Cedric; Arnold, Gabriele; Schmitt, Bernard; Quirico, Eric; Piccioni, Giuseppe; Migliorini, Alessandra; Capria, Maria Teresa; Palomba, Ernesto; Cerroni, Priscilla; Longobardo, Andrea; Barucci, Antonella; Fornasier, Sonia; Carlson, Robert W.; Jaumann, Ralf; Stephan, Katrin; Moroz, Lyuba V.; Kappel, David; Rousseau, Batiste; Fonti, Sergio; Mancarella, Francesca; Despan, Daniela; Faure, Mathilde

    2016-08-01

    The parallel coordinates method (Inselberg [1985] Vis. Comput., 1, 69-91) has been used to identify associations between average values of the spectral indicators and the properties of the geomorphological units as defined by (Thomas et al., [2015] Science, 347, 6220) and (El-Maarry et al., [2015] Astron. Astrophys., 583, A26). Three classes have been identified (smooth/active areas, dust-covered areas and depressions), which can be clustered on the basis of the 3.2 μm organic material's band depth, while consolidated terrains show a high variability of spectral properties and are distributed across all three classes. These results show that the spectral variability of the nucleus surface is more variegated than the morphological classes and that 67P/CG surface properties are dynamical, changing with the heliocentric distance and with activity processes.

  5. The internal density distribution of comet 67P/C-G based on 3D models

    NASA Astrophysics Data System (ADS)

    Jorda, Laurent; Hviid, Stubbe; Capanna, Claire; Gaskell, Robert; Gutierrez, Pedro; Preusker, Frank; Rodionov, Sergey; Scholten, Frank

    2016-04-01

    The OSIRIS camera aboard the Rosetta spacecraft has observed the nucleus of comet 67P/C-G from the mapping phase in summer 2014 until now. The images have allowed the three-dimensional reconstruction of the nucleus surface with stereophotogrammetry (Preusker et al., Astron. Astrophys.) and stereophotoclinometry (Jorda et al., submitted to Icarus) techniques. We use the reconstructed models to constrain the internal density distribution based on: (i) the measurement of the offset between the center of mass and the center of figure of the object, and (ii) the assumption that flat areas observed at the surface of the comet correspond to iso-gravity surfaces. The results of our analysis will be presented, and the consequences for the internal structure and formation of the nucleus of comet 67P/C-G will be discussed.

  6. BerkeleyGW: A massively parallel computer package for the calculation of the quasiparticle and optical properties of materials and nanostructures

    NASA Astrophysics Data System (ADS)

    Deslippe, Jack; Samsonidze, Georgy; Strubbe, David A.; Jain, Manish; Cohen, Marvin L.; Louie, Steven G.

    2012-06-01

    BerkeleyGW is a massively parallel computational package for electron excited-state properties that is based on the many-body perturbation theory employing the ab initio GW and GW plus Bethe-Salpeter equation methodology. It can be used in conjunction with many density-functional theory codes for ground-state properties, including PARATEC, PARSEC, Quantum ESPRESSO, SIESTA, and Octopus. The package can be used to compute the electronic and optical properties of a wide variety of material systems from bulk semiconductors and metals to nanostructured materials and molecules. The package scales to 10 000s of CPUs and can be used to study systems containing up to 100s of atoms. Program summary: Program title: BerkeleyGW. Catalogue identifier: AELG_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AELG_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Open source BSD License. See code for licensing details. No. of lines in distributed program, including test data, etc.: 576 540. No. of bytes in distributed program, including test data, etc.: 110 608 809. Distribution format: tar.gz. Programming language: Fortran 90, C, C++, Python, Perl, BASH. Computer: Linux/UNIX workstations or clusters. Operating system: Tested on a variety of Linux distributions in parallel and serial as well as AIX and Mac OSX. RAM: (50-2000) MB per CPU (highly dependent on system size). Classification: 7.2, 7.3, 16.2, 18. External routines: BLAS, LAPACK, FFTW, ScaLAPACK (optional), MPI (optional). All available under open-source licenses. Nature of problem: The excited-state properties of materials involve the addition or subtraction of electrons as well as the optical excitations of electron-hole pairs. The excited particles interact strongly with other electrons in a material system. This interaction affects the electronic energies, wavefunctions and lifetimes. It is well known that ground-state theories, such as standard methods

  7. PcG Proteins, DNA Methylation, and Gene Repression by Chromatin Looping

    PubMed Central

    Tiwari, Vijay K; McGarvey, Kelly M; Licchesi, Julien D.F; Ohm, Joyce E; Herman, James G; Schübeler, Dirk; Baylin, Stephen B

    2008-01-01

    Many DNA hypermethylated and epigenetically silenced genes in adult cancers are Polycomb group (PcG) marked in embryonic stem (ES) cells. We show that a large region upstream (∼30 kb) of and extending ∼60 kb around one such gene, GATA-4, is organized—in Tera-2 undifferentiated embryonic carcinoma (EC) cells—in a topologically complex multi-loop conformation that is formed by multiple internal long-range contact regions near areas enriched for EZH2, other PcG proteins, and the signature PcG histone mark, H3K27me3. Small interfering RNA (siRNA)–mediated depletion of EZH2 in undifferentiated Tera-2 cells leads to a significant reduction in the frequency of long-range associations at the GATA-4 locus, seemingly dependent on affecting the H3K27me3 enrichments around those chromatin regions, accompanied by a modest increase in GATA-4 transcription. The chromatin loops completely dissolve, accompanied by loss of PcG proteins and H3K27me3 marks, when Tera-2 cells receive differentiation signals which induce a ∼60-fold increase in GATA-4 expression. In colon cancer cells, however, the frequency of the long-range interactions are increased in a setting where GATA-4 has no basal transcription and the loops encompass multiple, abnormally DNA hypermethylated CpG islands, and the methyl-cytosine binding protein MBD2 is localized to these CpG islands, including ones near the gene promoter. Removing DNA methylation through genetic disruption of DNA methyltransferases (DKO cells) leads to loss of MBD2 occupancy and to a decrease in the frequency of long-range contacts, such that these now more resemble those in undifferentiated Tera-2 cells. Our findings reveal unexpected similarities in higher order chromatin conformation between stem/precursor cells and adult cancers. We also provide novel insight that PcG-occupied and H3K27me3-enriched regions can form chromatin loops and physically interact in cis around a single gene in mammalian cells. The loops associate with a

  8. Discrete wavelet-aided delineation of PCG signal events via analysis of an area curve length-based decision statistic.

    PubMed

    Homaeinezhad, M R; Atyabi, S A; Daneshvar, E; Ghaffari, A; Tahmasebi, M

    2010-12-01

    The aim of this study is to describe a robust unified framework for segmentation of the phonocardiogram (PCG) signal sounds based on the false-alarm-probability (FAP) bounded segmentation of a properly calculated detection measure. To this end, first the original PCG signal is appropriately pre-processed and then a fixed-sample-size sliding window is moved over the pre-processed signal. In each slide, the area under the excerpted segment is multiplied by its curve length to generate the Area Curve Length (ACL) metric, which is used as the segmentation decision statistic (DS). Afterwards, histogram parameters of the nonlinearly enhanced DS metric are used for regulation of the α-level Neyman-Pearson classifier for FAP-bounded delineation of the PCG events. The proposed method was applied to all 85 records of the Nursing Student Heart Sounds database (NSHSDB), including stenosis, insufficiency, regurgitation, gallop, septal defect, split sound, rumble, murmur, clicks, friction rub and snap disorders with different sampling frequencies. The method was also applied to records obtained from an electronic stethoscope board designed for this study, in the presence of high-level power-line noise and external disturbing sounds, and as a result no false positive (FP) or false negative (FN) errors were detected. High noise robustness, acceptable detection-segmentation accuracy of PCG events in various cardiac conditions, and no dependency of parameters on the acquisition sampling frequency can be mentioned as the principal virtues of the proposed ACL-based PCG event detection-segmentation algorithm. PMID:21181267
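
    The area-curve-length decision statistic described above can be prototyped in a few lines: for each position of a fixed-size sliding window, multiply the rectified area under the segment by its discrete curve length. The window length and the synthetic test signal below are arbitrary illustrative choices, not the parameters of the published method.

        import numpy as np

        def acl_statistic(x, win=64):
            # Area-curve-length decision statistic over a sliding window:
            #   area         = sum of |x| inside the window
            #   curve length = sum of |diff(x)| inside the window
            stat = np.zeros(len(x) - win)
            for i in range(len(stat)):
                seg = x[i:i + win]
                stat[i] = np.abs(seg).sum() * np.abs(np.diff(seg)).sum()
            return stat

        if __name__ == "__main__":
            t = np.linspace(0, 2, 4000)
            # Synthetic PCG-like trace: quiet baseline with two louder bursts
            x = 0.05 * np.random.randn(t.size)
            for c in (0.5, 1.2):
                x += np.exp(-((t - c) / 0.02) ** 2) * np.sin(2 * np.pi * 60 * t)
            ds = acl_statistic(x)
            print("decision statistic peaks near sample", int(np.argmax(ds)))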

  9. PCG: A prototype incremental compilation facility for the SAGA environment, appendix F

    NASA Technical Reports Server (NTRS)

    Kimball, Joseph John

    1985-01-01

    A programming environment supports the activity of developing and maintaining software. New environments provide language-oriented tools such as syntax-directed editors, whose usefulness is enhanced because they embody language-specific knowledge. When syntactic and semantic analysis occur early in the cycle of program production, that is, during editing, the use of a standard compiler is inefficient, for it must re-analyze the program before generating code. Likewise, it is inefficient to recompile an entire file, when the editor can determine that only portions of it need updating. The pcg, or Pascal code generation, facility described here generates code directly from the syntax trees produced by the SAGA syntax directed Pascal editor. By preserving the intermediate code used in the previous compilation, it can limit recompilation to the routines actually modified by editing.

  10. Growing protein crystals in microgravity - The NASA Microgravity Science and Applications Division (MSAD) Protein Crystal Growth (PCG) program

    NASA Technical Reports Server (NTRS)

    Herren, B.

    1992-01-01

    In collaboration with a medical researcher at the University of Alabama at Birmingham, NASA's Marshall Space Flight Center in Huntsville, Alabama, under the sponsorship of the Microgravity Science and Applications Division (MSAD) at NASA Headquarters, is continuing a series of space experiments in protein crystal growth which could lead to innovative new drugs as well as basic science data on protein molecular structures. From 1985 through 1992, Protein Crystal Growth (PCG) experiments will have been flown on the Space Shuttle a total of 14 times. The first four hand-held experiments were used to test hardware concepts; later flights incorporated these concepts for vapor diffusion protein crystal growth with temperature control. This article provides an overview of the PCG program: its evolution, objectives, and plans for future experiments on NASA's Space Shuttle and Space Station Freedom.

  11. Reptin and Pontin function antagonistically with PcG and TrxG complexes to mediate Hox gene control

    PubMed Central

    Diop, Soda Balla; Bertaux, Karine; Vasanthi, Dasari; Sarkeshik, Ali; Goirand, Benjamin; Aragnol, Denise; Tolwinski, Nicholas S; Cole, Michael D; Pradel, Jacques; Yates, John R; Mishra, Rakesh K; Graba, Yacine; Saurin, Andrew J

    2008-01-01

    Pontin (Pont) and Reptin (Rept) are paralogous ATPases that are evolutionarily conserved from yeast to human. They are recruited in multiprotein complexes that function in various aspects of DNA metabolism. They are essential for viability and have antagonistic roles in tissue growth, cell signalling and regulation of the tumour metastasis suppressor gene, KAI1, indicating that the balance of Pont and Rept regulates epigenetic programmes critical for development and cancer progression. Here, we describe Pont and Rept as antagonistic mediators of Drosophila Hox gene transcription, functioning with Polycomb group (PcG) and Trithorax group proteins to maintain correct patterns of expression. We show that Rept is a component of the PRC1 PcG complex, whereas Pont purifies with the Brahma complex. Furthermore, the enzymatic functions of Rept and Pont are indispensable for maintaining Hox gene expression states, highlighting the importance of these two antagonistic factors in transcriptional output. PMID:18259215

  12. Electronic Packaging Techniques

    NASA Technical Reports Server (NTRS)

    1979-01-01

    A characteristic of aerospace system design is that equipment size and weight must always be kept to a minimum, even in small components such as electronic packages. The dictates of spacecraft design have spawned a number of high-density packaging techniques, among them methods of connecting circuits in printed wiring boards by processes called stitchbond welding and parallel gap welding. These processes help designers compress more components into less space; they also afford weight savings and lower production costs.

  13. Jpetra Kernel Package

    SciTech Connect

    Heroux, Michael A.

    2004-03-01

    A package of classes for constructing and using distributed sparse and dense matrices, vectors and graphs, written in Java. Jpetra is intended to provide the foundation for basic matrix and vector operations for Java developers. Jpetra provides distributed memory operations via an abstract parallel machine interface. The most common implementation of this interface will be Java sockets.

  14. Monitoring Comet 67P/C-G Micrometer Dust Flux: GIADA onboard Rosetta.

    NASA Astrophysics Data System (ADS)

    Della Corte, Vincenzo; Rotundi, Alessandra; Ivanovski, Stavro; Accolla, Mario; Ferrari, Marco; Sordini, Roberto; Lucarelli, Francesca; Zakharov, Vladimir; Fulle, Marco; Mazzotta Epifani, Elena; López-Moreno, José J.; Rodríguez, Julio; Colangeli, Luigi; Palumbo, Pasquale; Bussoletti, Ezio; Crifo, Jean-Francois; Esposito, Francesca; Green, Simon F.; Grün, Eberhard; Lamy, Philippe L.

    2015-04-01

    The MicroBalance System (MBS) is one of the three measurement subsystems of GIADA, the Grain Impact Analyzer and Dust Accumulator on board the Rosetta/ESA spacecraft (S/C). It consists of five Quartz Crystal Microbalances (QCMs) in roughly orthogonal directions providing the cumulative dust flux of grains smaller than 10 microns. The MBS has been continuously monitoring comet 67P/CG since the beginning of May 2014. During the first 4 months of measurements, before the insertion of the S/C into the bound orbit phase, there was no evidence of dust accumulation on the QCMs. Starting from the beginning of October, three out of five QCMs measured an increase in deposited dust. The measured fluxes show, as expected, a strong anisotropy. In particular, the dust flux appears to be much higher from the Sun direction than from the comet direction. Acknowledgment: GIADA was built by a consortium led by the Univ. Napoli "Parthenope" & INAF-Oss. Astr. Capodimonte, in collaboration with the Inst. de Astrofisica de Andalucia, Selex-ES, FI and SENER. GIADA is presently managed & operated by Ist. di Astrofisica e Planetologia Spaziali-INAF, IT. GIADA was funded and managed by the Agenzia Spaziale Italiana, IT, with the support of the Spanish Ministry of Education and Science MEC, ES. GIADA was developed from a PI proposal from the University of Kent; sci. & tech. contributions were provided by CISAS, IT, Lab. d'Astr. Spat., FR, and Institutions from UK, IT, FR, DE and USA. We thank the RSGS/ESAC, RMOC/ESOC & Rosetta Project/ESTEC for their outstanding work. Science support was provided by NASA through the US Rosetta Project managed by the Jet Propulsion Laboratory/California Institute of Technology. GIADA calibrated data will be available through ESA's PSA web site (www.rssd.esa.int/index.php?project=PSA&page=index). We would like to thank Angioletta

  15. Enhanced growth of endothelial precursor cells on PCG-matrix facilitates accelerated, fibrosis-free, wound healing: a diabetic mouse model.

    PubMed

    Kanitkar, Meghana; Jaiswal, Amit; Deshpande, Rucha; Bellare, Jayesh; Kale, Vaijayanti P

    2013-01-01

    Diabetes mellitus (DM)-induced endothelial progenitor cell (EPC) dysfunction causes impaired wound healing, which can be rescued by delivery of large numbers of 'normal' EPCs onto such wounds. The principal challenges herein are (a) the high number of EPCs required and (b) their sustained delivery onto the wounds. Most of the currently available scaffolds either serve as passive devices for cellular delivery or allow adherence and proliferation, but not both. This clearly indicates that matrices possessing both attributes are 'the need of the day' for efficient healing of diabetic wounds. Therefore, we developed a system that not only allows selective enrichment and expansion of EPCs, but also efficiently delivers them onto the wounds. Murine bone marrow-derived mononuclear cells (MNCs) were seeded onto a PolyCaprolactone-Gelatin (PCG) nano-fiber matrix, which offers a combined advantage of strength, biocompatibility and wettability, and were cultured in EGM2 to allow EPC growth. The efficacy of the PCG matrix in supporting the EPC growth and delivery was assessed by various in vitro parameters. Its efficacy in diabetic wound healing was assessed by a topical application of the PCG-EPCs onto diabetic wounds. The PCG matrix promoted a high-level attachment of EPCs and enhanced their growth, colony formation, and proliferation without compromising their viability as compared to Poly L-lactic acid (PLLA) and Vitronectin (VN), the matrix and non-matrix controls respectively. The PCG-matrix also allowed a sustained chemotactic migration of EPCs in vitro. The matrix-effected sustained delivery of EPCs onto the diabetic wounds resulted in enhanced fibrosis-free wound healing compared with the controls. Our data, thus, highlight the novel therapeutic potential of PCG-EPCs as a combined 'growth and delivery system' to achieve an accelerated fibrosis-free healing of dermal lesions, including diabetic wounds. PMID:23922871

  16. The impact of Polycomb group (PcG) and Trithorax group (TrxG) epigenetic factors in plant plasticity.

    PubMed

    de la Paz Sanchez, Maria; Aceves-García, Pamela; Petrone, Emilio; Steckenborn, Stefan; Vega-León, Rosario; Álvarez-Buylla, Elena R; Garay-Arroyo, Adriana; García-Ponce, Berenice

    2015-11-01

    Current advances indicate that epigenetic mechanisms play important roles in the regulatory networks involved in plant developmental responses to environmental conditions. Hence, understanding the role of such components becomes crucial to understanding the mechanisms underlying the plasticity and variability of plant traits, and thus the ecology and evolution of plant development. We now know that important components of phenotypic variation may result from heritable and reversible epigenetic mechanisms without genetic alterations. The epigenetic factors Polycomb group (PcG) and Trithorax group (TrxG) are involved in developmental processes that respond to environmental signals, playing important roles in plant plasticity. In this review, we discuss current knowledge of TrxG and PcG functions in different developmental processes in response to internal and environmental cues and we also integrate the emerging evidence concerning their function in plant plasticity. Many such plastic responses rely on meristematic cell behavior, including stem cell niche maintenance, cellular reprogramming, flowering and dormancy as well as stress memory. This information will help to determine how to integrate the role of epigenetic regulation into models of gene regulatory networks, which have mostly included transcriptional interactions underlying various aspects of plant development and its plastic response to environmental conditions. PMID:26037337

  17. Block-bordered diagonalization and parallel iterative solvers

    SciTech Connect

    Alvarado, F.; Dag, H.; Bruggencate, M. ten

    1994-12-31

    One of the most common techniques for enhancing parallelism in direct sparse matrix methods is the reorganization of a matrix into a blocked-bordered structure. Incomplete LDU factorization is a very good preconditioner for PCG in serial environments. However, the inherent sequential nature of the preconditioning step makes it less desirable in parallel environments. This paper explores the use of BBD (Blocked Bordered Diagonalization) in connection with ILU preconditioners. The paper shows that BBD-based ILU preconditioners are quite amenable to parallel processing. Neglecting entries from the entire border can result in a blocked diagonal matrix. The result is a great increase in parallelism at the expense of additional iterations. Experiments on the Sequent Symmetry shared memory machine using (mostly) power system matrices indicate that the method is generally better than conventional ILU preconditioners and in many cases even better than partitioned inverse preconditioners, without the initial setup disadvantages of partitioned inverse preconditioners.

  18. GIADA on-board Rosetta: comet 67P/C-G dust coma characterization

    NASA Astrophysics Data System (ADS)

    Rotundi, Alessandra; Della Corte, Vincenzo; Fulle, Marco; Sordini, Roberto; Ivanovski, Stavro; Accolla, Mario; Ferrari, Marco; Lucarelli, Francesca; Zakharov, Vladimir; Mazzotta Epifani, Elena; López-Moreno, José J.; Rodríguez, Julio; Colangeli, Luigi; Palumbo, Pasquale; Bussoletti, Ezio; Crifo, Jean-Francois; Esposito, Francesca; Green, Simon F.; Grün, Eberhard; Lamy, Philippe L.

    2015-04-01

    GIADA consists of three subsystems: 1) the Grain Detection System (GDS) to detect dust grains as they pass through a laser curtain, 2) the Impact Sensor (IS) to measure grain momentum derived from the impact on a plate connected to five piezoelectric sensors, and 3) the MicroBalances System (MBS), five quartz crystal microbalances in roughly orthogonal directions providing the cumulative dust flux of grains smaller than 10 microns. The GDS provides data on grain speed and optical cross section. The IS grain momentum measurement, when combined with the GDS detection time, provides a direct measurement of grain speed and mass. These combined measurements characterize single-grain dust dynamics in the coma of 67P/CG. No prior in situ dust dynamical measurements at these close distances from the nucleus, and starting from such high heliocentric distances, have been available to date. We present here the results obtained by GIADA, which began operating in continuous mode on 18 July 2014, when the comet was at a heliocentric distance of 3.7 AU. The first grain detection occurred when the spacecraft was 814 km from the nucleus on 1 August 2014. From August 1st up to December 11th, GIADA detected more than 800 grains, for which the 3D spatial distribution was determined. About 700 out of 800 are GDS-only detections: "dust clouds", i.e. slow dust grains (≈ 0.5 m/s) crossing the laser curtain very close in time (e.g. 129 grains in 11 s), probably fluffy grains. IS-only detections number about 70, i.e. ≈ 1/10 of the GDS-only detections. This ratio is quite different from what we got for the early detections (August - September), when the ratio was ≈ 3, suggesting the presence of different types of particle (bigger, brighter, less dense). The combined GDS+IS detections, i.e. measured by both the GDS and IS detectors, are about 70 and allowed us to extract the

  19. Epigenetic chromatin modifiers in barley: IV. The study of barley Polycomb group (PcG) genes during seed development and in response to external ABA

    PubMed Central

    2010-01-01

    Background Epigenetic phenomena have been associated with the regulation of active and silent chromatin states achieved by modifications of chromatin structure through DNA methylation, and histone post-translational modifications. The latter is accomplished, in part, through the action of PcG (Polycomb group) protein complexes which methylate nucleosomal histone tails at specific sites, ultimately leading to chromatin compaction and gene silencing. Different PcG complex variants operating during different developmental stages have been described in plants. In particular, the so-called FIE/MEA/FIS2 complex governs the expression of genes important in embryo and endosperm development in Arabidopsis. In our effort to understand the epigenetic mechanisms regulating seed development in barley (Hordeum vulgare), an agronomically important monocot plant cultivated for its endosperm, we set out to characterize the genes encoding barley PcG proteins. Results Four barley PcG gene homologues, named HvFIE, HvE(Z), HvSu(z)12a, and HvSu(z)12b were identified and structurally and phylogenetically characterized. The corresponding genes HvFIE, HvE(Z), HvSu(z)12a, and HvSu(z)12b were mapped onto barley chromosomes 7H, 4H, 2H and 5H, respectively. Expression analysis of the PcG genes revealed significant differences in gene expression among tissues and seed developmental stages and between barley cultivars with varying seed size. Furthermore, HvFIE and HvE(Z) gene expression was responsive to the abiotic stress-related hormone abscisic acid (ABA) known to be involved in seed maturation, dormancy and germination. Conclusion This study reports the first characterization of the PcG homologues, HvFIE, HvE(Z), HvSu(z)12a and HvSu(z)12b in barley. All genes co-localized with known chromosomal regions responsible for malting quality related traits, suggesting that they might be used for developing molecular markers to be applied in marker assisted selection. The PcG differential expression

  20. Scoring Package

    National Institute of Standards and Technology Data Gateway

    NIST Scoring Package (PC database for purchase)   The NIST Scoring Package (Special Database 1) is a reference implementation of the draft Standard Method for Evaluating the Performance of Systems Intended to Recognize Hand-printed Characters from Image Data Scanned from Forms.

  1. Parallel time integration software

    SciTech Connect

    2014-07-01

    This package implements an optimal-scaling multigrid solver for the (non)linear systems that arise from the discretization of problems with evolutionary behavior. Typically, solution algorithms for evolution equations are based on a time-marching approach, solving sequentially for one time step after the other. Parallelism in these traditional time-integration techniques is limited to spatial parallelism. However, current trends in computer architectures are leading towards systems with more, but not faster, processors. Therefore, faster compute speeds must come from greater parallelism. One approach to achieving parallelism in time is with multigrid, but extending classical multigrid methods for elliptic operators to this setting is a significant achievement. In this software, we implement a non-intrusive, optimal-scaling time-parallel method based on multigrid reduction techniques. The examples in the package demonstrate optimality of our multigrid-reduction-in-time algorithm (MGRIT) for solving a variety of parabolic equations in two and three spatial dimensions. These examples can also be used to show that MGRIT can achieve significant speedup in comparison to sequential time marching on modern architectures.
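
    The parallel-in-time idea can be illustrated with a two-level, parareal-style iteration for a scalar model problem; MGRIT generalizes this scheme to multiple levels, and the sketch below is only a hedged illustration of the concept, not the package itself. The ODE, step counts and propagators are assumptions chosen for brevity.

      import numpy as np

      # Model problem: u' = lam * u on [0, T], u(0) = 1.
      lam, u0, T = -1.0, 1.0, 2.0
      N = 10            # time slices (conceptually one per processor)
      m = 20            # fine substeps per slice
      dT = T / N
      dt = dT / m

      def coarse(u):    # one backward-Euler step across a slice
          return u / (1.0 - lam * dT)

      def fine(u):      # m backward-Euler substeps across a slice
          for _ in range(m):
              u = u / (1.0 - lam * dt)
          return u

      # Cheap sequential initial guess from the coarse propagator alone.
      U = np.empty(N + 1)
      U[0] = u0
      for n in range(N):
          U[n + 1] = coarse(U[n])

      # Parareal iterations: the fine solves over the slices are independent,
      # which is exactly the work that would be distributed across processors.
      for k in range(5):
          F = np.array([fine(U[n]) for n in range(N)])       # parallelisable loop
          G_old = np.array([coarse(U[n]) for n in range(N)])
          U_new = np.empty_like(U)
          U_new[0] = u0
          for n in range(N):                                 # cheap sequential correction
              U_new[n + 1] = coarse(U_new[n]) + F[n] - G_old[n]
          U = U_new

      print(f"u(T) after parareal: {U[-1]:.6f}   exact: {u0 * np.exp(lam * T):.6f}")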

  2. Parallel time integration software

    Energy Science and Technology Software Center (ESTSC)

    2014-07-01

    This package implements an optimal-scaling multigrid solver for the (non)linear systems that arise from the discretization of problems with evolutionary behavior. Typically, solution algorithms for evolution equations are based on a time-marching approach, solving sequentially for one time step after the other. Parallelism in these traditional time-integration techniques is limited to spatial parallelism. However, current trends in computer architectures are leading towards systems with more, but not faster, processors. Therefore, faster compute speeds must come from greater parallelism. One approach to achieving parallelism in time is with multigrid, but extending classical multigrid methods for elliptic operators to this setting is a significant achievement. In this software, we implement a non-intrusive, optimal-scaling time-parallel method based on multigrid reduction techniques. The examples in the package demonstrate optimality of our multigrid-reduction-in-time algorithm (MGRIT) for solving a variety of parabolic equations in two and three spatial dimensions. These examples can also be used to show that MGRIT can achieve significant speedup in comparison to sequential time marching on modern architectures.

  3. Parallel Adaptive Mesh Refinement Library

    NASA Technical Reports Server (NTRS)

    Mac-Neice, Peter; Olson, Kevin

    2005-01-01

    Parallel Adaptive Mesh Refinement Library (PARAMESH) is a package of Fortran 90 subroutines designed to provide a computer programmer with an easy route to extension of (1) a previously written serial code that uses a logically Cartesian structured mesh into (2) a parallel code with adaptive mesh refinement (AMR). Alternatively, in its simplest use, and with minimal effort, PARAMESH can operate as a domain-decomposition tool for users who want to parallelize their serial codes but who do not wish to utilize adaptivity. The package builds a hierarchy of sub-grids to cover the computational domain of a given application program, with spatial resolution varying to satisfy the demands of the application. The sub-grid blocks form the nodes of a tree data structure (a quad-tree in two or an oct-tree in three dimensions). Each grid block has a logically Cartesian mesh. The package supports one-, two- and three-dimensional models.
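
    The block-structured tree described above is easy to picture with a toy quad-tree: each node covers a rectangular patch that would carry its own small logically Cartesian mesh, and a block is split into four children wherever a refinement test asks for more resolution. The block class, the refinement criterion (refine near a circle) and the depth limit below are assumptions for illustration and do not reflect PARAMESH's actual interface.

      class Block:
          def __init__(self, x0, y0, size, level):
              self.x0, self.y0, self.size, self.level = x0, y0, size, level
              self.children = []          # empty list -> leaf block

          def refine(self, needs_refinement, max_level):
              if self.level < max_level and needs_refinement(self):
                  half = self.size / 2
                  self.children = [
                      Block(self.x0 + i * half, self.y0 + j * half, half, self.level + 1)
                      for i in (0, 1) for j in (0, 1)
                  ]
                  for child in self.children:
                      child.refine(needs_refinement, max_level)

          def leaves(self):
              if not self.children:
                  return [self]
              return [leaf for c in self.children for leaf in c.leaves()]

      def near_circle(block, radius=0.6):
          # Refine blocks whose patch lies close to a circle of the given radius.
          cx, cy = block.x0 + block.size / 2, block.y0 + block.size / 2
          return abs((cx ** 2 + cy ** 2) ** 0.5 - radius) < block.size

      root = Block(0.0, 0.0, 1.0, level=0)
      root.refine(near_circle, max_level=4)
      print(len(root.leaves()), "leaf blocks after refinement")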

  4. Monitoring 67P/C-G coma dust environment from 3.6 AU in-bound to the Sun to 2 AU out-bound

    NASA Astrophysics Data System (ADS)

    Della Corte, Vincenzo; Rotundi, Alessandra; Fulle, Marco

    2016-04-01

    GIADA, on board the Rosetta/ESA space mission, is an instrument devoted to monitoring the dynamical and physical properties of the dust particles emitted by comet 67P/Churyumov-Gerasimenko (hereafter 67P/C-G) along its orbit, from 3.6 AU in-bound to the Sun to 2 AU out-bound. GIADA has been fully operative since 17 July 2014 and has measured the speed and mass of individual dust particles. GIADA's capability of detecting dust particles with high time resolution, together with the accurate characterization of the physical properties of each detected particle, allowed the identification of two different families of dust particles emitted by the 67P/C-G nucleus: compact particles with densities varying from about 100 kg/m3 to 3000 kg/m3, and fluffy particles with densities down to 1 kg/m3. GIADA's continuous monitoring of the coma dust environment of comet 67P/C-G along its orbit, accounting for the different observation geometries along the Rosetta trajectory, enabled us to: 1) investigate how the dust fluxes for each particle family evolve with heliocentric distance; 2) identify the nucleus/coma regions with high dust emission/density; 3) observe the changes that these regions undergo along the comet orbit; 4) measure and monitor the dust production rate; and 5) evaluate the 67P/C-G dust-to-gas ratio by coupling GIADA measurements with the results of the Rosetta instruments devoted to gas measurements (MIRO and ROSINA).

  5. Packaging Materials

    NASA Astrophysics Data System (ADS)

    Frear, Darrel

    This chapter is a high-level overview of the materials used in an electronic package including: metals used as conductors in the package, ceramics and glasses used as dielectrics or insulators and polymers used as insulators and, in a composite form, as conductors. There is a need for new materials to meet the ever-changing requirements for high-speed digital and radio-frequency (RF) applications. There are different requirements for digital and RF packages that translate into the need for unique materials for each application. The interconnect and dielectric (insulating) requirements are presented for each application and the relevant materials properties and characteristics are discussed. The fundamental materials characteristics are: dielectric constant, dielectric loss, thermal and electric conductivity, resistivity, moisture absorption, glass-transition temperature, strength, time-dependent deformation (creep), and fracture toughness. The materials characteristics and properties are dependent on how they are processed to form the electronic package, so the fundamentals of electronic packaging processes are discussed, including wirebonding, solder interconnects, flip-chip interconnects, underfill for flip chip and overmolding. The relevant materials properties are given along with requirements (including environmentally friendly Pb-free packages) that require new materials to be developed to meet future electronics needs for both digital and RF applications.

  6. Application of Russian Thermo-Electric Devices (TEDS) for the US Microgravity Program Protein Crystal Growth (PCG) Project

    NASA Technical Reports Server (NTRS)

    Aksamentov, Valery

    1996-01-01

    Changes in the former Soviet Union have opened the gate for the exchange of new technology. Interest in this work has been particularly related to Thermal Electric Cooling Devices (TEDs), which have an application in the Thermal Enclosure System (TES) developed by NASA. Preliminary information received by NASA/MSFC indicates that Russian TEDs have higher efficiency. Based on that assumption, NASA/MSFC awarded a contract to the University of Alabama in Huntsville (UAH) to study Russian TED technology. To fulfill this, a few steps are required: (1) potential specifications and configurations should be defined for the use of TEDs in Protein Crystal Growing (PCG) thermal control hardware; and (2) work closely with the identified Russian source to define and identify potential Russian TEDs that exceed the performance of available domestic TEDs. Based on the data from Russia, it is possible to plan further steps such as buying and testing high-performance TEDs. To accomplish this goal, two subcontracts have been released: one to Automated Sciences Group (ASG), located in Huntsville, AL, and one to the International Center for Advanced Studies 'Cosmos', located in Moscow, Russia.

  7. Parallelizing Monte Carlo with PMC

    SciTech Connect

    Rathkopf, J.A.; Jones, T.R.; Nessett, D.M.; Stanberry, L.C.

    1994-11-01

    PMC (Parallel Monte Carlo) is a system of generic interface routines that allows easy porting of Monte Carlo packages of large-scale physics simulation codes to Massively Parallel Processor (MPP) computers. By loading various versions of PMC, simulation code developers can configure their codes to run in several modes: serial, Monte Carlo runs on the same processor as the rest of the code; parallel, Monte Carlo runs in parallel across many processors of the MPP with the rest of the code running on other MPP processor(s); distributed, Monte Carlo runs in parallel across many processors of the MPP with the rest of the code running on a different machine. This multi-mode approach allows maintenance of a single simulation code source regardless of the target machine. PMC handles passing of messages between nodes on the MPP, passing of messages between a different machine and the MPP, distributing work between nodes, and providing independent, reproducible sequences of random numbers. Several production codes have been parallelized under the PMC system. Excellent parallel efficiency in both the distributed and parallel modes results if sufficient workload is available per processor. Experiences with a Monte Carlo photonics demonstration code and a Monte Carlo neutronics package are described.
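
    Two of the ingredients mentioned above, distributing Monte Carlo work across processors and giving each worker an independent, reproducible random stream, can be sketched with the standard library and NumPy. This is only a hedged illustration of the idea; the worker count, the master seed and the toy pi estimate are assumptions, and none of PMC's actual interface routines are reproduced.

      import numpy as np
      from multiprocessing import Pool

      def worker(args):
          """One worker's share of a toy Monte Carlo integration (estimating pi)."""
          seed, n_samples = args
          rng = np.random.default_rng(seed)       # independent, reproducible stream
          pts = rng.random((n_samples, 2))
          return int(np.count_nonzero((pts ** 2).sum(axis=1) <= 1.0))

      if __name__ == "__main__":
          n_workers, n_total = 4, 400_000
          # SeedSequence.spawn yields statistically independent child streams, so
          # a rerun with the same master seed reproduces the same answer.
          seeds = np.random.SeedSequence(12345).spawn(n_workers)
          shares = [n_total // n_workers] * n_workers
          with Pool(n_workers) as pool:
              hits = pool.map(worker, list(zip(seeds, shares)))
          print("pi estimate:", 4.0 * sum(hits) / n_total)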

  8. The Kull IMC package

    SciTech Connect

    Gentile, N A; Keen,N; Rathkopf, J

    1998-10-01

    We describe the Kull IMC package, an Implicit Monte Carlo program written for use in A and X division radiation hydro codes. The Kull IMC has been extensively tested. Written in C++ and using genericity via the template feature to allow easy integration into different codes, the Kull IMC currently runs coupled radiation hydrodynamic problems in 2 different 3D codes. A stand-alone version also exists, which has been parallelized with mesh replication. This version has been run on up to 384 processors on ASCI Blue Pacific.

  9. Parallel rendering

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas W.

    1995-01-01

    This article provides a broad introduction to the subject of parallel rendering, encompassing both hardware and software systems. The focus is on the underlying concepts and the issues which arise in the design of parallel rendering algorithms and systems. We examine the different types of parallelism and how they can be applied in rendering applications. Concepts from parallel computing, such as data decomposition, task granularity, scalability, and load balancing, are considered in relation to the rendering problem. We also explore concepts from computer graphics, such as coherence and projection, which have a significant impact on the structure of parallel rendering algorithms. Our survey covers a number of practical considerations as well, including the choice of architectural platform, communication and memory requirements, and the problem of image assembly and display. We illustrate the discussion with numerous examples from the parallel rendering literature, representing most of the principal rendering methods currently used in computer graphics.

  10. Global and Spatially Resolved Photometric Properties of the Nucleus of Comet 67P/C-G from OSIRIS Images

    NASA Astrophysics Data System (ADS)

    Lamy, P.

    2014-04-01

    Following the successful wake-up of the ROSETTA spacecraft on 20 January 2014, the OSIRIS imaging system was fully re-commissioned at the end of March 2014, confirming its initial excellent performance. The OSIRIS instrument includes two cameras: the Narrow Angle Camera (NAC) and the Wide Angle Camera (WAC), with respective fields of view of 2.2° and 12°, both equipped with 2K by 2K CCD detectors and dual filter wheels. The NAC filters allow a spectral coverage of 270 to 990 nm tailored to the investigation of the mineralogical composition of the nucleus of comet P/Churyumov-Gerasimenko, whereas those of the WAC (245-632 nm) aim at characterizing its coma [1]. The NAC has already secured a set of four complete light curves of the nucleus of 67P/C-G between 3 March and 24 April 2014, with the primary purpose of characterizing its rotational state. A preliminary spin period of 12.4 hours has been obtained, similar to its very first determination from a light curve obtained in 2003 with the Hubble Space Telescope [2]. The NAC and WAC will be recalibrated in the forthcoming weeks using the same stellar calibrators, VEGA and the solar analog 16 Cyg B, as for past in-flight calibration campaigns in support of the flybys of asteroids Steins and Lutetia. This will allow comparing the pre- and post-hibernation performance of the cameras and correcting the quantum efficiency response of the two CCDs and the throughput for all channels (i.e., filters) if required. Accurate photometric analysis of the images requires utmost care due to several instrumental problems, the most severe and complex to handle being the presence of optical ghosts, which result from multiple reflections on the two filters inserted in the optical beam and on the thick window which protects the CCD detector from cosmic ray impacts. These ghosts prominently appear as either slightly defocused images offset from the primary images or large round or elliptical halos. We will first present results on the global

  11. Rosetta/VIRTIS-M spectral data: Comet 67P/CG compared to other primitive small bodies.

    NASA Astrophysics Data System (ADS)

    De Sanctis, M. C.; Capaccioni, F.; Filacchione, G.; Erard, S.; Tosi, F.; Ciarniello, M.; Raponi, A.; Piccioni, G.; Leyrat, C.; Bockelée-Morvan, D.; Drossart, P.; Fornasier, S.

    2014-12-01

    VIRTIS-M, the Visible InfraRed Thermal Imaging Spectrometer, onboard the Rosetta Mission orbiter (Coradini et al., 2007) acquired data of the comet 67P/Churyumov-Gerasimenko in the 0.25-5.1 µm spectral range. The initial data, obtained during the first mission phases to the comet, allow us to derive albedo and global spectral properties of the comet nucleus as well as spectra of different areas on the nucleus. The characterization of cometary nuclei surfaces and their comparison with those of related populations such as extinct comet candidates, Centaurs, near-Earth asteroids (NEAs), trans-Neptunian objects (TNOs), and primitive asteroids is critical to understanding the origin and evolution of small solar system bodies. The acquired VIRTIS data are used to compare the global spectral properties of comet 67P/CG to published spectra of other cometary nuclei observed from the ground or visited by space missions. Moreover, the spectra of 67P/Churyumov-Gerasimenko are also compared to those of primitive asteroids and centaurs. The comparison can give us clues on the possible common formation and evolutionary environment for primitive asteroids, centaurs and Jupiter-family comets. The authors acknowledge funding from the Italian and French Space Agencies. References: Coradini, A., Capaccioni, F., Drossart, P., Arnold, G., Ammannito, E., Angrilli, F., Barucci, A., Bellucci, G., Benkhoff, J., Bianchini, G., Bibring, J. P., Blecka, M., Bockelee-Morvan, D., Capria, M. T., Carlson, R., Carsenty, U., Cerroni, P., Colangeli, L., Combes, M., Combi, M., Crovisier, J., De Sanctis, M. C., Encrenaz, E. T., Erard, S., Federico, C., Filacchione, G., Fink, U., Fonti, S., Formisano, V., Ip, W. H., Jaumann, R., Kuehrt, E., Langevin, Y., Magni, G., McCord, T., Mennella, V., Mottola, S., Neukum, G., Palumbo, P., Piccioni, G., Rauer, H., Saggin, B., Schmitt, B., Tiphene, D., Tozzi, G., Space Science Reviews, Volume 128, Issue 1-4, 529-559, 2007.

  12. CH Packaging Operations Manual

    SciTech Connect

    None, None

    2009-05-27

    This document provides the user with instructions for assembling a payload. All the steps in Subsections 1.2, Preparing 55-Gallon Drum Payload Assembly; 1.3, Preparing "Short" 85-Gallon Drum Payload Assembly (TRUPACT-II and HalfPACT); 1.4, Preparing "Tall" 85-Gallon Drum Payload Assembly (HalfPACT only); 1.5, Preparing 100-Gallon Drum Payload Assembly; 1.6, Preparing Shielded Container Payload Assembly; 1.7, Preparing SWB Payload Assembly; and 1.8, Preparing TDOP Payload Assembly, must be completed, but may be performed in any order as long as radiological control steps are not bypassed. Transport trailer operations, package loading and unloading from transport trailers, hoisting and rigging activities such as ACGLF operations, equipment checkout and shutdown, and component inspection activities must be performed, but may be performed in any order and in parallel with other activities as long as radiological control steps are not bypassed. Steps involving OCA/ICV lid removal/installation and payload removal/loading may be performed in parallel if there are multiple operators working on the same packaging. Steps involving removal/installation of OCV/ICV upper and lower main O-rings must be performed in sequence, except as noted.

  13. CH Packaging Operations Manual

    SciTech Connect

    None, None

    2008-09-11

    This document provides the user with instructions for assembling a payload. All the steps in Subsections 1.2, Preparing 55-Gallon Drum Payload Assembly; 1.3, Preparing "Short" 85-Gallon Drum Payload Assembly (TRUPACT-II and HalfPACT); 1.4, Preparing "Tall" 85-gallon Drum Payload Assembly (HalfPACT only); 1.5, Preparing 100-Gallon Drum Payload Assembly; 1.6, Preparing SWB Payload Assembly; and 1.7, Preparing TDOP Payload Assembly, must be completed, but may be performed in any order as long as radiological control steps are not bypassed. Transport trailer operations, package loading and unloading from transport trailers, hoisting and rigging activities such as ACGLF operations, equipment checkout and shutdown, and component inspection activities must be performed, but may be performed in any order and in parallel with other activities as long as radiological control steps are not bypassed. Steps involving OCA/ICV lid removal/installation and payload removal/loading may be performed in parallel if there are multiple operators working on the same packaging. Steps involving removal/installation of OCV/ICV upper and lower main O-rings must be performed in sequence.

  14. Packaged Food

    NASA Technical Reports Server (NTRS)

    1976-01-01

    After studies found that many elderly persons don't eat adequately because they can't afford to, they have limited mobility, or they just don't bother, Innovated Foods, Inc. and JSC developed shelf-stable foods processed and packaged for home preparation with minimum effort. Various food-processing techniques and delivery systems are under study and freeze dried foods originally used for space flight are being marketed. (See 77N76140)

  15. EPIC: E-field Parallel Imaging Correlator

    NASA Astrophysics Data System (ADS)

    Thyagarajan, Nithyanandan; Beardsley, Adam P.; Bowman, Judd D.; Morales, Miguel F.

    2015-11-01

    E-field Parallel Imaging Correlator (EPIC), a highly parallelized Object Oriented Python package, implements the Modular Optimal Frequency Fourier (MOFF) imaging technique. It also includes visibility-based imaging using the software holography technique and a simulator for generating electric fields from a sky model. EPIC can accept dual-polarization inputs and produce images of all four instrumental cross-polarizations.

  16. Massively parallel visualization: Parallel rendering

    SciTech Connect

    Hansen, C.D.; Krogh, M.; White, W.

    1995-12-01

    This paper presents rendering algorithms, developed for massively parallel processors (MPPs), for polygonal, sphere, and volumetric data. The polygon algorithm uses a data-parallel approach, whereas the sphere and volume renderers use a MIMD approach. Implementations of these algorithms are presented for the Thinking Machines Corporation CM-5 MPP.

  17. Increased expression of PcG protein YY1 negatively regulates B cell development while allowing accumulation of myeloid cells and LT-HSC cells.

    PubMed

    Pan, Xuan; Jones, Morgan; Jiang, Jie; Zaprazna, Kristina; Yu, Duonan; Pear, Warren; Maillard, Ivan; Atchison, Michael L

    2012-01-01

    Yin Yang 1 (YY1) is a multifunctional Polycomb Group (PcG) transcription factor that binds to multiple enhancer binding sites in the immunoglobulin (Ig) loci and plays vital roles in early B cell development. PcG proteins have important functions in hematopoietic stem cell renewal and YY1 is the only mammalian PcG protein with DNA binding specificity. Conditional knock-out of YY1 in the mouse B cell lineage results in arrest at the pro-B cell stage, and dosage effects have been observed at various YY1 expression levels. To investigate the impact of elevated YY1 expression on hematopoietic development, we utilized a mouse in vivo bone marrow reconstitution system. We found that mouse bone marrow cells expressing elevated levels of YY1 exhibited a selective disadvantage as they progressed from hematopoietic stem/progenitor cells to pro-B, pre-B, immature B and re-circulating B cell stages, but no disadvantage of YY1 over-expression was observed in myeloid lineage cells. Furthermore, mouse bone marrow cells expressing elevated levels of YY1 displayed enrichment for cells with surface markers characteristic of long-term hematopoietic stem cells (HSC). YY1 expression induced apoptosis in mouse B cell lines in vitro, and resulted in down-regulated expression of anti-apoptotic genes Bcl-xl and NFκB2, while no impact was observed in a mouse myeloid line. B cell apoptosis and LT-HSC enrichment induced by YY1 suggest that novel strategies to induce YY1 expression could have beneficial effects in the treatment of B lineage malignancies while preserving normal HSCs. PMID:22292011

  18. Parallel machines: Parallel machine languages

    SciTech Connect

    Iannucci, R.A. )

    1990-01-01

    This book presents a framework for understanding the tradeoffs between the conventional view and the dataflow view, with the objective of discovering the critical hardware structures which must be present in any scalable, general-purpose parallel computer to effectively tolerate latency and synchronization costs. The author presents an approach to scalable, general-purpose parallel computation. Linguistic concerns, compiling issues, intermediate language issues, and hardware/technological constraints are presented as a combined approach to architectural development. The book also presents the notion of a parallel machine language.

  19. Reflective Packaging

    NASA Technical Reports Server (NTRS)

    1994-01-01

    The aluminized polymer film used in spacecraft as a radiation barrier to protect both astronauts and delicate instruments has led to a number of spinoff applications. Among them are aluminized shipping bags, food cart covers and medical bags. Radiant Technologies purchases component materials and assembles a barrier made of layers of aluminized foil. The packaging reflects outside heat away from the product inside the container. The company is developing new aluminized lines, express mailers, large shipping bags, gel packs and insulated panels for the building industry.

  20. Search for regional variations of thermal and electrical properties of comet 67P/CG probed by MIRO/Rosetta

    NASA Astrophysics Data System (ADS)

    Leyrat, Cedric; Blain, Doriann; Lellouch, Emmanuel; von Allmen, Paul; Keihm, Stephen; Choukroun, Matthieu; Schloerb, Pete; Biver, Nicolas; Gulkis, Samuel; Hofstadter, Mark

    2015-11-01

    Since June 2014, MIRO (the Microwave Instrument for Rosetta Orbiter) on board the Rosetta (ESA) spacecraft has observed comet 67P-CG along its heliocentric orbit from 3.25 AU to 1.24 AU. MIRO operates at millimeter and submillimeter wavelengths, respectively at 190 GHz (1.56 mm) and 562 GHz (0.5 mm). While the submillimeter channel is coupled to a Chirp Transform Spectrometer (CTS) for spectroscopic analysis of the coma, both bands provide a broad-band continuum channel for sensing the thermal emission of the nucleus itself. Continuum measurements of the nucleus probe the subsurface thermal emission from two different depths. The first analysis (Schloerb et al., 2015) of data obtained essentially in the Northern hemisphere has revealed large temperature variations with latitude, as well as distinct diurnal curves, most prominent in the 0.5 mm channel, indicating that the electric penetration depth for this channel is comparable to the diurnal thermal skin depth. Initial modelling of these data has indicated a low surface thermal inertia, in the range 10-30 J K-1 m-2 s-1/2, and probed depths of order 1-4 cm. We here investigate potential spatial variations of thermal and electrical properties by separately analysing the geomorphological regions described by Thomas et al. (2015). For each region, we select measurements corresponding to those areas, obtained at different local times and effective latitudes. We model the thermal profiles with depth and the outgoing mm and submm radiation for different values of the thermal inertia and of the ratio of the electrical to the thermal skin depth. We will present the best estimates of thermal inertia and electric/thermal depth ratios for each region selected. Additional information on subsurface temperature gradients may be inferred by using observations at varying emergence angles. The thermal emission from southern regions has been analysed by Choukroun et al. (2015) during the polar night. Now that the comet has reached

  1. GIADA On-Board Rosetta: Early Dust Grain Detections and Dust Coma Characterization of Comet 67P/C-G

    NASA Astrophysics Data System (ADS)

    Rotundi, A.; Della Corte, V.; Accolla, M.; Ferrari, M.; Ivanovski, S.; Lucarelli, F.; Mazzotta Epifani, E.; Sordini, R.; Palumbo, P.; Colangeli, L.; Lopez-Moreno, J. J.; Rodriguez, J.; Fulle, M.; Bussoletti, E.; Crifo, J. F.; Esposito, F.; Green, S.; Grün, E.; Lamy, P. L.; McDonnell, T.; Mennella, V.; Molina, A.; Moreno, F.; Ortiz, J. L.; Palomba, E.; Perrin, J. M.; Rodrigo, R.; Weissman, P. R.; Zakharov, V.; Zarnecki, J.

    2014-12-01

    GIADA (Grain Impact Analyzer and Dust Accumulator), flying on board Rosetta, is devoted to studying the cometary dust environment of 67P/Churyumov-Gerasimenko. GIADA is composed of 3 sub-systems: the GDS (Grain Detection System), based on grain detection through light scattering; an IS (Impact Sensor), which measures momentum by detecting the impact on a sensed plate connected to 5 piezoelectric sensors; and the MBS (MicroBalances System), constituted of 5 Quartz Crystal Microbalances (QCMs), which gives the cumulative deposited dust mass by measuring the variations of the sensors' frequency. The combination of the measurements performed by these 3 subsystems provides the number, mass, momentum and velocity distribution of dust grains emitted from the cometary nucleus. No prior in situ dust dynamical measurements at these close distances from the nucleus, and starting from such large heliocentric distances, have been available to date. We present here the first results obtained from the beginning of the Rosetta scientific phase. We report the early detection of dust grains at about 800 km from the nucleus in August 2014 and the subsequent measurements that allowed us to characterize the 67P/C-G dust environment at distances of less than 100 km from the nucleus and the dynamical properties of single grains. Acknowledgements. GIADA was built by a consortium led by the Univ. Napoli "Parthenope" & INAF-Oss. Astr. Capodimonte, IT, in collaboration with the Inst. de Astrofisica de Andalucia, ES, Selex-ES s.p.a. and SENER. GIADA is presently managed & operated by Ist. di Astrofisica e Planetologia Spaziali-INAF, IT. GIADA was funded and managed by the Agenzia Spaziale Italiana, IT, with the support of the Spanish Ministry of Education and Science MEC, ES. GIADA was developed from a PI proposal supported by the University of Kent; sci. & tech. contributions were given by CISAS, IT, Lab. d'Astr. Spat., FR, and Institutions from UK, IT, FR, DE and USA. We thank the RSGS/ESAC, RMOC/ESOC & Rosetta Project

  2. 67P/CG morphological units and VIS-IR spectral classes: a Rosetta/VIRTIS-M perspective

    NASA Astrophysics Data System (ADS)

    Filacchione, Gianrico; Capaccioni, Fabrizio; Ciarniello, Mauro; Raponi, Andrea; De Sanctis, Maria Cristina; Tosi, Federico; Piccioni, Giuseppe; Cerroni, Priscilla; Capria, Maria Teresa; Palomba, Ernesto; Longobardo, Andrea; Migliorini, Alessandra; Erard, Stephane; Arnold, Gabriele; Bockelee-Morvan, Dominique; Leyrat, Cedric; Schmitt, Bernard; Quirico, Eric; Barucci, Antonella; McCord, Thomas B.; Stephan, Katrin; Kappel, David

    2015-11-01

    VIRTIS-M, the 0.25-5.1 µm imaging spectrometer on Rosetta (Coradini et al., 2007), has mapped the surface of the 67P/CG nucleus since July 2014 from a wide range of distances. Spectral analysis of global-scale data indicates that the nucleus presents different terrains uniformly covered by a very dark (Ciarniello et al., 2015) and dehydrated organic-rich material (Capaccioni et al., 2015). The morphological units identified so far (Thomas et al., 2015; El-Maarry et al., 2015) include dust-covered brittle material regions (like Ash, Ma'at), exposed material regions (Seth), large-scale depressions (like Hatmehit, Aten, Nut), smooth terrain units (like Hapi, Anubis, Imhotep) and consolidated surfaces (like Hathor, Anuket, Aker, Apis, Khepry, Bastet, Maftet). For each of these regions average VIRTIS-M spectra were derived with the aim of exploring possible connections between morphology and spectral properties. Photometric correction (Ciarniello et al., 2015), thermal emission removal in the 3.5-5 micron range and georeferencing have been applied to I/F data in order to derive spectral indicators, e.g. VIS-IR spectral slopes, their crossing wavelength (CW) and the 3.2 µm organic material band depth (BD), suitable for identifying and mapping compositional variations. Our analysis shows that smooth terrains have the lowest slopes in the VIS (<1.7E-3 1/µm) and IR (0.4E-3 1/µm), CW=0.75 µm and BD=8-12%. An intermediate VIS slope of 1.7-1.9E-3 1/µm and a higher BD of 10-12.8% are typical of consolidated surfaces, some dust-covered regions and Seth, where the maximum BD=13% has been observed. Large-scale depressions and Imhotep are redder, with a VIS slope of 1.9-2.1E-3 1/µm, CW at 0.85-0.9 µm and BD=8-11%. The minimum VIS-IR slopes are observed above Hapi, in agreement with the presence of water ice sublimation and recondensation processes observed by VIRTIS in this region (De Sanctis et al., 2015). The authors acknowledge ASI, CNES, DLR and NASA financial support. References: Coradini et al

  3. A parallel implementation of an EBE solver for the finite element method

    SciTech Connect

    Silva, R.P.; Las Casas, E.B.; Carvalho, M.L.B.

    1994-12-31

    A parallel implementation using PVM on a cluster of workstations of an Element By Element (EBE) solver using the Preconditioned Conjugate Gradient (PCG) method is described, along with an application in the solution of the linear systems generated from finite element analysis of a problem in three dimensional linear elasticity. The PVM (Parallel Virtual Machine) system, developed at the Oak Ridge Laboratory, allows the construction of a parallel MIMD machine by connecting heterogeneous computers linked through a network. In this implementation, version 3.1 of PVM is used, and 11 SLC Sun workstations and a Sun SPARC-2 model are connected through Ethernet. The finite element program is based on SDP, System for Finite Element Based Software Development, developed at the Brazilian National Laboratory for Scientific Computation (LNCC). SDP provides the basic routines for a finite element application program, as well as a standard for programming and documentation, intended to allow exchanges between research groups in different centers.
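
    The element-by-element idea, never assembling the global matrix and letting each processor apply its own elements' contributions, can be sketched for a toy problem. The fragment below is a serial illustration only: a 1-D Poisson problem with linear elements and a plain (unpreconditioned) CG loop built on an EBE matrix-vector product; the element loop is the part that would be distributed, e.g. over PVM tasks. The mesh size, boundary treatment and tolerance are assumptions, not the 3-D elasticity setup of the paper.

      import numpy as np

      # 1-D Poisson with linear elements; the global stiffness matrix is never formed.
      n_elems = 50
      n_nodes = n_elems + 1
      h = 1.0 / n_elems
      ke = (1.0 / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])   # element stiffness
      conn = [(e, e + 1) for e in range(n_elems)]              # element connectivity

      def ebe_matvec(u):
          v = np.zeros_like(u)
          for i, j in conn:                 # element loop: distributable across processors
              ve = ke @ np.array([u[i], u[j]])
              v[i] += ve[0]
              v[j] += ve[1]
          v[0], v[-1] = u[0], u[-1]         # identity rows for the fixed end nodes
          return v

      b = np.full(n_nodes, h)               # consistent load for f(x) = 1
      b[0] = b[-1] = 0.0                    # homogeneous Dirichlet boundary values

      # Plain conjugate gradients driven only by the EBE matrix-vector product.
      x = np.zeros(n_nodes)
      r = b - ebe_matvec(x)
      p = r.copy()
      rr = r @ r
      for k in range(200):
          Ap = ebe_matvec(p)
          alpha = rr / (p @ Ap)
          x += alpha * p
          r -= alpha * Ap
          rr_new = r @ r
          if np.sqrt(rr_new) < 1e-10:
              break
          p = r + (rr_new / rr) * p
          rr = rr_new
      print(f"CG iterations: {k + 1}, max u = {x.max():.4f} (analytic max 0.125)")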

  4. Drosophila O-GlcNAc transferase (OGT) is encoded by the Polycomb group (PcG) gene, super sex combs (sxc)

    PubMed Central

    Sinclair, Donald A. R.; Syrzycka, Monika; Macauley, Matthew S.; Rastgardani, Tara; Komljenovic, Ivana; Vocadlo, David J.; Brock, Hugh W.; Honda, Barry M.

    2009-01-01

    O-linked N-acetylglucosamine transferase (OGT) reversibly modifies serine and threonine residues of many intracellular proteins with a single β-O-linked N-acetylglucosamine residue (O-GlcNAc), and has been implicated in insulin signaling, neurodegenerative disease, cellular stress response, and other important processes in mammals. OGT also glycosylates RNA polymerase II and various transcription factors, which suggests that it might be directly involved in transcriptional regulation. We report here that the Drosophila OGT is encoded by the Polycomb group (PcG) gene, super sex combs (sxc). Furthermore, major sites of O-GlcNAc modification on polytene chromosomes correspond to PcG protein binding sites. Our results thus suggest a direct role for O-linked glycosylation by OGT in PcG-mediated epigenetic gene silencing, which is important in developmental regulation, stem cell maintenance, genomic imprinting, and cancer. In addition, we observe rescue of sxc lethality by a human Ogt cDNA transgene; thus Drosophila may provide an ideal model to study important functional roles of OGT in mammals. PMID:19666537

  5. Drosophila O-GlcNAc transferase (OGT) is encoded by the Polycomb group (PcG) gene, super sex combs (sxc).

    PubMed

    Sinclair, Donald A R; Syrzycka, Monika; Macauley, Matthew S; Rastgardani, Tara; Komljenovic, Ivana; Vocadlo, David J; Brock, Hugh W; Honda, Barry M

    2009-08-11

    O-linked N-acetylglucosamine transferase (OGT) reversibly modifies serine and threonine residues of many intracellular proteins with a single beta-O-linked N-acetylglucosamine residue (O-GlcNAc), and has been implicated in insulin signaling, neurodegenerative disease, cellular stress response, and other important processes in mammals. OGT also glycosylates RNA polymerase II and various transcription factors, which suggests that it might be directly involved in transcriptional regulation. We report here that the Drosophila OGT is encoded by the Polycomb group (PcG) gene, super sex combs (sxc). Furthermore, major sites of O-GlcNAc modification on polytene chromosomes correspond to PcG protein binding sites. Our results thus suggest a direct role for O-linked glycosylation by OGT in PcG-mediated epigenetic gene silencing, which is important in developmental regulation, stem cell maintenance, genomic imprinting, and cancer. In addition, we observe rescue of sxc lethality by a human Ogt cDNA transgene; thus Drosophila may provide an ideal model to study important functional roles of OGT in mammals. PMID:19666537

  6. Parallel pipelining

    SciTech Connect

    Joseph, D.D.; Bai, R.; Liao, T.Y.; Huang, A.; Hu, H.H.

    1995-09-01

    In this paper the authors introduce the idea of parallel pipelining for water lubricated transportation of oil (or other viscous material). A parallel system can have major advantages over a single pipe with respect to the cost of maintenance and continuous operation of the system, to the pressure gradients required to restart a stopped system and to the reduction and even elimination of the fouling of pipe walls in continuous operation. The authors show that the action of capillarity in small pipes is more favorable for restart than in large pipes. In a parallel pipeline system, they estimate the number of small pipes needed to deliver the same oil flux as in one larger pipe as N = (R/r){sup {alpha}}, where r and R are the radii of the small and large pipes, respectively, and {alpha} = 4 or 19/7 when the lubricating water flow is laminar or turbulent.
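
    For a sense of the scale implied by the relation N = (R/r)^alpha quoted above, it can be evaluated directly; the radius ratio used below is an assumption for illustration, not a value from the paper.

      # N = (R/r)**alpha small pipes match one large pipe of radius R,
      # with alpha = 4 (laminar lubricating water flow) or 19/7 (turbulent).
      ratio = 10.0   # assumed R/r for illustration
      for regime, alpha in (("laminar", 4.0), ("turbulent", 19.0 / 7.0)):
          print(f"{regime}: N = {ratio ** alpha:.0f}")
      # prints: laminar: N = 10000, turbulent: N = 518 (approximately)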

  7. Data parallelism

    SciTech Connect

    Gorda, B.C.

    1992-09-01

    Data locality is fundamental to performance on distributed memory parallel architectures. Application programmers know this well and go to great pains to arrange data for optimal performance. Data Parallelism, a model from the Single Instruction Multiple Data (SIMD) architecture, is finding a new home on the Multiple Instruction Multiple Data (MIMD) architectures. This style of programming, distinguished by taking the computation to the data, is what programmers have been doing by hand for a long time. Recent work in this area holds the promise of making the programmer's task easier.

  8. Data parallelism

    SciTech Connect

    Gorda, B.C.

    1992-09-01

    Data locality is fundamental to performance on distributed memory parallel architectures. Application programmers know this well and go to great pains to arrange data for optimal performance. Data Parallelism, a model from the Single Instruction Multiple Data (SIMD) architecture, is finding a new home on the Multiple Instruction Multiple Data (MIMD) architectures. This style of programming, distinguished by taking the computation to the data, is what programmers have been doing by hand for a long time. Recent work in this area holds the promise of making the programmer's task easier.

  9. Challenges in the Packaging of MEMS

    SciTech Connect

    Malshe, A.P.; Singh, S.B.; Eaton, W.P.; O'Neal, C.; Brown, W.D.; Miller, W.M.

    1999-03-26

    The packaging of Micro-Electro-Mechanical Systems (MEMS) is a field of great importance to anyone using or manufacturing sensors, consumer products, or military applications. Currently, much work has been done in the design and fabrication of MEMS devices, but insufficient research and few publications have been completed on the packaging of these devices. This is despite the fact that packaging is a very large percentage of the total cost of MEMS devices. The main difference between IC packaging and MEMS packaging is that MEMS packaging is almost always application-specific and greatly affected by its environment and packaging techniques such as die handling, die attach processes, and lid sealing. Many of these aspects are directly related to the materials used in the packaging processes. MEMS devices that are functional in wafer form can be rendered inoperable after packaging. MEMS dies must be handled only from the chip sides so features on the top surface are not damaged. This eliminates most current die pick-and-place fixtures. Die attach materials are key to MEMS packaging. Using hard die attach solders can create high stresses in the MEMS devices, which can affect their operation greatly. Low-stress epoxies can be high-outgassing, which can also affect device performance. Also, a low modulus die attach can allow the die to move during ultrasonic wirebonding, resulting in low wirebond strength. Another source of residual stress is the lid sealing process. Most MEMS-based sensors and devices require a hermetically sealed package. This can be done by parallel seam welding the package lid, but at the cost of further induced stress on the die. Another issue of MEMS packaging is the media compatibility of the packaged device. MEMS, unlike ICs, often interface with their environment, which could be high pressure or corrosive. The main conclusion we can draw about MEMS packaging is that the package affects the performance and reliability of the MEMS devices. There is a

  10. Tpetra Kernel Package

    Energy Science and Technology Software Center (ESTSC)

    2004-03-01

    A package of classes for constructing and using distributed sparse and dense matrices, vectors and graphs. Templated on the scalar and ordinal types so that any valid floating-point type, as well as any valid integer type, can be used with these classes. Other non-standard types, such as 3-by-3 matrices for the scalar type and mod-based integers for ordinal types, can also be used. Tpetra is intended to provide the foundation for basic matrix and vector operations for the next generation of Trilinos preconditioners and solvers. It can be considered as the follow-on to Epetra. Tpetra provides distributed memory operations via an abstract parallel machine interface. The most common implementation of this interface will be MPI.

  11. Parallel conjugate gradient algorithms for manipulator dynamic simulation

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Scheld, Robert E.

    1989-01-01

    Parallel conjugate gradient algorithms for the computation of multibody dynamics are developed for the specialized case of a robot manipulator. For an n-dimensional positive-definite linear system, the Classical Conjugate Gradient (CCG) algorithm is guaranteed to converge in n iterations, each with a computation cost of O(n); this leads to a total computational cost of O(n²) on a serial processor. A conjugate gradient algorithm is presented that provides greater efficiency by using a preconditioner, which reduces the number of iterations required, and by exploiting parallelism, which reduces the cost of each iteration. Two Preconditioned Conjugate Gradient (PCG) algorithms are proposed which respectively use a diagonal and a tridiagonal matrix, composed of the diagonal and tridiagonal elements of the mass matrix, as preconditioners. Parallel algorithms are developed to compute the preconditioners and their inversions in O(log₂ n) steps using n processors. A parallel algorithm is also presented which, on the same architecture, achieves a computational time of O(log₂ n) for each iteration. Simulation results for a seven-degree-of-freedom manipulator are presented. Variants of the proposed algorithms are also developed which can be efficiently implemented on the Robot Mathematics Processor (RMP).
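
    A serial reference version of the diagonally preconditioned variant is straightforward to state. The sketch below runs conjugate gradients with a Jacobi (diagonal) preconditioner on a small random SPD system standing in for a manipulator mass matrix; the matrix, size and tolerance are assumptions, and the O(log₂ n) parallel construction and the tridiagonal preconditioner discussed in the paper are not reproduced.

      import numpy as np

      def jacobi_pcg(A, b, tol=1e-10, max_iter=500):
          """Conjugate gradients with a diagonal (Jacobi) preconditioner."""
          d_inv = 1.0 / np.diag(A)               # preconditioner: inverse of diag(A)
          x = np.zeros_like(b)
          r = b - A @ x
          z = d_inv * r
          p = z.copy()
          rz = r @ z
          for k in range(max_iter):
              Ap = A @ p
              alpha = rz / (p @ Ap)
              x += alpha * p
              r -= alpha * Ap
              if np.linalg.norm(r) <= tol * np.linalg.norm(b):
                  break
              z = d_inv * r
              rz_new = r @ z
              p = z + (rz_new / rz) * p
              rz = rz_new
          return x, k + 1

      rng = np.random.default_rng(0)
      n = 7                                      # e.g. a seven-degree-of-freedom arm
      M = rng.random((n, n))
      A = M @ M.T + n * np.eye(n)                # symmetric positive-definite test matrix
      b = rng.random(n)
      x, iters = jacobi_pcg(A, b)
      print(f"{iters} iterations, residual {np.linalg.norm(b - A @ x):.2e}")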

  12. Isorropia Partitioning and Load Balancing Package

    Energy Science and Technology Software Center (ESTSC)

    2006-09-01

    Isorropia is a partitioning and load balancing package which interfaces with the Zoltan library. Isorropia can accept input objects such as matrices and matrix-graphs, and repartition/redistribute them into a better data distribution on parallel computers. Isorropia is primarily an interface package, utilizing graph and hypergraph partitioning algorithms in the Zoltan library, which is a third-party library to Trilinos.

  13. Package inspection using inverse diffraction

    NASA Astrophysics Data System (ADS)

    McAulay, Alastair D.

    2008-08-01

    More efficient, cost-effective hand-held methods of inspecting packages without opening them are in demand for security. Recent work on terahertz sources1 and millimeter waves presents new possibilities. Millimeter waves pass through cardboard and styrofoam, common packing materials, and also pass through most materials except those with high conductivity, such as metals, which block light and are easily spotted. Estimating the refractive index along the path of the beam through the package from observations of the beam passing out of the package provides the necessary information to inspect the package and is a nonlinear problem. We therefore use a generalized linear inverse technique that we first developed for finding oil by reflection in geophysics.2 The computation assumes parallel slices of homogeneous material in the package, for which the refractive index is estimated. A beam is propagated through this model in a forward computation. The output is compared with the actual observations for the package and an update is computed for the refractive indices. The loop is repeated until convergence. The approach can be modified for a reflection system or to include estimation of absorption.
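
    The iterate-forward-model-and-update loop described above can be sketched compactly. The following Python example uses a deliberately simplified forward model (optical path accumulated through parallel slices) as a stand-in for the beam-propagation computation; the slice count, thicknesses, wavelength, and refractive indices are assumptions for illustration only.

    ```python
    import numpy as np

    K0 = 2 * np.pi / 3.0e-3                        # wavenumber for an assumed 3 mm wavelength

    def forward(n, thickness):
        """Toy forward model: phase accumulated up to the exit of each slice."""
        return K0 * np.cumsum(n * thickness)

    thickness = np.full(5, 0.01)                   # five 1 cm slices (assumed)
    n_true = np.array([1.0, 1.5, 1.2, 1.5, 1.0])   # "unknown" refractive indices
    observed = forward(n_true, thickness)          # stands in for the measured beam

    n_est = np.ones(5)                             # initial homogeneous guess
    J = K0 * np.tril(np.ones((5, 5))) * thickness  # Jacobian of the toy model w.r.t. n
    for _ in range(50):
        residual = observed - forward(n_est, thickness)
        update, *_ = np.linalg.lstsq(J, residual, rcond=None)  # linear inverse step
        n_est += update
        if np.linalg.norm(update) < 1e-8:          # converged
            break

    print(np.round(n_est, 3))                      # recovers n_true for this toy model
    ```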

  14. High density packaging technology ultra thin package & new tab package

    NASA Astrophysics Data System (ADS)

    Nakagawa, Osamu; Shimamoto, Haruo; Ueda, Tetsuya; Shimomura, Kou; Hata, Tsutomu; Tachikawa, Toru; Fukushima, Jiro; Banjo, Toshinobu; Yamamoto, Isamu

    1989-09-01

    As electronic devices become more highly integrated, the demand for small, high pin count packages has been increasing. We have developed two new types of IC packages in response to this demand. One is an ultra thin small outline package (TSOP), which has been reduced in size from the standard SOP, and the other, which uses Tape Automated Bonding (TAB) technology, is a super thin, high pin count TAB in cap (T.I.C.) package. In this paper, we present these packages and their features along with the technologies used to improve package reliability and TAB. Thin packages are vulnerable to high humidity exposure, especially after heat shock.1 The following items were therefore investigated in order to improve humidity resistance: (1) The molding compound thermal stress, (2) Water absorption into the molding compound and its effect on package cracking during solder dipping, (3) Chip attach pad area and its effect on package cracking, (4) Adhesion between molding resin and chip attach pad and its effect on humidity resistance. With the improvements made as a result of these investigations, the reliability of the new thin packages is similar to that of the standard thicker plastic packages.

  15. Science packages

    NASA Astrophysics Data System (ADS)

    1997-01-01

    Primary science teachers in Scotland have a new updating method at their disposal with the launch of a package of CDi (Compact Discs Interactive) materials developed by the BBC and the Scottish Office. These were a response to the claim that many primary teachers felt they had been inadequately trained in science and lacked the confidence to teach it properly. Consequently they felt the need for more in-service training to equip them with the personal understanding required. The pack contains five disks and a printed user's guide divided up as follows: disk 1 Investigations; disk 2 Developing understanding; disks 3,4,5 Primary Science staff development videos. It was produced by the Scottish Interactive Technology Centre (Moray House Institute) and is available from BBC Education at £149.99 including VAT. Free Internet distribution of science education materials has also begun as part of the Global Schoolhouse (GSH) scheme. The US National Science Teachers' Association (NSTA) and Microsoft Corporation are making available field-tested comprehensive curriculum material including 'Micro-units' on more than 80 topics in biology, chemistry, earth and space science and physics. The latter are the work of the Scope, Sequence and Coordination of High School Science project, which can be found at http://www.gsh.org/NSTA_SSandC/. More information on NSTA can be obtained from its Web site at http://www.nsta.org.

  16. Parallel computations and control of adaptive structures

    NASA Technical Reports Server (NTRS)

    Park, K. C.; Alvin, Kenneth F.; Belvin, W. Keith; Chong, K. P. (Editor); Liu, S. C. (Editor); Li, J. C. (Editor)

    1991-01-01

    The equations of motion for structures with adaptive elements for vibration control are presented for parallel computations to be used as a software package for real-time control of flexible space structures. A brief introduction to the state-of-the-art parallel computational capability is also presented. Time marching strategies are developed for effective use of massively parallel mapping, partitioning, and the necessary arithmetic operations. An example is offered for the simulation of control-structure interaction on a parallel computer, and the impact of the presented approach on applications in disciplines other than the aerospace industry is assessed.

  17. Parallel Information Processing.

    ERIC Educational Resources Information Center

    Rasmussen, Edie M.

    1992-01-01

    Examines parallel computer architecture and the use of parallel processors for text. Topics discussed include parallel algorithms; performance evaluation; parallel information processing; parallel access methods for text; parallel and distributed information retrieval systems; parallel hardware for text; and network models for information…

  18. Tolerant (parallel) Programming

    NASA Technical Reports Server (NTRS)

    DiNucci, David C.; Bailey, David H. (Technical Monitor)

    1997-01-01

    In order to be truly portable, a program must be tolerant of a wide range of development and execution environments, and a parallel program is just one which must be tolerant of a very wide range. This paper first defines the term "tolerant programming", then describes many layers of tools to accomplish it. The primary focus is on F-Nets, a formal model for expressing computation as a folded partial-ordering of operations, thereby providing an architecture-independent expression of tolerant parallel algorithms. For implementing F-Nets, Cooperative Data Sharing (CDS) is a subroutine package for implementing communication efficiently in a large number of environments (e.g. shared memory and message passing). Software Cabling (SC), a very-high-level graphical programming language for building large F-Nets, possesses many of the features normally expected from today's computer languages (e.g. data abstraction, array operations). Finally, L2^3 is a CASE tool which facilitates the construction, compilation, execution, and debugging of SC programs.

  19. High level language memory management on parallel architectures

    SciTech Connect

    Lebrun, P.; Kreymer, A.

    1989-05-01

    HEP memory management packages such as YBOS and ZEBRA have been implemented and are currently running on a variety of mainframe computers. These packages were originally designed to run on single CPU engines. Implementation of these packages on parallel machines, whether loosely or tightly coupled architectures, is discussed. ZEBRA (a CERN package) on ACP (Fermilab) is presented in detail. The design of memory management systems for the new generation of ACP systems and similar parallel architectures is presented. The future of packages such as ZEBRA is linked not only to system architecture but also to language issues. We briefly mention the penalties of using F77 with respect to other increasingly popular languages in HEP, such as C, on parallel systems. 9 refs.

  20. Packaging for logistical support

    NASA Astrophysics Data System (ADS)

    Twede, Diana; Hughes, Harold

    Logistical packaging is conducted to furnish protection, utility, and communication for elements of a logistical system. Once the functional requirements of space logistical support packaging have been identified, decision-makers have a reasonable basis on which to compare package alternatives. Flexible packages may be found, for example, to provide adequate protection and superior utility to that of rigid packages requiring greater storage and postuse waste volumes.

  1. Linked-View Parallel Coordinate Plot Renderer

    Energy Science and Technology Software Center (ESTSC)

    2011-06-28

    This software allows multiple linked views for interactive querying via map-based data selection, bar chart analytic overlays, and high dynamic range (HDR) line renderings. The major component of the visualization package is a parallel coordinate renderer with binning, curved layouts, shader-based rendering, and other techniques to allow interactive visualization of multidimensional data.
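
    For orientation, the short sketch below draws a basic, static parallel coordinate plot with pandas and matplotlib; the sample data are invented, and the sketch omits the binning, curved layouts, linked views, and shader-based rendering that the package described above provides.

    ```python
    import pandas as pd
    import matplotlib.pyplot as plt
    from pandas.plotting import parallel_coordinates

    # Invented sample data: one polyline per row, one vertical axis per column.
    df = pd.DataFrame({
        "temperature": [0.2, 0.8, 0.5, 0.9],
        "pressure":    [0.7, 0.1, 0.4, 0.6],
        "velocity":    [0.3, 0.9, 0.2, 0.8],
        "cluster":     ["A", "B", "A", "B"],   # class column used to colour the lines
    })
    parallel_coordinates(df, "cluster")
    plt.show()
    ```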

  2. Packaging for Food Service

    NASA Technical Reports Server (NTRS)

    Stilwell, E. J.

    1985-01-01

    Most of the key areas of concern in packaging the three principal food forms for the space station were covered. It can be generally concluded that there are no significant voids in packaging materials availability or in current packaging technology. However, it must also be concluded that the process by which packaging decisions are made for the space station feeding program will be very synergistic. Packaging selection will depend heavily on the preparation mechanics, the preferred presentation, and the achievable disposal systems. It will be important that packaging be considered as an integral part of each decision as these systems are developed.

  3. Waste Package Lifting Calculation

    SciTech Connect

    H. Marr

    2000-05-11

    The objective of this calculation is to evaluate the structural response of the waste package during the horizontal and vertical lifting operations in order to support the waste package lifting feature design. The scope of this calculation includes the evaluation of the 21 PWR UCF (pressurized water reactor uncanistered fuel) waste package, naval waste package, 5 DHLW/DOE SNF (defense high-level waste/Department of Energy spent nuclear fuel)--short waste package, and 44 BWR (boiling water reactor) UCF waste package. Procedure AP-3.12Q, Revision 0, ICN 0, Calculations, is used to develop and document this calculation.

  4. CH Packaging Operations Manual

    SciTech Connect

    Washington TRU Solutions LLC

    2005-06-13

    This procedure provides instructions for assembling the CH Packaging Drum payload assembly and Standard Waste Box (SWB) assembly, for Abnormal Operations, and for ICV and OCV Preshipment Leakage Rate Tests on the packaging seals using a nondestructive Helium (He) Leak Test.

  5. Comparative Packaging Study

    NASA Technical Reports Server (NTRS)

    Perchonok, Michele; Antonini, David

    2008-01-01

    This viewgraph presentation describes a comparative packaging study for use on long duration space missions. The topics include: 1) Purpose; 2) Deliverables; 3) Food Sample Selection; 4) Experimental Design Matrix; 5) Permeation Rate Comparison; and 6) Packaging Material Information.

  6. ATLAS software packaging

    NASA Astrophysics Data System (ADS)

    Rybkin, Grigory

    2012-12-01

    Software packaging is an indispensable part of the build process and a prerequisite for deployment. The full ATLAS software stack consists of TDAQ, HLT, and Offline software. These software groups depend on some 80 external software packages. We present the tools, the PackDist package, developed and used to package all this software except for the TDAQ project. PackDist is based on and driven by CMT, the ATLAS software configuration and build tool, and consists of shell and Python scripts. The packaging unit used is the CMT project. Each CMT project is packaged as several packages—platform-dependent (one per platform available), source code excluding header files, other platform-independent files, documentation, and debug information packages (the last two being built optionally). Packaging can be done recursively to package all the dependencies. The whole set of packages for one software release, the distribution kit, also includes configuration packages and contains some 120 packages for one platform. Also packaged are physics analysis projects (currently 6) used by particular physics groups on top of the full release. The tools provide an installation test for the full distribution kit. Packaging is done in two formats for use with the Pacman and RPM package managers. The tools are functional on the platforms supported by ATLAS—GNU/Linux and Mac OS X. The packaged software is used for software deployment on all ATLAS computing resources—from the detector and trigger computing farms, collaboration laboratories' computing centres, grid sites, to physicist laptops, and CERN VMFS—and covers the use cases of running all applications as well as of software development.

  7. Modular avionics packaging standardization

    NASA Astrophysics Data System (ADS)

    Austin, M.; McNichols, J. K.

    The Modular Avionics Packaging (MAP) Program for packaging future military avionics systems with the objective of improving reliability, maintainability, and supportability, and reducing equipment life cycle costs is addressed. The basic MAP packaging concepts called the Standard Avionics Module, the Standard Enclosure, and the Integrated Rack are summarized, and the benefits of modular avionics packaging, including low risk design, technology independence with common functions, improved maintainability and life cycle costs are discussed. Progress made in MAP is briefly reviewed.

  8. Parallel Algebraic Multigrid Methods - High Performance Preconditioners

    SciTech Connect

    Yang, U M

    2004-11-11

    The development of high performance, massively parallel computers and the increasing demands of computationally challenging applications have necessitated the development of scalable solvers and preconditioners. One of the most effective ways to achieve scalability is the use of multigrid or multilevel techniques. Algebraic multigrid (AMG) is a very efficient algorithm for solving large problems on unstructured grids. While much of it can be parallelized in a straightforward way, some components of the classical algorithm, particularly the coarsening process and some of the most efficient smoothers, are highly sequential, and require new parallel approaches. This chapter presents the basic principles of AMG and gives an overview of various parallel implementations of AMG, including descriptions of parallel coarsening schemes and smoothers, some numerical results as well as references to existing software packages.
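
    The multigrid idea summarized above can be illustrated with a simple two-grid cycle for a 1-D Poisson problem, sketched below in Python. Real AMG builds its coarse levels algebraically from the matrix graph; here the coarsening, interpolation, and smoother are elementary geometric stand-ins chosen for brevity.

    ```python
    import numpy as np

    def poisson(n):
        """1-D Poisson matrix (tridiagonal, SPD)."""
        return 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

    def jacobi(A, x, b, sweeps=3, omega=2.0 / 3.0):
        """Weighted Jacobi smoother."""
        D = np.diag(A)
        for _ in range(sweeps):
            x = x + omega * (b - A @ x) / D
        return x

    def two_grid(A, b, x, P):
        x = jacobi(A, x, b)                   # pre-smooth
        r = b - A @ x                         # fine-grid residual
        Ac = P.T @ A @ P                      # Galerkin coarse operator
        ec = np.linalg.solve(Ac, P.T @ r)     # coarse-grid correction
        x = x + P @ ec
        return jacobi(A, x, b)                # post-smooth

    n = 31
    A, b = poisson(n), np.ones(n)
    P = np.zeros((n, n // 2))                 # linear interpolation from every 2nd point
    for j in range(n // 2):
        i = 2 * j + 1
        P[i, j] = 1.0
        P[i - 1, j] = 0.5
        if i + 1 < n:
            P[i + 1, j] = 0.5

    x = np.zeros(n)
    for _ in range(10):
        x = two_grid(A, b, x, P)
    print(np.linalg.norm(b - A @ x))          # residual norm after 10 cycles
    ```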

  9. Packaging of electronic modules

    NASA Technical Reports Server (NTRS)

    Katzin, L.

    1966-01-01

    Study of design approaches that are taken toward optimizing the packaging of electronic modules with respect to size, shape, component orientation, interconnections, and structural support. The study does not present a solution to specific packaging problems, but rather the factors to be considered to achieve optimum packaging designs.

  10. Trends in Food Packaging.

    ERIC Educational Resources Information Center

    Ott, Dana B.

    1988-01-01

    This article discusses developments in food packaging, processing, and preservation techniques in terms of packaging materials, technologies, consumer benefits, and current and potential food product applications. Covers implications due to consumer life-style changes, cost-effectiveness of packaging materials, and the ecological impact of…

  11. DMA Modulus as a Screening Parameter for Compatibility of Polymeric Containment Materials with Various Solutions for use in Space Shuttle Microgravity Protein Crystal Growth (PCG) Experiments

    NASA Technical Reports Server (NTRS)

    Wingard, Charles Doug; Munafo, Paul M. (Technical Monitor)

    2002-01-01

    Protein crystals are grown in microgravity experiments inside the Space Shuttle during orbit. Such crystals are basically grown in a five-component system containing a salt, buffer, polymer, organic and water. During these experiments, a number of different polymeric containment materials must be compatible with up to hundreds of different PCG solutions in various concentrations for durations up to 180 days. When such compatibility experiments are performed at NASA/MSFC (Marshall Space Flight Center) simultaneously on containment material samples immersed in various solutions in vials, the samples are rather small out of necessity. DMA4 modulus was often used as the primary screening parameter for such small samples as a pass/fail criterion for incompatibility issues. In particular, the TA Instruments DMA 2980 film tension clamp was used to test rubber O-rings as small in I.D. as 0.091 in. by cutting through the cross-section at one place, then clamping the stretched linear cord stock at each end. The film tension clamp was also used to successfully test short length samples of medical/surgical grade tubing with an O.D. of 0.125 in.

  12. Parallel execution and scriptability in micromagnetic simulations

    NASA Astrophysics Data System (ADS)

    Fischbacher, Thomas; Franchin, Matteo; Bordignon, Giuliano; Knittel, Andreas; Fangohr, Hans

    2009-04-01

    We demonstrate the feasibility of an "encapsulated parallelism" approach toward micromagnetic simulations that combines offering a high degree of flexibility to the user with the efficient utilization of parallel computing resources. While parallelization is obviously desirable to address the high numerical effort required for realistic micromagnetic simulations through utilizing now widely available multiprocessor systems (including desktop multicore CPUs and computing clusters), conventional approaches toward parallelization impose strong restrictions on the structure of programs: numerical operations have to be executed across all processors in a synchronized fashion. This means that from the user's perspective, either the structure of the entire simulation is rigidly defined from the beginning and cannot be adjusted easily, or making modifications to the computation sequence requires advanced knowledge in parallel programming. We explain how this dilemma is resolved in the NMAG simulation package in such a way that the user can utilize without any additional effort on his side both the computational power of multiple CPUs and the flexibility to tailor execution sequences for specific problems: simulation scripts written for single-processor machines can just as well be executed on parallel machines and behave in precisely the same way, up to increased speed. We provide a simple instructive magnetic resonance simulation example that demonstrates utilizing both custom execution sequences and parallelism at the same time. Furthermore, we show that this strategy of encapsulating parallelism even allows the user to benefit from speed gains through parallel execution in simulations controlled by interactive commands given at a command line interface.

  13. The bacteriophage DNA packaging machine.

    PubMed

    Feiss, Michael; Rao, Venigalla B

    2012-01-01

    Large dsDNA bacteriophages and herpesviruses encode a powerful ATP-driven DNA-translocating machine that encapsidates a viral genome into a preformed capsid shell or prohead. The key components of the packaging machine are the packaging enzyme (terminase, motor) and the portal protein that forms the unique DNA entrance vertex of prohead. The terminase complex, comprised of a recognition subunit (small terminase) and an endonuclease/translocase subunit (large terminase), cuts viral genome concatemers. The terminase-viral DNA complex docks on the portal vertex, assembling a motor complex containing five large terminase subunits. The pentameric motor processively translocates DNA until the head shell is full with one viral genome. The motor cuts the DNA again and dissociates from the full head, allowing head-finishing proteins to assemble on the portal, sealing the portal, and constructing a platform for tail attachment. A body of evidence from molecular genetics and biochemical, structural, and biophysical approaches suggests that ATP hydrolysis-driven conformational changes in the packaging motor (large terminase) power DNA motion. Various parts of the motor subunit, such as the ATPase, arginine finger, transmission domain, hinge, and DNA groove, work in concert to translocate about 2 bp of DNA per ATP hydrolyzed. Powerful single-molecule approaches are providing precise delineation of steps during each translocation event in a motor that has a speed as high as a millisecond/step. The phage packaging machine has emerged as an excellent model for understanding the molecular machines, given the mechanistic parallels between terminases, helicases, and numerous motor proteins. PMID:22297528

  14. Parallel Computing in SCALE

    SciTech Connect

    DeHart, Mark D; Williams, Mark L; Bowman, Stephen M

    2010-01-01

    The SCALE computational architecture has remained basically the same since its inception 30 years ago, although constituent modules and capabilities have changed significantly. This SCALE concept was intended to provide a framework whereby independent codes can be linked to provide a more comprehensive capability than possible with the individual programs - allowing flexibility to address a wide variety of applications. However, the current system was designed originally for mainframe computers with a single CPU and with significantly less memory than today's personal computers. It has been recognized that the present SCALE computation system could be restructured to take advantage of modern hardware and software capabilities, while retaining many of the modular features of the present system. Preliminary work is being done to define specifications and capabilities for a more advanced computational architecture. This paper describes the state of current SCALE development activities and plans for future development. With the release of SCALE 6.1 in 2010, a new phase of evolutionary development will be available to SCALE users within the TRITON and NEWT modules. The SCALE (Standardized Computer Analyses for Licensing Evaluation) code system developed by Oak Ridge National Laboratory (ORNL) provides a comprehensive and integrated package of codes and nuclear data for a wide range of applications in criticality safety, reactor physics, shielding, isotopic depletion and decay, and sensitivity/uncertainty (S/U) analysis. Over the last three years, since the release of version 5.1 in 2006, several important new codes have been introduced within SCALE, and significant advances applied to existing codes. Many of these new features became available with the release of SCALE 6.0 in early 2009. However, beginning with SCALE 6.1, a first generation of parallel computing is being introduced. In addition to near-term improvements, a plan for longer term SCALE enhancement

  15. Special parallel processing workshop

    SciTech Connect

    1994-12-01

    This report contains viewgraphs from the Special Parallel Processing Workshop. These viewgraphs deal with topics such as parallel processing performance, message passing, queue structure, and other basic concepts dealing with parallel processing.

  16. Large area LED package

    NASA Astrophysics Data System (ADS)

    Goullon, L.; Jordan, R.; Braun, T.; Bauer, J.; Becker, F.; Hutter, M.; Schneider-Ramelow, M.; Lang, K.-D.

    2015-03-01

    Solid state lighting using LED-dies is a rapidly growing market. LED-dies with the required increasing luminous flux per chip area produce a great deal of heat. Therefore an appropriate thermal management is required for general lighting with LED-dies. One way to avoid overheating and shorter lifetime is the use of many small LED-dies (down to 70 μm edge length) on a large-area heat sink, so that heat can spread into a large area while light also appears over a larger area. Handling such small LED-dies is very difficult because they are too small to be picked with common equipment. Therefore a new concept called collective transfer bonding using a temporary carrier chip was developed. A further benefit of this new technology is the high-precision assembly as well as the plane-parallel assembly of the LED-dies, which is necessary for wire bonding. It has been shown that one hundred functional LED-dies were transferred and soldered at the same time. After the assembly, a cost-effective, established PCB technology was applied to produce a large-area light source consisting of many small LED-dies electrically connected on a PCB-substrate. The top contacts of the LED-dies were realized by laminating an adhesive copper sheet followed by LDI structuring as known from PCB-via-technology. This assembly can be completed by adding converting and light-forming optical elements. In summary, two technologies based on standard SMD and PCB technology have been developed for panel-level LED packaging up to an area size of 610 x 457 mm2.

  17. Using the scalable nonlinear equations solvers package

    SciTech Connect

    Gropp, W.D.; McInnes, L.C.; Smith, B.F.

    1995-02-01

    SNES (Scalable Nonlinear Equations Solvers) is a software package for the numerical solution of large-scale systems of nonlinear equations on both uniprocessors and parallel architectures. SNES also contains a component for the solution of unconstrained minimization problems, called SUMS (Scalable Unconstrained Minimization Solvers). Newton-like methods, which are known for their efficiency and robustness, constitute the core of the package. As part of the multilevel PETSc library, SNES incorporates many features and options from other parts of PETSc. In keeping with the spirit of the PETSc library, the nonlinear solution routines are data-structure-neutral, making them flexible and easily extensible. This user's guide contains a detailed description of uniprocessor usage of SNES, with some added comments regarding multiprocessor usage. At this time the parallel version is undergoing refinement and extension, as we work toward a common interface for the uniprocessor and parallel cases. Thus, forthcoming versions of the software will contain additional features, and changes to the parallel interface may result at any time. The new parallel version will employ the MPI (Message Passing Interface) standard for interprocessor communication. Since most of these details will be hidden, users will need to perform only minimal message-passing programming.
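
    For orientation, the sketch below shows the Newton iteration that forms the core of Newton-like nonlinear solvers such as those in SNES. It is a generic serial illustration, not the SNES/PETSc interface; the sample residual function, Jacobian, and tolerances are assumptions.

    ```python
    import numpy as np

    def newton(F, J, x0, tol=1e-10, max_iter=20):
        """Basic Newton iteration for F(x) = 0 with an explicit Jacobian J."""
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            r = F(x)
            if np.linalg.norm(r) < tol:
                break
            dx = np.linalg.solve(J(x), -r)   # in a package this linear solve is itself
            x = x + dx                       # typically an (often parallel) iterative solve
        return x

    # Example nonlinear system: x0^2 + x1^2 = 4 and x0 - x1 = 1 (illustrative)
    F = lambda x: np.array([x[0] ** 2 + x[1] ** 2 - 4.0, x[0] - x[1] - 1.0])
    J = lambda x: np.array([[2 * x[0], 2 * x[1]], [1.0, -1.0]])
    print(newton(F, J, [2.0, 1.0]))
    ```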

  18. Packaged die heater

    SciTech Connect

    Spielberger, Richard; Ohme, Bruce Walker; Jensen, Ronald J.

    2011-06-21

    A heater for heating packaged die for burn-in and heat testing is described. The heater may be a ceramic-type heater with a metal filament. The heater may be incorporated into the integrated circuit package as an additional ceramic layer of the package, or may be an external heater placed in contact with the package to heat the die. Many different types of integrated circuit packages may be accommodated. The method provides increased energy efficiency for heating the die while reducing temperature stresses on testing equipment. The method allows the use of multiple heaters to heat die to different temperatures. Faulty die may be heated to weaken die attach material to facilitate removal of the die. The heater filament or a separate temperature thermistor located in the package may be used to accurately measure die temperature.

  19. Smart packaging for photonics

    SciTech Connect

    Smith, J.H.; Carson, R.F.; Sullivan, C.T.; McClellan, G.; Palmer, D.W.

    1997-09-01

    Unlike silicon microelectronics, photonics packaging has proven to be low yield and expensive. One approach to make photonics packaging practical for low cost applications is the use of "smart" packages. "Smart" in this context means the ability of the package to actuate a mechanical change based on either a measurement taken by the package itself or by an input signal based on an external measurement. One avenue of smart photonics packaging, the use of polysilicon micromechanical devices integrated with photonic waveguides, was investigated in this research (LDRD 3505.340). The integration of optical components with polysilicon surface micromechanical actuation mechanisms shows significant promise for signal switching, fiber alignment, and optical sensing applications. The optical and stress properties of the oxides and nitrides considered for optical waveguides and how they are integrated with micromechanical devices were investigated.

  20. Stratimikos Wrapper Package

    Energy Science and Technology Software Center (ESTSC)

    2006-08-22

    Stratimikos is a small package of C++ wrappers for linear solver and preconditioning functionality exposed through Thyra interfaces. This package makes it possible to aggregate all of the general linear solver capability from the packages Amesos, AztecOO, Belos, Ifpack, ML, and others into a simple-to-use, parameter-list-driven interface to linear solvers. This initial version of Stratimikos contains just one utility class for building linear solvers and preconditioners out of Epetra-based linear operators.

  1. The ZOOM minimization package

    SciTech Connect

    Fischler, Mark S.; Sachs, D.; /Fermilab

    2004-11-01

    A new object-oriented Minimization package is available for distribution in the same manner as CLHEP. This package, designed for use in HEP applications, has all the capabilities of Minuit, but is a re-write from scratch, adhering to modern C++ design principles. A primary goal of this package is extensibility in several directions, so that its capabilities can be kept fresh with as little maintenance effort as possible. This package is distinguished by the priority that was assigned to C++ design issues, and the focus on producing an extensible system that will resist becoming obsolete.

  2. Construction of a lncRNA-PCG bipartite network and identification of cancer-related lncRNAs: a case study in prostate cancer.

    PubMed

    Liu, Yongjing; Zhang, Rui; Qiu, Fujun; Li, Kening; Zhou, Yuanshuai; Shang, Desi; Xu, Yan

    2015-02-01

    LncRNAs are involved in a wide range of biological processes, such as chromatin remodeling, mRNA splicing, mRNA editing and translation. They can either upregulate or downregulate gene expression, and play key roles in the progression of various human cancers. However, the functional mechanisms of most lncRNAs remain unknown at present. This paper aims to further the understanding of lncRNAs by proposing a new method to obtain protein-coding genes (PCGs) regulated by lncRNAs, thus identifying candidate cancer-related lncRNAs using bioinformatics approaches. This study presents a method based on sample correlation, which is applied to the expression profiles of lncRNAs and PCGs in prostate cancer in combination with protein interaction data to build a lncRNA-PCG bipartite network. Candidate cancer-related lncRNAs were extracted from the bipartite network by using a random walk. Fourteen prostate cancer-related lncRNAs were acquired from the LncRNADisease database and MNDR, of which 6 lncRNAs were present in our network. As one of the seed nodes, ENSG00000234741 achieved the highest score among them. The other two cancer-related lncRNAs (ENSG00000225937 and ENSG00000236830) were ranked within the top 30. In addition, the top candidate lncRNA ENSG00000261777 shares an intron with DDX19, and interacts with IGF2 P1, indicating its involvement in prostate cancer. In this paper, we described a new method for predicting candidate lncRNA targets, and obtained candidate therapeutic targets using this method. We hope that this study will bring a new perspective in future lncRNA studies. PMID:25385343
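
    The random-walk ranking step described above can be illustrated on a toy lncRNA-PCG bipartite graph, as in the Python sketch below (random walk with restart). The node names, edges, seed node, and restart probability are invented for illustration and are not the data or parameters used in the study.

    ```python
    import numpy as np

    # Invented toy bipartite graph: lncRNA nodes on one side, PCG nodes on the other.
    edges = [("lnc1", "PCG_A"), ("lnc1", "PCG_B"), ("lnc2", "PCG_B"),
             ("lnc2", "PCG_C"), ("lnc3", "PCG_C")]
    nodes = sorted({n for e in edges for n in e})
    idx = {n: i for i, n in enumerate(nodes)}

    # Column-normalised adjacency (transition) matrix
    W = np.zeros((len(nodes), len(nodes)))
    for u, v in edges:
        W[idx[u], idx[v]] = W[idx[v], idx[u]] = 1.0
    W = W / W.sum(axis=0, keepdims=True)

    restart = 0.3                              # restart probability (assumed)
    p0 = np.zeros(len(nodes))
    p0[idx["lnc1"]] = 1.0                      # seed, e.g. a known cancer-related lncRNA
    p = p0.copy()
    for _ in range(1000):                      # iterate to the stationary distribution
        p_next = (1 - restart) * (W @ p) + restart * p0
        if np.abs(p_next - p).sum() < 1e-12:
            break
        p = p_next

    for n in sorted(nodes, key=lambda n: -p[idx[n]]):
        print(f"{n}: {p[idx[n]]:.4f}")         # higher score = closer to the seed
    ```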

  3. Parallel rendering techniques for massively parallel visualization

    SciTech Connect

    Hansen, C.; Krogh, M.; Painter, J.

    1995-07-01

    As the resolution of simulation models increases, scientific visualization algorithms which take advantage of the large memory and parallelism of Massively Parallel Processors (MPPs) are becoming increasingly important. For large applications, rendering on the MPP tends to be preferable to rendering on a graphics workstation due to the MPP's abundant resources: memory, disk, and numerous processors. The challenge becomes developing algorithms that can exploit these resources while minimizing overhead, typically communication costs. This paper will describe recent efforts in parallel rendering for polygonal primitives as well as parallel volumetric techniques. This paper presents rendering algorithms, developed for massively parallel processors (MPPs), for polygonal, sphere, and volumetric data. The polygon algorithm uses a data parallel approach whereas the sphere and volume renderers use a MIMD approach. Implementations for these algorithms are presented for the Thinking Machines Corporation CM-5 MPP.

  4. Parallel algorithms and architectures

    SciTech Connect

    Albrecht, A.; Jung, H.; Mehlhorn, K.

    1987-01-01

    The contents of this book are the following: Preparata: Deterministic simulation of idealized parallel computers on more realistic ones; Convex hull of randomly chosen points from a polytope; Dataflow computing; Parallel in sequence; Towards the architecture of an elementary cortical processor; Parallel algorithms and static analysis of parallel programs; Parallel processing of combinatorial search; Communications; An O(n log n) cost parallel algorithm for the single function coarsest partition problem; Systolic algorithms for computing the visibility polygon and triangulation of a polygonal region; RELACS - A recursive layout computing system; and Parallel linear conflict-free subtree access.

  5. Developing Large CAI Packages.

    ERIC Educational Resources Information Center

    Reed, Mary Jac M.; Smith, Lynn H.

    1983-01-01

    When developing large computer-assisted instructional (CAI) courseware packages, it is suggested that there be more attentive planning to the overall package design before actual lesson development is begun. This process has been simplified by modifying the systems approach used to develop single CAI lessons, followed by planning for the…

  6. WASTE PACKAGE TRANSPORTER DESIGN

    SciTech Connect

    D.C. Weddle; R. Novotny; J. Cron

    1998-09-23

    The purpose of this Design Analysis is to develop preliminary design of the waste package transporter used for waste package (WP) transport and related functions in the subsurface repository. This analysis refines the conceptual design that was started in Phase I of the Viability Assessment. This analysis supports the development of a reliable emplacement concept and a retrieval concept for license application design. The scope of this analysis includes the following activities: (1) Assess features of the transporter design and evaluate alternative design solutions for mechanical components. (2) Develop mechanical equipment details for the transporter. (3) Prepare a preliminary structural evaluation for the transporter. (4) Identify and recommend the equipment design for waste package transport and related functions. (5) Investigate transport equipment interface tolerances. This analysis supports the development of the waste package transporter for the transport, emplacement, and retrieval of packaged radioactive waste forms in the subsurface repository. Once the waste containers are closed and accepted, the packaged radioactive waste forms are termed waste packages (WP). This terminology was finalized as this analysis neared completion; therefore, the term disposal container is used in several references (i.e., the System Description Document (SDD)) (Ref. 5.6). In this analysis and the applicable reference documents, the term ''disposal container'' is synonymous with ''waste package''.

  7. The West: Curriculum Package.

    ERIC Educational Resources Information Center

    Public Broadcasting Service, Alexandria, VA.

    This document consists of the printed components only of a PBS curriculum package intended to be used with the 9-videotape PBS documentary series entitled "The West." The complete curriculum package includes a teacher's guide, lesson plans, a student guide, audio tapes, a video index, and promotional poster. The teacher's guide and lesson plans…

  8. NRF TRIGA packaging

    SciTech Connect

    Clements, M.D.

    1995-11-01

    Training Reactor Isotopes, General Atomics (TRIGA®) Reactors are in use at four US Department of Energy (DOE) complex facilities and at least 23 university, commercial, or government facilities. The development of the Neutron Radiography Facility (NRF) TRIGA packaging system began in October 1993. The Hanford Site NRF is being shut down and requires an operationally user-friendly transportation and storage packaging system for removal of the TRIGA fuel elements. The NRF TRIGA packaging system is designed to remotely remove the fuel from the reactor and transport the fuel to interim storage (up to 50 years) on the Hanford Site. The packaging system consists of a cask and an overpack. The overpack is used only for transport and is not necessary for storage. Based upon the cask's small size and light weight, small TRIGA reactors will find it versatile for numerous refueling and fuel storage needs. The NRF TRIGA packaging design also provides the basis for developing a certifiable and economical packaging system for other TRIGA reactor facilities. The small size of the NRF TRIGA cask also accommodates placing the cask into a larger certified packaging for offsite transport. The Westinghouse Hanford Company NRF TRIGA packaging, as described herein, can serve other DOE sites for their onsite use, and the design can be adapted to serve university reactor facilities, handling a variety of fuel payloads.

  9. Modular electronics packaging system

    NASA Technical Reports Server (NTRS)

    Hunter, Don J. (Inventor)

    2001-01-01

    A modular electronics packaging system includes multiple packaging slices that are mounted horizontally to a base structure. The slices interlock to provide added structural support. Each packaging slice includes a rigid and thermally conductive housing having four side walls that together form a cavity to house an electronic circuit. The chamber is enclosed on one end by an end wall, or web, that isolates the electronic circuit from a circuit in an adjacent packaging slice. The web also provides a thermal path between the electronic circuit and the base structure. Each slice also includes a mounting bracket that connects the packaging slice to the base structure. Four guide pins protrude from the slice into four corresponding receptacles in an adjacent slice. A locking element, such as a set screw, protrudes into each receptacle and interlocks with the corresponding guide pin. A conduit is formed in the slice to allow electrical connection to the electronic circuit.

  10. Parallel CFD design on network-based computer

    NASA Technical Reports Server (NTRS)

    Cheung, Samson

    1995-01-01

    Combining multiple engineering workstations into a network-based heterogeneous parallel computer allows application of aerodynamic optimization with advanced computational fluid dynamics codes, which can be computationally expensive on mainframe supercomputers. This paper introduces a nonlinear quasi-Newton optimizer designed for this network-based heterogeneous parallel computing environment utilizing software called Parallel Virtual Machine. This paper will introduce the methodology behind coupling a Parabolized Navier-Stokes flow solver to the nonlinear optimizer. This parallel optimization package is applied to reduce the wave drag of a body of revolution and a wing/body configuration, with results of 5% to 6% drag reduction.
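
    The quasi-Newton outer loop described above can be sketched with an off-the-shelf optimizer. In the Python example below, the drag objective is a cheap analytic stand-in for a Parabolized Navier-Stokes evaluation, and the three-parameter shape vector and use of SciPy's BFGS routine are assumptions for illustration, not the optimizer or solver coupling developed in the paper.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def drag(shape_params):
        """Surrogate objective; each call would normally run the flow solver,
        possibly distributed over networked workstations (e.g. via PVM)."""
        target = np.array([0.2, -0.1, 0.05])           # invented optimum
        return float(np.sum((shape_params - target) ** 2) + 1.0)

    x0 = np.zeros(3)                                   # initial shape parameters (assumed)
    result = minimize(drag, x0, method="BFGS")         # quasi-Newton outer loop
    print(result.x, result.fun)
    ```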

  11. CFD Optimization on Network-Based Parallel Computer System

    NASA Technical Reports Server (NTRS)

    Cheung, Samson H.; Holst, Terry L. (Technical Monitor)

    1994-01-01

    Combining multiple engineering workstations into a network-based heterogeneous parallel computer allows application of aerodynamic optimization with advanced computational fluid dynamics codes, which is computationally expensive on a mainframe supercomputer. This paper introduces a nonlinear quasi-Newton optimizer designed for this network-based heterogeneous parallel computer, running on software called Parallel Virtual Machine. This paper will introduce the methodology behind coupling a Parabolized Navier-Stokes flow solver to the nonlinear optimizer. This parallel optimization package has been applied to reduce the wave drag of a body of revolution and a wing/body configuration, with results of 5% to 6% drag reduction.

  12. Packaging Concerns/Techniques for Large Devices

    NASA Technical Reports Server (NTRS)

    Sampson, Michael J.

    2009-01-01

    This slide presentation reviews packaging challenges and options for electronic parts. The presentation includes information about non-hermetic packages, space challenges for packaging and complex package variations.

  13. MPP parallel forth

    NASA Technical Reports Server (NTRS)

    Dorband, John E.

    1987-01-01

    Massively Parallel Processor (MPP) Parallel FORTH is a derivative of FORTH-83 and Unified Software Systems' Uni-FORTH. The extension of FORTH into the realm of parallel processing on the MPP is described. With few exceptions, Parallel FORTH was made to follow the description of Uni-FORTH as closely as possible. Likewise, the parallel FORTH extensions were designed to be as philosophically similar to serial FORTH as possible. The MPP hardware characteristics, as viewed by the FORTH programmer, are discussed. A description is then presented of how parallel FORTH is implemented on the MPP.

  14. Seawater Chemistry Package

    Energy Science and Technology Software Center (ESTSC)

    2005-11-23

    SeaChem Seawater Chemistry package provides routines to calculate pH, carbonate chemistry, density, and other quantities for seawater, based on the latest community standards. The chemistry is adapted from fortran routines provided by the OCMIP3/NOCES project, details of which are available at http://www.ipsl.jussieu.fr/OCMIP/. The SeaChem package can generate Fortran subroutines as well as Python wrappers for those routines. Thus the same code can be used by Python or Fortran analysis packages and Fortran ocean models alike.

  15. TSF Interface Package

    SciTech Connect

    2004-03-01

    A collection of packages of classes for interfacing to sparse and dense matrices, vectors and graphs, and to linear operators. TSF (via TSFCore, TSFCoreUtils and TSFExtended) provides the application programmer interface to any number of solvers, linear algebra libraries and preconditioner packages, also providing a sophisticated technique for combining multiple packages to solve a single problem. TSF provides a collection of abstract base classes that define the interfaces to abstract vector, matrix, and linear operator objects. By using abstract interfaces, users of TSF are not limited to any one concrete library and can in fact easily combine multiple libraries to solve a single problem.

  16. Optoelectronic packaging: A review

    SciTech Connect

    Carson, R.F.

    1993-09-01

    Optoelectronics and photonics hold great potential for high data-rate communication and computing. Wide use in computing applications was limited first by device technologies and now suffers from the need for high-precision, mass-produced packaging. The use of photons as a medium of communication and control implies a unique set of packaging constraints not present in traditional telecommunications applications. The state of the art in optoelectronic packaging is now driven by microelectronic techniques that have potential for low-cost, high-volume manufacturing.

  17. Distribution of H2O and CO2 in the inner coma of 67P/CG as observed by VIRTIS-M onboard Rosetta

    NASA Astrophysics Data System (ADS)

    Capaccioni, F.

    2015-10-01

    VIRTIS (Visible, Infrared and Thermal Imaging Spectrometers) is a dual channel spectrometer; VIRTIS-M (M for Mapper) is a hyper-spectral imager covering a wide spectral range with two detectors: a CCD (VIS) ranging from 0.25 through 1.0 μm and an HgCdTe detector (IR) covering the 1.0 through 5.1 μm region. VIRTIS-M uses a slit and a scan mirror to generate images with spatial resolution of 250 μrad over a FOV of 64 mrad. The second channel is VIRTIS-H (H for High resolution), a point spectrometer with high spectral resolution (λ/Δλ=3000@3 μm) in the range 2-5 μm [1].The VIRTIS instrument has been used to investigate the molecular composition of the coma of 67P/CG by observing resonant fluorescent excitation in the 2 to 5 μm spectral region. The spectrum consists of emission bands superimposed on a background continuum. The strongest features are the bands of H2O at 2.7 μm and the CO2 band at 4.27 μm [1]. The high spectral resolution of VIRTIS-H obtains a detailed description of the fluorescent bands, while the mapping capability of VIRTIS-M extends the coverage in the spatial dimension to map and monitor the abundance of water and carbon dioxide in space and time. We have already reported [2,3,4] some preliminary observations by VIRTIS of H2O and CO2 in the coma. In the present work we perform a systematic mapping of the distribution and variability of these molecules using VIRTIS-M measurements of their band areas. All the spectra were carefully selected to avoid contamination due to nucleus radiance. A median filter is applied on the spatial dimensions of each data cube to minimise the pixel-to-pixel residual variability. This is at the expense of some reduction in the spatial resolution, which is still in the order of few tens of metres and thus adequate for the study of the spatial distribution of the volatiles. Typical spectra are shown in Figure 1

  18. Parallel flow diffusion battery

    DOEpatents

    Yeh, Hsu-Chi; Cheng, Yung-Sung

    1984-08-07

    A parallel flow diffusion battery for determining the mass distribution of an aerosol has a plurality of diffusion cells mounted in parallel to an aerosol stream, each diffusion cell including a stack of mesh wire screens of different density.

  19. Parallel flow diffusion battery

    DOEpatents

    Yeh, H.C.; Cheng, Y.S.

    1984-01-01

    A parallel flow diffusion battery for determining the mass distribution of an aerosol has a plurality of diffusion cells mounted in parallel to an aerosol stream, each diffusion cell including a stack of mesh wire screens of different density.

  20. A survey of packages for large linear systems

    SciTech Connect

    Wu, Kesheng; Milne, Brent

    2000-02-11

    This paper evaluates portable software packages for the iterative solution of very large sparse linear systems on parallel architectures. While we cannot hope to tell individual users which package will best suit their needs, we do hope that our systematic evaluation provides essential unbiased information about the packages and that the evaluation process may serve as an example of how to evaluate these packages. The information contained here includes feature comparisons, usability evaluations, and performance characterizations. This review is primarily focused on self-contained packages that can be easily integrated into an existing program and are capable of computing solutions to very large sparse linear systems of equations. More specifically, it concentrates on portable parallel linear system solution packages that provide iterative solution schemes and related preconditioning schemes, because iterative methods are more frequently used than competing schemes such as direct methods. The eight packages evaluated are: Aztec, BlockSolve, ISIS++, LINSOL, P-SPARSLIB, PARASOL, PETSc, and PINEAPL. Among the eight portable parallel iterative linear system solvers reviewed, we recommend PETSc and Aztec for most application programmers because they have well-designed user interfaces, extensive documentation, and very responsive user support. Both PETSc and Aztec are written in the C language and are callable from Fortran. For those users interested in using Fortran 90, PARASOL is a good alternative. ISIS++ is a good alternative for those who prefer the C++ language. Both PARASOL and ISIS++ are relatively new and are continuously evolving. Thus their user interfaces may change. In general, those packages written in Fortran 77 are more cumbersome to use because the user may need to directly deal with a number of arrays of varying sizes. Languages like C++ and Fortran 90 offer more convenient data encapsulation mechanisms which make it easier to implement a clean and intuitive user

  1. Packaging for Posterity.

    ERIC Educational Resources Information Center

    Sias, Jim

    1990-01-01

    A project in which students designed environmentally responsible food packaging is described. The problem definition; research on topics such as waste paper, plastic, metal, glass, incineration, recycling, and consumer preferences; and the presentation design are provided. (KR)

  2. Parallel simulation today

    NASA Technical Reports Server (NTRS)

    Nicol, David; Fujimoto, Richard

    1992-01-01

    This paper surveys topics that presently define the state of the art in parallel simulation. Included in the tutorial are discussions on new protocols, mathematical performance analysis, time parallelism, hardware support for parallel simulation, load balancing algorithms, and dynamic memory management for optimistic synchronization.

  3. Waste Package Program

    SciTech Connect

    Culbreth, W.; Ladkany, S.

    1991-07-21

    This is a progress report on the waste package research program at the University of Nevada, Las Vegas. The report provides an overview of the program's activities from January 1991 to June 1991, including task assignments for personnel, equipment acquisitions, and staff meetings and travel on behalf of the project. Also included is an abstract on the structural analysis of the waste package container design. (MB)

  4. Battery packaging - Technology review

    SciTech Connect

    Maiser, Eric

    2014-06-16

    This paper gives a brief overview of battery packaging concepts, their specific advantages and drawbacks, as well as the importance of packaging for performance and cost. Production processes, scaling and automation are discussed in detail to reveal opportunities for cost reduction. Module standardization as an additional path to drive down cost is introduced. A comparison to electronics and photovoltaics production shows 'lessons learned' in those related industries and how they can accelerate learning curves in battery production.

  5. CH Packaging Program Guidance

    SciTech Connect

    Washington TRU Solutions LLC

    2002-03-04

    The purpose of this document is to provide the technical requirements for preparation for use, operation, inspection, and maintenance of a Transuranic Package Transporter Model II (TRUPACT-II), a HalfPACT Shipping Package, and directly related components. This document complies with the minimum requirements as specified in TRUPACT-II Safety Analysis Report for Packaging (SARP), HalfPACT SARP, and Nuclear Regulatory Commission (NRC) Certificates of Compliance (C of C) 9218 and 9279, respectively. In the event there is a conflict between this document and the SARP or C of C, the SARP and/or C of C shall govern. C of Cs state: ''each package must be prepared for shipment and operated in accordance with the procedures described in Chapter 7.0, Operating Procedures, of the application.'' They further state: ''each package must be tested and maintained in accordance with the procedures described in Chapter 8.0, Acceptance Tests and Maintenance Program of the Application.'' Chapter 9.0 of the SARP charges the WIPP Management and Operation (M&O) contractor with assuring packaging is used in accordance with the requirements of the C of C. Because the packaging is NRC-approved, users need to be familiar with 10 CFR 71.11. Any time a user suspects or has indications that the conditions of approval in the C of C were not met, the Carlsbad Field Office (CBFO) shall be notified immediately. CBFO will evaluate the issue and notify the NRC if required. This document details the instructions to be followed to operate, maintain, and test the TRUPACT-II and HalfPACT packaging. The intent of these instructions is to standardize these operations. All users will follow these instructions or equivalent instructions that assure operations are safe and meet the requirements of the SARPs.

  6. CH Packaging Program Guidance

    SciTech Connect

    Washington TRU Solutions LLC

    2003-04-30

    The purpose of this document is to provide the technical requirements for preparation for use, operation, inspection, and maintenance of a Transuranic Package Transporter Model II (TRUPACT-II), a HalfPACT shipping package, and directly related components. This document complies with the minimum requirements as specified in the TRUPACT-II Safety Analysis Report for Packaging (SARP), HalfPACT SARP, and Nuclear Regulatory Commission (NRC) Certificates of Compliance (C of C) 9218 and 9279, respectively. In the event of a conflict between this document and the SARP or C of C, the C of C shall govern. The C of Cs state: ''each package must be prepared for shipment and operated in accordance with the procedures described in Chapter 7.0, Operating Procedures, of the application.'' They further state: ''each package must be tested and maintained in accordance with the procedures described in Chapter 8.0, Acceptance Tests and Maintenance Program of the Application.'' Chapter 9.0 of the SARP charges the WIPP management and operating (M&O) contractor with assuring packaging is used in accordance with the requirements of the C of C. Because the packaging is NRC-approved, users need to be familiar with 10 CFR 71.11. Any time a user suspects or has indications that the conditions of approval in the C of C were not met, the Carlsbad Field Office (CBFO) shall be notified immediately. CBFO will evaluate the issue and notify the NRC if required. This document provides the instructions to be followed to operate, maintain, and test the TRUPACT-II and HalfPACT packaging. The intent of these instructions is to standardize operations. All users will follow these instructions or equivalent instructions that assure operations are safe and meet the requirements of the SARPs.

  7. The ENSDF Java Package

    SciTech Connect

    Sonzogni, A.A.

    2005-05-24

    A package of computer codes has been developed to process and display nuclear structure and decay data stored in the ENSDF (Evaluated Nuclear Structure Data File) library. The codes were written in an object-oriented fashion using the Java language. This allows for easy implementation across multiple platforms as well as deployment on web pages. The structure of the different Java classes that make up the package is discussed, as well as several different implementations.

  8. Battery packaging - Technology review

    NASA Astrophysics Data System (ADS)

    Maiser, Eric

    2014-06-01

    This paper gives a brief overview of battery packaging concepts, their specific advantages and drawbacks, as well as the importance of packaging for performance and cost. Production processes, scaling and automation are discussed in detail to reveal opportunities for cost reduction. Module standardization as an additional path to drive down cost is introduced. A comparison to electronics and photovoltaics production shows "lessons learned" in those related industries and how they can accelerate learning curves in battery production.

  9. Comparative Packaging Study

    NASA Technical Reports Server (NTRS)

    Perchonok, Michele H.; Oziomek, Thomas V.

    2009-01-01

    Future long duration manned space flights beyond low earth orbit will require the food system to remain safe, acceptable and nutritious. Development of high barrier food packaging will enable this requirement by preventing the ingress and egress of gases and moisture. New high barrier food packaging materials have been identified through a trade study. Practical application of this packaging material within a shelf life test will allow for better determination of whether this material will allow the food system to meet given requirements after the package has undergone processing. The reason to conduct shelf life testing, using a variety of packaging materials, stems from the need to preserve food used for mission durations of several years. Chemical reactions that take place during longer durations may decrease food quality to a point where crew physical or psychological well-being is compromised. This can result in a reduction or loss of mission success. The rate of chemical reactions, including oxidative rancidity and staling, can be controlled by limiting the reactants, reducing the amount of energy available to drive the reaction, and minimizing the amount of water available. Water not only acts as a media for microbial growth, but also as a reactant and means by which two reactants may come into contact with each other. The objective of this study is to evaluate three packaging materials for potential use in long duration space exploration missions.

  10. Modular optimization code package: MOZAIK

    NASA Astrophysics Data System (ADS)

    Bekar, Kursat B.

    This dissertation addresses the development of a modular optimization code package, MOZAIK, for geometric shape optimization problems in nuclear engineering applications. MOZAIK's first mission, determining the optimal shape of the D2O moderator tank for the current and new beam tube configurations for the Penn State Breazeale Reactor's (PSBR) beam port facility, is used to demonstrate its capabilities and test its performance. MOZAIK was designed as a modular optimization sequence including three primary independent modules: the initializer, the physics and the optimizer, each having a specific task. By using fixed interface blocks among the modules, the code attains its two most important characteristics: generic form and modularity. The benefit of this modular structure is that the contents of the modules can be switched depending on the requirements of accuracy, computational efficiency, or compatibility with the other modules. Oak Ridge National Laboratory's discrete ordinates transport code TORT was selected as the transport solver in the physics module of MOZAIK, and two different optimizers, Min-max and Genetic Algorithms (GA), were implemented in the optimizer module of the code package. A distributed memory parallelism was also applied to MOZAIK via MPI (Message Passing Interface) to execute the physics module concurrently on a number of processors for various states in the same search. Moreover, dynamic scheduling was enabled to enhance load balance among the processors while running MOZAIK's physics module thus improving the parallel speedup and efficiency. In this way, the total computation time consumed by the physics module is reduced by a factor close to M, where M is the number of processors. This capability also encourages the use of MOZAIK for shape optimization problems in nuclear applications because many traditional codes related to radiation transport do not have parallel execution capability. A set of computational models based on the
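
    A minimal, illustrative sketch of the dynamic-scheduling idea described above, in which independent physics-module evaluations for different candidate states are farmed out to workers and collected as they finish, keeping the load balanced when run times vary. Assumptions: a Python process pool stands in for the MPI processes, and the physics_evaluation function and its cost model are invented for the sketch, not taken from MOZAIK.

        # Hypothetical sketch of dynamically scheduling independent physics evaluations,
        # in the spirit of MOZAIK's parallel physics module (not the actual MPI code).
        from concurrent.futures import ProcessPoolExecutor, as_completed
        import random, time

        def physics_evaluation(candidate_shape):
            """Stand-in for one transport calculation; run time varies per candidate."""
            time.sleep(random.uniform(0.01, 0.05))          # simulate uneven work
            return sum(x * x for x in candidate_shape)      # dummy objective value

        def evaluate_generation(candidates, max_workers=4):
            """Dynamically schedule candidate evaluations over a worker pool."""
            results = {}
            with ProcessPoolExecutor(max_workers=max_workers) as pool:
                futures = {pool.submit(physics_evaluation, c): i
                           for i, c in enumerate(candidates)}
                for fut in as_completed(futures):           # whichever worker finishes first
                    results[futures[fut]] = fut.result()
            return [results[i] for i in range(len(candidates))]

        if __name__ == "__main__":
            generation = [[random.random() for _ in range(3)] for _ in range(16)]
            print(evaluate_generation(generation))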

  11. Eclipse Parallel Tools Platform

    SciTech Connect

    Watson, Gregory; DeBardeleben, Nathan; Rasmussen, Craig

    2005-02-18

    Designing and developing parallel programs is an inherently complex task. Developers must choose from the many parallel architectures and programming paradigms that are available, and face a plethora of tools that are required to execute, debug, and analyze parallel programs in these environments. Few, if any, of these tools provide any degree of integration, or indeed any commonality in their user interfaces at all. This further complicates the parallel developer's task, hampering software engineering practices, and ultimately reducing productivity. One consequence of this complexity is that best practice in parallel application development has not advanced to the same degree as more traditional programming methodologies. The result is that there is currently no open-source, industry-strength platform that provides a highly integrated environment specifically designed for parallel application development. Eclipse is a universal tool-hosting platform that is designed to provide a robust, full-featured, commercial-quality, industry platform for the development of highly integrated tools. It provides a wide range of core services for tool integration that allow tool producers to concentrate on their tool technology rather than on platform specific issues. The Eclipse Integrated Development Environment is an open-source project that is supported by over 70 organizations, including IBM, Intel and HP. The Eclipse Parallel Tools Platform (PTP) plug-in extends the Eclipse framework by providing support for a rich set of parallel programming languages and paradigms, and a core infrastructure for the integration of a wide variety of parallel tools. The first version of the PTP is a prototype that only provides minimal functionality for parallel tool integration, support for a small number of parallel architectures, and basic

  12. PARAMESH: A Parallel Adaptive Mesh Refinement Community Toolkit

    NASA Technical Reports Server (NTRS)

    MacNeice, Peter; Olson, Kevin M.; Mobarry, Clark; deFainchtein, Rosalinda; Packer, Charles

    1999-01-01

    In this paper, we describe a community toolkit which is designed to provide parallel support with adaptive mesh capability for a large and important class of computational models, those using structured, logically cartesian meshes. The package of Fortran 90 subroutines, called PARAMESH, is designed to provide an application developer with an easy route to extend an existing serial code which uses a logically cartesian structured mesh into a parallel code with adaptive mesh refinement. Alternatively, in its simplest use, and with minimal effort, it can operate as a domain decomposition tool for users who want to parallelize their serial codes, but who do not wish to use adaptivity. The package can provide them with an incremental evolutionary path for their code, converting it first to uniformly refined parallel code, and then later if they so desire, adding adaptivity.

  13. Some parallel algorithms on the four processor Cray X-MP4 supercomputer

    SciTech Connect

    Kincaid, D.R.; Oppe, T.C.

    1988-05-01

    Three numerical studies of parallel algorithms on a four processor Cray X-MP4 supercomputer are presented. These numerical experiments involve the following: a parallel version of ITPACKV 2C, a package for solving large sparse linear systems, a parallel version of the conjugate gradient method with line Jacobi preconditioning, and several parallel algorithms for computing the LU-factorization of dense matrices. 27 refs., 4 tabs.
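
    For context, a small serial NumPy sketch of preconditioned conjugate gradients with a point Jacobi (diagonal) preconditioner; the study's parallel ITPACKV and line-Jacobi variants distribute exactly these matrix-vector products and inner products across processors. This is an illustration only, not the Cray X-MP code.

        # Illustrative serial PCG with a Jacobi (diagonal) preconditioner; the parallel
        # versions discussed in the abstract distribute the matvec and dot products.
        import numpy as np

        def pcg_jacobi(A, b, tol=1e-8, max_iter=500):
            M_inv = 1.0 / np.diag(A)             # Jacobi preconditioner: M = diag(A)
            x = np.zeros_like(b)
            r = b - A @ x
            z = M_inv * r
            p = z.copy()
            rz = r @ z
            for k in range(max_iter):
                Ap = A @ p
                alpha = rz / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                if np.linalg.norm(r) < tol * np.linalg.norm(b):
                    return x, k + 1
                z = M_inv * r
                rz_new = r @ z
                p = z + (rz_new / rz) * p
                rz = rz_new
            return x, max_iter

        if __name__ == "__main__":
            n = 100                               # 1-D Laplacian test problem
            A = (np.diag(2.0 * np.ones(n))
                 + np.diag(-np.ones(n - 1), 1)
                 + np.diag(-np.ones(n - 1), -1))
            b = np.ones(n)
            x, iters = pcg_jacobi(A, b)
            print("converged in", iters, "iterations; residual =",
                  np.linalg.norm(b - A @ x))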

  14. Hypertext-based computer vision teaching packages

    NASA Astrophysics Data System (ADS)

    Marshall, A. David

    1994-10-01

    The World Wide Web Initiative has provided a means for providing hypertext and multimedia based information across the whole INTERNET. Many applications have been developed on such http servers. At Cardiff we have developed an http hypertext-based multimedia server, the Cardiff Information Server, using the widely available Mosaic system. The server provides a variety of information ranging from the provision of teaching modules, on-line documentation, and timetables for departmental activities to more light-hearted hobby interests. One important and novel development to the server has been the development of courseware facilities. This ranges from the provision of on-line lecture notes, exercises and their solutions to more interactive teaching packages. A variety of disciplines have benefited, notably Computer Vision and Image Processing, but also C programming, X Windows, Computer Graphics and Parallel Computing. This paper will address the issues of the implementation of the Computer Vision and Image Processing packages, the advantages gained from using a hypertext based system and also will relate practical experiences of using the packages in a class environment. The paper addresses issues of how best to provide information in such a hypertext based system and how interactive image processing packages can be developed and integrated into courseware. The suite of tools developed facilitates a flexible and powerful courseware package that has proved popular in the classroom and over the Internet. The paper will also detail many future developments we see possible. One of the key points raised in the paper is that Mosaic's hypertext language (html) is extremely powerful and yet relatively straightforward to use. It is also possible to link in Unix calls so that programs and shells can be executed. This provides a powerful suite of utilities that can be exploited to develop many packages.

  15. CH Packaging Program Guidance

    SciTech Connect

    None, None

    2009-06-01

    The purpose of this document is to provide the technical requirements for preparation for use, operation, inspection, and maintenance of a Transuranic Package Transporter Model II (TRUPACT-II), a HalfPACT shipping package, and directly related components. This document complies with the minimum requirements as specified in the TRUPACT-II Safety Analysis Report for Packaging (SARP), HalfPACT SARP, and U.S. Nuclear Regulatory Commission (NRC) Certificates of Compliance (C of C) 9218 and 9279, respectively. In the event of a conflict between this document and the SARP or C of C, the C of C shall govern. The C of Cs state: "each package must be prepared for shipment and operated in accordance with the procedures described in Chapter 7.0, Operating Procedures, of the application." They further state: "each package must be tested and maintained in accordance with the procedures described in Chapter 8.0, Acceptance Tests and Maintenance Program of the Application." Chapter 9.0 of the SARP charges the U.S. Department of Energy (DOE) or the Waste Isolation Pilot Plant (WIPP) management and operating (M&O) contractor with assuring packaging is used in accordance with the requirements of the C of C. Because the packaging is NRC-approved, users need to be familiar with Title 10 Code of Federal Regulations (CFR) §71.8. Any time a user suspects or has indications that the conditions of approval in the C of C were not met, the Carlsbad Field Office (CBFO) shall be notified immediately. The CBFO will evaluate the issue and notify the NRC if required. In accordance with 10 CFR Part 71, certificate holders, packaging users, and contractors or subcontractors who use, design, fabricate, test, maintain, or modify the packaging shall post copies of (1) 10 CFR Part 21 regulations, (2) Section 206 of the Energy Reorganization Act of 1974, and (3) NRC Form 3, Notice to Employees. These documents must be posted in a conspicuous location where the activities subject to these regulations are

  16. CH Packaging Program Guidance

    SciTech Connect

    None, None

    2008-09-11

    The purpose of this document is to provide the technical requirements for preparation for use, operation, inspection, and maintenance of a Transuranic Package Transporter Model II (TRUPACT-II), a HalfPACT shipping package, and directly related components. This document complies with the minimum requirements as specified in the TRUPACT-II Safety Analysis Report for Packaging (SARP), HalfPACT SARP, and U.S. Nuclear Regulatory Commission (NRC) Certificates of Compliance (C of C) 9218 and 9279, respectively. In the event of a conflict between this document and the SARP or C of C, the C of C shall govern. The C of Cs state: "each package must be prepared for shipment and operated in accordance with the procedures described in Chapter 7.0, Operating Procedures, of the application." They further state: "each package must be tested and maintained in accordance with the procedures described in Chapter 8.0, Acceptance Tests and Maintenance Program of the Application." Chapter 9.0 of the SARP charges the U.S. Department of Energy (DOE) or the Waste Isolation Pilot Plant (WIPP) management and operating (M&O) contractor with assuring packaging is used in accordance with the requirements of the C of C. Because the packaging is NRC-approved, users need to be familiar with Title 10 Code of Federal Regulations (CFR) §71.8. Any time a user suspects or has indications that the conditions of approval in the C of C were not met, the Carlsbad Field Office (CBFO) shall be notified immediately. The CBFO will evaluate the issue and notify the NRC if required. In accordance with 10 CFR Part 71, certificate holders, packaging users, and contractors or subcontractors who use, design, fabricate, test, maintain, or modify the packaging shall post copies of (1) 10 CFR Part 21 regulations, (2) Section 206 of the Energy Reorganization Act of 1974, and (3) NRC Form 3, Notice to Employees. These documents must be posted in a conspicuous location where the activities subject to these regulations are

  17. RH Packaging Program Guidance

    SciTech Connect

    Washington TRU Solutions LLC

    2008-01-12

    The purpose of this program guidance document is to provide the technical requirements for use, operation, inspection, and maintenance of the RH-TRU 72-B Waste Shipping Package (also known as the "RH-TRU 72-B cask") and directly related components. This document complies with the requirements as specified in the RH-TRU 72-B Safety Analysis Report for Packaging (SARP), and Nuclear Regulatory Commission (NRC) Certificate of Compliance (C of C) 9212. If there is a conflict between this document and the SARP and/or C of C, the C of C shall govern. The C of C states: "...each package must be prepared for shipment and operated in accordance with the procedures described in Chapter 7.0, Operating Procedures, of the application." It further states: "...each package must be tested and maintained in accordance with the procedures described in Chapter 8.0, Acceptance Tests and Maintenance Program of the Application." Chapter 9.0 of the SARP tasks the Waste Isolation Pilot Plant (WIPP) Management and Operating (M&O) Contractor with assuring the packaging is used in accordance with the requirements of the C of C. Because the packaging is NRC-approved, users need to be familiar with Title 10 Code of Federal Regulations (CFR) §71.8, "Deliberate Misconduct." Any time a user suspects or has indications that the conditions of approval in the C of C were not met, the U.S. Department of Energy (DOE) Carlsbad Field Office (CBFO) shall be notified immediately. The CBFO will evaluate the issue and notify the NRC if required. In accordance with 10 CFR Part 71, "Packaging and Transportation of Radioactive Material," certificate holders, packaging users, and contractors or subcontractors who use, design, fabricate, test, maintain, or modify the packaging shall post copies of (1) 10 CFR Part 21, "Reporting of Defects and Noncompliance," regulations, (2) Section 206 of the Energy Reorganization Act of 1974, and (3) NRC Form 3, Notice to Employees. These documents must be posted in a

  18. RH Packaging Program Guidance

    SciTech Connect

    Washington TRU Solutions LLC

    2006-11-07

    The purpose of this program guidance document is to provide the technical requirements for use, operation, inspection, and maintenance of the RH-TRU 72-B Waste Shipping Package and directly related components. This document complies with the requirements as specified in the RH-TRU 72-B Safety Analysis Report for Packaging (SARP), and Nuclear Regulatory Commission (NRC) Certificate of Compliance (C of C) 9212. If there is a conflict between this document and the SARP and/or C of C, the C of C shall govern. The C of C states: "...each package must be prepared for shipment and operated in accordance with the procedures described in Chapter 7.0, Operating Procedures, of the application." It further states: "...each package must be tested and maintained in accordance with the procedures described in Chapter 8.0, Acceptance Tests and Maintenance Program of the Application." Chapter 9.0 of the SARP tasks the Waste Isolation Pilot Plant (WIPP) Management and Operating (M&O) Contractor with assuring the packaging is used in accordance with the requirements of the C of C. Because the packaging is NRC-approved, users need to be familiar with 10 Code of Federal Regulations (CFR) §71.8, "Deliberate Misconduct." Any time a user suspects or has indications that the conditions of approval in the C of C were not met, the U.S. Department of Energy (DOE) Carlsbad Field Office (CBFO) shall be notified immediately. CBFO will evaluate the issue and notify the NRC if required. In accordance with 10 CFR Part 71, "Packaging and Transportation of Radioactive Material," certificate holders, packaging users, and contractors or subcontractors who use, design, fabricate, test, maintain, or modify the packaging shall post copies of (1) 10 CFR Part 21, "Reporting of Defects and Noncompliance," regulations, (2) Section 206 of the Energy Reorganization Act of 1974, and (3) NRC Form 3, Notice to Employees. These documents must be posted in a conspicuous location where the activities subject to

  19. Parallel Atomistic Simulations

    SciTech Connect

    HEFFELFINGER,GRANT S.

    2000-01-18

    Algorithms developed to enable the use of atomistic molecular simulation methods with parallel computers are reviewed. Methods appropriate for bonded as well as non-bonded (and charged) interactions are included. While strategies for obtaining parallel molecular simulations have been developed for the full variety of atomistic simulation methods, molecular dynamics and Monte Carlo have received the most attention. Three main types of parallel molecular dynamics simulations have been developed: the replicated data decomposition, the spatial decomposition, and the force decomposition. For Monte Carlo simulations, parallel algorithms have been developed which can be divided into two categories: those which require a modified Markov chain and those which do not. Parallel algorithms developed for other simulation methods such as Gibbs ensemble Monte Carlo, grand canonical molecular dynamics, and Monte Carlo methods for protein structure determination are also reviewed, and issues such as how to measure parallel efficiency, especially in the case of parallel Monte Carlo algorithms with modified Markov chains, are discussed.
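
    A toy sketch of the replicated-data (atom) decomposition mentioned in the review, under the assumption of a simple Lennard-Jones pair potential with unit parameters: every worker holds the full coordinate array but computes forces only for its assigned slice of atoms. The spatial and force decompositions differ mainly in how this loop is partitioned; none of the names below come from the reviewed codes.

        # Toy replicated-data (atom) decomposition for pairwise forces; every worker
        # sees all coordinates but computes forces only for its own slice of atoms.
        import numpy as np
        from concurrent.futures import ProcessPoolExecutor

        def forces_for_slice(args):
            coords, start, stop = args
            n = len(coords)
            f = np.zeros((stop - start, 3))
            for i in range(start, stop):
                for j in range(n):
                    if i == j:
                        continue
                    rij = coords[i] - coords[j]
                    r2 = float(rij @ rij)
                    inv6 = 1.0 / r2 ** 3
                    # Lennard-Jones force (epsilon = sigma = 1), divided by r along rij
                    f[i - start] += 24.0 * (2.0 * inv6 * inv6 - inv6) / r2 * rij
            return start, f

        def compute_forces(coords, n_workers=4):
            n = len(coords)
            bounds = np.linspace(0, n, n_workers + 1, dtype=int)
            tasks = [(coords, bounds[k], bounds[k + 1]) for k in range(n_workers)]
            forces = np.zeros_like(coords)
            with ProcessPoolExecutor(max_workers=n_workers) as pool:
                for start, f in pool.map(forces_for_slice, tasks):
                    forces[start:start + len(f)] = f
            return forces

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            xyz = rng.uniform(0.8, 5.0, size=(64, 3))
            print(compute_forces(xyz).shape)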

  20. Packaging signals in alphaviruses.

    PubMed

    Frolova, E; Frolov, I; Schlesinger, S

    1997-01-01

    Alphaviruses synthesize large amounts of both genomic and subgenomic RNA in infected cells, but usually only the genomic RNA is packaged. This implies the existence of an encapsidation or packaging signal which would be responsible for selectivity. Previously, we had identified a region of the Sindbis virus genome that interacts specifically with the viral capsid protein. This 132-nucleotide (nt) fragment lies within the coding region of the nsP1 gene (nt 945 to 1076). We proposed that the 132-mer is important for capsid recognition and initiates the formation of the viral nucleocapsid. To study the encapsidation of Sindbis virus RNAs in infected cells, we designed a new assay that uses the self-replicating Sindbis virus genomes (replicons) which lack the viral structural protein genes and contain heterologous sequences under the control of the subgenomic RNA promoter. These replicons can be packaged into viral particles by using defective helper RNAs that contain the structural protein genes (P. Bredenbeek, I. Frolov, C. M. Rice, and S. Schlesinger, J. Virol. 67:6439-6446, 1993). Insertion of the 132-mer into the subgenomic RNA significantly increased the packaging of this RNA into viral particles. We have used this assay and defective helpers that contain the structural protein genes of Ross River virus (RRV) to investigate the location of the encapsidation signal in the RRV genome. Our results show that there are several fragments that could act as packaging signals. They are all located in a different region of the genome than the signal for the Sindbis virus genome. For RRV, the strongest packaging signal lies between nt 2761 and 3062 in the nsP2 gene. This is the same region that was proposed to contain the packaging signal for Semliki Forest virus genomic RNA. PMID:8985344

  1. CH Packaging Program Guidance

    SciTech Connect

    Washington TRU Solutions LLC

    2005-02-28

    The purpose of this document is to provide the technical requirements for preparation for use, operation, inspection, and maintenance of a Transuranic Package Transporter Model II (TRUPACT-II), a HalfPACT shipping package, and directly related components. This document complies with the minimum requirements as specified in the TRUPACT-II Safety Analysis Report for Packaging (SARP), HalfPACT SARP, and U.S. Nuclear Regulatory Commission (NRC) Certificates of Compliance (C of C) 9218 and 9279, respectively. In the event of a conflict between this document and the SARP or C of C, the C of C shall govern. The C of Cs state: "each package must be prepared for shipment and operated in accordance with the procedures described in Chapter 7.0, Operating Procedures, of the application." They further state: "each package must be tested and maintained in accordance with the procedures described in Chapter 8.0, Acceptance Tests and Maintenance Program of the Application." Chapter 9.0 of the SARP charges the Waste Isolation Pilot Plant (WIPP) management and operating (M&O) contractor with assuring packaging is used in accordance with the requirements of the C of C. Because the packaging is NRC-approved, users need to be familiar with Title 10 Code of Federal Regulations (CFR) §71.8. Any time a user suspects or has indications that the conditions of approval in the C of C were not met, the Carlsbad Field Office (CBFO) shall be notified immediately. The CBFO will evaluate the issue and notify the NRC if required.

  2. Food Packaging Materials

    NASA Technical Reports Server (NTRS)

    1978-01-01

    The photos show a few of the food products packaged in Alure, a metallized plastic material developed and manufactured by St. Regis Paper Company's Flexible Packaging Division, Dallas, Texas. The material incorporates a metallized film originally developed for space applications. Among the suppliers of the film to St. Regis is King-Seeley Thermos Company, Winchester, Massachusetts. Initially used by NASA as a signal-bouncing reflective coating for the Echo 1 communications satellite, the film was developed by a company later absorbed by King-Seeley. The metallized film was also used as insulating material for components of a number of other spacecraft. St. Regis developed Alure to meet a multiple packaging material need: good eye appeal, product protection for long periods and the ability to be used successfully on a wide variety of food packaging equipment. When the cost of aluminum foil skyrocketed, packagers sought substitute metallized materials but experiments with a number of them uncovered problems; some were too expensive, some did not adequately protect the product, some were difficult for the machinery to handle. Alure offers a solution. St. Regis created Alure by sandwiching the metallized film between layers of plastics. The resulting laminated metallized material has the superior eye appeal of foil but is less expensive and more easily machined. Alure effectively blocks out light, moisture and oxygen and therefore gives the packaged food long shelf life. A major packaging firm conducted its own tests of the material and confirmed the advantages of machinability and shelf life, adding that it runs faster on machines than materials used in the past and it decreases product waste; the net effect is increased productivity.

  3. Food packages for Space Shuttle

    NASA Technical Reports Server (NTRS)

    Fohey, M. F.; Sauer, R. L.; Westover, J. B.; Rockafeller, E. F.

    1978-01-01

    The paper reviews food packaging techniques used in space flight missions and describes the system developed for the Space Shuttle. Attention is directed to bite-size food cubes used in Gemini, Gemini rehydratable food packages, Apollo spoon-bowl rehydratable packages, thermostabilized flex pouch for Apollo, tear-top commercial food cans used in Skylab, polyethylene beverage containers, Skylab rehydratable food package, Space Shuttle food package configuration, duck-bill septum rehydration device, and a drinking/dispensing nozzle for Space Shuttle liquids. Constraints and testing of packaging are considered, a comparison of food package materials is presented, and typical Shuttle foods and beverages are listed.

  4. Detecting small holes in packages

    DOEpatents

    Kronberg, James W.; Cadieux, James R.

    1996-01-01

    A package containing a tracer gas, and a method for determining the presence of a hole in the package by sensing the presence of the gas outside the package. The preferred tracer gas, especially for food packaging, is sulfur hexafluoride. A quantity of the gas is added to the package and the package is closed. The concentration of the gas in the atmosphere outside the package is measured and compared to a predetermined value of the concentration of the gas in the absence of the package. A measured concentration greater than the predetermined value indicates the presence of a hole in the package. Measuring may be done in a chamber having a lower pressure than that in the package.

  5. Detecting small holes in packages

    DOEpatents

    Kronberg, J.W.; Cadieux, J.R.

    1996-03-19

    A package containing a tracer gas, and a method for determining the presence of a hole in the package by sensing the presence of the gas outside the package are disclosed. The preferred tracer gas, especially for food packaging, is sulfur hexafluoride. A quantity of the gas is added to the package and the package is closed. The concentration of the gas in the atmosphere outside the package is measured and compared to a predetermined value of the concentration of the gas in the absence of the package. A measured concentration greater than the predetermined value indicates the presence of a hole in the package. Measuring may be done in a chamber having a lower pressure than that in the package. 3 figs.

  6. Modular thermoelectric cell is easily packaged in various arrays

    NASA Technical Reports Server (NTRS)

    Epstein, J.

    1965-01-01

    Modular thermoelectric cells are easily packaged in various arrays to form power supplies and have desirable voltage and current output characteristics. The cells employ two pairs of thermoelectric elements, each pair being connected in parallel between two sets of aluminum plates. They can be used as solar energy conversion devices.

  7. Compact integrated piezoelectric vibration control package

    NASA Astrophysics Data System (ADS)

    Spangler, Ronald L., Jr.; Russo, Farla M.; Palombo, Daniel A.

    1997-06-01

    Using recent advances in small, surface-mount electronics, coupled with proprietary packaging techniques, ACX has developed the SmartPack™. The design and realization of this self-contained, active piezoelectric control device are described in this paper. The SmartPack uses a local control architecture, consisting of two parallel, analog, positive position feedback (PPF) filters, along with nearly collocated piezo strain sensors and actuators, to control multiple structural vibration modes. A key issue is the management of waste heat from the power electronics required to drive the piezo actuators. This issue is addressed through thermal/electrical modeling of the packaged amplifier. The effectiveness of the device is demonstrated through multi-mode active damping on a 24-inch square plate.
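
    For reference, a positive position feedback filter of the kind mentioned above is conventionally a second-order element of the form G(s) = g·wf²/(s² + 2·zf·wf·s + wf²) driven by the measured position (strain) signal. The sketch below evaluates such a filter's frequency response; the gain, damping, and filter frequency are made-up illustrative values, not the SmartPack's parameters.

        # Frequency response of a standard second-order PPF filter; all parameter
        # values are illustrative placeholders, not SmartPack design values.
        import numpy as np

        def ppf_response(freq_hz, g=0.5, zeta_f=0.3, f_filter_hz=120.0):
            """|G(j*w)| for G(s) = g*wf^2 / (s^2 + 2*zeta_f*wf*s + wf^2)."""
            wf = 2.0 * np.pi * f_filter_hz
            s = 1j * 2.0 * np.pi * np.asarray(freq_hz)
            return np.abs(g * wf**2 / (s**2 + 2.0 * zeta_f * wf * s + wf**2))

        if __name__ == "__main__":
            freqs = np.linspace(1.0, 500.0, 6)
            for f, mag in zip(freqs, ppf_response(freqs)):
                print(f"{f:7.1f} Hz : |G| = {mag:6.3f}")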

  8. CH Packaging Program Guidance

    SciTech Connect

    None, None

    2006-04-25

    The purpose of this document is to provide the technical requirements for preparation for use, operation, inspection, and maintenance of a Transuranic Package Transporter Model II (TRUPACT-II), a HalfPACT shipping package, and directly related components. This document complies with the minimum requirements as specified in the TRUPACT-II Safety Analysis Report for Packaging (SARP), HalfPACT SARP, and U.S. Nuclear Regulatory Commission (NRC) Certificates of Compliance (C of C) 9218 and 9279, respectively. In the event of a conflict between this document and the SARP or C of C, the C of C shall govern. The C of Cs state: "each package must be prepared for shipment and operated in accordance with the procedures described in Chapter 7.0, Operating Procedures, of the application." They further state: "each package must be tested and maintained in accordance with the procedures described in Chapter 8.0, Acceptance Tests and Maintenance Program of the Application." Chapter 9.0 of the SARP charges the U.S. Department of Energy (DOE) or the Waste Isolation Pilot Plant (WIPP) management and operating (M&O) contractor with assuring packaging is used in accordance with the requirements of the C of C. Because the packaging is NRC-approved, users need to be familiar with Title 10 Code of Federal Regulations (CFR) §71.8. Any time a user suspects or has indications that the conditions of approval in the C of C were not met, the Carlsbad Field Office (CBFO) shall be notified immediately. The CBFO will evaluate the issue and notify the NRC if required. In accordance with 10 CFR Part 71, certificate holders, packaging users, and contractors or subcontractors who use, design, fabricate, test, maintain, or modify the packaging shall post copies of (1) 10 CFR Part 21 regulations, (2) Section 206 of the Energy Reorganization Act of 1974, and (3) NRC Form 3, Notice to Employees. These documents must be posted in a conspicuous location where the activities subject to these regulations are

  9. CH Packaging Program Guidance

    SciTech Connect

    None, None

    2007-12-13

    The purpose of this document is to provide the technical requirements for preparation for use, operation, inspection, and maintenance of a Transuranic Package Transporter Model II (TRUPACT-II), a HalfPACT shipping package, and directly related components. This document complies with the minimum requirements as specified in the TRUPACT-II Safety Analysis Report for Packaging (SARP), HalfPACT SARP, and U.S. Nuclear Regulatory Commission (NRC) Certificates of Compliance (C of C) 9218 and 9279, respectively. In the event of a conflict between this document and the SARP or C of C, the C of C shall govern. The C of Cs state: "each package must be prepared for shipment and operated in accordance with the procedures described in Chapter 7.0, Operating Procedures, of the application." They further state: "each package must be tested and maintained in accordance with the procedures described in Chapter 8.0, Acceptance Tests and Maintenance Program of the Application." Chapter 9.0 of the SARP charges the U.S. Department of Energy (DOE) or the Waste Isolation Pilot Plant (WIPP) management and operating (M&O) contractor with assuring packaging is used in accordance with the requirements of the C of C. Because the packaging is NRC-approved, users need to be familiar with Title 10 Code of Federal Regulations (CFR) §71.8. Any time a user suspects or has indications that the conditions of approval in the C of C were not met, the Carlsbad Field Office (CBFO) shall be notified immediately. The CBFO will evaluate the issue and notify the NRC if required. In accordance with 10 CFR Part 71, certificate holders, packaging users, and contractors or subcontractors who use, design, fabricate, test, maintain, or modify the packaging shall post copies of (1) 10 CFR Part 21 regulations, (2) Section 206 of the Energy Reorganization Act of 1974, and (3) NRC Form 3, Notice to Employees. These documents must be posted in a conspicuous location where the activities subject to these regulations are

  10. TSF Interface Package

    Energy Science and Technology Software Center (ESTSC)

    2004-03-01

    A collection of packages of classes for interfacing to sparse and dense matrices, vectors and graphs, and to linear operators. TSF (via TSFCore, TSFCoreUtils and TSFExtended) provides the application programmer interface to any number of solvers, linear algebra libraries and preconditioner packages, providing also a sophisticated technique for combining multiple packages to solve a single problem. TSF provides a collection of abstract base classes that define the interfaces to abstract vector, matrix and linear operator objects. By using abstract interfaces, users of TSF are not limiting themselves to any one concrete library and can in fact easily combine multiple libraries to solve a single problem.
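
    An illustrative Python analogue of the abstract-interface idea described above (TSF itself is C++, and the class and function names here are invented for the sketch, not the TSF API): client code is written against an abstract linear-operator interface, so any concrete library can be substituted behind it.

        # Illustrative analogue of abstract operator interfaces; names are invented.
        from abc import ABC, abstractmethod
        import numpy as np

        class LinearOperator(ABC):
            @abstractmethod
            def apply(self, x: np.ndarray) -> np.ndarray:
                """Return A @ x without exposing how A is stored."""

        class DenseOperator(LinearOperator):
            def __init__(self, matrix):
                self.matrix = np.asarray(matrix)
            def apply(self, x):
                return self.matrix @ x

        class ScaledIdentity(LinearOperator):
            def __init__(self, alpha):
                self.alpha = alpha
            def apply(self, x):
                return self.alpha * x

        def power_iteration(op: LinearOperator, n, iters=50):
            """Client code depends only on the abstract interface, not the backend."""
            v = np.ones(n)
            for _ in range(iters):
                w = op.apply(v)
                v = w / np.linalg.norm(w)
            return v @ op.apply(v)

        if __name__ == "__main__":
            print(power_iteration(DenseOperator([[2.0, 1.0], [1.0, 3.0]]), 2))
            print(power_iteration(ScaledIdentity(4.0), 3))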

  11. Parallel digital forensics infrastructure.

    SciTech Connect

    Liebrock, Lorie M.; Duggan, David Patrick

    2009-10-01

    This report documents the architecture and implementation of a Parallel Digital Forensics infrastructure. This infrastructure is necessary for supporting the design, implementation, and testing of new classes of parallel digital forensics tools. Digital Forensics has become extremely difficult with data sets of one terabyte and larger. The only way to overcome the processing time of these large sets is to identify and develop new parallel algorithms for performing the analysis. To support algorithm research, a flexible base infrastructure is required. A candidate architecture for this base infrastructure was designed, instantiated, and tested by this project, in collaboration with New Mexico Tech. Previous infrastructures were not designed and built specifically for the development and testing of parallel algorithms. With the size of forensics data sets only expected to increase significantly, this type of infrastructure support is necessary for continued research in parallel digital forensics. The architecture and implementation of this parallel digital forensics (PDF) infrastructure are documented in the report.

  12. System packager strategies

    SciTech Connect

    Hennagir, T.

    1995-03-01

    Advances in combined equipment technologies, the ability to supply fuel flexibility and new financial support structures are helping power systems packagers meet a diverse series of client and project needs. Systems packagers continue to capture orders for various size power plants around the globe. A competitive buyer's market remains the order of the day. In cogeneration markets, clients continue to search for efficiency rather than specific output for inside-the-fence projects. Letter-perfect service remains a requisite as successful suppliers strive to meet customers' ever-changing needs for thermal and power applications.

  13. SPHINX experimenters information package

    SciTech Connect

    Zarick, T.A.

    1996-08-01

    This information package was prepared for both new and experienced users of the SPHINX (Short Pulse High Intensity Nanosecond X-radiator) flash X-Ray facility. It was compiled to help facilitate experiment design and preparation for both the experimenter(s) and the SPHINX operational staff. The major areas covered include: Recording Systems Capabilities, Recording System Cable Plant, Physical Dimensions of SPHINX and the SPHINX Test Cell, SPHINX Operating Parameters and Modes, Dose Rate Map, Experiment Safety Approval Form, and a Feedback Questionnaire. This package will be updated as the SPHINX facilities and capabilities are enhanced.

  14. PCLIPS: Parallel CLIPS

    NASA Technical Reports Server (NTRS)

    Hall, Lawrence O.; Bennett, Bonnie H.; Tello, Ivan

    1994-01-01

    A parallel version of CLIPS 5.1 has been developed to run on Intel Hypercubes. The user interface is the same as that for CLIPS with some added commands to allow for parallel calls. A complete version of CLIPS runs on each node of the hypercube. The system has been instrumented to display the time spent in the match, recognize, and act cycles on each node. Only rule-level parallelism is supported. Parallel commands enable the assertion and retraction of facts to/from remote nodes' working memory. Parallel CLIPS was used to implement a knowledge-based command, control, communications, and intelligence (C³I) system to demonstrate the fusion of high-level, disparate sources. We discuss the nature of the information fusion problem, our approach, and implementation. Parallel CLIPS has also been used to run several benchmark parallel knowledge bases such as one to set up a cafeteria. Results from running Parallel CLIPS with parallel knowledge base partitions indicate that significant speed increases, including superlinear speedups in some cases, are possible.

  15. Parallel MR Imaging

    PubMed Central

    Deshmane, Anagha; Gulani, Vikas; Griswold, Mark A.; Seiberlich, Nicole

    2015-01-01

    Parallel imaging is a robust method for accelerating the acquisition of magnetic resonance imaging (MRI) data, and has made possible many new applications of MR imaging. Parallel imaging works by acquiring a reduced amount of k-space data with an array of receiver coils. These undersampled data can be acquired more quickly, but the undersampling leads to aliased images. One of several parallel imaging algorithms can then be used to reconstruct artifact-free images from either the aliased images (SENSE-type reconstruction) or from the under-sampled data (GRAPPA-type reconstruction). The advantages of parallel imaging in a clinical setting include faster image acquisition, which can be used, for instance, to shorten breath-hold times resulting in fewer motion-corrupted examinations. In this article the basic concepts behind parallel imaging are introduced. The relationship between undersampling and aliasing is discussed and two commonly used parallel imaging methods, SENSE and GRAPPA, are explained in detail. Examples of artifacts arising from parallel imaging are shown and ways to detect and mitigate these artifacts are described. Finally, several current applications of parallel imaging are presented and recent advancements and promising research in parallel imaging are briefly reviewed. PMID:22696125
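
    As a concrete illustration of the SENSE-type reconstruction described above: with acceleration factor R, each pixel of an aliased coil image is a superposition of R true-image pixels weighted by the coil sensitivities, so the unaliased values are recovered by a small per-pixel least-squares solve. The sketch below is a didactic 1-D, real-valued toy with synthetic sensitivities and R = 2, not a clinical reconstruction.

        # Didactic SENSE unfolding for a 1-D "image" with acceleration R = 2:
        # each aliased sample mixes R true pixels through the coil sensitivities.
        import numpy as np

        def sense_unfold(aliased, sens, R=2):
            """aliased: (ncoils, N//R) aliased coil data; sens: (ncoils, N) sensitivities."""
            ncoils, n_alias = aliased.shape
            N = sens.shape[1]
            assert N == R * n_alias
            image = np.zeros(N)
            for y in range(n_alias):
                cols = [y + k * n_alias for k in range(R)]   # pixels folded onto y
                S = sens[:, cols]                            # (ncoils, R) system matrix
                sol, *_ = np.linalg.lstsq(S, aliased[:, y], rcond=None)
                image[cols] = sol
            return image

        if __name__ == "__main__":
            N, ncoils = 64, 4
            x = np.linspace(0, 1, N)
            true_img = np.exp(-((x - 0.5) / 0.15) ** 2)                 # synthetic object
            sens = np.array([np.exp(-((x - c) / 0.6) ** 2)
                             for c in (0.1, 0.4, 0.6, 0.9)])            # synthetic coils
            coil_imgs = sens * true_img                                 # fully sampled coil images
            aliased = coil_imgs[:, : N // 2] + coil_imgs[:, N // 2 :]   # R = 2 folding
            recon = sense_unfold(aliased, sens, R=2)
            print("max reconstruction error:", np.max(np.abs(recon - true_img)))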

  16. AN ADA NAMELIST PACKAGE

    NASA Technical Reports Server (NTRS)

    Klumpp, A. R.

    1994-01-01

    The Ada Namelist Package, developed for the Ada programming language, enables a calling program to read and write FORTRAN-style namelist files. A namelist file consists of any number of assignment statements in any order. Features of the Ada Namelist Package are: the handling of any combination of user-defined types; the ability to read vectors, matrices, and slices of vectors and matrices; the handling of mismatches between variables in the namelist file and those in the programmed list of namelist variables; and the ability to avoid searching the entire input file for each variable. The principal user benefits of this software are the following: the ability to write namelist-readable files, the ability to detect most file errors in the initialization phase, a package organization that reduces the number of instantiated units to a few packages rather than to many subprograms, a reduced number of restrictions, and an increased execution speed. The Ada Namelist reads data from an input file into variables declared within a user program. It then writes data from the user program to an output file, printer, or display. The input file contains a sequence of assignment statements in arbitrary order. The output is in namelist-readable form. There is a one-to-one correspondence between namelist I/O statements executed in the user program and variables read or written. Nevertheless, in the input file, mismatches are allowed between assignment statements in the file and the namelist read procedure statements in the user program. The Ada Namelist Package itself is non-generic. However, it has a group of nested generic packages following the nongeneric opening portion. The opening portion declares a variety of user-accessible constants, variables and subprograms. The subprograms are procedures for initializing namelists for reading, reading and writing strings. The subprograms are also functions for analyzing the content of the current dataset and diagnosing errors. Two nested
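
    The Ada code itself is not reproduced here; the sketch below (a simplified, assumed format, not a port of the package) only illustrates the kind of FORTRAN-style namelist content described above being read into program variables, with assignment statements accepted in any order.

        # Toy reader for FORTRAN-style namelist assignments (a simplified illustration
        # of the file format the Ada Namelist Package handles, not a port of it).
        import re

        def parse_namelist(text):
            """Parse 'NAME = value[, value...]' lines, in any order, into a dict."""
            values = {}
            for line in text.splitlines():
                line = line.split("!")[0].strip()        # drop comments and blanks
                if not line or line.startswith(("&", "/")):
                    continue                             # skip namelist group markers
                name, _, rhs = line.partition("=")
                items = [v.strip() for v in rhs.split(",") if v.strip()]
                parsed = []
                for v in items:
                    if re.fullmatch(r"[-+]?\d+", v):
                        parsed.append(int(v))
                    else:
                        try:
                            parsed.append(float(v))
                        except ValueError:
                            parsed.append(v.strip("'\""))
                values[name.strip().upper()] = parsed[0] if len(parsed) == 1 else parsed
            return values

        if __name__ == "__main__":
            sample = """&INPUT
            DT    = 0.25
            STEPS = 100
            TITLE = 'demo case'
            XGRID = 0.0, 0.5, 1.0
            /"""
            print(parse_namelist(sample))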

  17. CALTRANS: A parallel, deterministic, 3D neutronics code

    SciTech Connect

    Carson, L.; Ferguson, J.; Rogers, J.

    1994-04-01

    Our efforts to parallelize the deterministic solution of the neutron transport equation have culminated in a new neutronics code CALTRANS, which has full 3D capability. In this article, we describe the layout and algorithms of CALTRANS and present performance measurements of the code on a variety of platforms. Explicit implementations of the parallel algorithms of CALTRANS using both the function calls of the Parallel Virtual Machine software package (PVM 3.2) and the Meiko CS-2 tagged message passing library (based on the Intel NX/2 interface) are provided in appendices.

  18. Eclipse Parallel Tools Platform

    Energy Science and Technology Software Center (ESTSC)

    2005-02-18

    Designing and developing parallel programs is an inherently complex task. Developers must choose from the many parallel architectures and programming paradigms that are available, and face a plethora of tools that are required to execute, debug, and analyze parallel programs in these environments. Few, if any, of these tools provide any degree of integration, or indeed any commonality in their user interfaces at all. This further complicates the parallel developer's task, hampering software engineering practices, and ultimately reducing productivity. One consequence of this complexity is that best practice in parallel application development has not advanced to the same degree as more traditional programming methodologies. The result is that there is currently no open-source, industry-strength platform that provides a highly integrated environment specifically designed for parallel application development. Eclipse is a universal tool-hosting platform that is designed to provide a robust, full-featured, commercial-quality, industry platform for the development of highly integrated tools. It provides a wide range of core services for tool integration that allow tool producers to concentrate on their tool technology rather than on platform specific issues. The Eclipse Integrated Development Environment is an open-source project that is supported by over 70 organizations, including IBM, Intel and HP. The Eclipse Parallel Tools Platform (PTP) plug-in extends the Eclipse framework by providing support for a rich set of parallel programming languages and paradigms, and a core infrastructure for the integration of a wide variety of parallel tools. The first version of the PTP is a prototype that only provides minimal functionality for parallel tool integration, support for a small number of parallel architectures

  19. PARAMESH V4.1: Parallel Adaptive Mesh Refinement

    NASA Astrophysics Data System (ADS)

    MacNeice, Peter; Olson, Kevin M.; Mobarry, Clark; de Fainchtein, Rosalinda; Packer, Charles

    2011-06-01

    PARAMESH is a package of Fortran 90 subroutines designed to provide an application developer with an easy route to extend an existing serial code which uses a logically cartesian structured mesh into a parallel code with adaptive mesh refinement (AMR). Alternatively, in its simplest use, and with minimal effort, it can operate as a domain decomposition tool for users who want to parallelize their serial codes, but who do not wish to use adaptivity. The package builds a hierarchy of sub-grids to cover the computational domain, with spatial resolution varying to satisfy the demands of the application. These sub-grid blocks form the nodes of a tree data-structure (quad-tree in 2D or oct-tree in 3D). Each grid block has a logically cartesian mesh. The package supports 1, 2 and 3D models. PARAMESH is released under the NASA-wide Open-Source software license.
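
    A minimal structural sketch of the block-tree idea described above: logically cartesian blocks cover the domain and form a quad-tree (in 2-D), with a block split into four children wherever a user-supplied refinement test demands more resolution. This illustrates the data structure only; it is not PARAMESH's Fortran 90 interface, and the names are invented.

        # Minimal 2-D quad-tree of grid blocks, illustrating the block/tree structure
        # PARAMESH builds (a structural sketch, not the PARAMESH Fortran 90 API).
        from dataclasses import dataclass, field

        @dataclass
        class Block:
            x0: float
            y0: float
            size: float
            level: int
            children: list = field(default_factory=list)

        def refine(block, needs_refinement, max_level=4):
            """Split a block into 4 children wherever the refinement test asks for it."""
            if block.level >= max_level or not needs_refinement(block):
                return
            half = block.size / 2.0
            for dx in (0.0, half):
                for dy in (0.0, half):
                    child = Block(block.x0 + dx, block.y0 + dy, half, block.level + 1)
                    block.children.append(child)
                    refine(child, needs_refinement, max_level)

        def leaves(block):
            if not block.children:
                yield block
            else:
                for c in block.children:
                    yield from leaves(c)

        if __name__ == "__main__":
            # Refine wherever a block touches a "feature" near the point (0.3, 0.7).
            def near_feature(b):
                return (b.x0 <= 0.3 <= b.x0 + b.size) and (b.y0 <= 0.7 <= b.y0 + b.size)
            root = Block(0.0, 0.0, 1.0, 0)
            refine(root, near_feature)
            print("leaf blocks  :", sum(1 for _ in leaves(root)))
            print("finest level :", max(b.level for b in leaves(root)))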

  20. Package for fragile objects

    DOEpatents

    Burgeson, Duane A.

    1977-01-01

    A package for fragile objects, such as micron-size radioactive fusion pellets shipped in mounted or unmounted condition, comprising a frangible inner container supported in a second inner container, which in turn is supported in a final outer container; the second inner container has recesses for supporting alternate-design inner containers.

  1. Metric Education Evaluation Package.

    ERIC Educational Resources Information Center

    Kansky, Bob; And Others

    This document was developed out of a need for a complete, carefully designed set of evaluation instruments and procedures that might be applied in metric inservice programs across the nation. Components of this package were prepared in such a way as to permit local adaptation to the evaluation of a broad spectrum of metric education activities.…

  2. Project Information Packages Kit.

    ERIC Educational Resources Information Center

    RMC Research Corp., Mountain View, CA.

    Presented are an overview booklet, a project selection guide, and six Project Information Packages (PIPs) for six exemplary projects serving underachieving students in grades k through 9. The overview booklet outlines the PIP projects and includes a chart of major project features. A project selection guide reviews the PIP history, PIP contents,…

  3. Printer Graphics Package

    NASA Technical Reports Server (NTRS)

    Blanchard, D. C.

    1986-01-01

    Printer Graphics Package (PGP) is tool for making two-dimensional symbolic plots on line printer. PGP created to support development of Heads-Up Display (HUD) simulation. Standard symbols defined with HUD in mind. Available symbols include circle, triangle, quadrangle, window, line, numbers, and text. Additional symbols easily added or built up from available symbols.

  4. Electro-Microfluidic Packaging

    NASA Astrophysics Data System (ADS)

    Benavides, G. L.; Galambos, P. C.

    2002-06-01

    There are many examples of electro-microfluidic products that require cost effective packaging solutions. Industry has responded to a demand for products such as drop ejectors, chemical sensors, and biological sensors. Drop ejectors have consumer applications such as ink jet printing and scientific applications such as patterning self-assembled monolayers or ejecting picoliters of expensive analytes/reagents for chemical analysis. Drop ejectors can be used to perform chemical analysis, combinatorial chemistry, drug manufacture, drug discovery, drug delivery, and DNA sequencing. Chemical and biological micro-sensors can sniff the ambient environment for traces of dangerous materials such as explosives, toxins, or pathogens. Other biological sensors can be used to improve world health by providing timely diagnostics and applying corrective measures to the human body. Electro-microfluidic packaging can easily represent over fifty percent of the product cost and, as with Integrated Circuits (IC), the industry should evolve to standard packaging solutions. Standard packaging schemes will minimize cost and bring products to market sooner.

  5. Radioactive waste disposal package

    DOEpatents

    Lampe, Robert F.

    1986-11-04

    A radioactive waste disposal package comprising a canister for containing vitrified radioactive waste material and a sealed outer shell encapsulating the canister. A solid block of filler material is supported in said shell and convertible into a liquid state for flow into the space between the canister and outer shell and subsequently hardened to form a solid, impervious layer occupying such space.

  6. Radioactive waste disposal package

    DOEpatents

    Lampe, Robert F.

    1986-01-01

    A radioactive waste disposal package comprising a canister for containing vitrified radioactive waste material and a sealed outer shell encapsulating the canister. A solid block of filler material is supported in said shell and convertible into a liquid state for flow into the space between the canister and outer shell and subsequently hardened to form a solid, impervious layer occupying such space.

  7. CH Packaging Maintenance Manual

    SciTech Connect

    Washington TRU Solutions

    2002-01-02

    This procedure provides instructions for performing inner containment vessel (ICV) and outer containment vessel (OCV) maintenance and periodic leakage rate testing on the following packaging seals and corresponding seal surfaces using a nondestructive helium (He) leak test. In addition, this procedure provides instructions for performing ICV and OCV structural pressure tests.

  8. Automatic Differentiation Package

    Energy Science and Technology Software Center (ESTSC)

    2007-03-01

    Sacado is an automatic differentiation package for C++ codes using operator overloading and C++ templating. Sacado provides forward, reverse, and Taylor polynomial automatic differentiation classes and utilities for incorporating these classes into C++ codes. Users can compute derivatives of computations arising in engineering and scientific applications, including nonlinear equation solving, time integration, sensitivity analysis, stability analysis, optimization and uncertainty quantification.
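
    Sacado's forward mode rests on overloading arithmetic for a value-plus-derivative type; the sketch below mimics that idea with a tiny dual-number class in Python (not the Sacado C++ templates) to differentiate a scalar function.

        # Tiny forward-mode AD via operator overloading on dual numbers, mimicking the
        # idea behind Sacado's forward type (this is not the Sacado C++ API).
        import math

        class Dual:
            def __init__(self, value, deriv=0.0):
                self.value, self.deriv = value, deriv
            def __add__(self, other):
                other = other if isinstance(other, Dual) else Dual(other)
                return Dual(self.value + other.value, self.deriv + other.deriv)
            __radd__ = __add__
            def __sub__(self, other):
                other = other if isinstance(other, Dual) else Dual(other)
                return Dual(self.value - other.value, self.deriv - other.deriv)
            def __mul__(self, other):
                other = other if isinstance(other, Dual) else Dual(other)
                return Dual(self.value * other.value,
                            self.deriv * other.value + self.value * other.deriv)
            __rmul__ = __mul__

        def sin(x):
            # chain rule: d/dt sin(x(t)) = cos(x) * x'
            return Dual(math.sin(x.value), math.cos(x.value) * x.deriv)

        def f(x):
            return x * x * sin(x) - 3.0 * x      # f(x) = x^2 sin(x) - 3x

        if __name__ == "__main__":
            x = Dual(1.2, 1.0)                   # seed derivative dx/dx = 1
            y = f(x)
            print("f(1.2)  =", y.value)
            print("f'(1.2) =", y.deriv, "(expected:",
                  2 * 1.2 * math.sin(1.2) + 1.2**2 * math.cos(1.2) - 3.0, ")")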

  9. Waste disposal package

    DOEpatents

    Smith, M.J.

    1985-06-19

    This is a claim for a waste disposal package including an inner or primary canister for containing hazardous and/or radioactive wastes. The primary canister is encapsulated by an outer or secondary barrier formed of a porous ceramic material to control ingress of water to the canister and the release rate of wastes upon breach of the canister. 4 figs.

  10. High Efficiency Integrated Package

    SciTech Connect

    Ibbetson, James

    2013-09-15

    Solid-state lighting based on LEDs has emerged as a superior alternative to inefficient conventional lighting, particularly incandescent. LED lighting can lead to 80 percent energy savings; can last 50,000 hours – 2-50 times longer than most bulbs; and contains no toxic lead or mercury. However, to enable mass adoption, particularly at the consumer level, the cost of LED luminaires must be reduced by an order of magnitude while achieving superior efficiency, light quality and lifetime. To become viable, energy-efficient replacement solutions must deliver system efficacies of ≥ 100 lumens per watt (LPW) with excellent color rendering (CRI > 85) at a cost that enables payback cycles of two years or less for commercial applications. This development will enable significant site energy savings as it targets commercial and retail lighting applications that are most sensitive to the lifetime operating costs with their extended operating hours per day. If costs are reduced substantially, dramatic energy savings can be realized by replacing incandescent lighting in the residential market as well. In light of these challenges, Cree proposed to develop a multi-chip integrated LED package with an output of > 1000 lumens of warm white light operating at an efficacy of at least 128 LPW with a CRI > 85. This product will serve as the light engine for replacement lamps and luminaires. At the end of the proposed program, this integrated package was to be used in a proof-of-concept lamp prototype to demonstrate the component’s viability in a common form factor. During this project Cree SBTC developed an efficient, compact warm-white LED package with an integrated remote color down-converter. Via a combination of intensive optical, electrical, and thermal optimization, a package design was obtained that met nearly all project goals. This package emitted 1295 lm under instant-on, room-temperature testing conditions, with an efficacy of 128.4 lm/W at a color temperature of ~2873

  11. Parallel scheduling algorithms

    SciTech Connect

    Dekel, E.; Sahni, S.

    1983-01-01

    Parallel algorithms are given for scheduling problems such as scheduling to minimize the number of tardy jobs, job sequencing with deadlines, scheduling to minimize earliness and tardiness penalties, channel assignment, and minimizing the mean finish time. The shared memory model of parallel computers is used to obtain fast algorithms. 26 references.

  12. Massively parallel mathematical sieves

    SciTech Connect

    Montry, G.R.

    1989-01-01

    The Sieve of Eratosthenes is a well-known algorithm for finding all prime numbers in a given subset of integers. A parallel version of the Sieve is described that produces computational speedups over 800 on a hypercube with 1,024 processing elements for problems of fixed size. Computational speedups as high as 980 are achieved when the problem size per processor is fixed. The method of parallelization generalizes to other sieves and will be efficient on any ensemble architecture. We investigate two highly parallel sieves using scattered decomposition and compare their performance on a hypercube multiprocessor. A comparison of different parallelization techniques for the sieve illustrates the trade-offs necessary in the design and implementation of massively parallel algorithms for large ensemble computers.
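
    A small sketch of the decomposition being measured above, under the assumption that a Python process pool stands in for the hypercube processing elements: the base primes up to sqrt(N) are found serially, and the remaining range is split into segments that are sieved independently.

        # Segmented Sieve of Eratosthenes with the segments sieved in parallel,
        # illustrating the decomposition (process pool here, not hypercube nodes).
        import math
        from concurrent.futures import ProcessPoolExecutor

        def base_primes(limit):
            flags = bytearray([1]) * (limit + 1)
            flags[:2] = b"\x00\x00"
            for p in range(2, int(math.isqrt(limit)) + 1):
                if flags[p]:
                    flags[p * p :: p] = bytearray(len(flags[p * p :: p]))
            return [i for i, f in enumerate(flags) if f]

        def sieve_segment(args):
            lo, hi, primes = args                       # count primes in [lo, hi)
            flags = bytearray([1]) * (hi - lo)
            for p in primes:
                start = max(p * p, ((lo + p - 1) // p) * p)
                flags[start - lo :: p] = bytearray(len(range(start, hi, p)))
            return sum(flags)

        def count_primes(n, n_workers=4):
            root = int(math.isqrt(n))
            primes = base_primes(root)
            bounds = [root + 1 + k * (n - root) // n_workers for k in range(n_workers + 1)]
            tasks = [(bounds[k], bounds[k + 1], primes) for k in range(n_workers)]
            with ProcessPoolExecutor(max_workers=n_workers) as pool:
                return len(primes) + sum(pool.map(sieve_segment, tasks))

        if __name__ == "__main__":
            print(count_primes(1_000_000))              # expect 78498 primes below 10^6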

  13. Parallel computing works

    SciTech Connect

    Not Available

    1991-10-23

    An account of the Caltech Concurrent Computation Program (C³P), a five-year project that focused on answering the question: ''Can parallel computers be used to do large-scale scientific computations?'' As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C³P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C³P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

  14. Packaging design criteria for the Hanford Ecorok Packaging

    SciTech Connect

    Mercado, M.S.

    1996-01-19

    The Hanford Ecorok Packaging (HEP) will be used to ship contaminated water purification filters from K Basins to the Central Waste Complex. This packaging design criteria documents the design of the HEP, its intended use, and the transportation safety criteria it is required to meet. This information will serve as a basis for the safety analysis report for packaging.

  15. Transparent runtime parallelization of the R scripting language

    SciTech Connect

    Yoginath, Srikanth B

    2011-01-01

    Scripting languages such as R and Matlab are widely used in scientific data processing. As the data volume and the complexity of analysis tasks both grow, sequential data processing using these tools often becomes the bottleneck in scientific workflows. We describe pR, a runtime framework for automatic and transparent parallelization of the popular R language used in statistical computing. Recognizing the interpreted nature of scripting languages and the usage patterns of data analysis codes, we propose several novel techniques: (1) applying parallelizing compiler technology to runtime, whole-program dependence analysis of scripting languages, (2) incremental code analysis assisted with evaluation results, and (3) runtime parallelization of file accesses. Our framework does not require any modification to either the source code or the underlying R implementation. Experimental results demonstrate that pR can exploit both task and data parallelism transparently and overall has better performance as well as scalability compared to an existing parallel R package that requires code modification.

  16. Parallel Analog-to-Digital Image Processor

    NASA Technical Reports Server (NTRS)

    Lokerson, D. C.

    1987-01-01

    Proposed integrated-circuit network of many identical units converts analog outputs of imaging arrays of x-ray or infrared detectors to digital outputs. Converter located near imaging detectors, within cryogenic detector package. Because converter output digital, lends itself well to multiplexing and to postprocessing for correction of gain and offset errors peculiar to each picture element and its sampling and conversion circuits. Analog-to-digital image processor is massively parallel system for processing data from array of photodetectors. System built as compact integrated circuit located near focal plane. Buffer amplifier for each picture element has different offset.

  17. Parallel nearest neighbor calculations

    NASA Astrophysics Data System (ADS)

    Trease, Harold

    We are just starting to parallelize the nearest neighbor portion of our free-Lagrange code. Our implementation of the nearest neighbor reconnection algorithm has not been parallelizable (i.e., we just flip one connection at a time). In this paper we consider what sort of nearest neighbor algorithms lend themselves to being parallelized. For example, the construction of the Voronoi mesh can be parallelized, but the construction of the Delaunay mesh (dual to the Voronoi mesh) cannot because of degenerate connections. We will show our most recent attempt to tessellate space with triangles or tetrahedrons with a new nearest neighbor construction algorithm called DAM (Dial-A-Mesh). This method has the characteristics of a parallel algorithm and produces a better tessellation of space than the Delaunay mesh. Parallel processing is becoming an everyday reality for us at Los Alamos. Our current production machines are Cray YMPs with 8 processors that can run independently or combined to work on one job. We are also exploring massive parallelism through the use of two 64K processor Connection Machines (CM2), where all the processors run in lock step mode. The effective application of 3-D computer models requires the use of parallel processing to achieve reasonable "turn around" times for our calculations.

  18. Bilingual parallel programming

    SciTech Connect

    Foster, I.; Overbeek, R.

    1990-01-01

    Numerous experiments have demonstrated that computationally intensive algorithms support adequate parallelism to exploit the potential of large parallel machines. Yet successful parallel implementations of serious applications are rare. The limiting factor is clearly programming technology. None of the approaches to parallel programming that have been proposed to date -- whether parallelizing compilers, language extensions, or new concurrent languages -- seem to adequately address the central problems of portability, expressiveness, efficiency, and compatibility with existing software. In this paper, we advocate an alternative approach to parallel programming based on what we call bilingual programming. We present evidence that this approach provides an effective solution to parallel programming problems. The key idea in bilingual programming is to construct the upper levels of applications in a high-level language while coding selected low-level components in low-level languages. This approach permits the advantages of a high-level notation (expressiveness, elegance, conciseness) to be obtained without the cost in performance normally associated with high-level approaches. In addition, it provides a natural framework for reusing existing code.
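
    The bilingual idea can be illustrated, loosely and only by analogy (this is not the authors' system, which pairs a high-level notation with components written in low-level languages such as Fortran or C), by keeping program structure in a high-level language while delegating the performance-critical inner loop to a compiled kernel:

      # Analogy only: high-level orchestration in Python, low-level kernel in
      # compiled code (NumPy's C implementation stands in for the low-level layer).
      import time
      import numpy as np

      def dot_high_level(a, b):
          """Pure high-level implementation: concise and clear, but slow."""
          return sum(x * y for x, y in zip(a, b))

      def dot_low_level(a, b):
          """Delegates the inner loop to a compiled kernel."""
          return float(np.dot(a, b))

      if __name__ == "__main__":
          n = 1_000_000
          a, b = np.random.rand(n), np.random.rand(n)
          for f in (dot_high_level, dot_low_level):
              t0 = time.perf_counter()
              r = f(a, b)
              print(f"{f.__name__}: {r:.2f} in {time.perf_counter() - t0:.3f} s")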

  19. Parallel State Estimation Assessment with Practical Data

    SciTech Connect

    Chen, Yousu; Jin, Shuangshuang; Rice, Mark J.; Huang, Zhenyu

    2013-07-31

    This paper presents a parallel state estimation (PSE) implementation using a preconditioned gradient algorithm and an orthogonal decomposition-based algorithm. Preliminary tests against a commercial Energy Management System (EMS) State Estimation (SE) tool using real-world data are performed. The results show that while the preconditioned gradient algorithm can solve the SE problem more quickly with the help of parallel computing techniques, it might not be suitable for real-world data due to the large condition number of the gain matrix introduced by the wide range of measurement weights. With the help of the PETSc package and considering one iteration of the SE process, the orthogonal decomposition-based PSE algorithm can achieve a 5-20 times speedup compared with the commercial EMS tool. It is very promising that the developed PSE can solve the SE problem for large power systems at the SCADA rate, to improve grid reliability.
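
    The numerical point can be sketched with an assumed linear measurement model and hypothetical data (this is not the paper's parallel implementation): forming the gain matrix squares the condition number of the weighted Jacobian, while an orthogonal (QR) factorization works with the Jacobian directly, which matters when the measurement weights span a wide range.

      # Weighted least-squares state estimation solved two ways (illustrative data).
      import numpy as np

      rng = np.random.default_rng(0)
      n_states, n_meas = 50, 200
      H = rng.standard_normal((n_meas, n_states))      # measurement Jacobian
      x_true = rng.standard_normal(n_states)
      sigma = rng.uniform(1e-4, 1.0, n_meas)           # wide range of accuracies
      z = H @ x_true + sigma * rng.standard_normal(n_meas)
      w = 1.0 / sigma                                  # square roots of the weights

      # Gain-matrix (normal-equation) solve: condition number ~ cond(Hw)**2
      Hw, zw = w[:, None] * H, w * z
      G = Hw.T @ Hw
      x_gain = np.linalg.solve(G, Hw.T @ zw)

      # Orthogonal-decomposition solve: QR on the weighted Jacobian directly
      Q, R = np.linalg.qr(Hw)
      x_qr = np.linalg.solve(R, Q.T @ zw)

      print("cond(G)  =", np.linalg.cond(G))
      print("cond(Hw) =", np.linalg.cond(Hw))
      print("max |x_gain - x_qr| =", np.abs(x_gain - x_qr).max())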

  20. SAT Packages--An Update.

    ERIC Educational Resources Information Center

    Staples, Betsy

    1985-01-01

    Describes software packages used to help prepare for the Scholastic Aptitude Test, considering features that the packages have in common as well as unique features that each package has. Also lists and ranks the products and their features in a chart (indicating system requirements and current cost). (JN)

  1. Sustainable Library Development Training Package

    ERIC Educational Resources Information Center

    Peace Corps, 2012

    2012-01-01

    This Sustainable Library Development Training Package supports Peace Corps' Focus In/Train Up strategy, which was implemented following the 2010 Comprehensive Agency Assessment. Sustainable Library Development is a technical training package in Peace Corps programming within the Education sector. The training package addresses the Volunteer…

  2. Parallel system simulation

    SciTech Connect

    Tai, H.M.; Saeks, R.

    1984-03-01

    A relaxation algorithm for solving large-scale system simulation problems in parallel is proposed. The algorithm, which is composed of both a time-step parallel algorithm and a component-wise parallel algorithm, is described. The interconnected nature of the system, which is characterized by the component connection model, is fully exploited by this approach. A technique for finding an optimal number of the time steps is also described. Finally, this algorithm is illustrated via several examples in which the possible trade-offs between the speed-up ratio, efficiency, and waiting time are analyzed.

  3. The NAS parallel benchmarks

    NASA Technical Reports Server (NTRS)

    Bailey, David (Editor); Barton, John (Editor); Lasinski, Thomas (Editor); Simon, Horst (Editor)

    1993-01-01

    A new set of benchmarks was developed for the performance evaluation of highly parallel supercomputers. These benchmarks consist of a set of kernels, the 'Parallel Kernels,' and a simulated application benchmark. Together they mimic the computation and data movement characteristics of large scale computational fluid dynamics (CFD) applications. The principal distinguishing feature of these benchmarks is their 'pencil and paper' specification - all details of these benchmarks are specified only algorithmically. In this way many of the difficulties associated with conventional benchmarking approaches on highly parallel systems are avoided.

  4. KAPPA -- Kernel Application Package

    NASA Astrophysics Data System (ADS)

    Currie, Malcolm J.; Berry, David. S.

    KAPPA is an applications package comprising about 180 general-purpose commands for image processing, data visualisation, and manipulation of the standard Starlink data format---the NDF. It is intended to work in conjunction with Starlink's various specialised packages. In addition to the NDF, KAPPA can also process data in other formats by using the `on-the-fly' conversion scheme. Many commands can process data arrays of arbitrary dimension, and others work on both spectra and images. KAPPA operates from both the UNIX C-shell and the ICL command language. This document describes how to use KAPPA and its features. There is some description of techniques too, including a section on writing scripts. This document includes several tutorials and is illustrated with numerous examples. The bulk of this document comprises detailed descriptions of each command as well as classified and alphabetical summaries.

  5. Anticounterfeit packaging technologies

    PubMed Central

    Shah, Ruchir Y.; Prajapati, Prajesh N.; Agrawal, Y. K.

    2010-01-01

    Packaging is the coordinated system that encloses and protects the dosage form. Counterfeit drugs are the major cause of morbidity, mortality, and failure of public interest in the healthcare system. High price and well-known brands make the pharma market most vulnerable, which accounts for top priority cardiovascular, obesity, and antihyperlipidemic drugs and drugs like sildenafil. Packaging includes overt and covert technologies like barcodes, holograms, sealing tapes, and radio frequency identification devices to preserve the integrity of the pharmaceutical product. To date, however, all available techniques are synthetic and, although they provide considerable protection against counterfeiting, they have certain limitations that can be overcome through natural approaches and the application of nanotechnology principles. PMID:22247875

  6. Anticounterfeit packaging technologies.

    PubMed

    Shah, Ruchir Y; Prajapati, Prajesh N; Agrawal, Y K

    2010-10-01

    Packaging is the coordinated system that encloses and protects the dosage form. Counterfeit drugs are the major cause of morbidity, mortality, and failure of public interest in the healthcare system. High price and well-known brands make the pharma market most vulnerable, which accounts for top priority cardiovascular, obesity, and antihyperlipidemic drugs and drugs like sildenafil. Packaging includes overt and covert technologies like barcodes, holograms, sealing tapes, and radio frequency identification devices to preserve the integrity of the pharmaceutical product. To date, however, all available techniques are synthetic and, although they provide considerable protection against counterfeiting, they have certain limitations that can be overcome through natural approaches and the application of nanotechnology principles. PMID:22247875

  7. The Ettention software package.

    PubMed

    Dahmen, Tim; Marsalek, Lukas; Marniok, Nico; Turoňová, Beata; Bogachev, Sviatoslav; Trampert, Patrick; Nickels, Stefan; Slusallek, Philipp

    2016-02-01

    We present a novel software package for the problem "reconstruction from projections" in electron microscopy. The Ettention framework consists of a set of modular building-blocks for tomographic reconstruction algorithms. The well-known block iterative reconstruction method based on the Kaczmarz algorithm is implemented using these building-blocks, including adaptations specific to electron tomography. Ettention simultaneously features (1) a modular, object-oriented software design, (2) optimized access to high-performance computing (HPC) platforms such as graphics processing units (GPUs) or many-core architectures like Xeon Phi, and (3) accessibility to microscopy end-users via integration in the IMOD package and eTomo user interface. We also provide developers with a clean and well-structured application programming interface (API) that allows for extending the software easily and thus makes it an ideal platform for algorithmic research while hiding most of the technical details of high-performance computing. PMID:26686659

  8. RH Packaging Program Guidance

    SciTech Connect

    Washington TRU Solutions, LLC

    2003-08-25

    The purpose of this program guidance document is to provide technical requirements for use, operation, inspection, and maintenance of the RH-TRU 72-B Waste Shipping Package and directly related components. This document complies with the requirements as specified in the RH-TRU 72-B Safety Analysis Report for Packaging (SARP), and Nuclear Regulatory Commission (NRC) Certificate of Compliance (C of C) 9212. If there is a conflict between this document and the SARP and/or C of C, the SARP and/or C of C shall govern. The C of C states: ''...each package must be prepared for shipment and operated in accordance with the procedures described in Chapter 7.0, ''Operating Procedures,'' of the application.'' It further states: ''...each package must be tested and maintained in accordance with the procedures described in Chapter 8.0, ''Acceptance Tests and Maintenance Program of the Application.'' Chapter 9.0 of the SARP tasks the Waste Isolation Pilot Plant (WIPP) Management and Operating (M&O) contractor with assuring the packaging is used in accordance with the requirements of the C of C. Because the packaging is NRC approved, users need to be familiar with 10 CFR {section} 71.11, ''Deliberate Misconduct.'' Any time a user suspects or has indications that the conditions of approval in the C of C were not met, the Carlsbad Field Office (CBFO) shall be notified immediately. CBFO will evaluate the issue and notify the NRC if required. This document details the instructions to be followed to operate, maintain, and test the RH-TRU 72-B packaging. This Program Guidance standardizes instructions for all users. Users shall follow these instructions. Following these instructions assures that operations are safe and meet the requirements of the SARP. This document is available on the Internet at: ttp://www.ws/library/t2omi/t2omi.htm. Users are responsible for ensuring they are using the current revision and change notices. Sites may prepare their own document using the word

  9. Aquaculture information package

    SciTech Connect

    Boyd, T.; Rafferty, K.

    1998-08-01

    This package of information is intended to provide background information to developers of geothermal aquaculture projects. The material is divided into eight sections and includes information on market and price information for typical species, aquaculture water quality issues, typical species culture information, pond heat loss calculations, an aquaculture glossary, regional and university aquaculture offices and state aquaculture permit requirements. A bibliography containing 68 references is also included.

  10. Software packager user's guide

    NASA Technical Reports Server (NTRS)

    Callahan, John R.

    1995-01-01

    Software integration is a growing area of concern for many programmers and software managers because the need to build new programs quickly from existing components is greater than ever. This includes building versions of software products for multiple hardware platforms and operating systems, building programs from components written in different languages, and building systems from components that must execute on different machines in a distributed network. The goal of software integration is to make building new programs from existing components more seamless -- programmers should pay minimal attention to the underlying configuration issues involved. Libraries of reusable components and classes are important tools but only partial solutions to software development problems. Even though software components may have compatible interfaces, there may be other reasons, such as differences between execution environments, why they cannot be integrated. Often, components must be adapted or reimplemented to fit into another application because of implementation differences -- they are implemented in different programming languages, dependent on different operating system resources, or must execute on different physical machines. The software packager is a tool that allows programmers to deal with interfaces between software components and ignore complex integration details. The packager takes modular descriptions of the structure of a software system written in the package specification language and produces an integration program in the form of a makefile. If complex integration tools are needed to integrate a set of components, such as remote procedure call stubs, their use is implied by the packager automatically and stub generation tools are invoked in the corresponding makefile. The programmer deals only with the components themselves and not the details of how to build the system on any given platform.

  11. Navy packaging standardization thrusts

    NASA Astrophysics Data System (ADS)

    Kidwell, J. R.

    1982-11-01

    Standardization is a concept that is basic to our world today. The idea of reducing costs through the economics of mass production is an easy one to grasp. Henry Ford started the process of large scale standardization in this country with the Detroit production lines for his automobiles. In the process additional benefits accrued, such as improved reliability through design maturity, off-the-shelf repair parts, faster repair time, and a resultant lower cost of ownership (lower life-cycle cost). The need to attain standardization benefits with military equipments exists now. Defense budgets, although recently increased, are not going to permit us to continue the tremendous investment required to maintain even the status quo and develop new hardware at the same time. Needed are more reliable, maintainable, testable hardware in the Fleet. It is imperative to recognize the obsolescence problems created by the use of high technology devices in our equipments, and find ways to combat these shortfalls. The Navy has two packaging standardization programs that will be addressed in this paper; the Standard Electronic Modules and the Modular Avionics Packaging programs. Following a brief overview of the salient features of each program, the packaging technology aspects of the program will be addressed, and developmental areas currently being investigated will be identified.

  12. ISSUES ASSOCIATED WITH SAFE PACKAGING AND TRANSPORT OF NANOPARTICLES

    SciTech Connect

    Gupta, N.; Smith, A.

    2011-02-14

    Nanoparticles have long been recognized as hazardous substances by personnel working in the field. They are not, however, listed as a separate, distinct category of dangerous goods at present. As dangerous goods or hazardous substances, they require packaging and transportation practices which parallel the established practices for hazardous materials transport. Pending establishment of a distinct category for such materials by the Department of Transportation, existing consensus or industrial protocols must be followed. Action by DOT to establish appropriate packaging and transport requirements is recommended.

  13. 78 FR 19007 - Certain Products Having Laminated Packaging, Laminated Packaging, and Components Thereof...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-28

    ... COMMISSION Certain Products Having Laminated Packaging, Laminated Packaging, and Components Thereof.... 1337, on behalf of Lamina Packaging Innovations LLC of Longview, Texas. An amended complaint was filed... importation of certain products having laminated packaging, laminated packaging, and components thereof...

  14. 78 FR 13083 - Products Having Laminated Packaging, Laminated Packaging, and Components Thereof; Notice of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-26

    ... COMMISSION Products Having Laminated Packaging, Laminated Packaging, and Components Thereof; Notice of... Commission has received a complaint entitled Products Having Laminated Packaging, Laminated Packaging, and... filed on behalf of Lamina Packaging Innovations LLC on February 20, 2013. The complaint...

  15. Parallels with nature

    NASA Astrophysics Data System (ADS)

    2014-10-01

    Adam Nelson and Stuart Warriner, from the University of Leeds, talk with Nature Chemistry about their work to develop viable synthetic strategies for preparing new chemical structures in parallel with the identification of desirable biological activity.

  16. The Parallel Axiom

    ERIC Educational Resources Information Center

    Rogers, Pat

    1972-01-01

    Criteria for a reasonable axiomatic system are discussed. A discussion of the historical attempts to prove the independence of Euclid's parallel postulate introduces non-Euclidean geometries. Poincare's model for a non-Euclidean geometry is defined and analyzed. (LS)

  17. Parallel programming with PCN

    SciTech Connect

    Foster, I.; Tuecke, S.

    1991-12-01

    PCN is a system for developing and executing parallel programs. It comprises a high-level programming language, tools for developing and debugging programs in this language, and interfaces to Fortran and C that allow the reuse of existing code in multilingual parallel programs. Programs developed using PCN are portable across many different workstations, networks, and parallel computers. This document provides all the information required to develop parallel programs with the PCN programming system. It includes both tutorial and reference material. It also presents the basic concepts that underlie PCN, particularly where these are likely to be unfamiliar to the reader, and provides pointers to other documentation on the PCN language, programming techniques, and tools. PCN is in the public domain. The latest version of both the software and this manual can be obtained by anonymous FTP from Argonne National Laboratory in the directory pub/pcn at info.mcs.anl.gov (cf. Appendix A).

  18. Partitioning and parallel radiosity

    NASA Astrophysics Data System (ADS)

    Merzouk, S.; Winkler, C.; Paul, J. C.

    1996-03-01

    This paper proposes a theoretical framework, based on domain subdivision, for parallel radiosity. Moreover, three different implementation approaches, taking advantage of partitioning algorithms and a global shared memory architecture, are presented.

  19. Simplified Parallel Domain Traversal

    SciTech Connect

    Erickson III, David J

    2011-01-01

    Many data-intensive scientific analysis techniques require global domain traversal, which over the years has been a bottleneck for efficient parallelization across distributed-memory architectures. Inspired by MapReduce and other simplified parallel programming approaches, we have designed DStep, a flexible system that greatly simplifies efficient parallelization of domain traversal techniques at scale. In order to deliver both simplicity to users as well as scalability on HPC platforms, we introduce a novel two-tiered communication architecture for managing and exploiting asynchronous communication loads. We also integrate our design with advanced parallel I/O techniques that operate directly on native simulation output. We demonstrate DStep by performing teleconnection analysis across ensemble runs of terascale atmospheric CO{sub 2} and climate data, and we show scalability results on up to 65,536 IBM BlueGene/P cores.
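
    A toy sketch of the MapReduce-style pattern, under stated assumptions (the field, the banding rule, and all function names are hypothetical; DStep's actual API and its two-tiered asynchronous communication layer are not reproduced): worker processes traverse blocks of a gridded field and emit keyed partial sums, which a reduce step then merges.

      # Toy map/reduce over a decomposed gridded domain (illustrative only).
      from collections import defaultdict
      from multiprocessing import Pool
      import numpy as np

      def map_block(block):
          """Map phase: traverse one block and emit (zonal band, partial sum)."""
          data, row_offset = block
          out = defaultdict(float)
          for i, row in enumerate(data):
              band = (row_offset + i) // 10        # group rows into latitude-like bands
              out[band] += float(row.sum())
          return dict(out)

      def reduce_partials(partials):
          """Reduce phase: merge the per-worker partial sums."""
          totals = defaultdict(float)
          for part in partials:
              for k, v in part.items():
                  totals[k] += v
          return dict(totals)

      if __name__ == "__main__":
          field = np.random.rand(100, 360)         # stand-in for a 2-D gridded field
          blocks = [(field[i:i + 25], i) for i in range(0, 100, 25)]
          with Pool(4) as pool:
              partials = pool.map(map_block, blocks)
          print(reduce_partials(partials))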

  20. Plutonium stabilization and packaging system

    SciTech Connect

    1996-05-01

    This document describes the functional design of the Plutonium Stabilization and Packaging System (Pu SPS). The objective of this system is to stabilize and package plutonium metals and oxides of greater than 50 wt%, as well as other selected isotopes, in accordance with the requirements of the DOE standard for safe storage of these materials for 50 years. This system will support completion of stabilization and packaging campaigns of the inventory at a number of affected sites before the year 2002. The package will be standard for all sites and will provide a minimum of two uncontaminated, organics free confinement barriers for the packaged material.

  1. Scalable parallel communications

    NASA Technical Reports Server (NTRS)

    Maly, K.; Khanna, S.; Overstreet, C. M.; Mukkamala, R.; Zubair, M.; Sekhar, Y. S.; Foudriat, E. C.

    1992-01-01

    Coarse-grain parallelism in networking (that is, the use of multiple protocol processors running replicated software sending over several physical channels) can be used to provide gigabit communications for a single application. Since parallel network performance is highly dependent on real issues such as hardware properties (e.g., memory speeds and cache hit rates), operating system overhead (e.g., interrupt handling), and protocol performance (e.g., effect of timeouts), we have performed detailed simulations studies of both a bus-based multiprocessor workstation node (based on the Sun Galaxy MP multiprocessor) and a distributed-memory parallel computer node (based on the Touchstone DELTA) to evaluate the behavior of coarse-grain parallelism. Our results indicate: (1) coarse-grain parallelism can deliver multiple 100 Mbps with currently available hardware platforms and existing networking protocols (such as Transmission Control Protocol/Internet Protocol (TCP/IP) and parallel Fiber Distributed Data Interface (FDDI) rings); (2) scale-up is near linear in n, the number of protocol processors, and channels (for small n and up to a few hundred Mbps); and (3) since these results are based on existing hardware without specialized devices (except perhaps for some simple modifications of the FDDI boards), this is a low cost solution to providing multiple 100 Mbps on current machines. In addition, from both the performance analysis and the properties of these architectures, we conclude: (1) multiple processors providing identical services and the use of space division multiplexing for the physical channels can provide better reliability than monolithic approaches (it also provides graceful degradation and low-cost load balancing); (2) coarse-grain parallelism supports running several transport protocols in parallel to provide different types of service (for example, one TCP handles small messages for many users, other TCP's running in parallel provide high bandwidth

  2. 21 CFR 355.20 - Packaging conditions.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... (toothpastes and tooth powders) packages shall not contain more than 276 milligrams (mg) total fluorine per... packages shall not contain more than 120 mg total fluorine per package. (3) Exception. Package...

  3. 21 CFR 355.20 - Packaging conditions.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... (toothpastes and tooth powders) packages shall not contain more than 276 milligrams (mg) total fluorine per... packages shall not contain more than 120 mg total fluorine per package. (3) Exception. Package...

  4. 21 CFR 355.20 - Packaging conditions.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... (toothpastes and tooth powders) packages shall not contain more than 276 milligrams (mg) total fluorine per... packages shall not contain more than 120 mg total fluorine per package. (3) Exception. Package...

  5. 21 CFR 355.20 - Packaging conditions.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... (toothpastes and tooth powders) packages shall not contain more than 276 milligrams (mg) total fluorine per... packages shall not contain more than 120 mg total fluorine per package. (3) Exception. Package...

  6. Parallel image compression

    NASA Technical Reports Server (NTRS)

    Reif, John H.

    1987-01-01

    A parallel compression algorithm for the 16,384-processor MPP machine was developed. The serial version of the algorithm can be viewed as a combination of on-line dynamic lossless text compression techniques (which employ simple learning strategies) and vector quantization. These concepts are described. How these concepts are combined to form a new strategy for performing dynamic on-line lossy compression is discussed. Finally, the implementation of this algorithm in a massively parallel fashion on the MPP is discussed.
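
    A toy sketch of the vector-quantization ingredient, assuming a random stand-in image and plain k-means training (the dynamic lossless text-compression component and the MPP parallelization are not shown): a codebook is trained on image blocks, and each block is then encoded by the index of its nearest codeword.

      # Toy vector quantization of 4x4 image blocks (illustrative only).
      import numpy as np

      def train_codebook(vectors, k=16, iters=20, seed=0):
          """Plain k-means codebook training."""
          rng = np.random.default_rng(seed)
          codebook = vectors[rng.choice(len(vectors), k, replace=False)]
          for _ in range(iters):
              d = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
              labels = d.argmin(axis=1)
              for j in range(k):
                  members = vectors[labels == j]
                  if len(members):
                      codebook[j] = members.mean(axis=0)
          return codebook

      def encode(vectors, codebook):
          d = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
          return d.argmin(axis=1)                  # indices, far smaller than raw data

      if __name__ == "__main__":
          image = np.random.rand(64, 64)
          blocks = image.reshape(16, 4, 16, 4).transpose(0, 2, 1, 3).reshape(-1, 16)
          cb = train_codebook(blocks)
          idx = encode(blocks, cb)
          recon = cb[idx].reshape(16, 16, 4, 4).transpose(0, 2, 1, 3).reshape(64, 64)
          print("MSE:", float(((image - recon) ** 2).mean()))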

  7. Continuous parallel coordinates.

    PubMed

    Heinrich, Julian; Weiskopf, Daniel

    2009-01-01

    Typical scientific data is represented on a grid with appropriate interpolation or approximation schemes, defined on a continuous domain. The visualization of such data in parallel coordinates may reveal patterns latently contained in the data and thus can improve the understanding of multidimensional relations. In this paper, we adopt the concept of continuous scatterplots for the visualization of spatially continuous input data to derive a density model for parallel coordinates. Based on the point-line duality between scatterplots and parallel coordinates, we propose a mathematical model that maps density from a continuous scatterplot to parallel coordinates and present different algorithms for both numerical and analytical computation of the resulting density field. In addition, we show how the 2-D model can be used to successively construct continuous parallel coordinates with an arbitrary number of dimensions. Since continuous parallel coordinates interpolate data values within grid cells, a scalable and dense visualization is achieved, which will be demonstrated for typical multi-variate scientific data. PMID:19834230
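
    The point-line duality referred to above can be checked with a small assumed example (standard formulation, not the paper's density model): a data point (a, b) becomes the segment from (0, a) to (1, b) between the two parallel axes, and every point on the scatterplot line y = m*x + c maps to a segment passing through the dual point (1/(1 - m), c/(1 - m)).

      # Numerical check of the scatterplot/parallel-coordinates duality.
      import numpy as np

      def pc_segment(a, b, t):
          """Value of the parallel-coordinates segment of data point (a, b) at position t."""
          return (1.0 - t) * a + t * b

      m, c = 0.5, 2.0                              # scatterplot line y = m*x + c
      dual_x, dual_y = 1.0 / (1.0 - m), c / (1.0 - m)

      for a in np.linspace(-3.0, 3.0, 5):
          b = m * a + c                            # a point on that line
          # extending the segment to x = dual_x hits the dual point for every a
          print(f"a={a:5.2f}  value at dual x: {pc_segment(a, b, dual_x):6.3f}  (expected {dual_y:6.3f})")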

  8. Optimal segmentation and packaging process

    DOEpatents

    Kostelnik, Kevin M.; Meservey, Richard H.; Landon, Mark D.

    1999-01-01

    A process for improving packaging efficiency uses three-dimensional, computer-simulated models with various optimization algorithms to determine the optimal segmentation process and packaging configurations based on constraints including container limitations. The present invention is applied to a process for decontaminating, decommissioning (D&D), and remediating a nuclear facility involving the segmentation and packaging of contaminated items in waste containers in order to minimize the number of cuts, maximize packaging density, and reduce worker radiation exposure. A three-dimensional, computer-simulated facility model of the contaminated items is created. The contaminated items are differentiated. The optimal location, orientation, and sequence of the segmentation and packaging of the contaminated items is determined using the simulated model, the algorithms, and various constraints including container limitations. The cut locations and orientations are transposed to the simulated model. The contaminated items are actually segmented and packaged. The segmentation and packaging may be simulated beforehand. In addition, the contaminated items may be cataloged and recorded.
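
    A greatly simplified one-dimensional analogue of the packing step (illustrative only; the patented process optimizes cut locations and three-dimensional configurations under container and radiation-exposure constraints) is first-fit-decreasing packing of segment volumes into containers:

      # First-fit-decreasing packing of item volumes into fixed-capacity containers.
      def first_fit_decreasing(volumes, capacity):
          """Assign each item volume to the first container with room, largest first."""
          bins = []                                # remaining capacity per container
          assignment = []
          for vol in sorted(volumes, reverse=True):
              for i, free in enumerate(bins):
                  if vol <= free:
                      bins[i] -= vol
                      assignment.append((vol, i))
                      break
              else:
                  bins.append(capacity - vol)
                  assignment.append((vol, len(bins) - 1))
          return assignment, len(bins)

      if __name__ == "__main__":
          segments = [0.7, 0.5, 0.4, 0.4, 0.3, 0.2, 0.2, 0.1]   # item volumes (m^3)
          plan, n_containers = first_fit_decreasing(segments, capacity=1.0)
          print(plan)
          print("containers used:", n_containers)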

  9. Teuchos Utility Package

    Energy Science and Technology Software Center (ESTSC)

    2004-03-01

    Teuchos is designed to provide portable, object-oriented tools for Trilinos developers and users. This includes templated wrappers to BLAS/LAPACK, a serial dense matrix class, a parameter list, XML parsing utilities, reference counted pointer (smart pointer) utilities, and more. These tools are designed to run on both serial and parallel computers.

  10. Packaging - Materials review

    NASA Astrophysics Data System (ADS)

    Herrmann, Matthias

    2014-06-01

    Nowadays, a large number of different electrochemical energy storage systems are known. In the last two decades the development was strongly driven by a continuously growing market of portable electronic devices (e.g. cellular phones, lap top computers, camcorders, cameras, tools). Current intensive efforts are under way to develop systems for automotive industry within the framework of electrically propelled mobility (e.g. hybrid electric vehicles, plug-in hybrid electric vehicles, full electric vehicles) and also for the energy storage market (e.g. electrical grid stability, renewable energies). Besides the different systems (cell chemistries), electrochemical cells and batteries were developed and are offered in many shapes, sizes and designs, in order to meet performance and design requirements of the widespread applications. Proper packaging is thereby one important technological step for designing optimum, reliable and safe batteries for operation. In this contribution, current packaging approaches of cells and batteries together with the corresponding materials are discussed. The focus is laid on rechargeable systems for industrial applications (i.e. alkaline systems, lithium-ion, lead-acid). In principle, four different cell types (shapes) can be identified - button, cylindrical, prismatic and pouch. Cell size can be either in accordance with international (e.g. International Electrotechnical Commission, IEC) or other standards or can meet application-specific dimensions. Since cell housing or container, terminals and, if necessary, safety installations as inactive (non-reactive) materials reduce energy density of the battery, the development of low-weight packages is a challenging task. In addition to that, other requirements have to be fulfilled: mechanical stability and durability, sealing (e.g. high permeation barrier against humidity for lithium-ion technology), high packing efficiency, possible installation of safety devices (current interrupt device

  11. Packaging - Materials review

    SciTech Connect

    Herrmann, Matthias

    2014-06-16

    Nowadays, a large number of different electrochemical energy storage systems are known. In the last two decades the development was strongly driven by a continuously growing market of portable electronic devices (e.g. cellular phones, lap top computers, camcorders, cameras, tools). Current intensive efforts are under way to develop systems for automotive industry within the framework of electrically propelled mobility (e.g. hybrid electric vehicles, plug-in hybrid electric vehicles, full electric vehicles) and also for the energy storage market (e.g. electrical grid stability, renewable energies). Besides the different systems (cell chemistries), electrochemical cells and batteries were developed and are offered in many shapes, sizes and designs, in order to meet performance and design requirements of the widespread applications. Proper packaging is thereby one important technological step for designing optimum, reliable and safe batteries for operation. In this contribution, current packaging approaches of cells and batteries together with the corresponding materials are discussed. The focus is laid on rechargeable systems for industrial applications (i.e. alkaline systems, lithium-ion, lead-acid). In principle, four different cell types (shapes) can be identified - button, cylindrical, prismatic and pouch. Cell size can be either in accordance with international (e.g. International Electrotechnical Commission, IEC) or other standards or can meet application-specific dimensions. Since cell housing or container, terminals and, if necessary, safety installations as inactive (non-reactive) materials reduce energy density of the battery, the development of low-weight packages is a challenging task. In addition to that, other requirements have to be fulfilled: mechanical stability and durability, sealing (e.g. high permeation barrier against humidity for lithium-ion technology), high packing efficiency, possible installation of safety devices (current interrupt device

  12. Aristos Optimization Package

    Energy Science and Technology Software Center (ESTSC)

    2007-03-01

    Aristos is a Trilinos package for nonlinear continuous optimization, based on full-space sequential quadratic programming (SQP) methods. Aristos is specifically designed for the solution of large-scale constrained optimization problems in which the linearized constraint equations require iterative (i.e. inexact) linear solver techniques. Aristos' unique feature is an efficient handling of inexactness in linear system solves. Aristos currently supports the solution of equality-constrained convex and nonconvex optimization problems. It has been used successfully in the area of PDE-constrained optimization, for the solution of nonlinear optimal control, optimal design, and inverse problems.

  13. Safety Analysis Report for packaging (onsite) steel waste package

    SciTech Connect

    BOEHNKE, W.M.

    2000-07-13

    The steel waste package is used primarily for the shipment of remote-handled radioactive waste from the 324 Building to the 200 Area for interim storage. The steel waste package is authorized for shipment of transuranic isotopes. The maximum allowable radioactive material that is authorized is 500,000 Ci. This exceeds the highway route controlled quantity (3,000 A{sub 2}s) and is a type B packaging.

  14. Parallelization of Rocket Engine System Software (Press)

    NASA Technical Reports Server (NTRS)

    Cezzar, Ruknet

    1996-01-01

    The main goal is to assess parallelization requirements for the Rocket Engine Numeric Simulator (RENS) project which, aside from gathering information on liquid-propelled rocket engines and setting forth requirements, involves a large FORTRAN-based package at NASA Lewis Research Center and TDK software developed by SUBR/UWF. The ultimate aim is to develop, test, integrate, and suitably deploy a family of software packages on various aspects and facets of rocket engines using liquid propellants. At present, all project efforts by the funding agency, NASA Lewis Research Center, and the HBCU participants are disseminated over the internet using world wide web home pages. Considering the obvious expense of actual field trials, the benefits of software simulators are potentially enormous. When realized, these benefits will be analogous to those provided by numerous CAD/CAM packages and flight-training simulators. According to the overall task assignments, Hampton University's role is to collect all available software, place them in a common format, assess and evaluate, define interfaces, and provide integration. Most importantly, HU's mission is to see to it that the real-time performance is assured. This involves source code translations, porting, and distribution. The porting will be done in two phases: first, place all software on the Cray X-MP platform using FORTRAN. After testing and evaluation on the Cray X-MP, the code will be translated to C++ and ported to the parallel nCUBE platform. At present, we are evaluating another option of distributed processing over local area networks using Sun NFS, Ethernet, TCP/IP. Considering the heterogeneous nature of the present software (e.g., first started as an expert system using LISP machines) which now involves FORTRAN code, the effort is expected to be quite challenging.

  15. Parallel Analysis Tools for Ultra-Large Climate Data Sets

    NASA Astrophysics Data System (ADS)

    Jacob, Robert; Krishna, Jayesh; Xu, Xiabing; Mickelson, Sheri; Wilde, Mike; Peterson, Kara; Bochev, Pavel; Latham, Robert; Tautges, Tim; Brown, David; Brownrigg, Richard; Haley, Mary; Shea, Dennis; Huang, Wei; Middleton, Don; Schuchardt, Karen; Yin, Jian

    2013-04-01

    While climate models have used parallelism for several years, the post-processing tools are still mostly single-threaded applications and many are closed source. These tools are becoming a bottleneck in the production of new climate knowledge when they confront terabyte-sized output from high-resolution climate models. The ParVis project is using and creating Free and Open Source tools that bring data and task parallelism to climate model analysis to enable analysis of large climate data sets. ParVis is using the Swift task-parallel language to implement a diagnostic suite that generates over 600 plots of atmospheric quantities. ParVis has also created a Parallel Gridded Analysis Library (ParGAL) which implements many common climate analysis operations in a data-parallel fashion using the Message Passing Interface. ParGAL has in turn been built on sophisticated packages for describing grids in parallel (the Mesh Oriented database, MOAB), performing vector operations on arbitrary grids (Intrepid), and reading data in parallel (PnetCDF). ParGAL is being used to implement a parallel version of the NCAR Command Language (NCL) called ParNCL. ParNCL/ParGAL not only speeds up analysis of large datasets but also allows operations to be performed on native grids, eliminating the need to transform data to latitude-longitude grids. All of the tools ParVis is creating are available as free and open source software.

  16. Parallel optical sampler

    SciTech Connect

    Tauke-Pedretti, Anna; Skogen, Erik J; Vawter, Gregory A

    2014-05-20

    An optical sampler includes first and second 1×n optical beam splitters splitting an input optical sampling signal and an optical analog input signal into n parallel channels, respectively, a plurality of optical delay elements providing n parallel delayed input optical sampling signals, n photodiodes converting the n parallel optical analog input signals into n respective electrical output signals, and n optical modulators modulating the input optical sampling signal or the optical analog input signal by the respective electrical output signals, and providing n successive optical samples of the optical analog input signal. A plurality of output photodiodes and eADCs convert the n successive optical samples to n successive digital samples. The optical modulator may be a photodiode interconnected Mach-Zehnder Modulator. A method of sampling the optical analog input signal is disclosed.

  17. Tamper indicating packaging

    SciTech Connect

    Baumann, M.J.; Bartberger, J.C.; Welch, T.D.

    1994-08-01

    Protecting sensitive items from undetected tampering in an unattended environment is crucial to the success of non-proliferation efforts relying on the verification of critical activities. Tamper Indicating Packaging (TIP) technologies are applied to containers, packages, and equipment that require an indication of a tamper attempt. Examples include: the transportation and storage of nuclear material, the operation and shipment of surveillance equipment and monitoring sensors, and the retail storage of medicine and food products. The spectrum of adversarial tampering ranges from attempted concealment of a pin-hole sized penetration to the complete container replacement, which would involve counterfeiting efforts of various degrees. Sandia National Laboratories (SNL) has developed a technology base for advanced TIP materials, sensors, designs, and processes which can be adapted to various future monitoring systems. The purpose of this technology base is to investigate potential new technologies, and to perform basic research of advanced technologies. This paper will describe the theory of TIP technologies and recent investigations of TIP technologies at SNL.

  18. Japan's electronic packaging technologies

    NASA Astrophysics Data System (ADS)

    Tummala, Rao R.; Pecht, Michael

    1995-02-01

    The JTEC panel found Japan to have significant leadership over the United States in the strategic area of electronic packaging. Many technologies and products once considered the 'heart and soul' of U.S. industry have been lost over the past decades to Japan and other Asian countries. The loss of consumer electronics technologies and products is the most notable of these losses, because electronics is the United States' largest employment sector and is critical for growth businesses in consumer products, computers, automobiles, aerospace, and telecommunications. In the past there was a distinction between consumer and industrial product technologies. While Japan concentrated on the consumer market, the United States dominated the industrial sector. No such distinction is anticipated in the future; the consumer-oriented technologies Japan has dominated are expected to characterize both domains. The future of U.S. competitiveness will, therefore, depend on the ability of the United States to rebuild its technological capabilities in the area of portable electronic packaging.

  19. Japan's electronic packaging technologies

    NASA Technical Reports Server (NTRS)

    Tummala, Rao R.; Pecht, Michael

    1995-01-01

    The JTEC panel found Japan to have significant leadership over the United States in the strategic area of electronic packaging. Many technologies and products once considered the 'heart and soul' of U.S. industry have been lost over the past decades to Japan and other Asian countries. The loss of consumer electronics technologies and products is the most notable of these losses, because electronics is the United States' largest employment sector and is critical for growth businesses in consumer products, computers, automobiles, aerospace, and telecommunications. In the past there was a distinction between consumer and industrial product technologies. While Japan concentrated on the consumer market, the United States dominated the industrial sector. No such distinction is anticipated in the future; the consumer-oriented technologies Japan has dominated are expected to characterize both domains. The future of U.S. competitiveness will, therefore, depend on the ability of the United States to rebuild its technological capabilities in the area of portable electronic packaging.

  20. Space station power semiconductor package

    NASA Technical Reports Server (NTRS)

    Balodis, Vilnis; Berman, Albert; Devance, Darrell; Ludlow, Gerry; Wagner, Lee

    1987-01-01

    A package of high-power switching semiconductors for the space station has been designed and fabricated. The package includes a high-voltage (600 volts), high-current (50 amps) NPN Fast Switching Power Transistor and a high-voltage (1200 volts), high-current (50 amps) Fast Recovery Diode. The package features an isolated collector for the transistor and an isolated anode for the diode. Beryllia is used as the isolation material, resulting in a thermal resistance for both devices of 0.2 degrees per watt. Additional features include a hermetic seal for long life -- greater than 10 years in a space environment. Also, the package design resulted in a low electrical energy loss through the reduction of eddy currents, stray inductances, circuit inductance, and capacitance. The required package design and device parameters have been achieved. Test results for the transistor and diode utilizing the space station package are given.

  1. Performance characteristics of a cosmology package on leading HPC architectures

    SciTech Connect

    Carter, Jonathan; Borrill, Julian; Oliker, Leonid

    2004-01-01

    The Cosmic Microwave Background (CMB) is a snapshot of the Universe some 400,000 years after the Big Bang. The pattern of anisotropies in the CMB carries a wealth of information about the fundamental parameters of cosmology. Extracting this information is an extremely computationally expensive endeavor, requiring massively parallel computers and software packages capable of exploiting them. One such package is the Microwave Anisotropy Dataset Computational Analysis Package (MADCAP) which has been used to analyze data from a number of CMB experiments. In this work, we compare MADCAP performance on the vector-based Earth Simulator (ES) and Cray X1 architectures and two leading superscalar systems, the IBM Power3 and Power4. Our results highlight the complex interplay between the problem size, architectural paradigm, interconnect, and vendor-supplied numerical libraries, while isolating the I/O file system as the key bottleneck across all the platforms.

  2. User's Guide for ENSAERO_FE Parallel Finite Element Solver

    NASA Technical Reports Server (NTRS)

    Eldred, Lloyd B.; Guruswamy, Guru P.

    1999-01-01

    A high fidelity parallel static structural analysis capability is created and interfaced to the multidisciplinary analysis package ENSAERO-MPI of Ames Research Center. This new module replaces ENSAERO's lower fidelity simple finite element and modal modules. Full aircraft structures may be more accurately modeled using the new finite element capability. Parallel computation is performed by breaking the full structure into multiple substructures. This approach is conceptually similar to ENSAERO's multizonal fluid analysis capability. The new substructure code is used to solve the structural finite element equations for each substructure in parallel. NASTRAN/COSMIC is utilized as a front end for this code. Its full library of elements can be used to create an accurate and realistic aircraft model. It is used to create the stiffness matrices for each substructure. The new parallel code then uses an iterative preconditioned conjugate gradient method to solve the global structural equations for the substructure boundary nodes.
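
    A minimal serial sketch of the preconditioned conjugate gradient iteration itself, with Jacobi preconditioning and a 1-D stiffness-like test matrix as illustrative assumptions (the parallel substructured solve over boundary nodes is not shown):

      # Jacobi-preconditioned conjugate gradient for a symmetric positive definite system.
      import numpy as np

      def pcg(A, b, tol=1e-10, max_iter=1000):
          """Solve A x = b for SPD A, preconditioned with M = diag(A)."""
          x = np.zeros_like(b)
          r = b - A @ x
          M_inv = 1.0 / np.diag(A)
          z = M_inv * r
          p = z.copy()
          rz = r @ z
          for k in range(max_iter):
              Ap = A @ p
              alpha = rz / (p @ Ap)
              x += alpha * p
              r -= alpha * Ap
              if np.linalg.norm(r) < tol * np.linalg.norm(b):
                  return x, k + 1
              z = M_inv * r
              rz_new = r @ z
              p = z + (rz_new / rz) * p
              rz = rz_new
          return x, max_iter

      if __name__ == "__main__":
          n = 200                                  # 1-D Laplacian stiffness-like matrix
          A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
          b = np.ones(n)
          x, iters = pcg(A, b)
          print("iterations:", iters, "residual:", np.linalg.norm(b - A @ x))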

  3. Coarrays for Parallel Processing

    NASA Technical Reports Server (NTRS)

    Snyder, W. Van

    2011-01-01

    The design of the Coarray feature of Fortran 2008 was guided by answering the question "What is the smallest change required to convert Fortran to a robust and efficient parallel language?" Two fundamental issues that any parallel programming model must address are work distribution and data distribution. In order to coordinate work distribution and data distribution, methods for communication and synchronization must be provided. Although originally designed for Fortran, the Coarray paradigm has stimulated development in other languages. X10, Chapel, UPC, Titanium, and class libraries being developed for C++ have the same conceptual framework.

  4. Speeding up parallel processing

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.

    1988-01-01

    In 1967 Amdahl expressed doubts about the ultimate utility of multiprocessors. The formulation, now called Amdahl's law, became part of the computing folklore and has inspired much skepticism about the ability of the current generation of massively parallel processors to efficiently deliver all their computing power to programs. The widely publicized recent results of a group at Sandia National Laboratory, which showed speedup on a 1024 node hypercube of over 500 for three fixed size problems and over 1000 for three scalable problems, have convincingly challenged this bit of folklore and have given new impetus to parallel scientific computing.
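
    The contrast behind these results can be made concrete with the two standard formulas (a hedged worked example, not the Sandia data): Amdahl's fixed-size speedup 1/(s + (1 - s)/n) and the scaled-size (Gustafson) speedup s + (1 - s)n, for serial fraction s on n processors. With n = 1024 and a small s, the first saturates at a few hundred while the second approaches n, consistent with the magnitudes quoted above.

      # Fixed-size (Amdahl) vs scaled-size (Gustafson) speedup for serial fraction s.
      def amdahl(serial_fraction, n):
          return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n)

      def gustafson(serial_fraction, n):
          return serial_fraction + (1.0 - serial_fraction) * n

      if __name__ == "__main__":
          n = 1024
          for s in (0.1, 0.01, 0.001):
              print(f"s={s:6.3f}  Amdahl: {amdahl(s, n):8.1f}   Gustafson: {gustafson(s, n):8.1f}")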

  5. Programming parallel vision algorithms

    SciTech Connect

    Shapiro, L.G.

    1988-01-01

    Computer vision requires the processing of large volumes of data and requires parallel architectures and algorithms to be useful in real-time, industrial applications. The INSIGHT dataflow language was designed to allow encoding of vision algorithms at all levels of the computer vision paradigm. INSIGHT programs, which are relational in nature, can be translated into a graph structure that represents an architecture for solving a particular vision problem or a configuration of a reconfigurable computational network. The authors consider here INSIGHT programs that produce a parallel net architecture for solving low-, mid-, and high-level vision tasks.

  6. The NAS Parallel Benchmarks

    SciTech Connect

    Bailey, David H.

    2009-11-15

    The NAS Parallel Benchmarks (NPB) are a suite of parallel computer performance benchmarks. They were originally developed at the NASA Ames Research Center in 1991 to assess high-end parallel supercomputers. Although they are no longer used as widely as they once were for comparing high-end system performance, they continue to be studied and analyzed a great deal in the high-performance computing community. The acronym 'NAS' originally stood for the Numerical Aeronautical Simulation Program at NASA Ames. The name of this organization was subsequently changed to the Numerical Aerospace Simulation Program, and more recently to the NASA Advanced Supercomputing Center, although the acronym remains 'NAS.' The developers of the original NPB suite were David H. Bailey, Eric Barszcz, John Barton, David Browning, Russell Carter, Leo Dagum, Rod Fatoohi, Samuel Fineberg, Paul Frederickson, Thomas Lasinski, Rob Schreiber, Horst Simon, V. Venkatakrishnan and Sisira Weeratunga. The original NAS Parallel Benchmarks consisted of eight individual benchmark problems, each of which focused on some aspect of scientific computing. The principal focus was in computational aerophysics, although most of these benchmarks have much broader relevance, since in a much larger sense they are typical of many real-world scientific computing applications. The NPB suite grew out of the need for a more rational procedure to select new supercomputers for acquisition by NASA. The emergence of commercially available highly parallel computer systems in the late 1980s offered an attractive alternative to parallel vector supercomputers that had been the mainstay of high-end scientific computing. However, the introduction of highly parallel systems was accompanied by a regrettable level of hype, not only on the part of the commercial vendors but even, in some cases, by scientists using the systems. As a result, it was difficult to discern whether the new systems offered any fundamental performance advantage

  7. Adaptive parallel logic networks

    NASA Technical Reports Server (NTRS)

    Martinez, Tony R.; Vidal, Jacques J.

    1988-01-01

    Adaptive, self-organizing concurrent systems (ASOCS) that combine self-organization with massive parallelism for such applications as adaptive logic devices, robotics, process control, and system malfunction management, are presently discussed. In ASOCS, an adaptive network composed of many simple computing elements operating in combinational and asynchronous fashion is used and problems are specified by presenting if-then rules to the system in the form of Boolean conjunctions. During data processing, which is a different operational phase from adaptation, the network acts as a parallel hardware circuit.

  8. Highly parallel computation

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.; Tichy, Walter F.

    1990-01-01

    Among the highly parallel computing architectures required for advanced scientific computation, those designated 'MIMD' and 'SIMD' have yielded the best results to date. The present evaluation of the development status of these architectures shows that neither has attained a decisive advantage in the treatment of most near-homogeneous problems; for problems involving numerous dissimilar parts, however, such currently speculative architectures as 'neural networks' or 'data flow' machines may be required. Data flow computers are the most practical form of MIMD fine-grained parallel computers yet conceived; they automatically solve the problem of assigning virtual processors to the real processors in the machine.

  9. Packaging investigation of optoelectronic devices

    NASA Astrophysics Data System (ADS)

    Zhike, Zhang; Yu, Liu; Jianguo, Liu; Ninghua, Zhu

    2015-10-01

    Compared with microelectronic packaging, optoelectronic packaging as a new packaging type has been developed rapidly and it will play an essential role in optical communication. In this paper, we try to summarize the development history, research status, technology issues and future prospects, and hope to provide a meaningful reference. Project supported by the National High Technology Research and Development Program of China (Nos. 2013AA014201, 2013AA014203) and the National Natural Science Foundation of China (Nos. 61177080, 61335004, 61275031).

  10. IN-PACKAGE CHEMISTRY ABSTRACTION

    SciTech Connect

    E. Thomas

    2005-07-14

    This report was developed in accordance with the requirements in ''Technical Work Plan for Postclosure Waste Form Modeling'' (BSC 2005 [DIRS 173246]). The purpose of the in-package chemistry model is to predict the bulk chemistry inside of a breached waste package and to provide simplified expressions of that chemistry as a function of time after breach to Total Systems Performance Assessment for the License Application (TSPA-LA). The scope of this report is to describe the development and validation of the in-package chemistry model. The in-package model is a combination of two models, a batch reactor model, which uses the EQ3/6 geochemistry-modeling tool, and a surface complexation model, which is applied to the results of the batch reactor model. The batch reactor model considers chemical interactions of water with the waste package materials, and the waste form for commercial spent nuclear fuel (CSNF) waste packages and codisposed (CDSP) waste packages containing high-level waste glass (HLWG) and DOE spent fuel. The surface complexation model includes the impact of fluid-surface interactions (i.e., surface complexation) on the resulting fluid composition. The model examines two types of water influx: (1) the condensation of water vapor diffusing into the waste package, and (2) seepage water entering the waste package as a liquid from the drift. (1) Vapor-Influx Case: The condensation of vapor onto the waste package internals is simulated as pure H{sub 2}O and enters at a rate determined by the water vapor pressure for representative temperature and relative humidity conditions. (2) Liquid-Influx Case: The water entering a waste package from the drift is simulated as typical groundwater and enters at a rate determined by the amount of seepage available to flow through openings in a breached waste package.