Sample records for Cray XT5 system

  1. OPAL: An Open-Source MPI-IO Library over Cray XT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Weikuan; Vetter, Jeffrey S; Canon, Richard Shane

    Parallel I/O over the Cray XT is supported by a vendor-supplied MPI-IO package. This package contains a proprietary ADIO implementation built on top of the sysio library. While it is reasonable to maintain a stable code base for the convenience of application scientists, it is also very important for system developers and researchers to analyze and assess the effectiveness of parallel I/O software and, accordingly, tune and optimize the MPI-IO implementation. A proprietary parallel I/O code base forfeits such flexibility. On the other hand, a generic UFS-based MPI-IO implementation is typically used on many Linux-based platforms. We have developed an open-source MPI-IO package over Lustre, referred to as OPAL (OPportunistic and Adaptive MPI-IO Library over Lustre). OPAL provides a single source-code base for MPI-IO over Lustre on Cray XT and Linux platforms. Compared to the Cray implementation, OPAL provides a number of useful features, including arbitrary specification of striping patterns and Lustre-stripe-aligned file domain partitioning. This paper presents performance comparisons between OPAL and Cray's proprietary implementation. Our evaluation demonstrates that OPAL achieves performance comparable to the Cray implementation. We also exemplify the benefits of an open-source package in revealing the underpinnings of parallel I/O performance.
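
    The "arbitrary specification of striping patterns" mentioned above is ordinarily exposed to applications through MPI-IO hints. The sketch below is illustrative, not OPAL's actual API: the hint names follow common ROMIO conventions, and whether a particular MPI-IO stack honors them is implementation-dependent.

      /* Sketch: requesting a Lustre striping pattern through MPI-IO hints.
       * "striping_factor" / "striping_unit" are ROMIO-convention hint names
       * (an assumption here, not confirmed by the abstract). */
      #include <mpi.h>

      int main(int argc, char **argv)
      {
          MPI_File fh;
          MPI_Info info;

          MPI_Init(&argc, &argv);
          MPI_Info_create(&info);
          MPI_Info_set(info, "striping_factor", "16");    /* stripe over 16 OSTs */
          MPI_Info_set(info, "striping_unit", "1048576"); /* 1 MiB stripes */

          MPI_File_open(MPI_COMM_WORLD, "output.dat",
                        MPI_MODE_CREATE | MPI_MODE_WRONLY, info, &fh);
          /* ... collective writes, e.g. MPI_File_write_at_all ... */
          MPI_File_close(&fh);
          MPI_Info_free(&info);
          MPI_Finalize();
          return 0;
      }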

  2. Engineering PFLOTRAN for Scalable Performance on Cray XT and IBM BlueGene Architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mills, Richard T; Sripathi, Vamsi K; Mahinthakumar, Gnanamanika

    We describe PFLOTRAN - a code for simulation of coupled hydro-thermal-chemical processes in variably saturated, non-isothermal, porous media - and the approaches we have employed to obtain scalable performance on some of the largest supercomputers in the world. We present detailed analyses of I/O and solver performance on Jaguar, the Cray XT5 at Oak Ridge National Laboratory, and Intrepid, the IBM BlueGene/P at Argonne National Laboratory, which have guided our choice of algorithms.

  3. Integrating Grid Services into the Cray XT4 Environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NERSC; Cholia, Shreyas; Lin, Hwa-Chun Wendy

    2009-05-01

    The 38,640-core Cray XT4 "Franklin" system at the National Energy Research Scientific Computing Center (NERSC) is a massively parallel resource available to Department of Energy researchers that also provides on-demand grid computing to the Open Science Grid. The integration of grid services on Franklin presented various challenges, including fundamental differences between the interactive and compute nodes, a stripped-down compute-node operating system without dynamic library support, a shared-root environment, and idiosyncratic application launching. In our work, we describe how we resolved these challenges on a running, general-purpose production system to provide on-demand compute, storage, accounting, and monitoring services through generic grid interfaces that mask the underlying system-specific details from the end user.

  4. Optimizing Blocking and Nonblocking Reduction Operations for Multicore Systems: Hierarchical Design and Implementation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gorentla Venkata, Manjunath; Shamis, Pavel; Graham, Richard L

    2013-01-01

    Many scientific simulations using the Message Passing Interface (MPI) programming model are sensitive to the performance and scalability of reduction collective operations such as MPI Allreduce and MPI Reduce. These operations are the most widely used abstractions for performing mathematical operations over all processes that are part of the simulation. In this work, we propose a hierarchical design for implementing reduction operations on multicore systems. This design aims to improve the efficiency of reductions by 1) tailoring the algorithms and customizing the implementations for the various communication mechanisms in the system, 2) providing the ability to configure the depth of the hierarchy to match the system architecture, and 3) providing the ability to progress each level of the hierarchy independently. Using this design, we implement the MPI Allreduce and MPI Reduce operations (and their nonblocking variants MPI Iallreduce and MPI Ireduce) for all message sizes, and evaluate them on multiple architectures, including InfiniBand and the Cray XT5. We leverage and enhance our existing infrastructure, Cheetah, a framework for implementing hierarchical collective operations, to implement these reductions. The experimental results show that the Cheetah reduction operations outperform production-grade MPI implementations such as the Open MPI default, Cray MPI, and MVAPICH2, demonstrating their efficiency, flexibility, and portability. On InfiniBand systems, with a microbenchmark, 512-process Cheetah nonblocking Allreduce and Reduce achieve speedups of 23x and 10x, respectively, compared to the default Open MPI reductions. The blocking variants of the reduction operations show similar performance benefits. A 512-process nonblocking Cheetah Allreduce achieves a speedup of 3x compared to the default MVAPICH2 Allreduce implementation. On a Cray XT5 system, a 6144-process Cheetah Allreduce outperforms Cray MPI by 145%. An evaluation with an application kernel, a Conjugate Gradient solver, shows that the Cheetah reductions speed up total time to solution by 195%, demonstrating their potential benefits for scientific simulations.
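
    The hierarchical design described above can be illustrated with a minimal two-level allreduce, a sketch in MPI-3 C rather than the Cheetah implementation itself: reduce within each node over a shared-memory communicator, allreduce across one leader per node, then broadcast back down.

      /* Two-level allreduce sketch (sum of one double). Not Cheetah. */
      #include <mpi.h>

      int hier_allreduce_sum(double *val, MPI_Comm comm)
      {
          MPI_Comm node, leaders;
          int node_rank;
          double partial = 0.0;

          /* Level 1: processes sharing a node. */
          MPI_Comm_split_type(comm, MPI_COMM_TYPE_SHARED, 0,
                              MPI_INFO_NULL, &node);
          MPI_Comm_rank(node, &node_rank);
          MPI_Reduce(val, &partial, 1, MPI_DOUBLE, MPI_SUM, 0, node);

          /* Level 2: one leader per node (others get MPI_COMM_NULL). */
          MPI_Comm_split(comm, node_rank == 0 ? 0 : MPI_UNDEFINED, 0, &leaders);
          if (leaders != MPI_COMM_NULL) {
              MPI_Allreduce(MPI_IN_PLACE, &partial, 1, MPI_DOUBLE, MPI_SUM,
                            leaders);
              MPI_Comm_free(&leaders);
          }

          /* Back down the hierarchy. */
          MPI_Bcast(&partial, 1, MPI_DOUBLE, 0, node);
          *val = partial;
          MPI_Comm_free(&node);
          return MPI_SUCCESS;
      }

    A deeper hierarchy (socket, node, switch level) follows the same pattern; the configurable-depth design described in the abstract generalizes exactly this structure.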

  5. Implementation of the Automated Numerical Model Performance Metrics System

    DTIC Science & Technology

    2011-09-26

    question. As of this writing, the DSRC IBM AIX machines DaVinci and Pascal, and the Cray XT Einstein all use the PBS batch queuing system for...3.3). 12 Appendix A – General Automation System This system provides general purpose tools and a general way to automatically run

  6. Linux Kernel Co-Scheduling and Bulk Synchronous Parallelism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, Terry R

    2012-01-01

    This paper describes a kernel scheduling algorithm that is based on coscheduling principles and that is intended for parallel applications running on 1000 cores or more. Experimental results for a Linux implementation on a Cray XT5 machine are presented. The results indicate that Linux is a suitable operating system for this new scheduling scheme, and that this design provides a dramatic improvement in scaling performance for synchronizing collective operations at scale.

  7. NAVO MSRC Navigator. Fall 2008

    DTIC Science & Technology

    2008-01-01

    arrival of our two new HPC systems, DAVINCI (IBM P6) and EINSTEIN (Cray XT5), and our new mass storage server, NEWTON (Sun M5000). “The most...will run on both DAVINCI and EINSTEIN, providing researchers with the capability of running jobs of up to 4,256 and 12,736 cores in size...are expected to double as EINSTEIN and DAVINCI are brought online. We have also strengthened the backbone of our Disaster Recovery infrastructure, as

  8. Comparing the Performance of Blue Gene/Q with Leading Cray XE6 and InfiniBand Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kerbyson, Darren J.; Barker, Kevin J.; Vishnu, Abhinav

    2013-01-21

    Three types of systems dominate the current High Performance Computing landscape: the Cray XE6, the IBM Blue Gene, and commodity clusters using InfiniBand. These systems have quite different characteristics, making the choice for a particular deployment difficult. The XE6 uses Cray's proprietary Gemini 3-D torus interconnect with two nodes at each network endpoint. The latest IBM Blue Gene/Q uses a single socket integrating processor and communication in a 5-D torus network. InfiniBand provides the flexibility of using nodes from many vendors connected in many possible topologies. The performance characteristics of each vary vastly, along with their utilization models. In this work we compare the performance of these three systems using a combination of micro-benchmarks and a set of production applications. In particular, we discuss the causes of variability in performance across the systems and quantify where performance is lost using a combination of measurements and models. Our results show that significant performance can be lost in normal production operation of the Cray XE6 and InfiniBand clusters in comparison to Blue Gene/Q.

  9. Linux Kernel Co-Scheduling For Bulk Synchronous Parallel Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, Terry R

    2011-01-01

    This paper describes a kernel scheduling algorithm that is based on co-scheduling principles and that is intended for parallel applications running on 1,000 cores or more, where inter-node scalability is key. Experimental results for a Linux implementation on a Cray XT5 machine are presented. The results indicate that Linux is a suitable operating system for this new scheduling scheme, and that this design provides a dramatic improvement in scaling performance for synchronizing collective operations at scale.

  10. Collective Framework and Performance Optimizations to Open MPI for Cray XT Platforms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ladd, Joshua S; Gorentla Venkata, Manjunath; Shamis, Pavel

    2011-01-01

    The performance and scalability of collective operations play a key role in the performance and scalability of many scientific applications. Within the Open MPI code base we have developed a general-purpose hierarchical collective operations framework called Cheetah, and applied it at large scale on the Oak Ridge Leadership Computing Facility's (OLCF) Jaguar platform, obtaining better performance and scalability than the native MPI implementation. This paper discusses Cheetah's design and implementation, and optimizations to the framework for Cray XT5 platforms. Our results show that Cheetah's Broadcast and Barrier perform better than the native MPI implementation. For medium data, Cheetah's Broadcast outperforms the native MPI implementation by 93% at 49,152 processes. For small and large data, it outperforms the native MPI implementation by 10% and 9%, respectively, at 24,576 processes. Cheetah's Barrier performs 10% better than the native MPI implementation at 12,288 processes.

  11. Resource-Efficient, Hierarchical Auto-Tuning of a Hybrid Lattice Boltzmann Computation on the Cray XT4

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Computational Research Division, Lawrence Berkeley National Laboratory; NERSC, Lawrence Berkeley National Laboratory; Computer Science Department, University of California, Berkeley

    2009-05-04

    We apply auto-tuning to a hybrid MPI-pthreads lattice Boltzmann computation running on the Cray XT4 at the National Energy Research Scientific Computing Center (NERSC). Previous work showed that multicore-specific auto-tuning can improve the performance of lattice Boltzmann magnetohydrodynamics (LBMHD) by a factor of 4x when running on dual- and quad-core Opteron dual-socket SMPs. We extend these studies to the distributed-memory arena via a hybrid MPI/pthreads implementation. In addition to conventional auto-tuning at the local SMP node, we tune at the message-passing level to determine the optimal aspect ratio as well as the correct balance between MPI tasks and threads per MPI task. Our study presents a detailed performance analysis when moving along an isocurve of constant hardware usage: fixed total memory, total cores, and total nodes. Overall, our work points to approaches for improving intra- and inter-node efficiency on large-scale multicore systems for demanding scientific applications.
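
    For readers unfamiliar with the hybrid decomposition being tuned, a skeletal MPI+pthreads setup is sketched below. NTHREADS stands in for the auto-tuner-chosen thread count per MPI task and is purely illustrative, as is the worker function.

      /* Hybrid MPI+pthreads skeleton: the tunable dimension is how many
       * pthreads each MPI task spawns on a node. */
      #include <mpi.h>
      #include <pthread.h>

      #define NTHREADS 4   /* hypothetical: chosen by the auto-tuner */

      static void *lattice_update(void *arg)
      {
          long tid = (long)arg;
          /* ... update this thread's share of the local lattice ... */
          (void)tid;
          return NULL;
      }

      int main(int argc, char **argv)
      {
          int provided, rank;
          pthread_t tids[NTHREADS];

          /* FUNNELED: only the main thread makes MPI calls. */
          MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);

          for (long t = 0; t < NTHREADS; t++)
              pthread_create(&tids[t], NULL, lattice_update, (void *)t);
          for (long t = 0; t < NTHREADS; t++)
              pthread_join(tids[t], NULL);

          /* ... main thread exchanges ghost cells with MPI neighbors ... */
          MPI_Finalize();
          return 0;
      }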

  12. ERDC MSRC (Major Shared Resource Center) Resource. Spring 2008

    DTIC Science & Technology

    2008-01-01

    obtained from ADCIRC results. The alpha test was performed on the Cray XT3 machine (Sapphire) at ERDC and the IBM P575+ system ( Babbage ) at the...2008 20 Scotty Swillie (center) and Charles Ray (far right) were part of the team that constructed the DoD HPCMP booth for the Conference (From

  13. Performance of the fusion code GYRO on four generations of Cray computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fahey, Mark R

    2014-01-01

    GYRO is a code used for the direct numerical simulation of plasma microturbulence. It has been ported to a variety of modern MPP platforms, including several modern commodity clusters, IBM SPs, and Cray XC, XT, and XE series machines. We briefly describe the mathematical structure of the equations, the data layout, and the redistribution scheme. Also, while the performance and scaling of GYRO on many of these systems has been shown before, here we show the comparative performance and scaling on four generations of Cray supercomputers, including the newest addition, the Cray XC30. The more recently added hybrid OpenMP/MPI implementation also shows a great deal of promise on custom HPC systems that utilize fast CPUs and proprietary interconnects. Four machines of varying sizes were used in the experiment, all of which are located at the National Institute for Computational Sciences at the University of Tennessee at Knoxville and Oak Ridge National Laboratory. The advantages, limitations, and performance of using each system are discussed.

  14. Porting the AVS/Express scientific visualization software to Cray XT4.

    PubMed

    Leaver, George W; Turner, Martin J; Perrin, James S; Mummery, Paul M; Withers, Philip J

    2011-08-28

    Remote scientific visualization, where rendering services are provided by larger scale systems than are available on the desktop, is becoming increasingly important as dataset sizes increase beyond the capabilities of desktop workstations. Uptake of such services relies on access to suitable visualization applications and the ability to view the resulting visualization in a convenient form. We consider five rules from the e-Science community to meet these goals with the porting of a commercial visualization package to a large-scale system. The application uses message-passing interface (MPI) to distribute data among data processing and rendering processes. The use of MPI in such an interactive application is not compatible with restrictions imposed by the Cray system being considered. We present details, and performance analysis, of a new MPI proxy method that allows the application to run within the Cray environment yet still support MPI communication required by the application. Example use cases from materials science are considered.

  15. User and Performance Impacts from Franklin Upgrades

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    He, Yun

    2009-05-10

    The NERSC flagship computer, the Cray XT4 system "Franklin", has gone through three major upgrades during the past year: a quad-core upgrade, a CLE 2.1 upgrade, and an I/O upgrade. In this paper, we discuss various aspects of the user impacts of these upgrades, such as user access, user environment, and user issues. The performance impacts on kernel benchmarks and selected application benchmarks are also presented.

  16. A Programming Model Performance Study Using the NAS Parallel Benchmarks

    DOE PAGES

    Shan, Hongzhang; Blagojević, Filip; Min, Seung-Jai; ...

    2010-01-01

    Harnessing the power of multicore platforms is challenging due to the additional levels of parallelism present. In this paper we use the NAS Parallel Benchmarks to study three programming models, MPI, OpenMP, and PGAS, to understand their performance and memory-usage characteristics on current multicore architectures. To understand these characteristics we use the Integrated Performance Monitoring tool and other ways to measure communication versus computation time, as well as the fraction of the run time spent in OpenMP. The benchmarks are run on two different Cray XT5 systems and an InfiniBand cluster. Our results show that, in general, the three programming models exhibit very similar performance characteristics. In a few cases, OpenMP is significantly faster because it explicitly avoids communication. For these particular cases, we were able to re-write the UPC versions and achieve performance equal to OpenMP. Using OpenMP was also the most advantageous in terms of memory usage. We also compare performance differences between the two Cray systems, which have quad-core and hex-core processors. We show that at scale the performance is almost always slower on the hex-core system because of increased contention for network resources.
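
    The communication-versus-computation split that IPM reports (via the PMPI profiling layer) can be approximated by hand with simple timers. A hedged sketch with illustrative names, not the IPM API:

      /* Bracketing communication calls with MPI_Wtime to separate
       * communication time from computation time. */
      #include <mpi.h>

      static double t_comm = 0.0;   /* accumulated communication time */

      void timed_allreduce(double *buf, int n, MPI_Comm comm)
      {
          double t0 = MPI_Wtime();
          MPI_Allreduce(MPI_IN_PLACE, buf, n, MPI_DOUBLE, MPI_SUM, comm);
          t_comm += MPI_Wtime() - t0;
      }

    The communication fraction is then t_comm divided by total wall time, which is essentially the quantity the paper compares across programming models.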

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shamis, Pavel; Graham, Richard L; Gorentla Venkata, Manjunath

    The scalability and performance of collective communication operations limit the scalability and performance of many scientific applications. This paper presents two new blocking and nonblocking Broadcast algorithms for communicators with arbitrary communication topology, and studies their performance. These algorithms benefit from increased concurrency and a reduced memory footprint, making them suitable for use on large-scale systems. Measuring small, medium, and large data Broadcasts on a Cray XT5, using 24,576 MPI processes, the Cheetah algorithms outperform the native MPI on that system by 51%, 69%, and 9%, respectively, at the same process count. These results demonstrate an algorithmic approach to the implementation of this important class of collective communications, one that is high performing, scalable, and uses resources in a scalable manner.
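
    For contrast with the algorithms above, the classic binomial-tree Broadcast that native MPI libraries commonly use for small messages can be written over point-to-point calls as follows (a textbook sketch, not the Cheetah algorithm):

      /* Binomial-tree broadcast. Ranks are virtually shifted so that
       * 'root' behaves as rank 0; works for any process count. */
      #include <mpi.h>

      int binomial_bcast(void *buf, int count, MPI_Datatype type, int root,
                         MPI_Comm comm)
      {
          int rank, size;
          MPI_Comm_rank(comm, &rank);
          MPI_Comm_size(comm, &size);
          int vrank = (rank - root + size) % size;   /* virtual rank */

          for (int mask = 1; mask < size; mask <<= 1) {
              if (vrank < mask) {                    /* already has data */
                  int peer = vrank + mask;           /* child */
                  if (peer < size)
                      MPI_Send(buf, count, type, (peer + root) % size, 0, comm);
              } else if (vrank < (mask << 1)) {      /* receives this round */
                  int peer = vrank - mask;           /* parent */
                  MPI_Recv(buf, count, type, (peer + root) % size, 0, comm,
                           MPI_STATUS_IGNORE);
              }
          }
          return MPI_SUCCESS;
      }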

  18. Franklin: User Experiences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    National Energy Research Supercomputing Center; He, Yun; Kramer, William T.C.

    2008-05-07

    The newest workhorse of the National Energy Research Scientific Computing Center is a Cray XT4 with 9,736 dual-core nodes. This paper summarizes Franklin user experiences from the friendly early-user period to the production period. Selected successful user stories, along with the top issues affecting user experiences, are presented.

  19. Parallel FEM Simulation of Electromechanics in the Heart

    NASA Astrophysics Data System (ADS)

    Xia, Henian; Wong, Kwai; Zhao, Xiaopeng

    2011-11-01

    Cardiovascular disease is the leading cause of death in America. Computer simulation of the complicated dynamics of the heart could provide valuable quantitative guidance for the diagnosis and treatment of heart problems. In this paper, we present an integrated numerical model which encompasses the interaction of cardiac electrophysiology, electromechanics, and mechanoelectrical feedback. The model is solved by the finite element method on a Linux cluster and on Kraken, the Cray XT5 supercomputer. Dynamical influences between the effects of electromechanical coupling and mechanoelectrical feedback are shown.

  20. Deploying Server-side File System Monitoring at NERSC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uselton, Andrew

    2009-05-01

    The Franklin Cray XT4 at the NERSC center was equipped with the server-side I/O monitoring infrastructure Cerebro/LMT, which is described here in detail. Insights gained from the data produced include a better understanding of instantaneous data rates during file system testing, file system behavior during regular production time, and long-term average behaviors. Information and insights gleaned from this monitoring support efforts to proactively manage the I/O infrastructure on Franklin. A simple model for I/O transactions is introduced and compared with the 250 million observations sent to the LMT database from August 2008 to February 2009.

  21. Reversible Parallel Discrete-Event Execution of Large-scale Epidemic Outbreak Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perumalla, Kalyan S; Seal, Sudip K

    2010-01-01

    The spatial scale, runtime speed, and behavioral detail of epidemic outbreak simulations together require the use of large-scale parallel processing. In this paper, an optimistic parallel discrete event execution of a reaction-diffusion simulation model of epidemic outbreaks is presented, with an implementation over the μsik simulator. Rollback support is achieved with the development of a novel reversible model that combines reverse computation with a small amount of incremental state saving. Parallel speedup and other runtime performance metrics of the simulation are tested on a small (8,192-core) Blue Gene/P system, while scalability is demonstrated on 65,536 cores of a large Cray XT5 system. Scenarios representing large population sizes (up to several hundred million individuals in the largest case) are exercised.

  22. Understanding Aprun Use Patterns

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Hwa-Chun Wendy

    2009-05-06

    On the Cray XT, aprun is the command used to launch an application on a set of compute nodes reserved through the Application Level Placement Scheduler (ALPS). At the National Energy Research Scientific Computing Center (NERSC), interactive aprun is disabled; that is, invocations of aprun have to go through the batch system. Batch scripts can, and often do, contain several apruns, which either use subsets of the reserved nodes in parallel or use all reserved nodes in consecutive apruns. In order to better understand how NERSC users run on the XT, it is necessary to associate aprun information with jobs. This is surprisingly more challenging than it sounds. In this paper, we describe those challenges and how we solved them to produce daily per-job reports for completed apruns. We also describe additional uses of the data, e.g., adjusting charging policy accordingly or associating node failures with jobs/users, and plans for enhancements.

  23. Modeling Subsurface Reactive Flows Using Leadership-Class Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mills, Richard T; Hammond, Glenn; Lichtner, Peter

    2009-01-01

    We describe our experiences running PFLOTRAN - a code for simulation of coupled hydro-thermal-chemical processes in variably saturated, non-isothermal, porous media - on leadership-class supercomputers, including initial experiences running on the petaflop incarnation of Jaguar, the Cray XT5 at the National Center for Computational Sciences at Oak Ridge National Laboratory. PFLOTRAN utilizes fully implicit time-stepping and is built on top of the Portable, Extensible Toolkit for Scientific Computation (PETSc). We discuss some of the hurdles to 'at scale' performance with PFLOTRAN and the progress we have made in overcoming them on leadership-class computer architectures.
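
    The PETSc pattern PFLOTRAN builds on can be sketched in a few calls. This is illustrative only: the real code assembles the discretized hydro-thermal-chemical residual and runs inside a fully implicit time integrator, while the stand-in residual below just solves x = 0.

      /* Minimal PETSc SNES (Newton) solve skeleton. */
      #include <petscsnes.h>

      static PetscErrorCode FormFunction(SNES snes, Vec x, Vec f, void *ctx)
      {
          /* Stand-in residual f(x) = x; PFLOTRAN assembles the real one. */
          (void)snes; (void)ctx;
          return VecCopy(x, f);
      }

      int main(int argc, char **argv)
      {
          SNES snes;
          Vec  x, r;

          PetscInitialize(&argc, &argv, NULL, NULL);
          VecCreate(PETSC_COMM_WORLD, &x);
          VecSetSizes(x, PETSC_DECIDE, 100);   /* hypothetical problem size */
          VecSetFromOptions(x);
          VecDuplicate(x, &r);

          SNESCreate(PETSC_COMM_WORLD, &snes);
          SNESSetFunction(snes, r, FormFunction, NULL);
          SNESSetFromOptions(snes);  /* Newton/Krylov options, e.g. -snes_mf */
          SNESSolve(snes, NULL, x);  /* the fully implicit nonlinear solve */

          SNESDestroy(&snes);
          VecDestroy(&x);
          VecDestroy(&r);
          return PetscFinalize();
      }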

  24. The Portals 4.0 network programming interface.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barrett, Brian W.; Brightwell, Ronald Brian; Pedretti, Kevin

    2012-11-01

    This report presents a specification for the Portals 4.0 network programming interface. Portals 4.0 is intended to allow scalable, high-performance network communication between nodes of a parallel computing system. Portals 4.0 is well suited to massively parallel processing and embedded systems. Portals 4.0 represents an adaptation of the data movement layer developed for massively parallel processing platforms, such as the 4500-node Intel TeraFLOPS machine. Sandia's Cplant cluster project motivated the development of Version 3.0, which was later extended to Version 3.3 as part of the Cray Red Storm machine and XT line. Version 4.0 is targeted to the next generation of machines employing advanced network interface architectures that support enhanced offload capabilities.

  25. The Portals 4.0.1 network programming interface.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barrett, Brian W.; Brightwell, Ronald Brian; Pedretti, Kevin

    2013-04-01

    This report presents a specification for the Portals 4.0 network programming interface. Portals 4.0 is intended to allow scalable, high-performance network communication between nodes of a parallel computing system. Portals 4.0 is well suited to massively parallel processing and embedded systems. Portals 4.0 represents an adaptation of the data movement layer developed for massively parallel processing platforms, such as the 4500-node Intel TeraFLOPS machine. Sandia's Cplant cluster project motivated the development of Version 3.0, which was later extended to Version 3.3 as part of the Cray Red Storm machine and XT line. Version 4.0 is targeted to the next generation of machines employing advanced network interface architectures that support enhanced offload capabilities.

  26. Discrete Event Modeling and Massively Parallel Execution of Epidemic Outbreak Phenomena

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perumalla, Kalyan S; Seal, Sudip K

    2011-01-01

    In complex phenomena such as epidemiological outbreaks, the intensity of inherent feedback effects and the significant role of transients in the dynamics make simulation the only effective method for proactive, reactive, or post-facto analysis. The spatial scale, runtime speed, and behavioral detail needed in detailed simulations of epidemic outbreaks make it necessary to use large-scale parallel processing. Here, an optimistic parallel execution of a new discrete event formulation of a reaction-diffusion simulation model of epidemic propagation is presented, dramatically increasing the fidelity and speed with which epidemiological simulations can be performed. Rollback support needed during optimistic parallel execution is achieved by combining reverse computation with a small amount of incremental state saving. A parallel speedup of over 5,500 and other runtime performance metrics of the system are observed with weak-scaling execution on a small (8,192-core) Blue Gene/P system, while scalability with a weak-scaling speedup of over 10,000 is demonstrated on 65,536 cores of a large Cray XT5 system. Scenarios representing large population sizes, exceeding several hundred million individuals in the largest cases, are successfully exercised to verify model scalability.
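
    The rollback technique named above, reverse computation combined with a little incremental state saving, can be sketched generically; the types and fields below are illustrative, not the paper's actual model. A counter increment is undone by arithmetic, while a destructively overwritten value must be saved and restored.

      #include <stddef.h>

      typedef struct {
          long   infected;    /* reversible: undone by subtraction */
          double last_rate;   /* destructively written: must be saved */
      } region_state;

      typedef struct {
          double saved_rate;  /* the small incremental state saved per event */
      } event_memo;

      /* Forward event handler: apply effects, saving only what cannot
       * be recomputed in reverse. */
      void infect_forward(region_state *s, event_memo *m, double new_rate)
      {
          s->infected += 1;             /* reverse computation suffices */
          m->saved_rate = s->last_rate; /* incremental state saving */
          s->last_rate = new_rate;
      }

      /* Reverse handler: exact inverse, applied on rollback. */
      void infect_reverse(region_state *s, const event_memo *m)
      {
          s->last_rate = m->saved_rate;
          s->infected -= 1;
      }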

  27. High Performance Programming Using Explicit Shared Memory Model on Cray T3D

    NASA Technical Reports Server (NTRS)

    Simon, Horst D.; Saini, Subhash; Grassi, Charles

    1994-01-01

    The Cray T3D system is the first-phase system in Cray Research, Inc.'s (CRI) three-phase massively parallel processing (MPP) program. This system features a heterogeneous architecture that closely couples DEC's Alpha microprocessors and CRI's parallel-vector technology, i.e., the Cray Y-MP and Cray C90. An overview of the Cray T3D hardware and available programming models is presented. Under the Cray Research adaptive Fortran (CRAFT) model, four programming methods (data parallel, work sharing, message-passing using PVM, and the explicit shared memory model) are available to users. However, at this time the data parallel and work sharing programming models are not available to the user community. The differences between standard PVM and CRI's PVM are highlighted with performance measurements such as latencies and communication bandwidths. We have found that neither standard PVM nor CRI's PVM exploits the hardware capabilities of the T3D. The reasons for the poor performance of PVM as a native message-passing library are presented. This is illustrated by the performance of the NAS Parallel Benchmarks (NPB) programmed in the explicit shared memory model on the Cray T3D. In general, the performance of standard PVM is about 4 to 5 times less than that obtained using the explicit shared memory model. This degradation in performance is also seen on the CM-5, where the performance of applications using the native message-passing library CMMD is also about 4 to 5 times less than that using data parallel methods. The issues involved in programming in the explicit shared memory model (such as barriers, synchronization, and invalidating and aligning the data cache) are discussed. The comparative performance of the NPB using the explicit shared memory programming model on the Cray T3D and other highly parallel systems such as the TMC CM-5, Intel Paragon, Cray C90, and IBM SP-1 is presented.
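
    The explicit shared memory model referred to here survives today as OpenSHMEM; a minimal one-sided put between processing elements looks like the following (modern OpenSHMEM calls, not the original Cray T3D shmem library):

      #include <shmem.h>
      #include <stdio.h>

      int main(void)
      {
          /* Symmetric variable: same address on every PE. */
          static long dest = 0;
          long src = 42;

          shmem_init();
          int me = shmem_my_pe();
          int npes = shmem_n_pes();

          if (me == 0 && npes > 1)
              shmem_long_put(&dest, &src, 1, 1); /* write into PE 1's memory */
          shmem_barrier_all();                   /* ensure put is visible */

          if (me == 1)
              printf("PE 1 received %ld\n", dest);
          shmem_finalize();
          return 0;
      }

    The one-sided put, with no matching receive on the target, is what let this model bypass the message-matching overheads the abstract attributes to PVM.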

  28. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barrett, Brian; Brightwell, Ronald B.; Grant, Ryan

    This report presents a specification for the Portals 4 network programming interface. Portals 4 is intended to allow scalable, high-performance network communication between nodes of a parallel computing system. Portals 4 is well suited to massively parallel processing and embedded systems. Portals 4 represents an adaptation of the data movement layer developed for massively parallel processing platforms, such as the 4500-node Intel TeraFLOPS machine. Sandia's Cplant cluster project motivated the development of Version 3.0, which was later extended to Version 3.3 as part of the Cray Red Storm machine and XT line. Version 4 is targeted to the next generation of machines employing advanced network interface architectures that support enhanced offload capabilities.

  29. The HARNESS Workbench: Unified and Adaptive Access to Diverse HPC Platforms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sunderam, Vaidy S.

    2012-03-20

    The primary goal of the Harness WorkBench (HWB) project is to investigate innovative software environments that will help enhance the overall productivity of applications science on diverse HPC platforms. Two complementary frameworks were designed: first, a virtualized command toolkit for application building, deployment, and execution that provides a common view across diverse HPC systems, in particular the DOE leadership computing platforms (Cray, IBM, SGI, and clusters); and second, a unified runtime environment that consolidates access to runtime services via an adaptive framework for execution-time and post-processing activities. A prototype of the first was developed based on the concept of a 'system-call virtual machine' (SCVM), to enhance the portability of the HPC application deployment process across heterogeneous high-end machines. The SCVM approach to portable builds is based on the insertion of toolkit-interpretable directives into original application build scripts. Modifications resulting from these directives preserve the semantics of the original build instruction flow. The execution of the build script is controlled by our toolkit, which intercepts build script commands in a manner transparent to the end user. We have applied this approach to a scientific production code (GAMESS-US) on the Cray XT5 machine. The second facet, termed Unibus, aims to facilitate the provisioning and aggregation of multifaceted resources from the resource provider's and end user's perspectives. To achieve that, Unibus proposes a Capability Model and mediators (resource drivers) to virtualize access to diverse resources, and soft and successive conditioning to enable automatic and user-transparent resource provisioning. A proof-of-concept implementation has demonstrated the viability of this approach on high-end machines, grid systems, and computing clouds.

  30. Cheetah: A Framework for Scalable Hierarchical Collective Operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Graham, Richard L; Gorentla Venkata, Manjunath; Ladd, Joshua S

    2011-01-01

    Collective communication operations, used by many scientific applications, tend to limit overall parallel application performance and scalability. Computer systems are becoming more heterogeneous with increasing node and core-per-node counts. Also, a growing number of data-access mechanisms, of varying characteristics, are supported within a single computer system. We describe a new hierarchical collective communication framework that takes advantage of hardware-specific data-access mechanisms. It is flexible, with run-time hierarchy specification and sharing of collective communication primitives between collective algorithms. Data buffers are shared between levels in the hierarchy, reducing collective communication management overhead. We have implemented several versions of the Message Passing Interface (MPI) collective operations, MPI Barrier() and MPI Bcast(), and run experiments using up to 49,152 processes on a Cray XT5 and a small InfiniBand-based cluster. At 49,152 processes our barrier implementation outperforms the optimized native implementation by 75%. 32-byte and one-megabyte broadcasts outperform it by 62% and 11%, respectively, with better scalability characteristics. Improvements relative to the default Open MPI implementation are much larger.

  31. The linearly scaling 3D fragment method for large scale electronic structure calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Zhengji; Meza, Juan; Lee, Byounghak

    2009-07-28

    The Linearly Scaling three-dimensional fragment (LS3DF) method is an O(N) ab initio electronic structure method for large-scale nanomaterial simulations. It is a divide-and-conquer approach with a novel patching scheme that effectively cancels out the artificial boundary effects that exist in all divide-and-conquer schemes. This method has made ab initio simulations of thousand-atom nanosystems feasible in a couple of hours, while retaining essentially the same accuracy as direct calculation methods. The LS3DF method won the 2008 ACM Gordon Bell Prize for algorithm innovation. Our code has reached 442 Tflop/s running on 147,456 processors on the Cray XT5 (Jaguar) at OLCF, has been run on 163,840 processors on the Blue Gene/P (Intrepid) at ALCF, and has been applied to a system containing 36,000 atoms. In this paper, we present the recent parallel performance results of this code, and apply the method to asymmetric CdSe/CdS core/shell nanorods, which have potential applications in electronic devices and solar cells.

  32. The Linearly Scaling 3D Fragment Method for Large Scale Electronic Structure Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Zhengji; Meza, Juan; Lee, Byounghak

    2009-06-26

    The Linearly Scaling three-dimensional fragment (LS3DF) method is an O(N) ab initio electronic structure method for large-scale nanomaterial simulations. It is a divide-and-conquer approach with a novel patching scheme that effectively cancels out the artificial boundary effects that exist in all divide-and-conquer schemes. This method has made ab initio simulations of thousand-atom nanosystems feasible in a couple of hours, while retaining essentially the same accuracy as direct calculation methods. The LS3DF method won the 2008 ACM Gordon Bell Prize for algorithm innovation. Our code has reached 442 Tflop/s running on 147,456 processors on the Cray XT5 (Jaguar) at OLCF, has been run on 163,840 processors on the Blue Gene/P (Intrepid) at ALCF, and has been applied to a system containing 36,000 atoms. In this paper, we present the recent parallel performance results of this code, and apply the method to asymmetric CdSe/CdS core/shell nanorods, which have potential applications in electronic devices and solar cells.

  33. μπ: A Scalable and Transparent System for Simulating MPI Programs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perumalla, Kalyan S

    2010-01-01

    μπ is a scalable, transparent system for experimenting with the execution of parallel programs on simulated computing platforms. The level of simulated detail can be varied for application behavior as well as for machine characteristics. Unique features of μπ are repeatability of execution, scalability to millions of simulated (virtual) MPI ranks, scalability to hundreds of thousands of host (real) MPI ranks, portability of the system to a variety of host supercomputing platforms, and the ability to experiment with scientific applications whose source code is available. The set of source-code interfaces supported by μπ is being expanded to support a wider set of applications, and MPI-based scientific computing benchmarks are being ported. In proof-of-concept experiments, μπ has been successfully exercised to spawn and sustain very large-scale executions of an MPI test program given in source-code form. Low slowdowns are observed, due to its purely discrete event style of execution and to the scalability and efficiency of the underlying parallel discrete event simulation engine, μsik. In the largest runs, μπ has been executed on up to 216,000 cores of a Cray XT5 supercomputer, successfully simulating over 27 million virtual MPI ranks, each virtual rank containing its own thread context, and all ranks fully synchronized by virtual time.

  34. Multitasking the three-dimensional transport code TORT on CRAY platforms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Azmy, Y.Y.; Barnett, D.A.; Burre, C.A.

    1996-04-01

    The multitasking options in the three-dimensional neutral particle transport code TORT, originally implemented for Cray's CTSS operating system, are revived and extended to run on Cray Y-MP and C90 computers using the UNICOS operating system. These include two coarse-grained domain decompositions, across octants and across directions within an octant, termed Octant Parallel (OP) and Direction Parallel (DP), respectively. The parallel performance of DP is significantly enhanced by increasing the task grain size and reducing load imbalance via dynamic scheduling of the discrete angles among the participating tasks. Substantial wall-clock speedup factors, approaching 4.5 using 8 tasks, have been measured in a time-sharing environment, and generally depend on the test problem specifications, the number of tasks, and the machine loading during execution.

  35. Gene expression and localization of two types of AQP5 in Xenopus tropicalis under hydration and dehydration.

    PubMed

    Shibata, Yuki; Sano, Takahiro; Tsuchiya, Nobuhito; Okada, Reiko; Mochida, Hiroshi; Tanaka, Shigeyasu; Suzuki, Masakazu

    2014-07-01

    Two types of aquaporin 5 (AQP5) genes (aqp-xt5a and aqp-xt5b) were identified in the genome of Xenopus tropicalis by synteny comparison and molecular phylogenetic analysis. When the frogs were in water, AQP-xt5a mRNA was expressed in the skin and urinary bladder. The expression of AQP-xt5a mRNA was significantly increased in dehydrated frogs. AQP-xt5b mRNA was also detected in the skin and increased in response to dehydration. Additionally, AQP-xt5b mRNA began to be slightly expressed in the lung and stomach after dehydration. For the pelvic skin of hydrated frogs, immunofluorescence staining localized AQP-xt5a and AQP-xt5b to the cytoplasm of secretory cells of the granular glands and the apical plasma membrane of secretory cells of the small granular glands, respectively. After dehydration, the locations of both AQPs in their respective glands did not change, but AQP-xt5a was visualized in the cytoplasm of secretory cells of the small granular glands. For the urinary bladder, AQP-xt5a was observed in the apical plasma membrane and cytoplasm of a number of granular cells under normal hydration. After dehydration, AQP-xt5a was found in the apical membrane and cytoplasm of most granular cells. Injection of vasotocin into hydrated frogs did not induce these changes in the localization of AQP-xt5a in the small granular glands and urinary bladder, however. The results suggest that AQP-xt5a might be involved in water reabsorption from the urinary bladder during dehydration, whereas AQP-xt5b might play a role in water secretion from the small granular gland. Copyright © 2014 the American Physiological Society.

  36. Revisiting Parallel Cyclic Reduction and Parallel Prefix-Based Algorithms for Block Tridiagonal System of Equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seal, Sudip K; Perumalla, Kalyan S; Hirshman, Steven Paul

    2013-01-01

    Simulations that require solutions of block tridiagonal systems of equations rely on fast parallel solvers for runtime efficiency. Leading parallel solvers that are highly effective for general systems of equations, dense or sparse, are limited in scalability when applied to block tridiagonal systems. This paper presents scalability results as well as detailed analyses of two parallel solvers that exploit the special structure of block tridiagonal matrices to deliver superior performance, often by orders of magnitude. A rigorous analysis of their relative parallel runtimes is shown to reveal the existence of a critical block size that separates the parameter space spanned by the number of block rows, the block size, and the processor count into distinct regions that favor one or the other of the two solvers. The dependence of this critical block size on these parameters, as well as on machine-specific constants, is established. These formal insights are supported by empirical results on up to 2,048 cores of a Cray XT4 system. To the best of our knowledge, this is the highest reported scalability for parallel block tridiagonal solvers to date.
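
    For orientation, one elimination step of the underlying kernel, block cyclic reduction, applied to row i of a block tridiagonal system A_i x_{i-1} + B_i x_i + C_i x_{i+1} = D_i, is (textbook form, not necessarily the exact variant analyzed in the paper):

      \alpha_i = -A_i B_{i-1}^{-1}, \qquad \beta_i = -C_i B_{i+1}^{-1}
      A_i' = \alpha_i A_{i-1}, \qquad B_i' = B_i + \alpha_i C_{i-1} + \beta_i A_{i+1}
      C_i' = \beta_i C_{i+1}, \qquad D_i' = D_i + \alpha_i D_{i-1} + \beta_i D_{i+1}

    After one step the surviving rows couple only x_{i-2} and x_{i+2}, so about \log_2 n steps reduce the system. The B^{-1} factors are where the block size enters the cost, which is the parameter the paper's critical-block-size analysis turns on.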

  37. Some Problems and Solutions in Transferring Ecosystem Simulation Codes to Supercomputers

    NASA Technical Reports Server (NTRS)

    Skiles, J. W.; Schulbach, C. H.

    1994-01-01

    Many computer codes for the simulation of ecological systems have been developed in the last twenty-five years. This development took place initially on main-frame computers, then mini-computers, and more recently, on micro-computers and workstations. Recent recognition of ecosystem science as a High Performance Computing and Communications Program Grand Challenge area emphasizes supercomputers (both parallel and distributed systems) as the next set of tools for ecological simulation. Transferring ecosystem simulation codes to such systems is not a matter of simply compiling and executing existing code on the supercomputer since there are significant differences in the system architectures of sequential, scalar computers and parallel and/or vector supercomputers. To more appropriately match the application to the architecture (necessary to achieve reasonable performance), the parallelism (if it exists) of the original application must be exploited. We discuss our work in transferring a general grassland simulation model (developed on a VAX in the FORTRAN computer programming language) to a Cray Y-MP. We show the Cray shared-memory vector-architecture, and discuss our rationale for selecting the Cray. We describe porting the model to the Cray and executing and verifying a baseline version, and we discuss the changes we made to exploit the parallelism in the application and to improve code execution. As a result, the Cray executed the model 30 times faster than the VAX 11/785 and 10 times faster than a Sun 4 workstation. We achieved an additional speed-up of approximately 30 percent over the original Cray run by using the compiler's vectorizing capabilities and the machine's ability to put subroutines and functions "in-line" in the code. With the modifications, the code still runs at only about 5% of the Cray's peak speed because it makes ineffective use of the vector processing capabilities of the Cray. We conclude with a discussion and future plans.
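
    As a generic illustration of the restructuring involved (not the grassland model's actual code), a data-dependent branch inside an inner loop is the kind of construct that can inhibit or slow vectorization; it can often be replaced with branch-free, stride-1 arithmetic:

      /* Before: a conditional in the loop body can block or slow
       * vector execution on some compilers. */
      void update_scalar(double *b, const double *a, int n)
      {
          for (int i = 0; i < n; i++) {
              if (a[i] > 0.0)
                  b[i] = a[i] * 2.0;
              else
                  b[i] = 0.0;
          }
      }

      /* After: branch-free, unit-stride body that vectorizes cleanly;
       * the comparison evaluates to 0 or 1. */
      void update_vector(double *b, const double *a, int n)
      {
          for (int i = 0; i < n; i++)
              b[i] = (a[i] > 0.0) * a[i] * 2.0;
      }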

  38. NASA Langley Research Center's distributed mass storage system

    NASA Technical Reports Server (NTRS)

    Pao, Juliet Z.; Humes, D. Creig

    1993-01-01

    There is a trend in institutions with high performance computing and data management requirements to explore mass storage systems with peripherals directly attached to a high speed network. The Distributed Mass Storage System (DMSS) Project at NASA LaRC is building such a system and expects to put it into production use by the end of 1993. This paper presents the design of the DMSS, some experiences in its development and use, and a performance analysis of its capabilities. The special features of this system are: (1) workstation class file servers running UniTree software; (2) third party I/O; (3) HIPPI network; (4) HIPPI/IPI3 disk array systems; (5) Storage Technology Corporation (STK) ACS 4400 automatic cartridge system; (6) CRAY Research Incorporated (CRI) CRAY Y-MP and CRAY-2 clients; (7) file server redundancy provision; and (8) a transition mechanism from the existent mass storage system to the DMSS.

  39. Transferring ecosystem simulation codes to supercomputers

    NASA Technical Reports Server (NTRS)

    Skiles, J. W.; Schulbach, C. H.

    1995-01-01

    Many ecosystem simulation computer codes have been developed in the last twenty-five years. This development took place initially on main-frame computers, then mini-computers, and more recently, on micro-computers and workstations. Supercomputing platforms (both parallel and distributed systems) have been largely unused, however, because of the perceived difficulty in accessing and using the machines. Also, significant differences in the system architectures of sequential, scalar computers and parallel and/or vector supercomputers must be considered. We have transferred a grassland simulation model (developed on a VAX) to a Cray Y-MP/C90. We describe porting the model to the Cray and the changes we made to exploit the parallelism in the application and improve code execution. The Cray executed the model 30 times faster than the VAX and 10 times faster than a Unix workstation. We achieved an additional speedup of 30 percent by using the compiler's vectorizing and 'in-line' capabilities. The code runs at only about 5 percent of the Cray's peak speed because it makes ineffective use of the vector and parallel processing capabilities of the Cray. We expect that by restructuring the code, it could execute an additional six to ten times faster.

  40. A molecular dynamics implementation of the 3D Mercedes-Benz water model

    NASA Astrophysics Data System (ADS)

    Hynninen, T.; Dias, C. L.; Mkrtchyan, A.; Heinonen, V.; Karttunen, M.; Foster, A. S.; Ala-Nissila, T.

    2012-02-01

    The three-dimensional Mercedes-Benz model was recently introduced to account for the structural and thermodynamic properties of water. It treats water molecules as point-like particles with four dangling bonds in tetrahedral coordination, representing the H-bonds of water. Its conceptual simplicity renders the model attractive in studies where complex behaviors emerge from H-bond interactions in water, e.g., the hydrophobic effect. A molecular dynamics (MD) implementation of the model is non-trivial, and we outline here the mathematical framework of its force field. Useful routines written in modern Fortran are also provided. This open-source code is free and can easily be modified to account for different physical contexts. The provided code allows both serial and MPI-parallelized execution. Program summary: Program title: CASHEW (Coarse Approach Simulator for Hydrogen-bonding Effects in Water). Catalogue identifier: AEKM_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEKM_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 20 501. No. of bytes in distributed program, including test data, etc.: 551 044. Distribution format: tar.gz. Programming language: Fortran 90. Computer: Program has been tested on desktop workstations and a Cray XT4/XT5 supercomputer. Operating system: Linux, Unix, OS X. Has the code been vectorized or parallelized?: The code has been parallelized using MPI. RAM: Depends on size of system, about 5 MB for 1500 molecules. Classification: 7.7. External routines: A random number generator, Mersenne Twister (http://www.math.sci.hiroshima-u.ac.jp/m-mat/MT/VERSIONS/FORTRAN/mt95.f90), is used; a copy of the code is included in the distribution. Nature of problem: Molecular dynamics simulation of a new geometric water model. Solution method: New force field for water molecules, velocity-Verlet integration, representation of molecules as rigid particles with rotations described using quaternion algebra. Restrictions: Memory and CPU time limit the size of simulations. Additional comments: Software web site: https://gitorious.org/cashew/. Running time: Depends on the size of the system; the sample tests provided only take a few seconds.
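
    The integrator named in the program summary is standard velocity-Verlet; for reference, one step advances positions and velocities as

      x(t + \Delta t) = x(t) + v(t)\,\Delta t + \tfrac{1}{2}\,a(t)\,\Delta t^2
      v(t + \Delta t) = v(t) + \tfrac{1}{2}\,[\,a(t) + a(t + \Delta t)\,]\,\Delta t

    with an analogous update applied to the quaternion-represented rigid-body rotations (the rotational scheme is more involved and is detailed in the paper itself).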

  41. Self-expanding Portico Valve Versus Balloon-expandable SAPIEN XT Valve in Patients With Small Aortic Annuli: Comparison of Hemodynamic Performance.

    PubMed

    Del Trigo, María; Dahou, Abdellaziz; Webb, John G; Dvir, Danny; Puri, Rishi; Abdul-Jawad Altisent, Omar; Campelo-Parada, Francisco; Thompson, Chris; Leipsic, Jonathon; Stub, Dion; DeLarochellière, Robert; Paradis, Jean-Michel; Dumont, Eric; Doyle, Daniel; Mohammadi, Siamak; Pasian, Sergio; Côté, Melanie; Pibarot, Philippe; Rodés-Cabau, Josep

    2016-05-01

    The self-expanding Portico valve is a new transcatheter aortic valve system yielding promising preliminary results, yet there are no comparative data against earlier generation transcatheter aortic valve systems. The aim of this study was to compare the hemodynamic performance of the Portico and balloon-expandable SAPIEN XT valves in a case-matched study with echocardiographic core laboratory analysis. Twenty-two patients underwent transcatheter aortic valve implantation with the Portico 23-mm valve and were matched for aortic annulus area and mean diameter measured by multidetector computed tomography, left ventricular ejection fraction, body surface area, and body mass index with 40 patients treated with the 23-mm SAPIEN XT. Mean aortic annulus diameters were 19.6±1.3mm by transthoracic echocardiography and 21.4±1.2mm by computed tomography, with no significant between-group differences. Doppler echocardiographic images were collected at baseline and at 1-month of follow-up and were analyzed in a central echocardiography core laboratory. There were no significant between-group differences in residual mean transaortic gradients (SAPIEN XT: 10.4±3.7mmHg; Portico: 9.8±1.1mmHg; P=.49) and effective orifice areas (SAPIEN XT: 1.36±0.27cm(2); Portico, 1.37±.29cm(2); P=.54). Rates of severe prosthesis-patient mismatch (effective orifice area<0.65cm(2)/m(2)) were similar (SAPIEN XT: 13.5%; Portico: 10.0%; P=.56). No between-group differences were found in the occurrence of moderate-severe paravalvular leaks (5.0% vs 4.8% of SAPIEN XT and Portico respectively; P=.90). Transcatheter aortic valve implantation with the self-expanding Portico system yielded similar short-term hemodynamic performance compared with the balloon-expandable SAPIEN XT system for treating patients with severe aortic stenosis and small annuli. Further prospective studies with longer-term follow-up and in patients with larger aortic annuli are required. Copyright © 2015 Sociedad Española de Cardiología. Published by Elsevier España, S.L.U. All rights reserved.

  42. A performance comparison of the Cray-2 and the Cray X-MP

    NASA Technical Reports Server (NTRS)

    Schmickley, Ronald; Bailey, David H.

    1986-01-01

    A suite of thirteen large Fortran benchmark codes was run on Cray-2 and Cray X-MP supercomputers. These codes were a mix of compute-intensive scientific application programs (mostly Computational Fluid Dynamics) and some special vectorized computation exercise programs. For the general class of programs tested on the Cray-2, most of which were not specially tuned for speed, the floating-point operation rates varied under a variety of system load configurations from 40 percent up to 125 percent of X-MP performance rates. It is concluded that the Cray-2, in the original system configuration studied (without memory pseudo-banking), will run untuned Fortran code, on average, at about 70 percent of X-MP speeds.

  43. Performance of a parallel algebraic multilevel preconditioner for stabilized finite element semiconductor device modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Paul T.; Shadid, John N.; Sala, Marzio

    In this study results are presented for the large-scale parallel performance of an algebraic multilevel preconditioner for solution of the drift-diffusion model for semiconductor devices. The preconditioner is the key numerical procedure determining the robustness, efficiency, and scalability of the fully-coupled Newton-Krylov based nonlinear solution method that is employed for this system of equations. The coupled system is comprised of a source-term-dominated Poisson equation for the electric potential and two convection-diffusion-reaction type equations for the electron and hole concentrations. The governing PDEs are discretized in space by a stabilized finite element method. Solution of the discrete system is obtained through a fully-implicit time integrator, a fully-coupled Newton-based nonlinear solver, and a restarted GMRES Krylov linear system solver. The algebraic multilevel preconditioner is based on an aggressive-coarsening graph partitioning of the nonzero block structure of the Jacobian matrix. Representative performance results are presented for various choices of multigrid V-cycles and W-cycles and parameter variations for smoothers based on incomplete factorizations. Parallel scalability results are presented for solution of up to 10^8 unknowns on 4096 processors of a Cray XT3/4 and an IBM POWER eServer system.

  44. neXtA5: accelerating annotation of articles via automated approaches in neXtProt.

    PubMed

    Mottin, Luc; Gobeill, Julien; Pasche, Emilie; Michel, Pierre-André; Cusin, Isabelle; Gaudet, Pascale; Ruch, Patrick

    2016-01-01

    The rapid increase in the number of published articles poses a challenge for curated databases to remain up-to-date. To help the scientific community and database curators deal with this issue, we have developed an application, neXtA5, which prioritizes the literature for specific curation requirements. Our system, neXtA5, is a curation service composed of three main elements. The first component is a named-entity recognition module, which annotates MEDLINE over some predefined axes. This report focuses on three axes: Diseases, and the Molecular Function and Biological Process sub-ontologies of the Gene Ontology (GO). The automatic annotations are then stored in a local database, BioMed, for each annotation axis. Additional entities such as species and chemical compounds are also identified. The second component is an existing search engine, which retrieves the most relevant MEDLINE records for any given query. The third component uses the content of BioMed to generate an axis-specific ranking, which takes into account the density of named-entities as stored in the BioMed database. The two ranked lists are ultimately merged using a linear combination, which has been specifically tuned to support the annotation of each axis. The fine-tuning of the coefficients is formally reported for each axis-driven search. Compared with PubMed, which is the system used by most curators, the improvement is the following: +231% for Diseases, +236% for Molecular Functions, and +3153% for Biological Process when measuring the precision of the top-returned PMID (P0, or mean reciprocal rank). The current search methods significantly improve the search effectiveness of curators for three important curation axes. Further experiments are being performed to extend the curation types, in particular protein-protein interactions, which require specific relationship extraction capabilities. In parallel, user-friendly interfaces powered with a set of JSON web services are currently being implemented into the neXtProt annotation pipeline. Available on: http://babar.unige.ch:8082/neXtA5. Database URL: http://babar.unige.ch:8082/neXtA5/fetcher.jsp. © The Author(s) 2016. Published by Oxford University Press.
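
    The merge of the two ranked lists described above is a convex combination of scores; schematically (symbols illustrative, not the report's notation):

      s(d) = \lambda\, s_{\mathrm{search}}(d) + (1 - \lambda)\, s_{\mathrm{density}}(d)

    where s_search is the search-engine relevance score, s_density the named-entity density score from BioMed, and \lambda the weight tuned separately for each curation axis.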

  5. neXtA5: accelerating annotation of articles via automated approaches in neXtProt

    PubMed Central

    Mottin, Luc; Gobeill, Julien; Pasche, Emilie; Michel, Pierre-André; Cusin, Isabelle; Gaudet, Pascale; Ruch, Patrick

    2016-01-01

    The rapid increase in the number of published articles poses a challenge for curated databases to remain up-to-date. To help the scientific community and database curators deal with this issue, we have developed an application, neXtA5, which prioritizes the literature for specific curation requirements. Our system, neXtA5, is a curation service composed of three main elements. The first component is a named-entity recognition module, which annotates MEDLINE over some predefined axes. This report focuses on three axes: Diseases, and the Molecular Function and Biological Process sub-ontologies of the Gene Ontology (GO). The automatic annotations are then stored in a local database, BioMed, for each annotation axis. Additional entities such as species and chemical compounds are also identified. The second component is an existing search engine, which retrieves the most relevant MEDLINE records for any given query. The third component uses the content of BioMed to generate an axis-specific ranking, which takes into account the density of named-entities as stored in the BioMed database. The two ranked lists are ultimately merged using a linear combination, which has been specifically tuned to support the annotation of each axis. The fine-tuning of the coefficients is formally reported for each axis-driven search. Compared with PubMed, which is the system used by most curators, the improvement is the following: +231% for Diseases, +236% for Molecular Functions and +3153% for Biological Process when measuring the precision of the top-returned PMID (P0 or mean reciprocal rank). The current search methods significantly improve the search effectiveness of curators for three important curation axes. Further experiments are being performed to extend the curation types, in particular protein–protein interactions, which require specific relationship extraction capabilities. In parallel, user-friendly interfaces powered with a set of JSON web services are currently being implemented into the neXtProt annotation pipeline. Available on: http://babar.unige.ch:8082/neXtA5 Database URL: http://babar.unige.ch:8082/neXtA5/fetcher.jsp. PMID:27374119

  6. Multitasking the three-dimensional shock wave code CTH on the Cray X-MP/416

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McGlaun, J.M.; Thompson, S.L.

    1988-01-01

    CTH is a software system under development at Sandia National Laboratories, Albuquerque, that models multidimensional, multi-material, large-deformation, strong shock wave physics. CTH was carefully designed to both vectorize and multitask on the Cray X-MP/416. All of the physics routines are vectorized except the thermodynamics and the interface tracer. All of the physics routines are multitasked except the boundary conditions. The Los Alamos National Laboratory multitasking library was used for the multitasking. The resulting code is easy to maintain, easy to understand, gives the same answers as the unitasked code, and achieves a measured speedup of approximately 3.5 on the four-cpu Cray. This document discusses the design, prototyping, development, and debugging of CTH. It also covers the architecture features of CTH that enhance multitasking, the granularity of the tasks, and the synchronization of tasks. The utility of system software and tools such as simulators and interactive debuggers is also discussed. 5 refs., 7 tabs.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maxwell, Don E; Ezell, Matthew A; Becklehimer, Jeff

    While sites generally have systems in place to monitor the health of Cray computers themselves, often the cooling systems are ignored until a computer failure requires investigation into the source of the failure. The Liebert XDP units used to cool the Cray XE/XK models as well as the Cray proprietary cooling system used for the Cray XC30 models provide data useful for health monitoring. Unfortunately, this valuable information is often available only to custom solutions not accessible by a center-wide monitoring system or is simply ignored entirely. In this paper, methods and tools used to harvest the monitoring data available are discussed, and the implementation needed to integrate the data into a center-wide monitoring system at the Oak Ridge National Laboratory is provided.

  8. Cray Research, Inc. CRAY-1S, Cray FORTRAN Translator (CFT) Version 1.11 Bugfix 1. Validation Summary Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1983-09-09

    This Validation Summary Report (VSR) for the Cray Research, Inc., CRAY FORTRAN Translator (CFT) Version 1.11 Bugfix 1 running under the CRAY Operating System (COS) Version 1.12 provides a consolidated summary of the results obtained from the validation of the subject compiler against the 1978 FORTRAN Standard (X3.9-1978/FIPS PUB 69). The compiler was validated against the Full Level FORTRAN level of FIPS PUB 69. The VSR is made up of several sections showing all the discrepancies found, if any. These include an overview of the validation, which lists all categories of discrepancies together with the tests that failed.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bland, Arthur S Buddy; Hack, James J; Baker, Ann E

    Oak Ridge National Laboratory's (ORNL's) Cray XT5 supercomputer, Jaguar, kicked off the era of petascale scientific computing in 2008 with applications that sustained more than a thousand trillion floating point calculations per second - or 1 petaflop. Jaguar continues to grow even more powerful as it helps researchers broaden the boundaries of knowledge in virtually every domain of computational science, including weather and climate, nuclear energy, geosciences, combustion, bioenergy, fusion, and materials science. Their insights promise to broaden our knowledge in areas that are vitally important to the Department of Energy (DOE) and the nation as a whole, particularly energy assurance and climate change. The science of the 21st century, however, will demand further revolutions in computing, supercomputers capable of a million trillion calculations a second - 1 exaflop - and beyond. These systems will allow investigators to continue attacking global challenges through modeling and simulation and to unravel longstanding scientific questions. Creating such systems will also require new approaches to daunting challenges. High-performance systems of the future will need to be codesigned for scientific and engineering applications with best-in-class communications networks and data-management infrastructures and teams of skilled researchers able to take full advantage of these new resources. The Oak Ridge Leadership Computing Facility (OLCF) provides the nation's most powerful open resource for capability computing, with a sustainable path that will maintain and extend national leadership for DOE's Office of Science (SC). The OLCF has engaged a world-class team to support petascale science and to take a dramatic step forward, fielding new capabilities for high-end science. This report highlights the successful delivery and operation of a petascale system and shows how the OLCF fosters application development teams, developing cutting-edge tools and resources for next-generation systems.

  10. Implementing TCP/IP and a socket interface as a server in a message-passing operating system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hipp, E.; Wiltzius, D.

    1990-03-01

    The UNICOS 4.3BSD network code and socket transport interface are the basis of an explicit network server for NLTSS, a message-passing operating system on the Cray Y-MP. A BSD socket user library provides access to the network server using an RPC mechanism. The advantages of this server methodology are its modularity and extensibility to migrate to future protocol suites (e.g., OSI) and transport interfaces. In addition, the network server is implemented in an explicit multi-tasking environment to take advantage of the Cray Y-MP multi-processor platform. 19 refs., 5 figs.

  11. LASL benchmark performance 1978. [CDC STAR-100, 6600, 7600, Cyber 73, and CRAY-1]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McKnight, A.L.

    1979-08-01

    This report presents the results of running several benchmark programs on a CDC STAR-100, a Cray Research CRAY-1, a CDC 6600, a CDC 7600, and a CDC Cyber 73. The benchmark effort included CRAY-1s at several installations running different operating systems and compilers. This benchmark is part of an ongoing program at Los Alamos Scientific Laboratory to collect performance data and monitor the development trend of supercomputers. 3 tables.

  12. FFTs in external or hierarchical memory

    NASA Technical Reports Server (NTRS)

    Bailey, David H.

    1989-01-01

    A description is given of advanced techniques for computing an ordered FFT on a computer with external or hierarchical memory. These algorithms (1) require as few as two passes through the external data set, (2) use strictly unit stride, long vector transfers between main memory and external storage, (3) require only a modest amount of scratch space in main memory, and (4) are well suited for vector and parallel computation. Performance figures are included for implementations of some of these algorithms on Cray supercomputers. Of interest is the fact that a main memory version outperforms the current Cray library FFT routines on the Cray-2, the Cray X-MP, and the Cray Y-MP systems. Using all eight processors on the Cray Y-MP, this main memory routine runs at nearly 2 Gflops.
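
    The core identity behind such two-pass, out-of-core FFTs is the standard splitting of an N-point transform with N = N1 N2; the index convention below is one common choice, assumed here for illustration rather than taken from the paper. Writing n = n1 + N1 n2 and k = k2 + N2 k1,

      \[
      X(k_2 + N_2 k_1) \;=\; \sum_{n_1=0}^{N_1-1} \omega_{N_1}^{\,n_1 k_1}\, \omega_{N}^{\,n_1 k_2} \sum_{n_2=0}^{N_2-1} x(n_1 + N_1 n_2)\, \omega_{N_2}^{\,n_2 k_2},
      \qquad \omega_m = e^{-2\pi i/m}
      \]

    so a pass of N2-point transforms, a twiddle-factor scaling, and a pass of N1-point transforms replace one large transform, and each pass can stream through the external data set with long, unit-stride transfers.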

  13. Evaluation of red blood cell and platelet antigen genotyping platforms (ID CORE XT/ID HPA XT) in routine clinical practice.

    PubMed

    Finning, Kirstin; Bhandari, Radhika; Sellers, Fiona; Revelli, Nicoletta; Villa, Maria Antonietta; Muñiz-Díaz, Eduardo; Nogués, Núria

    2016-03-01

    High-throughput genotyping platforms enable simultaneous analysis of multiple polymorphisms for blood group typing. BLOODchip® ID is a genotyping platform based on Luminex® xMAP technology for simultaneous determination of 37 red blood cell (RBC) antigens (ID CORE XT) and 18 human platelet antigens (HPA) (ID HPA XT) using the BIDS XT software. In this international multicentre study, the performance, usability, and practicality of ID CORE XT and ID HPA XT were evaluated under working laboratory conditions, with each centre's current genotyping method serving as the reference for comparison. DNA was extracted from whole blood in EDTA with Qiagen methodologies. Ninety-six previously phenotyped/genotyped samples were processed per assay: 87 testing samples plus five positive controls and four negative controls. Results were available for 519 samples: 258 with ID CORE XT and 261 with ID HPA XT. There were three "no calls" that were either caused by human error or resolved after repeating the test. Agreement between the tests and reference methods was 99.94% for ID CORE XT (9,540/9,546 antigens determined) and 100% for ID HPA XT (all 4,698 alleles determined). There were six discrepancies in antigen results in five RBC samples, four of which (in VS, N, S and Do(a)) could not be investigated due to lack of sufficient sample to perform additional tests, and two of which (in S and C) were resolved in favour of ID CORE XT (100% accuracy). The total hands-on time was 28-41 minutes for a batch of 16 samples. Compared with the reference platforms, ID CORE XT and ID HPA XT were considered simpler to use and had shorter processing times. ID CORE XT and ID HPA XT genotyping platforms for RBC and platelet systems were accurate and user-friendly in working laboratory settings.

  14. A performance evaluation of the IBM 370/XT personal computer

    NASA Technical Reports Server (NTRS)

    Dominick, Wayne D. (Editor); Triantafyllopoulos, Spiros

    1984-01-01

    An evaluation of the IBM 370/XT personal computer is given. This evaluation focuses primarily on the use of the 370/XT for scientific and technical applications and applications development. A measurement of the capabilities of the 370/XT was performed by means of test programs which are presented. Also included is a review of facilities provided by the operating system (VM/PC), along with comments on the IBM 370/XT hardware configuration.

  15. Development of potent anti-infective agents from Silurana tropicalis: conformational analysis of the amphipathic, alpha-helical antimicrobial peptide XT-7 and its non-haemolytic analogue [G4K]XT-7.

    PubMed

    Subasinghage, Anusha P; Conlon, J Michael; Hewage, Chandralal M

    2010-04-01

    Peptide XT-7 (GLLGP(5)LLKIA(10)AKVGS(15)NLL.NH(2)) is a cationic, leucine-rich peptide, first isolated from skin secretions of the frog, Silurana tropicalis (Pipidae). The peptide shows potent, broad-spectrum antimicrobial activity but its therapeutic potential is limited by haemolytic activity (LC(50)=140 microM). The analogue [G4K]XT-7, however, retains potent antimicrobial activity but is non-haemolytic (LC(50)>500 microM). In order to elucidate the molecular basis for this difference in properties, the three-dimensional structures of XT-7 and the analogue have been investigated by proton NMR spectroscopy and molecular modelling. In aqueous solution, both peptides lack secondary structure. In a 2,2,2-trifluoroethanol (TFE-d(3))-H(2)O mixed solvent system, XT-7 is characterised by a right handed alpha-helical conformation between residues Leu(3) and Leu(17) whereas [G4K]XT-7 adopts a more restricted alpha-helical conformation between residues Leu(6) and Leu(17). A similar conformation for XT-7 in 1,2-dihexanoyl-sn-glycero-3-phosphocholine (DHPC) micellar media was observed with a helical segment between Leu(3) and Leu(17). However, differences in side chain orientations restricting the hydrophilic residues to a smaller patch resulted in an increased hydrophobic surface relative to the conformation in TFE-H(2)O. Molecular modelling of the structures obtained in our study demonstrates the amphipathic character of the helical segments. It is proposed that the marked decrease in haemolytic activity produced by the substitution Gly(4)-->Lys in XT-7 arises from a decrease in both helicity and hydrophobicity. These studies may facilitate the development of potent but non-toxic anti-infective agents based upon the structure of XT-7. Copyright 2009 Elsevier B.V. All rights reserved.

  16. High Performance Programming Using Explicit Shared Memory Model on the Cray T3D

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Simon, Horst D.; Lasinski, T. A. (Technical Monitor)

    1994-01-01

    The Cray T3D is the first-phase system in Cray Research Inc.'s (CRI) three-phase massively parallel processing program. In this report we describe the architecture of the T3D, as well as the CRAFT (Cray Research Adaptive Fortran) programming model, and contrast it with PVM, which is also supported on the T3D. We present some performance data based on the NAS Parallel Benchmarks to illustrate both architectural and software features of the T3D.

  17. Ada Compiler Validation Summary Report: Certificate Number: 901112W1.11116 Cray Research, Inc., Cray Ada Compiler, Release 2.0, Cray X-MP/EA (Host & Target)

    DTIC Science & Technology

    1990-11-12

    This feature prevents any significant unexpected and undesired size overhead introduced by the automatic inlining of a called subprogram. Any... PRESERVELAYOUT forces the compiler (5.5.1) to maintain the Ada source order of a given record type, thereby preventing the compiler from performing this... Environment, Volume 2: Programming Guide... assignments to the copied array in Ada do not affect the Fortran version of the array. The dimensions and order of

  18. Performance Evaluation of Supercomputers using HPCC and IMB Benchmarks

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Ciotti, Robert; Gunney, Brian T. N.; Spelce, Thomas E.; Koniges, Alice; Dossa, Don; Adamidis, Panagiotis; Rabenseifner, Rolf; Tiyyagura, Sunil R.; Mueller, Matthias

    2006-01-01

    The HPC Challenge (HPCC) benchmark suite and the Intel MPI Benchmark (IMB) are used to compare and evaluate the combined performance of processor, memory subsystem and interconnect fabric of five leading supercomputers - SGI Altix BX2, Cray X1, Cray Opteron Cluster, Dell Xeon cluster, and NEC SX-8. These five systems use five different networks (SGI NUMALINK4, Cray network, Myrinet, InfiniBand, and NEC IXS). The complete set of HPCC benchmarks is run on each of these systems. Additionally, we present Intel MPI Benchmarks (IMB) results to study the performance of 11 MPI communication functions on these systems.

  19. Computing anticipatory systems with incursion and hyperincursion

    NASA Astrophysics Data System (ADS)

    Dubois, Daniel M.

    1998-07-01

    An anticipatory system is a system which contains a model of itself and/or of its environment in view of computing its present state as a function of the prediction of the model. With the concepts of incursion and hyperincursion, anticipatory discrete systems can be modelled, simulated and controlled. By definition an incursion, an inclusive or implicit recursion, can be written as: x(t+1)=F[…,x(t-1),x(t),x(t+1),…] where the value of a variable x(t+1) at time t+1 is a function of this variable at past, present and future times. This is an extension of recursion. Hyperincursion is an incursion with multiple solutions. For example, chaos in the Pearl-Verhulst map model: x(t+1)=a.x(t).[1-x(t)] is controlled by the following anticipatory incursive model: x(t+1)=a.x(t).[1-x(t+1)], which corresponds to the differential anticipatory equation: dx(t)/dt=a.x(t).[1-x(t+1)]-x(t). The main part of this paper deals with the discretisation of differential equation systems of linear and non-linear oscillators. The non-linear oscillator is based on the Lotka-Volterra equations model. The discretisation is made by incursion. The incursive discrete equation system gives the same stability condition as the original differential equations, without numerical instabilities. The linearisation of the incursive discrete non-linear Lotka-Volterra equation system gives rise to the classical harmonic oscillator. The incursive discretisation of the linear oscillator is similar to defining backward and forward discrete derivatives. A generalized complex derivative is then considered and applied to the harmonic oscillator. Non-locality seems to be a property of anticipatory systems. With some mathematical assumptions, the Schrödinger quantum equation is derived for a particle in a uniform potential. Finally a hyperincursive system is given in the case of a neural stack memory.
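
    The incursive form of the logistic map quoted above can be solved algebraically for x(t+1), which makes the control effect easy to demonstrate in a few lines of C. The sketch below is illustrative only; the parameter a = 4.0 and starting value 0.3 are assumptions chosen to put the classical recursion in its chaotic regime, not values from the paper.

      #include <stdio.h>

      /* Compare the recursive Pearl-Verhulst map x(t+1) = a*x(t)*(1 - x(t))
         with the incursive form x(t+1) = a*x(t)*(1 - x(t+1)); solving the
         latter for x(t+1) gives x(t+1) = a*x(t) / (1 + a*x(t)). */
      int main(void) {
          double a = 4.0;              /* assumed: chaotic regime for the recursion */
          double xr = 0.3, xi = 0.3;   /* assumed common starting value */
          for (int t = 0; t < 20; t++) {
              xr = a * xr * (1.0 - xr);        /* recursion: chaotic */
              xi = a * xi / (1.0 + a * xi);    /* incursion: stabilised */
              printf("t=%2d  recursive=%.6f  incursive=%.6f\n", t + 1, xr, xi);
          }
          return 0;
      }

    For a = 4 the incursive iterate converges to the fixed point x* = (a-1)/a = 0.75, while the recursive iterate wanders chaotically, illustrating the control by anticipation described in the abstract.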

  20. Internal computational fluid mechanics on supercomputers for aerospace propulsion systems

    NASA Technical Reports Server (NTRS)

    Andersen, Bernhard H.; Benson, Thomas J.

    1987-01-01

    The accurate calculation of three-dimensional internal flowfields for application towards aerospace propulsion systems requires computational resources available only on supercomputers. A survey is presented of three-dimensional calculations of hypersonic, transonic, and subsonic internal flowfields conducted at the Lewis Research Center. A steady state Parabolized Navier-Stokes (PNS) solution of flow in a Mach 5.0, mixed compression inlet, a Navier-Stokes solution of flow in the vicinity of a terminal shock, and a PNS solution of flow in a diffusing S-bend with vortex generators are presented and discussed. All of these calculations were performed on either the NAS Cray-2 or the Lewis Research Center Cray X-MP.

  1. Y-MP floating point and Cholesky factorization

    NASA Technical Reports Server (NTRS)

    Carter, Russell

    1991-01-01

    The floating point arithmetics implemented in the Cray 2 and Cray Y-MP computer systems are nearly identical, but large scale computations performed on the two systems have exhibited significant differences in accuracy. The difference in accuracy is analyzed for the Cholesky factorization algorithm, and it is found that the source of the difference is the subtract magnitude operation of the Cray Y-MP. The results from numerical experiments for a range of problem sizes are presented, along with an efficient method for improving the accuracy of the factorization obtained on the Y-MP.

  2. Performance of a Bounce-Averaged Global Model of Super-Thermal Electron Transport in the Earth's Magnetic Field

    NASA Technical Reports Server (NTRS)

    McGuire, Tim

    1998-01-01

    In this paper, we report the results of our recent research on the application of a multiprocessor Cray T916 supercomputer in modeling super-thermal electron transport in the earth's magnetic field. In general, this mathematical model requires numerical solution of a system of partial differential equations. The code we use for this model is moderately vectorized. By using Amdahl's Law for vector processors, it can be verified that the code is about 60% vectorized on a Cray computer. Speedup factors on the order of 2.5 were obtained compared to the unvectorized code. In the following sections, we discuss the methodology of improving the code. In addition to our goal of optimizing the code for solution on the Cray computer, we had the goal of scalability in mind. Scalability combines the concepts of portability with near-linear speedup. Specifically, a scalable program is one whose performance is portable across many different architectures with differing numbers of processors for many different problem sizes. Though we have access to a Cray at this time, the goal was to also have code that would run well on a variety of architectures.
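
    As a quick check of how the 60% figure and the observed speedup of about 2.5 fit together, one can apply Amdahl's law for vector processors, assuming the vector units are much faster than scalar execution:

      \[
      S(v) = \frac{1}{(1-f) + f/v} \;\xrightarrow{\; v \to \infty \;}\; \frac{1}{1-f},
      \qquad f \approx 0.6 \;\Rightarrow\; S_{\max} = \frac{1}{0.4} = 2.5
      \]

    where \(f\) is the vectorized fraction of the work and \(v\) the vector-to-scalar speed ratio; the measured speedup is thus consistent with roughly 60% vectorization.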

  3. ARC2D - EFFICIENT SOLUTION METHODS FOR THE NAVIER-STOKES EQUATIONS (CRAY VERSION)

    NASA Technical Reports Server (NTRS)

    Pulliam, T. H.

    1994-01-01

    ARC2D is a computational fluid dynamics program developed at the NASA Ames Research Center specifically for airfoil computations. The program uses implicit finite-difference techniques to solve two-dimensional Euler equations and thin layer Navier-Stokes equations. It is based on the Beam and Warming implicit approximate factorization algorithm in generalized coordinates. The methods are either time accurate or accelerated non-time accurate steady state schemes. The evolution of the solution through time is physically realistic; good solution accuracy is dependent on mesh spacing and boundary conditions. The mathematical development of ARC2D begins with the strong conservation law form of the two-dimensional Navier-Stokes equations in Cartesian coordinates, which admits shock capturing. The Navier-Stokes equations can be transformed from Cartesian coordinates to generalized curvilinear coordinates in a manner that permits one computational code to serve a wide variety of physical geometries and grid systems. ARC2D includes an algebraic mixing length model to approximate the effect of turbulence. In cases of high Reynolds number viscous flows, thin layer approximation can be applied. ARC2D allows for a variety of solutions to stability boundaries, such as those encountered in flows with shocks. The user has considerable flexibility in assigning geometry and developing grid patterns, as well as in assigning boundary conditions. However, the ARC2D model is most appropriate for attached and mildly separated boundary layers; no attempt is made to model wake regions and widely separated flows. The techniques have been successfully used for a variety of inviscid and viscous flowfield calculations. The Cray version of ARC2D is written in FORTRAN 77 for use on Cray series computers and requires approximately 5Mb memory. The program is fully vectorized. The tape includes variations for the COS and UNICOS operating systems. Also included is a sample routine for CONVEX computers to emulate Cray system time calls, which should be easy to modify for other machines as well. The standard distribution media for this version is a 9-track 1600 BPI ASCII Card Image format magnetic tape. The Cray version was developed in 1987. The IBM ES/3090 version is an IBM port of the Cray version. It is written in IBM VS FORTRAN and has the capability of executing in both vector and parallel modes on the MVS/XA operating system and in vector mode on the VM/XA operating system. Various options of the IBM VS FORTRAN compiler provide new features for the ES/3090 version, including 64-bit arithmetic and up to 2 GB of virtual addressability. The IBM ES/3090 version is available only as a 9-track, 1600 BPI IBM IEBCOPY format magnetic tape. The IBM ES/3090 version was developed in 1989. The DEC RISC ULTRIX version is a DEC port of the Cray version. It is written in FORTRAN 77 for RISC-based Digital Equipment platforms. The memory requirement is approximately 7Mb of main memory. It is available in UNIX tar format on TK50 tape cartridge. The port to DEC RISC ULTRIX was done in 1990. COS and UNICOS are trademarks and Cray is a registered trademark of Cray Research, Inc. IBM, ES/3090, VS FORTRAN, MVS/XA, and VM/XA are registered trademarks of International Business Machines. DEC and ULTRIX are registered trademarks of Digital Equipment Corporation.

  4. Ectodermal Influx and Cell Hypertrophy Provide Early Growth for All Murine Mammary Rudiments, and Are Differentially Regulated among Them by Gli3

    PubMed Central

    Lee, May Yin; Racine, Victor; Jagadpramana, Peter; Sun, Li; Yu, Weimiao; Du, Tiehua; Spencer-Dene, Bradley; Rubin, Nicole; Le, Lendy; Ndiaye, Delphine; Bellusci, Saverio; Kratochwil, Klaus; Veltmaat, Jacqueline M.

    2011-01-01

    Mammary gland development starts in utero with one or several pairs of mammary rudiments (MRs) budding from the surface ectodermal component of the mammalian embryonic skin. Mice develop five pairs, numbered MR1 to MR5 from pectoral to inguinal position. We have previously shown that Gli3Xt-J/Xt-J mutant embryos, which lack the transcription factor Gli3, do not form MR3 and MR5. We show here that two days after the MRs emerge, Gli3Xt-J/Xt-J MR1 is 20% smaller, and Gli3Xt-J/Xt-J MR2 and MR4 are 50% smaller than their wild type (wt) counterparts. Moreover, while wt MRs sink into the underlying dermis, Gli3Xt-J/Xt-J MR4 and MR2 protrude outwardly, to different extents. To understand why each of these five pairs of functionally identical organs has its own, distinct response to the absence of Gli3, we determined which cellular mechanisms regulate growth of the individual MRs, and whether and how Gli3 regulates these mechanisms. We found a 5.5 to 10.7-fold lower cell proliferation rate in wt MRs compared to their adjacent surface ectoderm, indicating that MRs do not emerge or grow via locally enhanced cell proliferation. Cell-tracing experiments showed that surface ectodermal cells are recruited toward the positions where MRs emerge, and contribute to MR growth during at least two days. During the second day of MR development, peripheral cells within the MRs undergo hypertrophy, which also contributes to MR growth. Limited apoptotic cell death counterbalances MR growth. The relative contribution of each of these processes varies among the five MRs. Furthermore, each of these processes is impaired in the absence of Gli3, but to different extents in each MR. This differential involvement of Gli3 explains the variation in phenotype among Gli3Xt-J/Xt-J MRs, and may help to understand the variation in numbers and positions of mammary glands among mammals. PMID:22046263

  5. Asymmetric Solar Wind driven substorms from ballooning-interchange and magnetic reconnection

    NASA Astrophysics Data System (ADS)

    Horton, W.

    2013-12-01

    For nonsymmetric currents closing in the northern and southern magnetopause, we find new onset conditions for the ballooning-interchange and magnetic reconnection modes. While these two eigenmodes have opposite symmetries in a classic symmetric geotail geometry as in Pritchett-Coroniti-Pellat [GRL 1997], this symmetry is broken for real solar winds and a tilted Earth magnetic dipole. Extending earlier work, we show a new model that includes distinct north I_[N] and south I_[S] magnetopause return currents and distinct N-S magnetopause boundary conditions. These conditions drive asymmetric wave functions within the geotail. The wave functions in the high β magnetopause give new onset conditions for substorms. The nonlinear growth rates are estimated and nonlinear FLR-fluid simulations are performed. FLR fluid models with 5 to 7 PDEs are compared qualitatively with the PIC simulations of Pritchett-Coroniti [P-C 2013 and 2011], which used 4 billion particles on a Cray XT5 NSF computer. The P-C 2013 simulations capture some features of the THEMIS data and we look for the corresponding features in the FLR-fluid simulations. The classic reconnection parameter Δ' has a complex generalization for the asymmetric solar wind and IMF on the magnetopause [Horton and Tajima, JGR 1988]. When the mid-tail B_z(x) is such as to give the ballooning-interchange instability, we show that in the late stage of the evolution the nonlinear convective derivatives in the PDE system change the symmetry of the structures, producing large magnetic islands of the scale observed in CLUSTER substorm data [Nakamura et al. 2006]. We conclude that asymmetric models are needed to give reliable forecasting of the onset of substorms and storms.

  6. Tuning collective communication for Partitioned Global Address Space programming models

    DOE PAGES

    Nishtala, Rajesh; Zheng, Yili; Hargrove, Paul H.; ...

    2011-06-12

    Partitioned Global Address Space (PGAS) languages offer programmers the convenience of a shared memory programming style combined with locality control necessary to run on large-scale distributed memory systems. Even within a PGAS language programmers often need to perform global communication operations such as broadcasts or reductions, which are best performed as collective operations in which a group of threads work together to perform the operation. In this study we consider the problem of implementing collective communication within PGAS languages and explore some of the design trade-offs in both the interface and implementation. In particular, PGAS collectives have semantic issues that are different from those in send-receive style message passing programs, and different implementation approaches that take advantage of the one-sided communication style in these languages. We present an implementation framework for PGAS collectives as part of the GASNet communication layer, which supports shared memory, distributed memory and hybrids. The framework supports a broad set of algorithms for each collective, over which the implementation may be automatically tuned. In conclusion, we demonstrate the benefit of optimized GASNet collectives using application benchmarks written in UPC, and demonstrate that the GASNet collectives can deliver scalable performance on a variety of state-of-the-art parallel machines including a Cray XT4, an IBM BlueGene/P, and a Sun Constellation system with InfiniBand interconnect.
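
    As an illustration of the kind of algorithm such a collectives framework selects among, the C sketch below builds a binomial-tree broadcast out of point-to-point messages. It is written against plain MPI for self-containedness; it is not GASNet code, and the function name tree_bcast is invented for this example.

      #include <mpi.h>
      #include <stdio.h>

      /* Binomial-tree broadcast: each rank receives the payload once from
         its parent, then forwards it to children at decreasing distances. */
      void tree_bcast(void *buf, int count, MPI_Datatype type,
                      int root, MPI_Comm comm) {
          int rank, size, mask;
          MPI_Comm_rank(comm, &rank);
          MPI_Comm_size(comm, &size);
          int vrank = (rank - root + size) % size;   /* rotate so the root is 0 */

          /* phase 1: receive from the partner differing in the lowest set bit */
          for (mask = 1; mask < size; mask <<= 1) {
              if (vrank & mask) {
                  int src = ((vrank - mask) + root) % size;
                  MPI_Recv(buf, count, type, src, 0, comm, MPI_STATUS_IGNORE);
                  break;
              }
          }
          /* phase 2: forward down the remaining subtrees */
          for (mask >>= 1; mask > 0; mask >>= 1) {
              if (vrank + mask < size) {
                  int dst = ((vrank + mask) + root) % size;
                  MPI_Send(buf, count, type, dst, 0, comm);
              }
          }
      }

      int main(int argc, char **argv) {
          MPI_Init(&argc, &argv);
          int rank;
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);
          double payload = (rank == 0) ? 42.0 : 0.0;   /* assumed test value */
          tree_bcast(&payload, 1, MPI_DOUBLE, 0, MPI_COMM_WORLD);
          printf("rank %d got %g\n", rank, payload);
          MPI_Finalize();
          return 0;
      }

    A production framework would choose among several such trees (and flat or dissemination variants) per payload size and machine, which is exactly the tuning space the study describes.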

  7. A parallel finite-difference method for computational aerodynamics

    NASA Technical Reports Server (NTRS)

    Swisshelm, Julie M.

    1989-01-01

    A finite-difference scheme for solving complex three-dimensional aerodynamic flow on parallel-processing supercomputers is presented. The method consists of a basic flow solver with multigrid convergence acceleration, embedded grid refinements, and a zonal equation scheme. Multitasking and vectorization have been incorporated into the algorithm. Results obtained include multiprocessed flow simulations from the Cray X-MP and Cray-2. Speedups as high as 3.3 for the two-dimensional case and 3.5 for segments of the three-dimensional case have been achieved on the Cray-2. The entire solver attained a factor of 2.7 improvement over its unitasked version on the Cray-2. The performance of the parallel algorithm on each machine is analyzed.

  8. Attaching IBM-compatible 3380 disks to Cray X-MP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Engert, D.E.; Midlock, J.L.

    1989-01-01

    A method of attaching IBM-compatible 3380 disks directly to a Cray X-MP via the XIOP with a BMC is described. The IBM 3380 disks appear to the UNICOS operating system as DD-29 disks with UNICOS file systems. IBM 3380 disks provide cheap, reliable large capacity disk storage. Combined with a small number of high-speed Cray disks, the IBM disks provide for the bulk of the storage for small files and infrequently used files. Cray Research designed the BMC and its supporting software in the XIOP to allow IBM tapes and other devices to be attached to the X-MP. No hardware changes were necessary, and we added less than 2000 lines of code to the XIOP to accomplish this project. This system has been in operation for over eight months. Future enhancements such as the use of a cache controller and attachment to a Y-MP are also described. 1 tab.

  9. Implementation of molecular dynamics and its extensions with the coarse-grained UNRES force field on massively parallel systems; towards millisecond-scale simulations of protein structure, dynamics, and thermodynamics

    PubMed Central

    Liwo, Adam; Ołdziej, Stanisław; Czaplewski, Cezary; Kleinerman, Dana S.; Blood, Philip; Scheraga, Harold A.

    2010-01-01

    We report the implementation of our united-residue UNRES force field for simulations of protein structure and dynamics with massively parallel architectures. In addition to coarse-grained parallelism already implemented in our previous work, in which each conformation was treated by a different task, we introduce a fine-grained level in which energy and gradient evaluation are split between several tasks. The Message Passing Interface (MPI) libraries have been utilized to construct the parallel code. The parallel performance of the code has been tested on a professional Beowulf cluster (Xeon Quad Core), a Cray XT3 supercomputer, and two IBM BlueGene/P supercomputers with canonical and replica-exchange molecular dynamics. With IBM BlueGene/P, about 50% efficiency and a 120-fold speed-up of the fine-grained part were achieved for a single trajectory of a 767-residue protein with use of 256 processors/trajectory. Because of averaging over the fast degrees of freedom, UNRES provides an effective 1000-fold speed-up compared to the experimental time scale and, therefore, enables us to effectively carry out millisecond-scale simulations of proteins with 500 and more amino-acid residues in days of wall-clock time. PMID:20305729
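
    A minimal C sketch of the fine-grained level described above, assuming a toy pairwise energy in place of the actual UNRES force-field terms: the loop over residue pairs is divided among the MPI tasks assigned to one trajectory and the partial energies are combined with a reduction. All names and the stand-in energy function are illustrative; only the residue count echoes the paper's benchmark protein.

      #include <mpi.h>
      #include <stdio.h>

      /* stand-in for a real coarse-grained interaction term (assumption) */
      static double pair_energy(int i, int j) {
          return 1.0 / ((double)(i + 1) * (double)(j + 1));
      }

      int main(int argc, char **argv) {
          MPI_Init(&argc, &argv);
          int rank, ntasks;
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);
          MPI_Comm_size(MPI_COMM_WORLD, &ntasks);

          const int nres = 767;       /* residues, as in the benchmark protein */
          double local = 0.0;
          /* cyclic split of the outer residue loop across the tasks */
          for (int i = rank; i < nres; i += ntasks)
              for (int j = i + 1; j < nres; j++)
                  local += pair_energy(i, j);

          double total;               /* combine partial energies */
          MPI_Allreduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
          if (rank == 0) printf("E = %f\n", total);
          MPI_Finalize();
          return 0;
      }

    The gradient evaluation would be split the same way, with a vector reduction in place of the scalar one.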

  10. Interconnect Performance Evaluation of SGI Altix 3700 BX2, Cray X1, Cray Opteron Cluster, and Dell PowerEdge

    NASA Technical Reports Server (NTRS)

    Fatoohi, Rod; Saini, Subbash; Ciotti, Robert

    2006-01-01

    We study the performance of inter-process communication on four high-speed multiprocessor systems using a set of communication benchmarks. The goal is to identify certain limiting factors and bottlenecks with the interconnects of these systems as well as to compare these interconnects. We measured network bandwidth using different numbers of communicating processors and communication patterns, such as point-to-point communication, collective communication, and dense communication patterns. The four platforms are: a 512-processor SGI Altix 3700 BX2 shared-memory machine with 3.2 GB/s links; a 64-processor (single-streaming) Cray X1 shared-memory machine with 32 1.6 GB/s links; a 128-processor Cray Opteron cluster using a Myrinet network; and a 1280-node Dell PowerEdge cluster with an InfiniBand network. Our results show the impact of the network bandwidth and topology on the overall performance of each interconnect.

  11. Using High Performance Computing to Understand Roles of Labile and Nonlabile U(VI) on Hanford 300 Area Plume Longevity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lichtner, Peter C.; Hammond, Glenn E.

    Evolution of a hexavalent uranium [U(VI)] plume at the Hanford 300 Area bordering the Columbia River is investigated to evaluate the roles of labile and nonlabile forms of U(VI) on the longevity of the plume. A high fidelity, three-dimensional, field-scale, reactive flow and transport model is used to represent the system. Richards equation coupled to multicomponent reactive transport equations are solved for times up to 100 years taking into account rapid fluctuations in the Columbia River stage resulting in pulse releases of U(VI) into the river. The peta-scale computer code PFLOTRAN developed under a DOE SciDAC-2 project is employed in the simulations and executed on ORNL's Cray XT5 supercomputer Jaguar. Labile U(VI) is represented in the model through surface complexation reactions and its nonlabile form through dissolution of metatorbernite used as a surrogate mineral. Initial conditions are constructed corresponding to the U(VI) plume already in place to avoid uncertainties associated with the lack of historical data for the waste stream. The cumulative U(VI) flux into the river is compared for cases of equilibrium and multirate sorption models and for no sorption. The sensitivity of the U(VI) flux into the river on the initial plume configuration is investigated. The presence of nonlabile U(VI) was found to be essential in explaining the longevity of the U(VI) plume and the prolonged high U(VI) concentrations at the site exceeding the EPA MCL for uranium.
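
    For orientation, Richards' equation in its common mixed form is sketched below; this is a textbook statement assumed here for context, and PFLOTRAN's actual formulation and couplings are more general.

      \[
      \frac{\partial \theta(\psi)}{\partial t} = \nabla \cdot \big[ K(\psi)\, \nabla(\psi + z) \big]
      \]

    where \(\theta\) is the volumetric water content, \(\psi\) the pressure head, \(K(\psi)\) the unsaturated hydraulic conductivity, and \(z\) elevation; the resulting Darcy flux advects the aqueous U(VI) species in the coupled multicomponent reactive transport equations.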

  12. Experiences From NASA/Langley's DMSS Project

    NASA Technical Reports Server (NTRS)

    1996-01-01

    There is a trend in institutions with high performance computing and data management requirements to explore mass storage systems with peripherals directly attached to a high-speed network. The Distributed Mass Storage System (DMSS) Project at the NASA Langley Research Center (LaRC) has placed such a system into production use. This paper will present the experiences, both good and bad, we have had with this system since putting it into production usage. The system comprises: 1) National Storage Laboratory (NSL)/UniTree 2.1, 2) IBM 9570 HIPPI attached disk arrays (both RAID 3 and RAID 5), 3) IBM RS6000 server, 4) HIPPI/IPI3 third party transfers between the disk array systems and the supercomputer clients, a CRAY Y-MP and a CRAY-2, 5) a "warm spare" file server, 6) transition software to convert from CRAY's Data Migration Facility (DMF) based system to DMSS, 7) an NSC PS32 HIPPI switch, and 8) a STK 4490 robotic library accessed from the IBM RS6000 block mux interface. This paper will cover: the performance of the DMSS in the areas of file transfer rates, migration and recall, and file manipulation (listing, deleting, etc.); the appropriateness of a workstation class of file server for NSL/UniTree with LaRC's present storage requirements in mind; the role of the third-party transfers between the supercomputers and the DMSS disk array systems; a detailed comparison (both in performance and functionality) between the DMF and DMSS systems; LaRC's enhancements to the NSL/UniTree system administration environment; the mechanism for DMSS to provide file server redundancy; the statistics on the availability of DMSS; and the design of, and experiences with, the locally developed transparent transition software which allowed us to make over 1.5 million DMF files available to NSL/UniTree with minimal system outage.

  13. Ada/Xt Architecture: Design Report for the Software Technology for Adaptable, Reliable Systems (STARS)

    DTIC Science & Technology

    1990-01-25

    Task: UR20; CDRL: 01000. Ada/Xt Architecture: Design Report. Informal Technical Data, Software Technology for Adaptable, Reliable Systems (STARS). STARS Contract F19628-88-D-0031. Author: Kurt Wallnau. Produced under the Process Environment Integration task (UR20) of the STARS Prime contract. This document, "Ada/Xt Architecture: Design Report", is of type A005.

  14. Linear response to nonstationary random excitation.

    NASA Technical Reports Server (NTRS)

    Hasselman, T.

    1972-01-01

    Development of a method for computing the mean-square response of linear systems to nonstationary random excitation of the form given by y(t) = f(t) x(t), in which x(t) is a stationary process and f(t) is deterministic. The method is suitable for application to multidegree-of-freedom systems when the mean-square response at a point due to excitation applied at another point is desired. Both the stationary process, x(t), and the modulating function, f(t), may be arbitrary. The method utilizes a fundamental component of transient response dependent only on x(t) and the system, and independent of f(t), to synthesize the total response. The role played by this component is analogous to that played by the Green's function or impulse response function in the convolution integral.
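
    In this notation, the target quantity can be written in the standard modulated-stationary form; the expressions below are a generic statement of the setup, not the paper's derivation. With impulse response h(t) and stationary autocorrelation R_xx,

      \[
      z(t) = \int_0^{t} h(t-\tau)\, f(\tau)\, x(\tau)\, d\tau
      \]
      \[
      E\big[z^2(t)\big] = \int_0^{t}\!\!\int_0^{t} h(t-\tau_1)\, h(t-\tau_2)\, f(\tau_1)\, f(\tau_2)\, R_{xx}(\tau_1 - \tau_2)\, d\tau_1\, d\tau_2
      \]

    The f-independent transient component mentioned in the abstract plays the role of \(h\) in this double integral.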

  15. Measurements of particle backscatter, extinction, and lidar ratio at 1064 nm with the rotational raman method in Polly-XT

    NASA Astrophysics Data System (ADS)

    Engelmann, Ronny; Haarig, Moritz; Baars, Holger; Ansmann, Albert; Kottas, Michael; Marinou, Eleni

    2018-04-01

    We replaced a 1064-nm interference filter of a Polly-XT lidar system with a 1058-nm filter to observe pure rotational Raman backscattering from atmospheric nitrogen and oxygen. Polly-XT is a compact Raman lidar with a Nd:YAG laser (20 Hz, 200 mJ at 1064 nm) and a 30-cm telescope mirror, and it uses photomultipliers in photon-counting mode. We present the first measured signals at 1058 nm and the derived extinction profile from measurements aboard RV Polarstern and in Leipzig. In combination with another Polly-XT system, we could also derive particle backscatter and lidar ratio profiles at 1064 nm.

  16. ARC2D - EFFICIENT SOLUTION METHODS FOR THE NAVIER-STOKES EQUATIONS (DEC RISC ULTRIX VERSION)

    NASA Technical Reports Server (NTRS)

    Biyabani, S. R.

    1994-01-01

    ARC2D is a computational fluid dynamics program developed at the NASA Ames Research Center specifically for airfoil computations. The program uses implicit finite-difference techniques to solve two-dimensional Euler equations and thin layer Navier-Stokes equations. It is based on the Beam and Warming implicit approximate factorization algorithm in generalized coordinates. The methods are either time accurate or accelerated non-time accurate steady state schemes. The evolution of the solution through time is physically realistic; good solution accuracy is dependent on mesh spacing and boundary conditions. The mathematical development of ARC2D begins with the strong conservation law form of the two-dimensional Navier-Stokes equations in Cartesian coordinates, which admits shock capturing. The Navier-Stokes equations can be transformed from Cartesian coordinates to generalized curvilinear coordinates in a manner that permits one computational code to serve a wide variety of physical geometries and grid systems. ARC2D includes an algebraic mixing length model to approximate the effect of turbulence. In cases of high Reynolds number viscous flows, thin layer approximation can be applied. ARC2D allows for a variety of solutions to stability boundaries, such as those encountered in flows with shocks. The user has considerable flexibility in assigning geometry and developing grid patterns, as well as in assigning boundary conditions. However, the ARC2D model is most appropriate for attached and mildly separated boundary layers; no attempt is made to model wake regions and widely separated flows. The techniques have been successfully used for a variety of inviscid and viscous flowfield calculations. The Cray version of ARC2D is written in FORTRAN 77 for use on Cray series computers and requires approximately 5Mb memory. The program is fully vectorized. The tape includes variations for the COS and UNICOS operating systems. Also included is a sample routine for CONVEX computers to emulate Cray system time calls, which should be easy to modify for other machines as well. The standard distribution media for this version is a 9-track 1600 BPI ASCII Card Image format magnetic tape. The Cray version was developed in 1987. The IBM ES/3090 version is an IBM port of the Cray version. It is written in IBM VS FORTRAN and has the capability of executing in both vector and parallel modes on the MVS/XA operating system and in vector mode on the VM/XA operating system. Various options of the IBM VS FORTRAN compiler provide new features for the ES/3090 version, including 64-bit arithmetic and up to 2 GB of virtual addressability. The IBM ES/3090 version is available only as a 9-track, 1600 BPI IBM IEBCOPY format magnetic tape. The IBM ES/3090 version was developed in 1989. The DEC RISC ULTRIX version is a DEC port of the Cray version. It is written in FORTRAN 77 for RISC-based Digital Equipment platforms. The memory requirement is approximately 7Mb of main memory. It is available in UNIX tar format on TK50 tape cartridge. The port to DEC RISC ULTRIX was done in 1990. COS and UNICOS are trademarks and Cray is a registered trademark of Cray Research, Inc. IBM, ES/3090, VS FORTRAN, MVS/XA, and VM/XA are registered trademarks of International Business Machines. DEC and ULTRIX are registered trademarks of Digital Equipment Corporation.

  17. Sedative and analgesic effects of intravenous xylazine and tramadol on horses

    PubMed Central

    Seo, Jong-pil; Son, Won-gyun; Gang, Sujin

    2011-01-01

    This study was performed to evaluate the sedative and analgesic effects of xylazine (X) and tramadol (T) intravenously (IV) administered to horses. Six thoroughbred saddle horses each received X (1.0 mg/kg), T (2.0 mg/kg), and a combination of XT (1.0 and 2.0 mg/kg, respectively) IV. Heart rate (HR), respiratory rate (RR), rectal temperature (RT), indirect arterial pressure (IAP), capillary refill time (CRT), sedation, and analgesia (using electrical stimulation and pinprick) were measured before and after drug administration. HR and RR significantly decreased from basal values with X and XT treatments, and significantly increased with T treatment (p < 0.05). RT and IAP also significantly increased with T treatment (p < 0.05). CRT did not change significantly with any treatments. The onset of sedation and analgesia were approximately 5 min after both X and XT treatments; however, the XT combination produced a longer duration of sedation and analgesia than X alone. Two horses in the XT treatment group displayed excited transient behavior within 5 min of drug administration. The results suggest that the XT combination is useful for sedation and analgesia in horses. However, careful monitoring for excited behavior shortly after administration is recommended. PMID:21897102

  18. CRAY mini manual. Revision D

    NASA Technical Reports Server (NTRS)

    Tennille, Geoffrey M.; Howser, Lona M.

    1993-01-01

    This document briefly describes the use of the CRAY supercomputers that are an integral part of the Supercomputing Network Subsystem of the Central Scientific Computing Complex at LaRC. Features of the CRAY supercomputers are covered, including: FORTRAN, C, PASCAL, architectures of the CRAY-2 and CRAY Y-MP, the CRAY UNICOS environment, batch job submittal, debugging, performance analysis, parallel processing, utilities unique to CRAY, and documentation. The document is intended for all CRAY users as a ready reference to frequently asked questions and to more detailed information contained in the vendor manuals. It is appropriate for both the novice and the experienced user.

  19. Effect of blood and saliva contamination on bond strength of brackets bonded with a protective liquid polish and a light-cured adhesive.

    PubMed

    Sayinsu, Korkmaz; Isik, Fulya; Sezen, Serdar; Aydemir, Bulent

    2007-03-01

    The application of a polymer coating to the labial enamel tooth surface before bonding can help keep white spot lesions from forming. Previous studies evaluating the effects of blood and saliva contamination on the bond strengths of light-cured composites showed significant reductions in bond strength values. The purpose of this study was to investigate whether the bond strength of a light-cured system (Transbond XT, 3M Unitek, Puchheim, Germany) used with a liquid polish (BisCover, Bisco, Schaumburg, Ill) is affected by contamination with blood or saliva. One hundred twenty permanent human premolars were randomly divided into 6 groups of 20. Various enamel surface conditions were studied: dry, blood contaminated, and saliva contaminated. A light-cured bonding system (Transbond XT) was used in all groups. The teeth in group 1 were bonded with Transbond XT. In the second group, BisCover polymeric resin polish was applied on the etched tooth surfaces before the brackets were bonded with Transbond XT resin. Comparison of the first and second groups showed no statistically significant difference. Groups 3 through 6 were bonded without Transbond XT. For groups 3 and 5, a layer of blood or saliva, respectively, was applied to the etched enamel followed by BisCover. In groups 4 and 6, blood or saliva, respectively, was applied on the light-cured BisCover. Shear forces were applied to the samples with a universal testing machine, and bond strengths were measured in megapascals. The protective liquid polish (BisCover) layer did not affect bond strength. Blood contamination on acid-etched surfaces affects bond strength more than saliva contamination. When a protective liquid polish (BisCover) is applied to the tooth surface, the effect of contamination by blood or saliva is prevented.

  20. GASNet-EX Performance Improvements Due to Specialization for the Cray Aries Network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hargrove, Paul H.; Bonachea, Dan

    This document is a deliverable for milestone STPM17-6 of the Exascale Computing Project, delivered by WBS 2.3.1.14. It reports on the improvements in performance observed on Cray XC-series systems due to enhancements made to the GASNet-EX software. These enhancements, known as “specializations”, primarily consist of replacing network-independent implementations of several recently added features with implementations tailored to the Cray Aries network. Performance gains from specialization include (1) Negotiated-Payload Active Messages improve bandwidth of a ping-pong test by up to 14%, (2) Immediate Operations reduce running time of a synthetic benchmark by up to 93%, (3) non-bulk RMA Put bandwidth is increased by up to 32%, (4) Remote Atomic performance is 70% faster than the reference on a point-to-point test and allows a hot-spot test to scale robustly, and (5) non-contiguous RMA interfaces see up to 8.6x speedups for an intra-node benchmark and 26% for inter-node. These improvements are available in the GASNet-EX 2018.3.0 release.

  1. Evaluation of a new nano-filled restorative material for bonding orthodontic brackets.

    PubMed

    Bishara, Samir E; Ajlouni, Raed; Soliman, Manal M; Oonsombat, Charuphan; Laffoon, John F; Warren, John

    2007-01-01

    To compare the shear bond strength of a nano-hybrid restorative material, Grandio (Voco, Cuxhaven, Germany), to that of a traditional adhesive material (Transbond XT; 3M Unitek, Monrovia, CA, USA) when bonding orthodontic brackets. Forty teeth were randomly divided into 2 groups: 20 teeth were bonded with the Transbond adhesive system and the other 20 teeth with the Grandio restorative system, following manufacturer's instructions. Student t test was used to compare the shear bond strength of the 2 systems. Significance was predetermined at P ≤ .05. The t test comparisons (t = 0.55) of the shear bond strength between the 2 adhesives indicated the absence of a significant (P = .585) difference. The mean shear bond strength for Grandio was 4.1 +/- 2.6 MPa and that for Transbond XT was 4.6 +/- 3.2 MPa. During debonding, 3 of 20 brackets (15%) bonded with Grandio failed without registering any force on the Zwick recording. None of the brackets bonded with Transbond XT had a similar failure mode. The newly introduced nano-filled composite materials can potentially be used to bond orthodontic brackets to teeth if their consistency can be made more flowable so that they readily adhere to the bracket base.

  2. Spectral assessment of new ASTER SWIR surface reflectance data products for spectroscopic mapping of rocks and minerals

    USGS Publications Warehouse

    Mars, J.C.; Rowan, L.C.

    2010-01-01

    ASTER reflectance spectra from Cuprite, Nevada, and Mountain Pass, California, were compared to spectra of field samples and to ASTER-resampled AVIRIS reflectance data to determine spectral accuracy and spectroscopic mapping potential of two new ASTER SWIR reflectance datasets: RefL1b and AST_07XT. RefL1b is a new reflectance dataset produced for this study using ASTER Level 1B data, crosstalk correction, radiance correction factors, and concurrently acquired level 2 MODIS water vapor data. The AST_07XT data product, available from EDC and ERSDAC, incorporates crosstalk correction and non-concurrently acquired MODIS water vapor data for atmospheric correction. Spectral accuracy was determined using difference values which were compiled from ASTER band 5/6 and 9/8 ratios of AST_07XT or RefL1b data subtracted from similar ratios calculated for field sample and AVIRIS reflectance data. In addition, Spectral Analyst, a statistical program that utilizes a Spectral Feature Fitting algorithm, was used to quantitatively assess spectral accuracy of AST_07XT and RefL1b data. Spectral Analyst matched more minerals correctly and had higher scores for the RefL1b data than for AST_07XT data. The radiance correction factors used in the RefL1b data corrected a low band 5 reflectance anomaly observed in the AST_07XT and AST_07 data but also produced anomalously high band 5 reflectance in RefL1b spectra with strong band 5 absorption for minerals, such as alunite. Thus, the band 5 anomaly seen in the RefL1b data cannot be corrected using additional gain adjustments. In addition, the use of concurrent MODIS water vapor data in the atmospheric correction of the RefL1b data produced datasets that had lower band 9 reflectance anomalies than the AST_07XT data. Although assessment of spectral data suggests that RefL1b data are more consistent and spectrally more correct than AST_07XT data, the Spectral Analyst results indicate that spectral discrimination between some minerals, such as alunite and kaolinite, are still not possible unless additional spectral calibration using site specific spectral data are performed. © 2010.
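
    The band-ratio difference values can be written compactly; the symbols below are assumed for illustration and are not the paper's notation.

      \[
      d_{5/6} = \left(\frac{\rho_5}{\rho_6}\right)_{\mathrm{ASTER}} - \left(\frac{\rho_5}{\rho_6}\right)_{\mathrm{ref}},
      \qquad
      d_{9/8} = \left(\frac{\rho_9}{\rho_8}\right)_{\mathrm{ASTER}} - \left(\frac{\rho_9}{\rho_8}\right)_{\mathrm{ref}}
      \]

    where \(\rho_b\) is band-\(b\) surface reflectance and "ref" denotes the field-sample or AVIRIS spectrum; smaller \(|d|\) indicates better spectral fidelity.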

  3. A College That Relied on NeXT Computers Plans To Switch to Apple.

    ERIC Educational Resources Information Center

    Wilson, David L.

    1997-01-01

    Allegheny College (Pennsylvania), which uses NeXT computers, was dismayed when the technically superior operating system was orphaned but is now delighted that the company has been bought by Apple Computer and will make the operating system standard on Apple computers. The object-oriented operating system allows relatively unsophisticated users…

  4. Using Strassen's algorithm to accelerate the solution of linear systems

    NASA Technical Reports Server (NTRS)

    Bailey, David H.; Lee, King; Simon, Horst D.

    1990-01-01

    Strassen's algorithm for fast matrix-matrix multiplication has been implemented for matrices of arbitrary shapes on the CRAY-2 and CRAY Y-MP supercomputers. Several techniques have been used to reduce the scratch space requirement for this algorithm while simultaneously preserving a high level of performance. When the resulting Strassen-based matrix multiply routine is combined with some routines from the new LAPACK library, LU decomposition can be performed with rates significantly higher than those achieved by conventional means. We succeeded in factoring a 2048 x 2048 matrix on the CRAY Y-MP at a rate equivalent to 325 MFLOPS.
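
    A minimal C sketch of the recursion at the heart of the algorithm, assuming row-major square matrices whose order is a power of two; the implementation described above handles arbitrary shapes and reduces scratch space far more aggressively than this illustration, which allocates fresh workspace at every level.

      #include <stdlib.h>

      static void madd(int n, const double *A, const double *B, double *C) {
          for (int i = 0; i < n * n; i++) C[i] = A[i] + B[i];
      }
      static void msub(int n, const double *A, const double *B, double *C) {
          for (int i = 0; i < n * n; i++) C[i] = A[i] - B[i];
      }

      /* C = A*B for row-major n-by-n matrices, n a power of two (assumption) */
      void strassen(int n, const double *A, const double *B, double *C) {
          if (n <= 64) {                 /* conventional multiply below crossover */
              for (int i = 0; i < n; i++)
                  for (int j = 0; j < n; j++) {
                      double s = 0.0;
                      for (int k = 0; k < n; k++) s += A[i*n + k] * B[k*n + j];
                      C[i*n + j] = s;
                  }
              return;
          }
          int h = n / 2, sz = h * h;
          double *w = malloc(17 * (size_t)sz * sizeof *w);
          double *A11 = w,          *A12 = w + sz,    *A21 = w + 2*sz, *A22 = w + 3*sz;
          double *B11 = w + 4*sz,   *B12 = w + 5*sz,  *B21 = w + 6*sz, *B22 = w + 7*sz;
          double *M   = w + 8*sz;   /* the seven products M1..M7 */
          double *T1  = w + 15*sz,  *T2  = w + 16*sz;

          for (int i = 0; i < h; i++)       /* split into quadrants */
              for (int j = 0; j < h; j++) {
                  A11[i*h+j] = A[i*n + j];       A12[i*h+j] = A[i*n + j + h];
                  A21[i*h+j] = A[(i+h)*n + j];   A22[i*h+j] = A[(i+h)*n + j + h];
                  B11[i*h+j] = B[i*n + j];       B12[i*h+j] = B[i*n + j + h];
                  B21[i*h+j] = B[(i+h)*n + j];   B22[i*h+j] = B[(i+h)*n + j + h];
              }

          madd(h, A11, A22, T1); madd(h, B11, B22, T2); strassen(h, T1, T2, M);         /* M1 */
          madd(h, A21, A22, T1);                        strassen(h, T1, B11, M + sz);   /* M2 */
          msub(h, B12, B22, T2);                        strassen(h, A11, T2, M + 2*sz); /* M3 */
          msub(h, B21, B11, T2);                        strassen(h, A22, T2, M + 3*sz); /* M4 */
          madd(h, A11, A12, T1);                        strassen(h, T1, B22, M + 4*sz); /* M5 */
          msub(h, A21, A11, T1); madd(h, B11, B12, T2); strassen(h, T1, T2, M + 5*sz);  /* M6 */
          msub(h, A12, A22, T1); madd(h, B21, B22, T2); strassen(h, T1, T2, M + 6*sz);  /* M7 */

          for (int i = 0; i < h; i++)       /* recombine quadrants of C */
              for (int j = 0; j < h; j++) {
                  int q = i*h + j;
                  C[i*n + j]         = M[q] + M[3*sz+q] - M[4*sz+q] + M[6*sz+q]; /* C11 */
                  C[i*n + j + h]     = M[2*sz+q] + M[4*sz+q];                    /* C12 */
                  C[(i+h)*n + j]     = M[sz+q] + M[3*sz+q];                      /* C21 */
                  C[(i+h)*n + j + h] = M[q] - M[sz+q] + M[2*sz+q] + M[5*sz+q];   /* C22 */
              }
          free(w);
      }

    Seven half-size multiplies replace the usual eight, which is the source of the speedup over conventional multiplication once the matrices are large enough to amortize the extra additions and workspace traffic.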

  5. Abnormal Positioning of Diencephalic Cell Types in Neocortical Tissue in the Dorsal Telencephalon of Mice Lacking Functional Gli3

    PubMed Central

    Fotaki, Vassiliki; Yu, Tian; Zaki, Paulette A.; Mason, John O.; Price, David J.

    2008-01-01

    The transcription factor Gli3 (glioma-associated oncogene homolog) is essential for normal development of the mammalian forebrain. One extreme requirement for Gli3 is at the dorsomedial telencephalon, which does not form in Gli3Xt/Xt mutant mice lacking functional Gli3. In this study, we analyzed expression of Gli3 in the wild-type telencephalon and observed a high dorsal-to-low ventral gradient of Gli3 expression and predominance of the cleaved form of the Gli3 protein dorsally. This graded expression correlates with the severe dorsal-to-mild ventral telencephalic phenotype observed in Gli3Xt/Xt mice. We characterized the abnormal joining of the telencephalon to the diencephalon and defined the medial limit of the dorsal telencephalon in Gli3Xt/Xt mice early in corticogenesis. Based on this analysis, we concluded that some of the abnormal expression of ventral telencephalic markers previously described as being in the dorsal telencephalon is, in fact, expression in adjacent diencephalic tissue, which expresses many of the same genes that mark the ventral telencephalon. We observed occasional cells with diencephalic character in the Foxg1 (forkhead box)-expressing Gli3Xt/Xt telencephalon at embryonic day 10.5, a day after the anatomical subdivision of the forebrain vesicle. Large clusters of such cells appear in the Gli3Xt/Xt neocortical region at later ages, when the neocortex becomes highly disorganized, forming rosettes comprising mainly neural progenitors. We propose that Gli3 is indispensable for formation of an intact telencephalic-diencephalic boundary and for preventing the abnormal positioning of diencephalic cells in the dorsal telencephalon. PMID:16957084

  6. Tuning HDF5 subfiling performance on parallel file systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Byna, Suren; Chaarawi, Mohamad; Koziol, Quincey

    Subfiling is a technique used on parallel file systems to reduce locking and contention issues when multiple compute nodes interact with the same storage target node. Subfiling provides a compromise between the single shared file approach, which instigates lock contention problems on parallel file systems, and having one file per process, which results in a massive and unmanageable number of files. In this paper, we evaluate and tune the performance of the recently implemented subfiling feature in HDF5. Specifically, we explain the implementation strategy of the subfiling feature in HDF5, provide examples of using the feature, and evaluate and tune parallel I/O performance of this feature with parallel file systems of the Cray XC40 system at NERSC (Cori) that include a burst buffer storage and a Lustre disk-based storage. We also evaluate I/O performance on the Cray XC30 system, Edison, at NERSC. Our results show a performance advantage of 1.2X to 6X with subfiling compared to writing a single shared HDF5 file. We present our exploration of configurations, such as the number of subfiles and the number of Lustre storage targets for storing files, as optimization parameters to obtain superior I/O performance. Based on this exploration, we discuss recommendations for achieving good I/O performance as well as limitations of using the subfiling feature.
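
    The HDF5 subfiling feature has its own API; the mpi4py/h5py sketch below only illustrates the underlying idea under stated assumptions (an MPI-enabled h5py build; the file names and group count are illustrative): ranks are split into groups, and each group collectively writes one shared subfile, trading single-file lock contention against file count.

      from mpi4py import MPI
      import h5py
      import numpy as np

      comm = MPI.COMM_WORLD
      rank = comm.Get_rank()

      n_subfiles = 4                       # tuning knob, cf. the paper's study
      color = rank % n_subfiles            # round-robin rank-to-subfile mapping
      subcomm = comm.Split(color=color, key=rank)

      local = np.full(1024, rank, dtype='f8')   # this rank's slab of the data

      # Each group of ranks writes one shared "subfile" collectively,
      # instead of all ranks contending for a single shared file.
      with h5py.File('out.%d.h5' % color, 'w', driver='mpio', comm=subcomm) as f:
          dset = f.create_dataset('x', (subcomm.Get_size() * local.size,),
                                  dtype='f8')
          start = subcomm.Get_rank() * local.size
          dset[start:start + local.size] = local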

  7. Edison - A New Cray Supercomputer Advances Discovery at NERSC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dosanjh, Sudip; Parkinson, Dula; Yelick, Kathy

    2014-02-06

    When a supercomputing center installs a new system, users are invited to make heavy use of the computer as part of the rigorous testing. In this video, find out what top scientists have discovered using Edison, a Cray XC30 supercomputer, and how NERSC's newest supercomputer will accelerate their future research.

  8. Edison - A New Cray Supercomputer Advances Discovery at NERSC

    ScienceCinema

    Dosanjh, Sudip; Parkinson, Dula; Yelick, Kathy; Trebotich, David; Broughton, Jeff; Antypas, Katie; Lukic, Zarija; Borrill, Julian; Draney, Brent; Chen, Jackie

    2018-01-16

    When a supercomputing center installs a new system, users are invited to make heavy use of the computer as part of the rigorous testing. In this video, find out what top scientists have discovered using Edison, a Cray XC30 supercomputer, and how NERSC's newest supercomputer will accelerate their future research.

  9. Integration of CW / Radionucleotide Detection Systems to the Fido XT Explosives Detector

    DTIC Science & Technology

    2008-07-31

    explosives detected by the Fido XT. Additionally, a platform for centralized storage and processing of Fido XT data files collected in house, targeted...fused silica glass wool (obtained from Restek). The fluorescent signal was easily washed out of the flow cell by a nominal amount of buffer...detector with supporting NRE was processed. The Interceptor components were configured to operate under a Windows CE processor environment, and to

  10. Survey of new vector computers: The CRAY 1S from CRAY research; the CYBER 205 from CDC and the parallel computer from ICL - architecture and programming

    NASA Technical Reports Server (NTRS)

    Gentzsch, W.

    1982-01-01

    Problems which can arise with vector and parallel computers are discussed in a user-oriented context. Emphasis is placed on the algorithms used and the programming techniques adopted. Three recently developed supercomputers are examined and typical application examples are given in CRAY FORTRAN, CYBER 205 FORTRAN and DAP (distributed array processor) FORTRAN. The systems' performance is compared. The addition of parts of two N x N arrays is considered. The influence of the architecture on the algorithms and programming language is demonstrated. Numerical analysis of magnetohydrodynamic differential equations by an explicit difference method is illustrated, showing very good results for all three systems. The prognosis for supercomputer development is assessed.

  11. Multitasking and microtasking experience on the NAS Cray-2 and ACF Cray X-MP

    NASA Technical Reports Server (NTRS)

    Raiszadeh, Farhad

    1987-01-01

    The fast Fourier transform (FFT) kernel of the NAS benchmark program has been utilized to experiment with the multitasking library on the Cray-2 and Cray X-MP/48, and microtasking directives on the Cray X-MP. Some performance figures are shown, and the state of multitasking software is described.

  12. Production Experiences with the Cray-Enabled TORQUE Resource Manager

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ezell, Matthew A; Maxwell, Don E; Beer, David

    High performance computing resources utilize batch systems to manage the user workload. Cray systems are uniquely different from typical clusters due to Cray's Application Level Placement Scheduler (ALPS). ALPS manages binary transfer, job launch and monitoring, and error handling. Batch systems require special support to integrate with ALPS using an XML protocol called BASIL. Previous versions of Adaptive Computing's TORQUE and Moab batch suite integrated with ALPS from within Moab, using PERL scripts to interface with BASIL. This would occasionally lead to problems when all the components became unsynchronized. Version 4.1 of the TORQUE Resource Manager introduced new features that allow it to directly integrate with ALPS using BASIL. This paper describes production experiences at Oak Ridge National Laboratory using the new TORQUE software versions, as well as ongoing and future work to improve TORQUE.

  13. Bonding brackets on white spot lesions pretreated by means of two methods.

    PubMed

    Vianna, Julia Sotero; Marquezan, Mariana; Lau, Thiago Chon Leon; Sant'Anna, Eduardo Franzotti

    2016-01-01

    The aim of this study was to evaluate the shear bond strength (SBS) of brackets bonded to demineralized enamel pretreated with low viscosity Icon Infiltrant resin (DMG) and glass ionomer cement (Clinpro XT Varnish, 3M Unitek) with and without aging. A total of 75 bovine enamel specimens were allocated into five groups (n = 15). Group 1 was the control group in which the enamel surface was not demineralized. In the other four groups, the surfaces were submitted to cariogenic challenge and white spot lesions were treated. Groups 2 and 3 were treated with Icon Infiltrant resin; Groups 4 and 5, with Clinpro XT Varnish. After treatment, Groups 3 and 5 were artificially aged. Brackets were bonded with Transbond XT adhesive system and SBS was evaluated by means of a universal testing machine. Statistical analysis was performed by one-way analysis of variance followed by Tukey post-hoc test. All groups tested presented shear bond strengths similar to or higher than the control group. Specimens of Group 4 had significantly higher shear bond strength values (p < 0.05) than the others. Pretreatment of white spot lesions, with or without aging, did not decrease the SBS of brackets.

  14. Dietary supplementation of young broiler chickens with Capsicum and turmeric oleoresins increases resistance to necrotic enteritis.

    PubMed

    Lee, Sung Hyen; Lillehoj, Hyun S; Jang, Seung I; Lillehoj, Erik P; Min, Wongi; Bravo, David M

    2013-09-14

    The Clostridium-related poultry disease, necrotic enteritis (NE), causes substantial economic losses on a global scale. In the present study, a mixture of two plant-derived phytonutrients, Capsicum oleoresin and turmeric oleoresin (XT), was evaluated for its effects on local and systemic immune responses using a co-infection model of experimental NE in commercial broilers. Chickens were fed from hatch with a diet supplemented with XT, or with a non-supplemented control diet, and either uninfected or orally challenged with virulent Eimeria maxima oocysts at 14 d and Clostridium perfringens at 18 d of age. Parameters of protective immunity were as follows: (1) body weight; (2) gut lesions; (3) serum levels of C. perfringens α-toxin and NE B-like (NetB) toxin; (4) serum levels of antibodies to α-toxin and NetB toxin; (5) levels of gene transcripts encoding pro-inflammatory cytokines and chemokines in the intestine and spleen. Infected chickens fed the XT-supplemented diet had increased body weight and reduced gut lesion scores compared with infected birds given the non-supplemented diet. The XT-fed group also displayed decreased serum α-toxin levels and reduced intestinal IL-8, lipopolysaccharide-induced TNF-α factor (LITAF), IL-17A and IL-17F mRNA levels, while cytokine/chemokine levels in splenocytes increased in the XT-fed group, compared with the animals fed the control diet. In conclusion, the present study documents the molecular and cellular immune changes following dietary supplementation with extracts of Capsicum and turmeric that may be relevant to protective immunity against avian NE.

  15. Hot Chips and Hot Interconnects for High End Computing Systems

    NASA Technical Reports Server (NTRS)

    Saini, Subhash

    2005-01-01

    I will discuss several processors: 1. The Cray proprietary processor used in the Cray X1; 2. The IBM Power 3 and Power 4, used in IBM SP3 and SP4 systems; 3. The Intel Itanium and Xeon, used in SGI Altix systems and clusters, respectively; 4. The IBM System-on-a-Chip used in the IBM BlueGene/L; 5. The HP Alpha EV68 processor used in the DOE ASCI Q cluster; 6. The SPARC64 V processor, which is used in the Fujitsu PRIMEPOWER HPC2500; 7. An NEC proprietary processor, which is used in the NEC SX-6/7; 8. The Power 4+ processor, which is used in the Hitachi SR11000; 9. An NEC proprietary processor, which is used in the Earth Simulator. The IBM POWER5 and Red Storm computing systems will also be discussed. The architectures of these processors will first be presented, followed by interconnection networks and a description of high-end computer systems based on these processors and networks. The performance of various hardware/programming model combinations will then be compared, based on the latest NAS Parallel Benchmark results (MPI, OpenMP/HPF, and hybrid MPI + OpenMP). The tutorial will conclude with a discussion of general trends in the field of high performance computing (quantum computing, DNA computing, cellular engineering, and neural networks).

  16. Input-independent, Scalable and Fast String Matching on the Cray XMT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Villa, Oreste; Chavarría-Miranda, Daniel; Maschhoff, Kristyn J

    2009-05-25

    String searching is at the core of many security and network applications like search engines, intrusion detection systems, virus scanners and spam filters. The growing size of on-line content and the increasing wire speeds push the need for fast, and often real-time, string searching solutions. For these conditions, many software implementations (if not all) targeting conventional cache-based microprocessors do not perform well. They either exhibit overall low performance or exhibit highly variable performance depending on the types of inputs. For this reason, real-time state of the art solutions rely on the use of either custom hardware or Field-Programmable Gate Arrays (FPGAs) at the expense of overall system flexibility and programmability. This paper presents a software based implementation of the Aho-Corasick string searching algorithm on the Cray XMT multithreaded shared memory machine. Our solution relies on the particular features of the XMT architecture and on several algorithmic strategies: it is fast, scalable and its performance is virtually content-independent. On a 128-processor Cray XMT, it reaches a scanning speed of ≈ 28 Gbps with a performance variability below 10%. In the 10 Gbps performance range, variability is below 2.5%. By comparison, an Intel dual-socket, 8-core system running at 2.66 GHz achieves a peak performance which varies from 500 Mbps to 10 Gbps depending on the type of input and dictionary size.
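
    A plain-Python sketch of the Aho-Corasick automaton follows (the generic textbook construction, not the XMT implementation); the input-independence noted above comes from the automaton doing a bounded amount of work per scanned character regardless of content.

      from collections import deque

      def build_aho_corasick(patterns):
          """Build goto/fail/output tables for Aho-Corasick matching."""
          goto = [{}]          # per-state transition dicts
          fail = [0]           # failure links
          out = [set()]        # patterns ending at each state
          for pat in patterns:
              s = 0
              for ch in pat:
                  if ch not in goto[s]:
                      goto.append({}); fail.append(0); out.append(set())
                      goto[s][ch] = len(goto) - 1
                  s = goto[s][ch]
              out[s].add(pat)
          q = deque(goto[0].values())
          while q:                         # BFS to fill failure links
              s = q.popleft()
              for ch, t in goto[s].items():
                  q.append(t)
                  f = fail[s]
                  while f and ch not in goto[f]:
                      f = fail[f]
                  cand = goto[f].get(ch, 0)
                  fail[t] = cand if cand != t else 0
                  out[t] |= out[fail[t]]   # inherit matches via suffix link
          return goto, fail, out

      def search(text, tables):
          """Return (position, pattern) pairs for all matches in text."""
          goto, fail, out = tables
          s, hits = 0, []
          for i, ch in enumerate(text):
              while s and ch not in goto[s]:
                  s = fail[s]
              s = goto[s].get(ch, 0)
              for pat in out[s]:
                  hits.append((i - len(pat) + 1, pat))
          return hits

      print(search("ushers", build_aho_corasick(["he", "she", "his", "hers"])))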

  17. Parallel processing on the Livermore VAX 11/780-4 parallel processor system with compatibility to Cray Research, Inc. (CRI) multitasking. Version 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Werner, N.E.; Van Matre, S.W.

    1985-05-01

    This manual describes the CRI Subroutine Library and Utility Package. The CRI library provides Cray multitasking functionality on the four-processor shared memory VAX 11/780-4. Additional functionality has been added for more flexibility. A discussion of the library, utilities, error messages, and example programs is provided.

  18. Understanding the Cray X1 System

    NASA Technical Reports Server (NTRS)

    Cheung, Samson

    2004-01-01

    This paper helps the reader understand the characteristics of the Cray X1 vector supercomputer system, and provides hints and information to enable the reader to port codes to the system. It provides a comparison between the basic performance of the X1 platform and other platforms that are available at NASA Ames Research Center. A set of codes, solving the Laplacian equation with different parallel paradigms, is used to understand some features of the X1 compiler. An example code from the NAS Parallel Benchmarks is used to demonstrate performance optimization on the X1 platform.

  19. Large-scale structural analysis: The structural analyst, the CSM Testbed and the NAS System

    NASA Technical Reports Server (NTRS)

    Knight, Norman F., Jr.; Mccleary, Susan L.; Macy, Steven C.; Aminpour, Mohammad A.

    1989-01-01

    The Computational Structural Mechanics (CSM) activity is developing advanced structural analysis and computational methods that exploit high-performance computers. Methods are developed in the framework of the CSM testbed software system and applied to representative complex structural analysis problems from the aerospace industry. An overview of the CSM testbed methods development environment is presented and some numerical methods developed on a CRAY-2 are described. Selected application studies performed on the NAS CRAY-2 are also summarized.

  20. Effect of thermal aging on the tensile bond strength at reduced areas of seven current adhesives.

    PubMed

    Baracco, Bruno; Fuentes, M Victoria; Garrido, Miguel A; González-López, Santiago; Ceballos, Laura

    2013-07-01

    The purpose of this study was to determine the micro-tensile bond strength (MTBS) to dentin of seven adhesive systems (total and self-etch adhesives) after 24 h and 5,000 thermocycles. Dentin surfaces of human third molars were exposed and bonded with two total-etch adhesives (Adper Scotchbond 1 XT and XP Bond), two two-step self-etch adhesives (Adper Scotchbond SE and Filtek Silorane Adhesive System) and three one-step self-etch adhesives (G-Bond, Xeno V and Bond Force). All adhesive systems were applied following manufacturers' instructions. Composite buildups were constructed and the bonded teeth were then stored in water (24 h, 37 °C) or thermocycled (5,000 cycles) before being sectioned and submitted to MTBS test. Two-way ANOVA and subsequent comparison tests were applied at α = 0.05. Characteristic de-bonded specimens were analyzed using scanning electron microscopy (SEM). After 24 h water storage, MTBS values were highest with XP Bond, Adper Scotchbond 1 XT, Filtek Silorane Adhesive System and Adper Scotchbond SE and lowest with the one-step self-etch adhesives Bond Force, Xeno V and G-Bond. After thermocycling, MTBS values were highest with XP Bond, followed by Filtek Silorane Adhesive System, Adper Scotchbond SE and Adper Scotchbond 1 XT and lowest with the one-step self-etch adhesives Bond Force, Xeno V and G-Bond. Thermal aging induced a significant decrease in MTBS values with all adhesives tested. The resistance of resin-dentin bonds to thermal-aging degradation was material dependent. One-step self-etch adhesives obtained the lowest MTBS results after both aging treatments, and their adhesive capacity was significantly reduced after thermocycling.

  1. Software and Systems Test Track Architecture and Concept Definition

    DTIC Science & Technology

    2007-05-01

    [DTIC snippet: fragments of a software inventory table listing packages, vendors, and version numbers at the ASC and ERDC sites (e.g., Flex 2.5.31 from the Free Software Foundation, Fluent 6.2.26 from Fluent Inc., Compaq/Cray/SGI Fortran 77/90 compilers, FTA 1.1, GAMESS).]

  2. CLIPS on the NeXT computer

    NASA Technical Reports Server (NTRS)

    Charnock, Elizabeth; Eng, Norman

    1990-01-01

    This paper discusses the integration of CLIPS into a hybrid expert system/neural network AI tool for the NeXT computer. The main discussion is devoted to the joining of these two AI paradigms in a mutually beneficial relationship. We conclude that expert systems and neural networks should not be considered as competing AI implementation methods, but rather as complementary components of a whole.

  3. Fast Numerical Methods for Stochastic Partial Differential Equations

    DTIC Science & Technology

    2016-04-15

    analysis we first derived a system of forward and backward SDEs (BSDEs) for (X_t, Q_t, Z_t):

      dX_s = b(X_s) ds + σ_s dW_s,   X_t = x,   t < s < T,    (SDE)
      dQ_s = Z_s dW_s … g(X_s) Q_s dV_s,   Q_T = Φ(X_T).      (BSDE)   (6)

    Here W_t and V_t are two independent Brownian motions. The first equation in (6) is a forward SDE while the second … first-order scheme for a general coupled system of forward-backward SDEs [1]:

      dX_s = b(X_s) ds + σ(X_s) dW_s,   t ≤ s ≤ T,
      dY_s = f(s, X_s, Y_s) ds + g(s, …
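
    As a minimal illustration of discretizing the forward component of such a system (the report's first-order FBSDE schemes also involve a backward sweep for the (Y, Z) pair, not reproduced here), a basic Euler-Maruyama step in Python; the drift and diffusion functions below are arbitrary placeholders.

      import numpy as np

      def euler_maruyama(b, sigma, x0, T, n_steps, seed=0):
          """Euler-Maruyama path for dX_s = b(X_s) ds + sigma(X_s) dW_s."""
          rng = np.random.default_rng(seed)
          dt = T / n_steps
          x = np.empty(n_steps + 1)
          x[0] = x0
          for k in range(n_steps):
              dW = rng.normal(0.0, np.sqrt(dt))   # Brownian increment
              x[k + 1] = x[k] + b(x[k]) * dt + sigma(x[k]) * dW
          return x

      # e.g. geometric Brownian motion: b(x) = 0.1 x, sigma(x) = 0.2 x
      path = euler_maruyama(lambda x: 0.1 * x, lambda x: 0.2 * x,
                            x0=1.0, T=1.0, n_steps=1000)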

  4. Data communication requirements for the advanced NAS network

    NASA Technical Reports Server (NTRS)

    Levin, Eugene; Eaton, C. K.; Young, Bruce

    1986-01-01

    The goal of the Numerical Aerodynamic Simulation (NAS) Program is to provide a powerful computational environment for advanced research and development in aeronautics and related disciplines. The present NAS system consists of a Cray 2 supercomputer connected by a data network to a large mass storage system, to sophisticated local graphics workstations, and by remote communications to researchers throughout the United States. The program plan is to continue acquiring the most powerful supercomputers as they become available. In the 1987/1988 time period it is anticipated that a computer with 4 times the processing speed of a Cray 2 will be obtained and by 1990 an additional supercomputer with 16 times the speed of the Cray 2. The implications of this 20-fold increase in processing power on the data communications requirements are described. The analysis was based on models of the projected workload and system architecture. The results are presented together with the estimates of their sensitivity to assumptions inherent in the models.

  5. Structure determination of the extracellular xylanase from Geobacillus stearothermophilus by selenomethionyl MAD phasing.

    PubMed

    Teplitsky, A; Mechaly, A; Stojanoff, V; Sainz, G; Golan, G; Feinberg, H; Gilboa, R; Reiland, V; Zolotnitsky, G; Shallom, D; Thompson, A; Shoham, Y; Shoham, G

    2004-05-01

    Xylanases are hemicellulases that hydrolyze the internal beta-1,4-glycoside bonds of xylan. The extracellular thermostable endo-1,4-beta-xylanase (EC 3.2.1.8; XT6) produced by the thermophilic bacterium Geobacillus stearothermophilus T-6 was shown to bleach pulp optimally at pH 9 and 338 K and was successfully used in a large-scale biobleaching mill trial. The xylanase gene was cloned and sequenced. The mature enzyme consists of 379 amino acids, with a calculated molecular weight of 43 808 Da and a pI of 9.0. Crystallographic studies of XT6 were performed in order to study the mechanism of catalysis and to provide a structural basis for the rational introduction of enhanced thermostability by site-specific mutagenesis. XT6 was crystallized in the primitive trigonal space group P3(2)21, with unit-cell parameters a = b = 112.9, c = 122.7 Å. A full diffraction data set for wild-type XT6 has been measured to 2.4 Å resolution on flash-frozen crystals using synchrotron radiation. A fully exchanged selenomethionyl XT6 derivative (containing eight Se atoms per XT6 molecule) was also prepared and crystallized in an isomorphous crystal form, providing full selenium MAD data at three wavelengths and enabling phase solution and structure determination. The structure of wild-type XT6 was refined at 2.4 Å resolution to a final R factor of 15.6% and an R(free) of 18.6%. The structure demonstrates that XT6 is made up of an eightfold TIM-barrel containing a deep active-site groove, consistent with its 'endo' mode of action. The two essential catalytic carboxylic residues (Glu159 and Glu265) are located at the active site within 5.5 Å of each other, as expected for 'retaining' glycoside hydrolases. A unique subdomain was identified in the carboxy-terminal part of the enzyme and was suggested to have a role in xylan binding. The three-dimensional structure of XT6 is of great interest since it provides a favourable starting point for the rational improvement of its already high thermal and pH stabilities, which are required for a number of biotechnological and industrial applications.

  6. Enviro-HIRLAM Applicability for Black Carbon Studies in Arctic

    NASA Astrophysics Data System (ADS)

    Nuterman, Roman; Mahura, Alexander; Baklanov, Alexander; Kurganskiy, Alexander; Amstrup, Bjarne; Kaas, Eigil

    2015-04-01

    One of the main aims of the Nordic CarboNord project ("Impact of black carbon on air quality and climate in Northern Europe and Arctic") is to provide new information on the distribution and effects of black carbon in Northern Europe and the Arctic. This is done by assessing the robustness of model predictions of long-range black carbon distribution and its relation to climate change and forcing. In our study, the online integrated meteorology-chemistry/aerosols model Enviro-HIRLAM (Environment - HIgh Resolution Limited Area Model) is used. This study is focused, at first, on adaptation of the Enviro-HIRLAM model (model setup, domain for the Northern Hemisphere and Arctic region, emissions, boundary conditions, refining aerosol microphysics and chemistry, cloud-aerosol interaction processes) and on selection of the most unfavorable weather and air pollution episodes for the Arctic region. Simulations of interactions between black carbon and meteorological processes in northern conditions will be performed for the selected episodes (on DMI's CRAY XT5 HPC system), followed by long-term simulations at regional scale for selected winter vs. summer months. Modelling results will be compared on a diurnal-cycle and monthly basis against observations of key meteorological parameters (such as air temperature, wind speed, relative humidity, and precipitation) as well as aerosol concentration. Finally, black carbon atmospheric transport, dispersion, and deposition patterns at different spatio-temporal scales, physical-chemical processes and transformations of black carbon containing aerosols, and interactions and effects between black carbon and meteorological processes in Arctic weather conditions will be evaluated.

  7. Effect of Accelerated Artificial Aging on Translucency of Methacrylate and Silorane-Based Composite Resins.

    PubMed

    Shirinzad, Mehdi; Rezaei-Soufi, Loghman; Mirtorabi, Maryam Sadat; Vahdatinia, Farshid

    2016-03-01

    Composite restorations must have tooth-like optical properties namely color and translucency and maintain them for a long time. This study aimed to compare the effect of accelerated artificial aging (AAA) on the translucency of three methacrylate-based composites (Filtek Z250, Filtek Z250XT and Filtek Z350XT) and one silorane-based composite resin (Filtek P90). For this in vitro study, 56 composite discs were fabricated (n=14 for each group). Using scanning spectrophotometer, CIE L*a*b* parameters and translucency of each specimen were measured at 24 hours and after AAA for 384 hours. Data were analyzed using one-way ANOVA, Tukey's test and paired t-test at P=0.05 level of significance. The mean (±standard deviation) translucency parameter for Filtek Z250, Filtek Z250XT, Filtek Z350XT and Filtek P90 was 5.67±0.64, 4.59±0.77, 7.87±0.82 and 4.21±0.71 before AAA and 4.25±0.615, 3.53±0.73, 5.94±0.57 and 4.12±0.54 after AAA, respectively. After aging, the translucency of methacrylate-based composites decreased significantly (P<0.05). However, the translucency of Filtek P90 did not change significantly (P>0.05). The AAA significantly decreased the translucency of methacrylate-based composites (Filtek Z250, Filtek Z250XT and Filtek Z350XT) but no change occurred in the translucency of Filtek P90 silorane-based composite.

  8. Effect of Accelerated Artificial Aging on Translucency of Methacrylate and Silorane-Based Composite Resins

    PubMed Central

    Shirinzad, Mehdi; Rezaei-Soufi, Loghman; Mirtorabi, Maryam Sadat; Vahdatinia, Farshid

    2016-01-01

    Objectives: Composite restorations must have tooth-like optical properties namely color and translucency and maintain them for a long time. This study aimed to compare the effect of accelerated artificial aging (AAA) on the translucency of three methacrylate-based composites (Filtek Z250, Filtek Z250XT and Filtek Z350XT) and one silorane-based composite resin (Filtek P90). Materials and Methods: For this in vitro study, 56 composite discs were fabricated (n=14 for each group). Using scanning spectrophotometer, CIE L*a*b* parameters and translucency of each specimen were measured at 24 hours and after AAA for 384 hours. Data were analyzed using one-way ANOVA, Tukey's test and paired t-test at P=0.05 level of significance. Results: The mean (±standard deviation) translucency parameter for Filtek Z250, Filtek Z250XT, Filtek Z350XT and Filtek P90 was 5.67±0.64, 4.59±0.77, 7.87±0.82 and 4.21±0.71 before AAA and 4.25±0.615, 3.53±0.73, 5.94±0.57 and 4.12±0.54 after AAA, respectively. After aging, the translucency of methacrylate-based composites decreased significantly (P<0.05). However, the translucency of Filtek P90 did not change significantly (P>0.05). Conclusions: The AAA significantly decreased the translucency of methacrylate-based composites (Filtek Z250, Filtek Z250XT and Filtek Z350XT) but no change occurred in the translucency of Filtek P90 silorane-based composite. PMID:27928237

  9. Dynamic load balancing for petascale quantum Monte Carlo applications: The Alias method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sudheer, C. D.; Krishnan, S.; Srinivasan, A.

    Diffusion Monte Carlo is the most accurate widely used Quantum Monte Carlo method for the electronic structure of materials, but it requires frequent load balancing or population redistribution steps to maintain efficiency and avoid accumulation of systematic errors on parallel machines. The load balancing step can be a significant factor affecting performance, and will become more important as the number of processing elements increases. We propose a new dynamic load balancing algorithm, the Alias Method, and evaluate it theoretically and empirically. An important feature of the new algorithm is that the load can be perfectly balanced with each process receiving at most one message. It is also optimal in the maximum size of messages received by any process. We also optimize its implementation to reduce network contention, a process facilitated by the low messaging requirement of the algorithm. Empirical results on the petaflop Cray XT Jaguar supercomputer at ORNL show up to 30% improvement in performance on 120,000 cores. The load balancing algorithm may be straightforwardly implemented in existing codes. The algorithm may also be employed by any method with many near identical computational tasks that requires load balancing.
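
    A hedged sketch of the pairing idea suggested by the abstract follows; the details of the paper's Alias Method differ, but as in Walker's alias-table construction, each below-average process is topped up by a single above-average donor, so no process receives more than one message.

      def alias_balance(loads):
          """Pair overloaded and underloaded processes so that every
          deficit process receives from exactly one sender.

          A sketch only: like Walker's alias-table construction, each
          below-average entry is filled exactly to the average from a
          single above-average donor, so receivers get one message each.
          """
          n = len(loads)
          avg = sum(loads) / n
          small = [i for i in range(n) if loads[i] < avg]
          large = [i for i in range(n) if loads[i] > avg]
          transfers = []                    # (src, dst, amount)
          work = list(loads)
          while small and large:
              dst, src = small.pop(), large.pop()
              amount = avg - work[dst]      # fill dst exactly to average
              transfers.append((src, dst, amount))
              work[src] -= amount
              work[dst] = avg
              if work[src] < avg:           # donor may itself become a receiver
                  small.append(src)
              elif work[src] > avg:
                  large.append(src)
          return transfers

      print(alias_balance([10, 2, 6, 2]))   # avg 5: each dst receives once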

  10. The CSM testbed software system: A development environment for structural analysis methods on the NAS CRAY-2

    NASA Technical Reports Server (NTRS)

    Gillian, Ronnie E.; Lotts, Christine G.

    1988-01-01

    The Computational Structural Mechanics (CSM) Activity at Langley Research Center is developing methods for structural analysis on modern computers. To facilitate that research effort, an applications development environment has been constructed to insulate the researcher from the many computer operating systems of a widely distributed computer network. The CSM Testbed development system was ported to the Numerical Aerodynamic Simulator (NAS) Cray-2, at the Ames Research Center, to provide a high end computational capability. This paper describes the implementation experiences, the resulting capability, and the future directions for the Testbed on supercomputers.

  11. Applications Performance Under MPL and MPI on NAS IBM SP2

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Simon, Horst D.; Lasinski, T. A. (Technical Monitor)

    1994-01-01

    On July 5, 1994, an IBM Scalable POWER parallel System (IBM SP2) with 64 nodes was installed at the Numerical Aerodynamic Simulation (NAS) Facility. Each node of the NAS IBM SP2 is a "wide node" consisting of a RISC 6000/590 workstation module with a clock of 66.5 MHz which can perform four floating point operations per clock, for a peak performance of 266 Mflop/s. By the end of 1994, the 64 nodes of the IBM SP2 will be upgraded to 160 nodes with a peak performance of 42.5 Gflop/s. An overview of the IBM SP2 hardware is presented. A basic understanding of the architectural details of the RS 6000/590 will help application scientists in porting, optimizing, and tuning codes from other machines such as the CRAY C90 and the Paragon to the NAS SP2. Optimization techniques such as quad-word loading, effective utilization of the two floating point units, and data cache optimization on the RS 6000/590 are illustrated, with examples giving performance gains at each optimization step. The conversion of codes using Intel's message passing library NX to codes using the native Message Passing Library (MPL) and the Message Passing Interface (MPI) library available on the IBM SP2 is illustrated. In particular, we will present the performance of the Fast Fourier Transform (FFT) kernel from the NAS Parallel Benchmarks (NPB) under MPL and MPI. We have also optimized some of the Fortran BLAS 2 and BLAS 3 routines; e.g., the optimized Fortran DAXPY runs at 175 Mflop/s and the optimized Fortran DGEMM runs at 230 Mflop/s per node. The performance of the NPB (Class B) on the IBM SP2 is compared with the CRAY C90, Intel Paragon, TMC CM-5E, and the CRAY T3D.

  12. The Spider Center Wide File System; From Concept to Reality

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shipman, Galen M; Dillow, David A; Oral, H Sarp

    2009-01-01

    The Leadership Computing Facility (LCF) at Oak Ridge National Laboratory (ORNL) has a diverse portfolio of computational resources ranging from a petascale XT4/XT5 simulation system (Jaguar) to numerous other systems supporting development, visualization, and data analytics. In order to support the vastly different I/O needs of these systems, Spider, a Lustre-based center-wide file system, was designed and deployed to provide over 240 GB/s of aggregate throughput with over 10 Petabytes of formatted capacity. A multi-stage InfiniBand network, dubbed the Scalable I/O Network (SION), with over 889 GB/s of bisectional bandwidth was deployed as part of Spider to provide connectivity to our simulation, development, visualization, and other platforms. To our knowledge, at the time of writing, Spider is the largest and fastest POSIX-compliant parallel file system in production. This paper details the overall architecture of the Spider system, challenges in deploying and initially testing a file system of this scale, and novel solutions to these challenges which offer key insights into file system design in the future.

  13. Advanced and flexible multi-carrier receiver architecture for high-count multi-core fiber based space division multiplexed applications

    PubMed Central

    Asif, Rameez

    2016-01-01

    Space division multiplexing (SDM), incorporating multi-core fibers (MCFs), has been demonstrated for effectively maximizing the data capacity in an impending capacity crunch. To achieve high spectral density through multi-carrier encoding while simultaneously maintaining transmission reach, benefits from inter-core crosstalk (XT) and non-linear compensation must be utilized. In this report, we propose a proof-of-concept unified receiver architecture that jointly compensates optical Kerr effects and intra- and inter-core XT in MCFs. The architecture is analysed in a multi-channel 512 Gbit/s dual-carrier DP-16QAM system over 800 km of 19-core MCF to validate the digital compensation of inter-core XT. Through this architecture: (a) we efficiently compensate the inter-core XT, improving the Q-factor by 4.82 dB, and (b) achieve a substantial gain in transmission reach, increasing the maximum achievable distance from 480 km to 1208 km, via analytical analysis. Simulation results confirm that inter-core XT distortions are more severe for cores fabricated around the central axis of the cladding. Notably, the XT-induced Q-penalty can be suppressed to less than 1 dB for up to −11.56 dB of inter-core XT over 800 km of MCF, offering flexibility to fabricate dense core structures with the same cladding diameter. Moreover, this report outlines the relationship between core pitch and forward-error correction (FEC). PMID:27270381

  14. Development of a CRAY 1 version of the SINDA program. [thermo-structural analyzer program

    NASA Technical Reports Server (NTRS)

    Juba, S. M.; Fogerson, P. E.

    1982-01-01

    The SINDA thermal analyzer program was transferred from the UNIVAC 1110 computer to a CYBER and then to a CRAY 1. Significant changes to the code of the program were required in order to execute efficiently on the CYBER and CRAY. The program was tested on the CRAY using a thermal math model of the shuttle which was too large to run on either the UNIVAC or the CYBER. An effort was then begun to further modify the code of SINDA in order to make effective use of the vector capabilities of the CRAY.

  15. Introducing Argonne’s Theta Supercomputer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    Theta, the Argonne Leadership Computing Facility’s (ALCF) new Intel-Cray supercomputer, is officially open to the research community. Theta’s massively parallel, many-core architecture puts the ALCF on the path to Aurora, the facility’s future Intel-Cray system. Capable of nearly 10 quadrillion calculations per second, Theta enables researchers to break new ground in scientific investigations that range from modeling the inner workings of the brain to developing new materials for renewable energy applications.

  16. PAN AIR: A computer program for predicting subsonic or supersonic linear potential flows about arbitrary configurations using a higher order panel method. Volume 4: Maintenance document (version 3.0)

    NASA Technical Reports Server (NTRS)

    Purdon, David J.; Baruah, Pranab K.; Bussoletti, John E.; Epton, Michael A.; Massena, William A.; Nelson, Franklin D.; Tsurusaki, Kiyoharu

    1990-01-01

    The Maintenance Document Version 3.0 is a guide to the PAN AIR software system, a system which computes the subsonic or supersonic linear potential flow about a body of nearly arbitrary shape, using a higher order panel method. The document describes the overall system and each program module of the system. Sufficient detail is given for program maintenance, updating, and modification. It is assumed that the reader is familiar with programming and CRAY computer systems. The PAN AIR system was written in FORTRAN 4 language except for a few CAL language subroutines which exist in the PAN AIR library. Structured programming techniques were used to provide code documentation and maintainability. The operating systems accommodated are COS 1.11, COS 1.12, COS 1.13, and COS 1.14 on the CRAY 1S, 1M, and X-MP computing systems. The system is comprised of a data base management system, a program library, an execution control module, and nine separate FORTRAN technical modules. Each module calculates part of the posed PAN AIR problem. The data base manager is used to communicate between modules and within modules. The technical modules must be run in a prescribed fashion for each PAN AIR problem. In order to ease the problem of supplying the many JCL cards required to execute the modules, a set of CRAY procedures (PAPROCS) was created to automatically supply most of the JCL cards. Most of this document has not changed for Version 3.0. It now, however, strictly applies only to PAN AIR version 3.0. The major changes are: (1) additional sections covering the new FDP module (which calculates streamlines and offbody points); (2) a complete rewrite of the section on the MAG module; and (3) strict applicability to CRAY computing systems.

  17. New computing systems and their impact on structural analysis and design

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.

    1989-01-01

    A review is given of the recent advances in computer technology that are likely to impact structural analysis and design. The computational needs for future structures technology are described. The characteristics of new and projected computing systems are summarized. Advances in programming environments, numerical algorithms, and computational strategies for new computing systems are reviewed, and a novel partitioning strategy is outlined for maximizing the degree of parallelism. The strategy is designed for computers with a shared memory and a small number of powerful processors (or a small number of clusters of medium-range processors). It is based on approximating the response of the structure by a combination of symmetric and antisymmetric response vectors, each obtained using a fraction of the degrees of freedom of the original finite element model. The strategy was implemented on the CRAY X-MP/4 and the Alliant FX/8 computers. For nonlinear dynamic problems on the CRAY X-MP with four CPUs, it resulted in an order of magnitude reduction in total analysis time, compared with the direct analysis on a single-CPU CRAY X-MP machine.

  18. A multi-platform evaluation of the randomized CX low-rank matrix factorization in Spark

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gittens, Alex; Kottalam, Jey; Yang, Jiyan

    We investigate the performance and scalability of the randomized CX low-rank matrix factorization and demonstrate its applicability through the analysis of a 1TB mass spectrometry imaging (MSI) dataset, using Apache Spark on an Amazon EC2 cluster, a Cray XC40 system, and an experimental Cray cluster. We implemented this factorization both as a parallelized C implementation with hand-tuned optimizations and in Scala using the Apache Spark high-level cluster computing framework. We obtained consistent performance across the three platforms: using Spark we were able to process the 1TB dataset in under 30 minutes with 960 cores on all systems, with the fastest times obtained on the experimental Cray cluster. In comparison, the C implementation was 21X faster on the Amazon EC2 system, due to careful cache optimizations, bandwidth-friendly access of matrices and vector computation using SIMD units. We report these results and their implications on the hardware and software issues arising in supporting data-centric workloads in parallel and distributed environments.
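
    A small NumPy sketch of a randomized CX factorization is given below; it is a generic textbook variant (approximate leverage scores from a randomized range finder, column sampling, then a least-squares solve for X), not the paper's Spark or C implementation.

      import numpy as np

      def randomized_cx(A, k, c, seed=0):
          """Randomized CX: A ≈ C X, with C a subset of A's columns.

          Approximate the top-k right singular subspace with a randomized
          range finder, derive column leverage scores from it, sample c
          distinct columns proportionally, and solve X = C^+ A.
          """
          rng = np.random.default_rng(seed)
          m, n = A.shape
          Omega = rng.normal(size=(m, k + 10))      # oversampled test matrix
          Q, _ = np.linalg.qr(A.T @ Omega)          # n x (k+10) orthonormal basis
          _, _, Vt = np.linalg.svd(A @ Q, full_matrices=False)
          V = Q @ Vt[:k].T                          # approx top-k right basis
          lev = np.sum(V**2, axis=1)                # column leverage scores
          p = lev / lev.sum()
          cols = rng.choice(n, size=c, replace=False, p=p)
          C = A[:, cols]
          X = np.linalg.lstsq(C, A, rcond=None)[0]  # X = C^+ A
          return C, X, cols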

  19. The solution of linear systems of equations with a structural analysis code on the NAS CRAY-2

    NASA Technical Reports Server (NTRS)

    Poole, Eugene L.; Overman, Andrea L.

    1988-01-01

    Two methods for solving linear systems of equations on the NAS Cray-2 are described. One is a direct method; the other is an iterative method. Both methods exploit the architecture of the Cray-2, particularly the vectorization, and are aimed at structural analysis applications. To demonstrate and evaluate the methods, they were installed in a finite element structural analysis code denoted the Computational Structural Mechanics (CSM) Testbed. A description of the techniques used to integrate the two solvers into the Testbed is given. Storage schemes, memory requirements, operation counts, and reformatting procedures are discussed. Finally, results from the new methods are compared with results from the initial Testbed sparse Choleski equation solver for three structural analysis problems. The new direct solvers described achieve the highest computational rates of the methods compared. The new iterative methods are not able to achieve as high computation rates as the vectorized direct solvers but are best for well conditioned problems which require fewer iterations to converge to the solution.
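
    For contrast with the direct solvers, here is a generic conjugate gradient iteration (not the Testbed implementation): each step is dominated by a vectorizable matrix-vector product, but the number of iterations, unlike a direct factorization, grows with the condition number of the system.

      import numpy as np

      def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
          """Plain conjugate gradients for a symmetric positive definite A;
          b is assumed to be a float array."""
          x = np.zeros_like(b)
          r = b - A @ x                    # initial residual
          p = r.copy()
          rs = r @ r
          for _ in range(max_iter):
              Ap = A @ p                   # the dominant, vectorizable kernel
              alpha = rs / (p @ Ap)
              x += alpha * p
              r -= alpha * Ap
              rs_new = r @ r
              if np.sqrt(rs_new) < tol:    # converged
                  break
              p = r + (rs_new / rs) * p
              rs = rs_new
          return x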

  20. Kinetics and mechanism for platination of thione-containing nucleotides and oligonucleotides: evaluation of the salt dependence.

    PubMed

    Kjellström, Johan; Elmroth, Sofi K C

    2003-01-01

    Reactions of cis-[PtCl(NH(3))(CyNH(2))(OH(2))](+) (Cy=cyclohexyl) with thione-containing single-stranded oligonucleotides d(T(8)XT(8)) and d(XT(16)) (X=(s6)I or (s4)U) and the mononucleotides 4-thiouridine ((s4)UMP) and 6-mercaptoinosine ((s6)IMP) have been studied in aqueous solution at pH 4.1. The reaction kinetics was followed using HPLC methodology as a function of ionic strength in the interval 5.0 mM

  1. A leap forward with UTK's Cray XC30

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fahey, Mark R

    2014-01-01

    This paper shows a significant productivity leap for several science groups and the accomplishments they have made to date on Darter - a Cray XC30 at the University of Tennessee Knoxville. The increased productivity is due to the faster processors and interconnect combined in a new generation from Cray, together with a programming environment very similar to that of previous generations of Cray machines, which makes porting easy.

  2. A performance comparison of current HPC systems: Blue Gene/Q, Cray XE6 and InfiniBand systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kerbyson, Darren J.; Barker, Kevin J.; Vishnu, Abhinav

    2014-01-01

    We present here a performance analysis of three current architectures that have become commonplace in the High Performance Computing world. Blue Gene/Q is the third generation of systems from IBM that use modestly performing cores, but at large scale, in order to achieve high performance. The XE6 is the latest in a long line of Cray systems that use a 3-D topology, but the first to use the Gemini interconnection network. InfiniBand provides the flexibility of using compute nodes from many vendors, connected in many possible topologies. The performance characteristics of each vary vastly, and the way in which nodes are allocated in each type of system can significantly impact achieved performance. In this work we compare these three systems using a combination of micro-benchmarks and a set of production applications. In addition, we also examine the differences in performance variability observed on each system and quantify the lost performance using a combination of empirical measurements and performance models. Our results show that significant performance can be lost in normal production operation of the Cray XE6 and InfiniBand clusters in comparison to Blue Gene/Q.

  3. Global Precipitation Estimates from Cross-Track Passive Microwave Observations Using a Physically-Based Retrieval Scheme

    NASA Technical Reports Server (NTRS)

    Kidd, Chris; Matsui, Toshi; Chern, Jiundar; Mohr, Karen; Kummerow, Christian; Randel, Dave

    2015-01-01

    The estimation of precipitation across the globe from satellite sensors provides a key resource in the observation and understanding of our climate system. Estimates from all pertinent satellite observations are critical in providing the necessary temporal sampling. However, consistency in these estimates from instruments with different frequencies and resolutions is critical. This paper details the physically based retrieval scheme to estimate precipitation from cross-track (XT) passive microwave (PM) sensors on board the constellation satellites of the Global Precipitation Measurement (GPM) mission. Here the Goddard profiling algorithm (GPROF), a physically based Bayesian scheme developed for conically scanning (CS) sensors, is adapted for use with XT PM sensors. The present XT GPROF scheme utilizes a model-generated database to overcome issues encountered with an observational database as used by the CS scheme. The model database ensures greater consistency across meteorological regimes and surface types by providing a more comprehensive set of precipitation profiles. The database is corrected for bias against the CS database to ensure consistency in the final product. Statistical comparisons over western Europe and the United States show that the XT GPROF estimates are comparable with those from the CS scheme. Indeed, the XT estimates have higher correlations against surface radar data, while maintaining similar root-mean-square errors. Latitudinal profiles of precipitation show the XT estimates are generally comparable with the CS estimates, although in the southern midlatitudes the peak precipitation is shifted equatorward while over the Arctic large differences are seen between the XT and the CS retrievals.
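
    The Bayesian core of retrieval schemes in the GPROF family can be caricatured in a few lines; the sketch below is a toy illustration with hypothetical inputs (an array of simulated brightness temperatures db_tb and matching surface rain rates db_rain), not the operational algorithm.

      import numpy as np

      def bayesian_retrieval(tb_obs, db_tb, db_rain, obs_err=1.5):
          """Toy Bayesian database retrieval in the spirit of GPROF.

          Each database profile is weighted by a Gaussian likelihood of
          the observed brightness temperatures tb_obs (one value per
          channel); the retrieved rain rate is the weighted mean.
          """
          d2 = np.sum((db_tb - tb_obs)**2, axis=1) / obs_err**2
          w = np.exp(-0.5 * d2)        # Gaussian observation likelihood
          w /= w.sum()                 # normalize to posterior weights
          return np.sum(w * db_rain)   # posterior-mean rain rate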

  4. SNS programming environment user's guide

    NASA Technical Reports Server (NTRS)

    Tennille, Geoffrey M.; Howser, Lona M.; Humes, D. Creig; Cronin, Catherine K.; Bowen, John T.; Drozdowski, Joseph M.; Utley, Judith A.; Flynn, Theresa M.; Austin, Brenda A.

    1992-01-01

    The computing environment is briefly described for the Supercomputing Network Subsystem (SNS) of the Central Scientific Computing Complex of NASA Langley. The major SNS computers are a CRAY-2, a CRAY Y-MP, a CONVEX C-210, and a CONVEX C-220. The software is described that is common to all of these computers, including: the UNIX operating system, computer graphics, networking utilities, mass storage, and mathematical libraries. Also described is file management, validation, SNS configuration, documentation, and customer services.

  5. Parallel computation in a three-dimensional elastic-plastic finite-element analysis

    NASA Technical Reports Server (NTRS)

    Shivakumar, K. N.; Bigelow, C. A.; Newman, J. C., Jr.

    1992-01-01

    A CRAY parallel processing technique called autotasking was implemented in a three-dimensional elasto-plastic finite-element code. The technique was evaluated on two CRAY supercomputers, a CRAY 2 and a CRAY Y-MP. Autotasking was implemented in all major portions of the code, except the matrix equations solver. Compiler directives alone were not able to properly multitask the code; user-inserted directives were required to achieve better performance. It was noted that the connect time, rather than wall-clock time, was more appropriate to determine speedup in multiuser environments. For a typical example problem, a speedup of 2.1 (1.8 when the solution time was included) was achieved in a dedicated environment and 1.7 (1.6 with solution time) in a multiuser environment on a four-processor CRAY 2 supercomputer. The speedup on a three-processor CRAY Y-MP was about 2.4 (2.0 with solution time) in a multiuser environment.

  6. On Wave and Entropic Amplitudes in Maxwellian Materials.

    DTIC Science & Technology

    1982-09-01

    drawbacks. Finally, in the last of a major series of papers on the propagation and growth behavior of waves in materials with memory, Coleman, Greenberg … equations to calculate … As in Coleman, Greenberg, and Gurtin [5], we may then replace … Now, at all points (X,t) (Y(t),t) … behavior of the amplitude, a(t), is based on this fact and the continuity of I(t). Following Coleman, Greenberg, and Gurtin [5] we set (5.8) …

  7. Strategies for vectorizing the sparse matrix vector product on the CRAY XMP, CRAY 2, and CYBER 205

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Partridge, Harry

    1987-01-01

    Large, randomly sparse matrix vector products are important in a number of applications in computational chemistry, such as matrix diagonalization and the solution of simultaneous equations. Vectorization of this process is considered for the CRAY XMP, CRAY 2, and CYBER 205, using a matrix of dimension 20,000 with from 1 percent to 6 percent nonzeros. Efficient scatter/gather capabilities add coding flexibility and yield significant improvements in performance. For the CYBER 205, it is shown that minor changes in the IO can reduce the CPU time by a factor of 50. Similar changes in the CRAY codes yield a far smaller improvement.
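
    A NumPy sketch of the kernel under discussion follows (compressed sparse row storage, assumed here for illustration); the indexed load x[col_idx[lo:hi]] is precisely the gather operation whose hardware support the abstract credits with the performance gains.

      import numpy as np

      def csr_matvec(values, col_idx, row_ptr, x):
          """Sparse matrix-vector product y = A x, with A in CSR storage:
          values/col_idx hold the nonzeros, row_ptr delimits each row."""
          n_rows = len(row_ptr) - 1
          y = np.zeros(n_rows)
          for i in range(n_rows):
              lo, hi = row_ptr[i], row_ptr[i + 1]
              # x[col_idx[lo:hi]] is the "gather"; the dot is the vector op.
              y[i] = np.dot(values[lo:hi], x[col_idx[lo:hi]])
          return y

      # 2x2 example: A = [[1, 2], [0, 3]]
      print(csr_matvec(np.array([1., 2., 3.]), np.array([0, 1, 1]),
                       np.array([0, 2, 3]), np.array([1., 1.])))  # [3. 3.]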

  8. Efficient multitasking of Choleski matrix factorization on CRAY supercomputers

    NASA Technical Reports Server (NTRS)

    Overman, Andrea L.; Poole, Eugene L.

    1991-01-01

    A Choleski method is described and used to solve linear systems of equations that arise in large scale structural analysis. The method uses a novel variable-band storage scheme and is structured to exploit fast local memory caches while minimizing data access delays between main memory and vector registers. Several parallel implementations of this method are described for the CRAY-2 and CRAY Y-MP computers demonstrating the use of microtasking and autotasking directives. A portable parallel language, FORCE, is used for comparison with the microtasked and autotasked implementations. Results are presented comparing the matrix factorization times for three representative structural analysis problems from runs made in both dedicated and multi-user modes on both computers. CPU and wall clock timings are given for the parallel implementations and are compared to single processor timings of the same algorithm.
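
    For reference, a textbook unblocked Cholesky (Choleski) factorization is sketched below; the paper's contribution lies in the variable-band storage and the multitasked scheduling of the column updates, which this generic version does not attempt.

      import numpy as np

      def cholesky(A):
          """Column-oriented Cholesky factorization A = L L^T for SPD A."""
          n = A.shape[0]
          L = np.zeros_like(A, dtype=float)
          for j in range(n):
              # Diagonal entry: subtract contributions of earlier columns.
              d = A[j, j] - np.dot(L[j, :j], L[j, :j])
              L[j, j] = np.sqrt(d)
              # Column update below the diagonal: the vectorizable kernel.
              L[j+1:, j] = (A[j+1:, j] - L[j+1:, :j] @ L[j, :j]) / L[j, j]
          return L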

  9. Application of high-performance computing to numerical simulation of human movement

    NASA Technical Reports Server (NTRS)

    Anderson, F. C.; Ziegler, J. M.; Pandy, M. G.; Whalen, R. T.

    1995-01-01

    We have examined the feasibility of using massively-parallel and vector-processing supercomputers to solve large-scale optimization problems for human movement. Specifically, we compared the computational expense of determining the optimal controls for the single support phase of gait using a conventional serial machine (SGI Iris 4D25), a MIMD parallel machine (Intel iPSC/860), and a parallel-vector-processing machine (Cray Y-MP 8/864). With the human body modeled as a 14 degree-of-freedom linkage actuated by 46 musculotendinous units, computation of the optimal controls for gait could take up to 3 months of CPU time on the Iris. Both the Cray and the Intel are able to reduce this time to practical levels. The optimal solution for gait can be found with about 77 hours of CPU on the Cray and with about 88 hours of CPU on the Intel. Although the overall speeds of the Cray and the Intel were found to be similar, the unique capabilities of each machine are better suited to different portions of the computational algorithm used. The Intel was best suited to computing the derivatives of the performance criterion and the constraints whereas the Cray was best suited to parameter optimization of the controls. These results suggest that the ideal computer architecture for solving very large-scale optimal control problems is a hybrid system in which a vector-processing machine is integrated into the communication network of a MIMD parallel machine.

  10. Parallel particle filters for online identification of mechanistic mathematical models of physiology from monitoring data: performance and real-time scalability in simulation scenarios.

    PubMed

    Zenker, Sven

    2010-08-01

    Combining mechanistic mathematical models of physiology with quantitative observations using probabilistic inference may offer advantages over established approaches to computerized decision support in acute care medicine. Particle filters (PF) can perform such inference successively as data becomes available. The potential of PF for real-time state estimation (SE) for a model of cardiovascular physiology is explored using parallel computers, and the ability to achieve joint state and parameter estimation (JSPE) given minimal prior knowledge is tested. A parallelized sequential importance sampling/resampling algorithm was implemented and its scalability for the pure SE problem for a non-linear five-dimensional ODE model of the cardiovascular system evaluated on a Cray XT3 using up to 1,024 cores. JSPE was implemented using a state augmentation approach with artificial stochastic evolution of the parameters. Its performance when simultaneously estimating the 5 states and 18 unknown parameters when given observations only of arterial pressure, central venous pressure, heart rate, and, optionally, cardiac output, was evaluated in a simulated bleeding/resuscitation scenario. SE was successful and scaled up to 1,024 cores with appropriate algorithm parametrization, with real-time equivalent performance for up to 10 million particles. JSPE in the described underdetermined scenario achieved excellent reproduction of observables and qualitative tracking of end-diastolic ventricular volumes and sympathetic nervous activity. However, only a subset of the posterior distributions of parameters concentrated around the true values for parts of the estimated trajectories. The performance of parallelized PFs makes their application to complex mathematical models of physiology for the purposes of clinical data interpretation, prediction, and therapy optimization appear promising. JSPE in the described extremely underdetermined scenario nevertheless extracted information of potential clinical relevance from the data in this simulation setting. However, fully satisfactory resolution of this problem when minimal prior knowledge about parameter values is available will require further methodological improvements, which are discussed.
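
    A generic sketch of one sequential importance sampling/resampling step is shown below; transition and likelihood are hypothetical stand-ins for the paper's cardiovascular model and observation model, and the resampling variant (systematic) is one common choice, not necessarily the one used in the paper.

      import numpy as np

      def sir_step(particles, weights, transition, likelihood, y, rng):
          """One sequential importance sampling/resampling (SIR) step.

          transition(particles, rng) propagates every particle through the
          (here abstract) dynamic model; likelihood(y, particles) scores
          each against the new observation y. Systematic resampling then
          concentrates particles on probable states.
          """
          particles = transition(particles, rng)           # predict
          weights = weights * likelihood(y, particles)     # update
          weights /= weights.sum()
          n = len(weights)
          positions = (rng.random() + np.arange(n)) / n    # systematic grid
          idx = np.searchsorted(np.cumsum(weights), positions)
          return particles[idx], np.full(n, 1.0 / n)       # equal weights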

  11. Infrared Measurement Variability Analysis.

    DTIC Science & Technology

    1980-09-01

    collecting optics of the measurement system. The first equation for the blackbody experiment has the form of an integral of W(λ,T) τ(λ,D) over the 3.5 µm - 4.0 µm band … potential for noise reduction by identifying and reducing contributing system effects. The measurement variance of an infinite population of possible … irradiance can be written as the corresponding 3.5 µm - 4.0 µm integral of W(λ, T + ΔT) … Using the two expressions just developed

  12. Nonlinear Filtering and Approximation Techniques

    DTIC Science & Technology

    1991-09-01

    (Abstract damaged by OCR.) The recoverable content concerns consistent parameter estimation for nonlinear filtering, in which a value function is characterized as the unique solution of a Hamilton-Jacobi-Bellman equation.

  13. Compressed sensing for high-resolution nonlipid suppressed 1H FID MRSI of the human brain at 9.4T.

    PubMed

    Nassirpour, Sahar; Chang, Paul; Avdievitch, Nikolai; Henning, Anke

    2018-04-29

    The aim of this study was to apply compressed sensing to accelerate the acquisition of high-resolution metabolite maps of the human brain using a nonlipid suppressed ultra-short TR and TE 1H FID MRSI sequence at 9.4T. X-t sparse compressed sensing reconstruction was optimized for nonlipid suppressed 1H FID MRSI data. Coil-by-coil x-t sparse reconstruction was compared with SENSE x-t sparse and low-rank reconstruction. The effect of matrix size and spatial resolution on the achievable acceleration factor was studied. Finally, in vivo metabolite maps with acceleration factors of 2, 4, 5, and 10 were acquired and compared. Coil-by-coil x-t sparse compressed sensing reconstruction was not able to reliably recover the nonlipid suppressed data; rather, a combination of parallel and sparse reconstruction was necessary (SENSE x-t sparse). For acceleration factors of up to 5, both the low-rank and the compressed sensing methods were able to reconstruct the data comparably well (root mean squared errors [RMSEs] ≤ 10.5% for Cre). However, the reconstruction time of the low-rank algorithm was drastically longer than that of compressed sensing. Using the optimized compressed sensing reconstruction, acceleration factors of 4 or 5 could be reached for the MRSI data with a matrix size of 64 × 64. For lower spatial resolutions, an acceleration factor of up to R ≈ 4 was successfully achieved. By tailoring the reconstruction scheme to the nonlipid suppressed data through parameter optimization and performance evaluation, we present high-resolution (97 µL voxel size) accelerated in vivo metabolite maps of the human brain acquired at 9.4T within scan times of 3 to 3.75 min. © 2018 International Society for Magnetic Resonance in Medicine.
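
    The x-t sparse reconstruction itself is not spelled out in the abstract, but the underlying compressed sensing machinery is standard l1-regularized recovery. The following toy sketch shows iterative soft-thresholding (ISTA) on a random undersampled system; the sensing matrix, sparsity level, and regularization weight are illustrative assumptions, not the paper's actual MRSI operator.

        # Toy ISTA (iterative soft-thresholding) sketch for sparse recovery
        # from undersampled linear measurements y = A x.
        import numpy as np

        rng = np.random.default_rng(1)
        n, m, k = 256, 96, 8                      # signal size, measurements, sparsity
        x_true = np.zeros(n)
        x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
        A = rng.normal(size=(m, n)) / np.sqrt(m)  # random sensing matrix
        y = A @ x_true

        def ista(A, y, lam=0.01, iters=500):
            L = np.linalg.norm(A, 2) ** 2         # Lipschitz constant of the gradient
            x = np.zeros(A.shape[1])
            for _ in range(iters):
                g = x - (A.T @ (A @ x - y)) / L   # gradient step on the data term
                x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrinkage
            return x

        x_hat = ista(A, y)
        print("relative error:",
              np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))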

  14. Basic JCL for the CRAY-1 operating system (COS) with emphasis on making the transition from CDC 7600/SCOPE

    NASA Technical Reports Server (NTRS)

    Howe, G.; Saunders, D.

    1983-01-01

    Users of the CDC 7600 at Ames are assisted in making the transition to the CRAY-1. Similarities and differences in the basic JCL are summarized, and a dozen or so examples of typical batch jobs for the two systems are shown in parallel. Some changes to look for in FORTRAN programs and in the use of UPDATE are also indicated. No attempt is made to cover magnetic tape handling. The material here should not be considered a substitute for reading the more conventional manuals or the User's Guide for the Advanced Computational Facility, available from the Computer Information Center.

  15. A comparison of the Cray-2 performance before and after the installation of memory pseudo-banking

    NASA Technical Reports Server (NTRS)

    Schmickley, Ronald D.; Bailey, David H.

    1987-01-01

    A suite of 13 large Fortran benchmark codes was run on a Cray-2 configured with memory pseudo-banking circuits, and floating-point operation rates were measured for each under a variety of system load configurations. These were compared with similar flop measurements taken on the same system before installation of the pseudo-banking. A useful memory-access efficiency parameter was defined and calculated for both sets of performance rates, allowing a crude quantitative measure of the improvement in efficiency due to pseudo-banking. Programs were categorized as either highly scalar (S) or highly vectorized (V) and either memory-intensive or register-intensive, giving four categories: S-memory, S-register, V-memory, and V-register. Using flop rates as a simple quantifier of these four categories, a scatter plot of efficiency gain vs. Mflops roughly illustrates the improvement in floating-point processing speed due to pseudo-banking. On the Cray-2 system tested, this improvement ranged from 1 percent for S-memory codes to about 12 percent for V-memory codes. No significant gains were made for V-register codes, which was to be expected.

  16. Conduction Abnormalities and Pacemaker Implantations After SAPIEN 3 Vs SAPIEN XT Prosthesis Aortic Valve Implantation.

    PubMed

    Husser, Oliver; Kessler, Thorsten; Burgdorf, Christof; Templin, Christian; Pellegrini, Costanza; Schneider, Simon; Kasel, Albert Markus; Kastrati, Adnan; Schunkert, Heribert; Hengstenberg, Christian

    2016-02-01

    Transcatheter aortic valve implantation is increasingly used in patients with aortic stenosis. Post-procedural intraventricular conduction abnormalities and permanent pacemaker implantations remain a serious concern. Recently, the Edwards SAPIEN 3 prosthesis has replaced the SAPIEN XT. We sought to determine the incidences of new-onset intraventricular conduction abnormalities and permanent pacemaker implantations by comparing the 2 devices. We analyzed the last 103 consecutive patients undergoing transcatheter aortic valve implantation with the SAPIEN XT before the SAPIEN 3 was used in the next 105 patients. To analyze permanent pacemaker implantations and new-onset intraventricular conduction abnormalities, patients with these conditions at baseline were excluded. Electrocardiograms were recorded at baseline, after the procedure, and before discharge. SAPIEN 3 was associated with higher device success (100% vs 92%; P=.005) and less paravalvular leakage (0% vs 7%; P<.001). The incidence of permanent pacemaker implantations was 12.6% (23 of 183), with no difference between the 2 groups (SAPIEN 3: 12.5% [12 of 96] vs SAPIEN XT: 12.6% [11 of 87]; P=.99). SAPIEN 3 was associated with a higher rate of new-onset intraventricular conduction abnormalities (49% vs 27%; P=.007) due to a higher rate of fascicular blocks (17% vs 5%; P=.021). There was no statistically significant difference between SAPIEN 3 and SAPIEN XT in transient (29% [20 of 69] vs 19% [12 of 64]; P=.168) or persistent (28% [19 of 69] vs 17% [11 of 64]; P=.154) left bundle branch block. We found a trend toward a higher rate of new-onset intraventricular conduction abnormalities with SAPIEN 3 compared with SAPIEN XT, although this did not result in a higher permanent pacemaker implantation rate. Copyright © 2015 Sociedad Española de Cardiología. Published by Elsevier España, S.L.U. All rights reserved.

  17. Performance of an MPI-only semiconductor device simulator on a quad socket/quad core InfiniBand platform.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shadid, John Nicolas; Lin, Paul Tinphone

    2009-01-01

    This preliminary study considers the scaling and performance of a finite element (FE) semiconductor device simulator on a capacity cluster with 272 compute nodes based on a homogeneous multicore node architecture utilizing 16 cores. The inter-node communication backbone for this Tri-Lab Linux Capacity Cluster (TLCC) machine is comprised of an InfiniBand interconnect. The nonuniform memory access (NUMA) nodes consist of 2.2 GHz quad socket/quad core AMD Opteron processors. The performance results for this study are obtained with a FE semiconductor device simulation code (Charon) that is based on a fully-coupled Newton-Krylov solver with domain decomposition and multilevel preconditioners. Scaling and multicore performance results are presented for large-scale problems of 100+ million unknowns on up to 4096 cores. A parallel scaling comparison is also presented with the Cray XT3/4 Red Storm capability platform. The results indicate that an MPI-only programming model for utilizing the multicore nodes is reasonably efficient on all 16 cores per compute node. However, the results also indicated that the multilevel preconditioner, which is critical for large-scale capability-type simulations, scales better on the Red Storm machine than the TLCC machine.
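
    The "MPI-only" model the study evaluates simply runs one MPI rank per core, with no threading inside a node; collective reductions (as in the dot products of a Newton-Krylov solver) then span all ranks. Below is a hedged mpi4py sketch of that pattern, with made-up problem sizes:

        # MPI-only pattern: one rank per core, each owning a slice of the
        # unknowns; a global dot product is an allreduce over local pieces.
        # Requires mpi4py; run with e.g.: mpiexec -n 16 python dot.py
        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        n_global = 1_000_000
        n_local = n_global // size            # assumes size divides n_global
        x_local = np.full(n_local, float(rank + 1))

        local_dot = float(x_local @ x_local)
        global_dot = comm.allreduce(local_dot, op=MPI.SUM)

        if rank == 0:
            print("global dot product:", global_dot)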

  18. The ASC Sequoia Programming Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seager, M

    2008-08-06

    In the late 1980's and early 1990's, Lawrence Livermore National Laboratory was deeply engrossed in determining the next generation programming model for the Integrated Design Codes (IDC) beyond vectorization for the Cray 1s series of computers. The vector model, developed in the mid-1970's first for the CDC 7600 and later extended from stack-based vector operations to memory-to-memory operations for the Cray 1s, lasted approximately 20 years (see Slide 5). The Cray vector era was deemed an extremely long-lived era as it allowed vector codes to be developed over time (the Cray 1s were faster in scalar mode than the CDC 7600), with vector unit utilization increasing incrementally over time. The other attributes of the Cray vector era at LLNL were that we developed, supported and maintained the operating system (LTSS and later NLTSS), communications protocols (LINCS), compilers (Civic Fortran77 and Model), operating system tools (e.g., batch system, job control scripting, loaders, debuggers, editors, graphics utilities, you name it) and math and highly machine-optimized libraries (e.g., SLATEC and STACKLIB). Although LTSS was adopted by Cray for early system generations, they later developed the COS and UNICOS operating systems and environments on their own. In the late 1970s and early 1980s, two trends appeared that made the Cray vector programming model (described above, including both the hardware and system software aspects) seem potentially dated and slated for major revision. These trends were the appearance of low-cost CMOS microprocessors and the attendant departmental and mini-computers, and later workstations and personal computers. With the widespread adoption of Unix in the early 1980s, it appeared that LLNL (and the other DOE Labs) would be left out of the mainstream of computing without a rapid transition to these 'Killer Micros' and modern OS and tools environments. The other interesting advance of the period was that systems were being developed with multiple 'cores' in them, called Symmetric Multi-Processor or Shared Memory Processor (SMP) systems. The parallel revolution had begun. The Laboratory started a small 'parallel processing project' in 1983 to study the new technology and its application to scientific computing with four people: Tim Axelrod, Pete Eltgroth, Paul Dubois and Mark Seager. Two years later, Eugene Brooks joined the team. This team focused on Unix and 'killer micro' SMPs. Indeed, Eugene Brooks was credited with coining the 'Killer Micro' term. After several generations of SMP platforms (e.g., the Sequent Balance 8000 with eight 33 MHz NS32032s, the Alliant FX/8 with eight MC68020s and FPGA-based vector units, and finally the BBN Butterfly with 128 cores), it became apparent to us that the killer-micro revolution would indeed overtake the Crays and that we definitely needed a new programming and systems model. The model developed by Mark Seager and Dale Nielsen focused on both the system aspects (Slide 3) and the code development aspects (Slide 4). Although now succinctly captured in two attached slides, at the time there was tremendous ferment in the research community as to what parallel programming model would emerge, dominate and survive. In addition, we wanted a model that would provide portability between platforms of a single generation but also longevity over multiple, and hopefully many, generations. Only after we developed the 'Livermore Model' and worked it out in considerable detail did it become obvious that what we came up with was the right approach.
In a nutshell, the applications programming model of the Livermore Model posited that SMP parallelism would ultimately not scale indefinitely and one would have to bite the bullet and implement MPI parallelism within the Integrated Design Codes (IDC). We also had a major emphasis on doing everything in a completely standards-based, portable methodology with POSIX/Unix as the target environment. We decided against specialized libraries like STACKLIB for performance, but kept as many general-purpose, portable math libraries as were needed by the codes. Third, we assumed that the SMPs in clusters would evolve in time to become more powerful, feature-rich and, in particular, offer more cores. Thus, we focused on OpenMP and POSIX Pthreads for programming SMP parallelism. These code porting efforts were led by Dale Nielsen, A-Division code group leader, and Randy Christensen, B-Division code group leader. Most of the porting effort revolved around removing 'Crayisms' in the codes: artifacts of LTSS/NLTSS, Civic compiler extensions beyond Fortran77, I/O libraries, and dealing with new code control languages (we switched to Perl and later to Python). Adding MPI to the codes was initially problematic and error-prone because the programmers used MPI directly and sprinkled the calls throughout the code.

  19. Barrier-breaking performance for industrial problems on the CRAY C916

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Graffunder, S.K.

    1993-12-31

    Nine applications, including third-party codes, were submitted to the Gordon Bell Prize committee showing the CRAY C916 supercomputer providing record-breaking time to solution for industrial problems in several disciplines. Performance was obtained by balancing raw hardware speed; effective use of large, real, shared memory; compiler vectorization and autotasking; hand optimization; asynchronous I/O techniques; and new algorithms. The highest GFLOPS performance for the submissions was 11.1 GFLOPS out of a peak advertised performance of 16 GFLOPS for the CRAY C916 system. One program achieved a 15.45× speedup from the compiler with just two hand-inserted directives to scope variables properly for the mathematical library. New I/O techniques hide tens of gigabytes of I/O behind parallel computations. Finally, new iterative solver algorithms have demonstrated times to solution on 1 CPU up to 70 times faster than the best direct solvers.

  20. Diffuse-interface polycrystal plasticity: expressing grain boundaries as geometrically necessary dislocations

    NASA Astrophysics Data System (ADS)

    Admal, Nikhil Chandra; Po, Giacomo; Marian, Jaime

    2017-12-01

    The standard way of modeling plasticity in polycrystals is to use the crystal plasticity model for single crystals in each grain and to impose suitable traction and slip boundary conditions across grain boundaries. In this fashion, the system is modeled as a collection of boundary-value problems with matching boundary conditions. In this paper, we develop a diffuse-interface crystal plasticity model for polycrystalline materials that results in a single boundary-value problem with a single crystal as the reference configuration. Using a multiplicative decomposition of the deformation gradient into lattice and plastic parts, i.e. F(X,t) = F_L(X,t) F_P(X,t), an initial stress-free polycrystal is constructed by imposing F_L to be a piecewise-constant rotation field R_0(X) and F_P = R_0(X)^T, thereby having F(X,0) = I and zero elastic strain. This model serves as a precursor to higher-order crystal plasticity models with grain boundary energy and evolution.
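
    The stress-free initialization can be checked directly: for any rotation R_0, the product F_L F_P = R_0 R_0^T is the identity, and the lattice Green strain vanishes. A small numerical verification, with an arbitrary illustrative rotation:

        # Check that F = F_L F_P = R_0 R_0^T = I and the elastic strain is zero.
        import numpy as np

        theta = 0.7                     # arbitrary grain orientation angle
        R0 = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                       [np.sin(theta),  np.cos(theta), 0.0],
                       [0.0,            0.0,           1.0]])

        F_L, F_P = R0, R0.T
        F = F_L @ F_P
        E = 0.5 * (F_L.T @ F_L - np.eye(3))   # lattice Green strain

        print(np.allclose(F, np.eye(3)))      # True: F(X, 0) = I
        print(np.allclose(E, 0.0))            # True: zero elastic strain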

  1. Dynamic Stability of Structures: Application to Frames, Cylindrical Shells and Other Systems.

    DTIC Science & Technology

    1982-02-01


  2. Early outcomes of percutaneous pulmonary valve implantation using the Edwards SAPIEN XT transcatheter heart valve system.

    PubMed

    Haas, Nikolaus A; Carere, Ronald Giacomo; Kretschmar, Oliver; Horlick, Eric; Rodés-Cabau, Josep; de Wolf, Daniël; Gewillig, Marc; Mullen, Michael; Lehner, Anja; Deutsch, Cornelia; Bramlage, Peter; Ewert, Peter

    2018-01-01

    Patients with congenital or acquired heart defects affecting the pulmonary valve and right ventricular outflow tract (RVOT) commonly require multiple surgical interventions, resulting in significant morbidity. A less invasive alternative is percutaneous pulmonary valve implantation (PPVI). Though studies have previously reported the safety and efficacy of early-generation transcatheter heart valves (THVs), data on more recent devices are severely lacking. We performed a multinational, multicentre, retrospective, observational registry analysis of patients who underwent PPVI using the Edwards SAPIEN XT THV. Of the 46 patients that were enrolled, the majority had tetralogy of Fallot as the underlying diagnosis (58.7%), and stentless xenograft was the most common RVOT anatomy (34.8%). The procedural success rate was high (93.5%), with a low frequency of periprocedural complications and adverse events (6.5% and 10.9%, respectively). At 30 days post-procedure, NYHA class had improved significantly (90.6% were at NYHA I or II). The rate of moderate/severe pulmonary regurgitation had decreased from 76.1% at baseline to 5.0% at 30 days, and the calculated peak systolic gradient had decreased from 45.2 (SD ± 21.3) mmHg to 16.4 (SD ± 8.0) mmHg, with these values remaining low up to 2 years. The data suggest the efficacy and safety of the SAPIEN XT THV in PPVI in common anatomies in patients with conduits, as well as those with native pulmonary valves or transannular patches. Continued data collection is necessary to verify long-term findings. ClinicalTrials.gov identifier: NCT02302131. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. SEI Software Engineering Education Directory.

    DTIC Science & Technology

    1987-02-01

    (Abstract damaged by OCR.) The recoverable content is a directory excerpt listing software design and development courses based on the textbook Software Design and Development by Philip Gilbert, offered on systems including the CDC Cyber 170/750 and 170/760, DEC PDP 11/44, PRIME, AT&T 3B5, IBM PC, IBM XT, IBM RT, Apple Macintosh, and VAX 8300.

  4. Accuracy of the Garmin 920 XT HRM to perform HRV analysis.

    PubMed

    Cassirame, Johan; Vanhaesebrouck, Romain; Chevrolat, Simon; Mourot, Laurent

    2017-12-01

    Heart rate variability (HRV) analysis is widely used to investigate autonomic cardiac drive. This method requires periodogram measurement, which can be obtained by an electrocardiogram (ECG) or from a heart rate monitor (HRM), e.g. the Garmin 920 XT device. The purpose of this investigation was to assess the accuracy of RR time series measurements from a Garmin 920 XT HRM as compared to a standard ECG, and to verify whether the measurements thus obtained are suitable for HRV analysis. RR time series were collected simultaneously with an ECG (Powerlab system, AD Instruments, Castle Hill, Australia) and a Garmin 920 XT in 11 healthy subjects during three conditions, namely in the supine position, the standing position, and during moderate exercise. In a first step, we compared RR time series obtained with both tools using the Bland and Altman method to obtain the limits of agreement in all three conditions. In a second step, we compared the results of HRV analysis between the ECG RR time series and the Garmin 920 XT series. The results show that the accuracy of this system is in accordance with the literature in terms of the limits of agreement. In the supine position, the bias and limits of agreement were 0.01 (-2.24, +2.26) ms; in the standing position, -0.01 (-3.12, +3.11) ms; and during exercise, -0.01 (-4.43, +4.40) ms. Regarding HRV analysis, we did not find any difference in the supine position, but the standing and exercise conditions both showed small modifications.
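
    For readers unfamiliar with the Bland and Altman method, the computation behind the reported bias and limits of agreement is compact. The sketch below uses made-up RR values, not the study's data:

        # Bland-Altman bias and 95% limits of agreement for paired RR series.
        import numpy as np

        rr_ecg = np.array([812.0, 798.0, 805.0, 820.0, 790.0, 801.0])  # ms
        rr_hrm = np.array([813.5, 796.0, 806.0, 818.5, 791.0, 800.0])  # ms

        diff = rr_hrm - rr_ecg
        bias = diff.mean()
        sd = diff.std(ddof=1)
        low, high = bias - 1.96 * sd, bias + 1.96 * sd

        print(f"bias = {bias:+.2f} ms, "
              f"limits of agreement = [{low:+.2f}, {high:+.2f}] ms")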

  5. A Performance Evaluation of the Cray X1 for Scientific Applications

    NASA Technical Reports Server (NTRS)

    Oliker, Leonid; Biswas, Rupak; Borrill, Julian; Canning, Andrew; Carter, Jonathan; Djomehri, M. Jahed; Shan, Hongzhang; Skinner, David

    2004-01-01

    The last decade has witnessed a rapid proliferation of superscalar cache-based microprocessors to build high-end capability and capacity computers because of their generality, scalability, and cost effectiveness. However, the recent development of massively parallel vector systems is having a significant effect on the supercomputing landscape. In this paper, we compare the performance of the recently released Cray X1 vector system with that of the cacheless NEC SX-6 vector machine, and the superscalar cache-based IBM Power3 and Power4 architectures for scientific applications. Overall results demonstrate that the X1 is quite promising, but performance improvements are expected as the hardware, systems software, and numerical libraries mature. Code reengineering to effectively utilize the complex architecture may also lead to significant efficiency enhancements.

  6. CSM Testbed Development and Large-Scale Structural Applications

    NASA Technical Reports Server (NTRS)

    Knight, Norman F., Jr.; Gillian, R. E.; Mccleary, Susan L.; Lotts, C. G.; Poole, E. L.; Overman, A. L.; Macy, S. C.

    1989-01-01

    A research activity called Computational Structural Mechanics (CSM) conducted at the NASA Langley Research Center is described. This activity is developing advanced structural analysis and computational methods that exploit high-performance computers. Methods are developed in the framework of the CSM Testbed software system and applied to representative complex structural analysis problems from the aerospace industry. An overview of the CSM Testbed methods development environment is presented and some new numerical methods developed on a CRAY-2 are described. Selected application studies performed on the NAS CRAY-2 are also summarized.

  7. System Accuracy Evaluation of Four Systems for Self-Monitoring of Blood Glucose Following ISO 15197 Using a Glucose Oxidase and a Hexokinase-Based Comparison Method.

    PubMed

    Link, Manuela; Schmid, Christina; Pleus, Stefan; Baumstark, Annette; Rittmeyer, Delia; Haug, Cornelia; Freckmann, Guido

    2015-04-14

    The standard ISO (International Organization for Standardization) 15197 is widely accepted for the accuracy evaluation of systems for self-monitoring of blood glucose (SMBG). Accuracy evaluation was performed for 4 SMBG systems (Accu-Chek Aviva, ContourXT, GlucoCheck XL, GlucoMen LX PLUS) with 3 test strip lots each. To investigate a possible impact of the comparison method on system accuracy data, 2 different established methods were used. The evaluation was performed in a standardized manner following the test procedures described in ISO 15197:2003 (section 7.3). System accuracy was assessed by applying ISO 15197:2003 and, in addition, ISO 15197:2013 criteria (section 6.3.3). For each system, comparison measurements were performed with a glucose oxidase (YSI 2300 STAT Plus glucose analyzer) and a hexokinase (cobas c111) method. All 4 systems fulfilled the accuracy requirements of ISO 15197:2003 with the tested lots. The more stringent accuracy criteria of ISO 15197:2013 were fulfilled by 3 systems (Accu-Chek Aviva, ContourXT, GlucoMen LX PLUS) when compared to the manufacturer's comparison method and by 2 systems (Accu-Chek Aviva, ContourXT) when compared to the alternative comparison method. All systems showed lot-to-lot variability to a certain degree; 2 systems (Accu-Chek Aviva, ContourXT), however, showed only minimal differences in relative bias between the 3 evaluated lots. In this study, all 4 systems with the evaluated test strip lots complied with the accuracy criteria of ISO 15197:2003. Applying ISO 15197:2013 accuracy limits, differences in the accuracy of the tested systems were observed, demonstrating that the applied comparison method/system and the lot-to-lot variability can have a decisive influence on the accuracy data obtained for an SMBG system. © 2015 Diabetes Technology Society.

  8. An overview of the Progenika ID CORE XT: an automated genotyping platform based on a fluidic microarray system.

    PubMed

    Goldman, Mindy; Núria, Núria; Castilho, Lilian M

    2015-01-01

    Automated testing platforms facilitate the introduction of red cell genotyping of patients and blood donors. Fluidic microarray systems, such as Luminex XMAP (Austin, TX), are used in many clinical applications, including HLA and HPA typing. The Progenika ID CORE XT (Progenika Biopharma-Grifols, Bizkaia, Spain) uses this platform to analyze 29 polymorphisms determining 37 antigens in 10 blood group systems. Once DNA has been extracted, processing time is approximately 4 hours. The system is highly automated and includes integrated analysis software that produces a file and a report with genotype and predicted phenotype results.

  9. Accuracy Assessment for the Auxiliary Tracking System

    DTIC Science & Technology

    1991-09-01

    (Abstract damaged by OCR.) The recoverable content describes linearizing the tracking equations of the Auxiliary Tracking System (ATS) around a trial target position (X_T0, Y_T0, Z_T0), with the partial derivatives listed in Equations 3.17 through 3.19; the text cites a paper prepared for evaluation of the ATS design review (28 June 1990) and Anton, H., and Rorres, C., Elementary Linear Algebra.

  10. Zonal Rate Model for Axial and Radial Flow Membrane Chromatography. Part I: Knowledge Transfer Across Operating Conditions and Scales

    PubMed Central

    Ghosh, Pranay; Vahedipour, Kaveh; Lin, Min; Vogel, Jens H; Haynes, Charles A; von Lieres, Eric

    2013-01-01

    The zonal rate model (ZRM) has previously been applied for analyzing the performance of axial flow membrane chromatography capsules by independently determining the impacts of flow- and binding-related non-idealities on measured breakthrough curves. In the present study, the ZRM is extended to radial flow configurations, which are commonly used at larger scales. The axial flow XT5 capsule and the radial flow XT140 capsule from Pall are rigorously analyzed under binding and non-binding conditions with bovine serum albumin (BSA) as the test molecule. The binding data of this molecule are much better reproduced by the spreading model, which hypothesizes different binding orientations, than by the well-known Langmuir model. Moreover, a revised cleaning protocol with NaCl instead of NaOH, together with minimized storage time, was identified as most critical for quantitatively reproducing the measured breakthrough curves. The internal geometry of both capsules is visualized by magnetic resonance imaging (MRI). The flow in the external hold-up volumes of the XT140 capsule was found to be more homogeneous than in the previously studied XT5 capsule. An attempt at model-based scale-up was apparently impeded by irregular pleat structures in the used XT140 capsule, which might lead to local variations in the linear velocity through the membrane stack. However, the presented approach is universal and can be applied to different capsules. The ZRM is shown to potentially help save valuable material and time, as the experiments required for model calibration are much cheaper than the predicted large-scale experiment at binding conditions. Biotechnol. Bioeng. 2013; 110: 1129-1141. © 2012 Wiley Periodicals, Inc. PMID:23097218
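
    For reference, the Langmuir model that the spreading model outperforms here relates bound protein q to liquid-phase concentration c through a single adsorption equilibrium; in the usual notation (q_max the capacity, K the equilibrium constant; these symbols are assumed, since the abstract does not give its parametrization):

        q = \frac{q_{\max} K c}{1 + K c}

    The spreading model extends this picture by allowing more than one bound state or orientation, which is what the abstract credits for the better fit to the BSA data.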

  11. Molecular Cloning and Functional Characterization of Xenopus tropicalis Frog Transient Receptor Potential Vanilloid 1 Reveal Its Functional Evolution for Heat, Acid, and Capsaicin Sensitivities in Terrestrial Vertebrates*

    PubMed Central

    Ohkita, Masashi; Saito, Shigeru; Imagawa, Toshiaki; Takahashi, Kenji; Tominaga, Makoto; Ohta, Toshio

    2012-01-01

    The functional difference of thermosensitive transient receptor potential (TRP) channels in the evolutionary context has attracted attention, but thus far little information is available on the TRP vanilloid 1 (TRPV1) function of amphibians, which diverged earliest from terrestrial vertebrate lineages. In this study we cloned Xenopus tropicalis frog TRPV1 (xtTRPV1), and functional characterization was performed using HeLa cells heterologously expressing xtTRPV1 (xtTRPV1-HeLa) and dorsal root ganglion neurons isolated from X. tropicalis (xtDRG neurons) by measuring changes in the intracellular calcium concentration ([Ca2+]i). The channel activity was also observed in xtTRPV1-expressing Xenopus oocytes. Furthermore, we tested capsaicin- and heat-induced nocifensive behaviors of the frog X. tropicalis in vivo. At the amino acid level, xtTRPV1 displays ∼60% sequence identity to other terrestrial vertebrate TRPV1 orthologues. Capsaicin induced [Ca2+]i increases in xtTRPV1-HeLa and xtDRG neurons and evoked nocifensive behavior in X. tropicalis. However, its sensitivity was extremely low compared with mammalian orthologues. Low extracellular pH and heat activated xtTRPV1-HeLa and xtDRG neurons. Heat also evoked nocifensive behavior. In oocytes expressing xtTRPV1, inward currents were elicited by heat and low extracellular pH. Mutagenesis analysis revealed that two amino acids (tyrosine 523 and alanine 561) were responsible for the low sensitivity to capsaicin. Taken together, our results indicate that xtTRPV1 functions as a polymodal receptor similar to its mammalian orthologues. The present study demonstrates that TRPV1 functions as a heat- and acid-sensitive channel in the ancestor of terrestrial vertebrates. Because it is possible to examine vanilloid and heat sensitivities in vitro and in vivo, X. tropicalis could be the ideal experimental lower vertebrate animal for the study of TRPV1 function. PMID:22130664

  12. Effect of changes to the manufacturer application techniques on the shear bond strength of simplified dental adhesives.

    PubMed

    Chasqueira, Ana Filipa; Arantes-Oliveira, Sofia; Portugal, Jaime

    2013-09-13

    The aim of this work was to assess the shear bond strength (SBS) between a composite resin and dentin promoted by two dental adhesive systems (the one-step self-etching adhesive Easy Bond [3M ESPE] and the two-step etch-and-rinse adhesive Scotchbond 1XT [3M ESPE]) with different application protocols (per the manufacturer's instructions (control group); with one to four additional adhesive layers; or with an extra hydrophobic adhesive layer). Proximal enamel was removed from ninety caries-free human molars to obtain two dentin discs per tooth, which were randomly assigned to twelve experimental groups (n=15). After the adhesion protocol, the composite resin (Filtek Z250 [3M ESPE]) was applied. Specimens were mounted in the Watanabe test device and the shear bond test was performed in a universal testing machine with a crosshead speed of 5 mm/min. Data were analyzed with ANOVA followed by Student-Newman-Keuls tests (P<0.05). The highest SBS mean value was attained with the Easy Bond three-layers group (41.23±2.71 MPa) and the lowest with Scotchbond 1XT per manufacturer's instructions (27.15±2.99 MPa). Easy Bond yielded higher SBS values than Scotchbond 1XT. There were no statistically significant differences (P>0.05) between the application protocols tested, except for the three- and four-layers groups, which presented higher SBS results compared to the manufacturer's-instructions groups (P<0.05). No statistically significant differences were detected between the three- and four-layers groups (P≥0.05). It is recommended to apply three adhesive layers when using the Easy Bond and Scotchbond 1XT adhesives, since this improves SBS values without consuming much time.

  13. ‘tripleint_cc’: A program for 2-centre variational leptonic Coulomb potential matrix elements using Hylleraas-type trial functions, with a performance optimization study

    NASA Astrophysics Data System (ADS)

    Plummer, M.; Armour, E. A. G.; Todd, A. C.; Franklin, C. P.; Cooper, J. N.

    2009-12-01

    We present a program used to calculate intricate three-particle integrals for variational calculations of solutions to the leptonic Schrödinger equation with two nuclear centres, in which inter-leptonic distances (electron-electron and positron-electron) are included directly in the trial functions. The program has been used so far in calculations of He-H¯ interactions and positron-H2 scattering; however, the precisely defined integrals are applicable to other situations. We include a summary discussion of how the program has been optimized from a 'legacy'-type code to a more modern high-performance code with a performance improvement factor of up to 1000. Program summary. Program title: tripleint.cc. Catalogue identifier: AEEV_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEV_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 12 829. No. of bytes in distributed program, including test data, etc.: 91 798. Distribution format: tar.gz. Programming language: Fortran 95 (fixed format). Computer: Modern PC (tested on AMD processor) [1], IBM Power5 [2], Cray XT4 [3], or similar. Operating system: Red Hat Linux [1], IBM AIX [2], UNICOS [3]. Has the code been vectorized or parallelized?: Serial (multi-core shared memory may be needed for some large jobs). RAM: Dependent on parameter sizes and option to use intermediate I/O. Estimates for practical use: 0.5-2 GBytes (with intermediate I/O); 1-4 GBytes (all-memory: the preferred option). Classification: 2.4, 2.6, 2.7, 2.9, 16.5, 16.10, 20. Nature of problem: The 'tripleint.cc' code evaluates three-particle integrals needed in certain variational (in particular: Rayleigh-Ritz and generalized-Kohn) matrix elements for solution of the Schrödinger equation with two fixed centres (the solutions may then be used in subsequent dynamic nuclear calculations). Specifically, the integrals are defined by Eq. (16) in the main text and contain terms proportional to r_ij r_ik / r_jk (i ≠ j, i ≠ k, j ≠ k), with r_ij the distance between leptons i and j. The article also briefly describes the performance optimizations used to increase the speed of evaluation of the integrals enough to allow detailed testing and mapping of the effect of varying non-linear parameters in the variational trial functions. Solution method: Each integral is solved using prolate spheroidal coordinates and series expansions (with cut-offs) of the many-lepton expressions. 1-d integrals and sub-integrals are solved analytically by various means (the program automatically chooses the most accurate of the available methods for each set of parameters and function arguments), while two of the three integrations over the prolate spheroidal coordinates λ are carried out numerically. Many similar integrals with identical non-linear variational parameters may be calculated with one call of the code. Restrictions: There are limits to the number of points for the numerical integrations, to the cut-off variable itaumax for the many-lepton series expansions, and to the maximum powers of Slater-like input functions. For runs near the limit of the cut-off variable and with certain small-magnitude values of variational non-linear parameters, the code can require large amounts of memory (an option using some intermediate I/O is included to offset this).
Unusual features: In addition to the program, we also present a summary description of the techniques and ideology used to optimize the code, together with accuracy tests and indications of performance improvement. Running time: The test runs take 1-15 minutes on HPCx [2] as indicated in Section 5 of the main text. A practical run with 729 integrals, 40 quadrature points per dimension and itaumax = 8 took 150 minutes on a PC (e.g., [1]): a similar run with 'medium' accuracy, e.g. for parameter optimization (see Section 2 of the main text), with 30 points per dimension and itaumax = 6 took 35 minutes. References:PC: Memory: 2.72 GB, CPU: AMD Opteron 246 dual-core, 2×2 GHz, OS: GNU/Linux, kernel: Linux 2.6.9-34.0.2.ELsmp. HPCx, IBM eServer 575 running IBM AIX, http://www.hpcx.ac.uk/ (visited May 2009). HECToR, CRAY XT4 running UNICOS/lc, http://www.hector.ac.uk/ (visited May 2009).

  14. Performance analysis of three dimensional integral equation computations on a massively parallel computer. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Logan, Terry G.

    1994-01-01

    The purpose of this study is to investigate the performance of integral equation computations using a numerical source field-panel method in a massively parallel processing (MPP) environment. A comparative study of the computational performance of the MPP CM-5 computer and the conventional Cray Y-MP supercomputer for a three-dimensional flow problem is made. A serial FORTRAN code is converted into a parallel CM-FORTRAN code. Performance results are obtained on the CM-5 with 32, 64, and 128 nodes, along with those on a Cray Y-MP with a single processor. The comparison of the performance indicates that the parallel CM-FORTRAN code nearly matches or outperforms the equivalent serial FORTRAN code in some cases.

  15. In-orbit results of Delfi-n3Xt: Lessons learned and move forward

    NASA Astrophysics Data System (ADS)

    Guo, Jian; Bouwmeester, Jasper; Gill, Eberhard

    2016-04-01

    This paper provides an update of the Delfi nanosatellite programme of the Delft University of Technology (TU Delft), with a focus on the recent in-orbit results of the second TU Delft satellite Delfi-n3Xt. In addition to the educational objective that has been reached with more than 80 students involved in the project, most of the technological objectives of Delfi-n3Xt have also been fulfilled with successful in-orbit demonstrations of payloads and platform. Among these demonstrations, four are highlighted in this paper, including a solid cool gas micropropulsion system, a new type of solar cell, a more robust Command and Data Handling Subsystem (CDHS), and a highly integrated Attitude Determination and Control Subsystem (ADCS) that performs three-axis active control using reaction wheels. Through the development of Delfi-n3Xt, significant experiences and lessons have been learned, which motivated a further step towards DelFFi, the third Delfi CubeSat mission, to demonstrate autonomous formation flying using two CubeSats named Delta and Phi. A brief update of the DelFFi mission is also provided.

  16. Effect of randomness in logistic maps

    NASA Astrophysics Data System (ADS)

    Khaleque, Abdul; Sen, Parongama

    2015-01-01

    We study a random logistic map x_{t+1} = a_t x_t (1 - x_t), where the a_t are bounded random variables (q_1 ≤ a_t ≤ q_2) drawn independently from a distribution. x_t does not show any regular behavior in time. We find that x_t shows fully ergodic behavior when the maximum allowed value of a_t is 4. However, x_t averaged over different realizations reaches a fixed point. For 1 ≤ a_t ≤ 4, the system shows nonchaotic behavior, and the Lyapunov exponent is strongly dependent on the asymmetry of the distribution from which a_t is drawn. Chaotic behavior is seen to occur beyond a threshold value of q_1 (q_2) when q_2 (q_1) is varied. The most striking result is that the random map is chaotic even when q_2 is less than the threshold value 3.5699⋯ at which chaos occurs in the nonrandom map. We also employ a different method in which a different set of random variables is used for the evolution of two initially identical x values; here the chaotic regime exists for all q_1 ≠ q_2 values.
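
    A direct simulation makes the reported behavior easy to probe. The sketch below iterates the random map and estimates the Lyapunov exponent as the time average of log|a_t (1 - 2 x_t)|, the log-derivative of the map along the trajectory; the parameter choices are illustrative.

        # Random logistic map x_{t+1} = a_t x_t (1 - x_t), a_t ~ U[q1, q2];
        # the sign of the Lyapunov exponent separates chaotic from nonchaotic.
        import numpy as np

        rng = np.random.default_rng(2)

        def lyapunov(q1, q2, steps=200_000, burn=1_000):
            x, acc = 0.5, 0.0
            for t in range(steps + burn):
                a = rng.uniform(q1, q2)
                deriv = a * (1.0 - 2.0 * x)      # f'(x_t) before the update
                x = a * x * (1.0 - x)
                if t >= burn:
                    acc += np.log(abs(deriv) + 1e-300)
            return acc / steps

        print(lyapunov(1.0, 4.0))   # sign indicates chaotic (>0) or not (<0)
        print(lyapunov(1.0, 3.2))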

  17. A Performance Evaluation of the Cray X1 for Scientific Applications

    NASA Technical Reports Server (NTRS)

    Oliker, Leonid; Biswas, Rupak; Borrill, Julian; Canning, Andrew; Carter, Jonathan; Djomehri, M. Jahed; Shan, Hongzhang; Skinner, David

    2003-01-01

    The last decade has witnessed a rapid proliferation of superscalar cache-based microprocessors to build high-end capability and capacity computers because of their generality, scalability, and cost effectiveness. However, the recent development of massively parallel vector systems is having a significant effect on the supercomputing landscape. In this paper, we compare the performance of the recently-released Cray X1 vector system with that of the cacheless NEC SX-6 vector machine, and the superscalar cache-based IBM Power3 and Power4 architectures for scientific applications. Overall results demonstrate that the X1 is quite promising, but performance improvements are expected as the hardware, systems software, and numerical libraries mature. Code reengineering to effectively utilize the complex architecture may also lead to significant efficiency enhancements.

  18. Effect of moisture on dental enamel in the interaction of two orthodontic bonding systems.

    PubMed

    Bertoz, André Pinheiro de Magalhães; de Oliveira, Derly Tescaro Narcizo; Gimenez, Carla Maria Melleiro; Briso, André Luiz Fraga; Bertoz, Francisco Antonio; Santos, Eduardo César Almada

    2013-01-01

    The purpose of this study was to assess by means of scanning electron microscopy (SEM) the remaining adhesive interface after debonding orthodontic attachments bonded to bovine teeth with the use of hydrophilic and hydrophobic primers under different dental substrate moisture conditions. Twenty mandibular incisors were divided into four groups (n = 5). In Group I, bracket bonding was performed with Transbond MIP hydrophilic primer and Transbond XT adhesive paste applied to moist substrate, and in Group II a bonding system comprising Transbond XT hydrophobic primer and adhesive paste was applied to moist substrate. Brackets were bonded to the specimens in Groups III and IV using the same adhesive systems, but on dry dental enamel. The images were qualitatively assessed by SEM. The absence of moisture in etched enamel enabled better interaction between bonding materials and the adamantine structure. The hydrophobic primer achieved the worst micromechanical interlocking results when applied to a moist dental structure, whereas the hydrophilic system proved versatile, yielding acceptable results in moist conditions and excellent interaction in the absence of contamination. The authors assert that the best condition for the application of primers to dental enamel occurs in the absence of moisture.

  19. Effect of a low-viscosity adhesive resin on the adhesion of metal brackets to enamel etched with hydrochloric or phosphoric acid combined with conventional adhesives.

    PubMed

    Yetkiner, Enver; Ozcan, Mutlu; Wegehaupt, Florian Just; Wiegand, Annette; Eden, Ece; Attin, Thomas

    2013-12-01

    This study investigated the effect of a low-viscosity adhesive resin (Icon) applied after either hydrochloric (HCl) or phosphoric acid (H3PO4) on the adhesion of metal brackets to enamel. Failure types were analyzed. The crowns of bovine incisors (N = 20) were sectioned mesio-distally and inciso-gingivally, then randomly assigned to 4 groups according to the following protocols to receive mandibular incisor brackets: 1) H3PO4 (37%)+TransbondXT (3M UNITEK); 2) H3PO4 (37%)+Icon+TransbondXT; 3) HCl (15%)+Icon (DMG)+TransbondXT 4) HCl (15%)+Icon+Heliobond (Ivoclar Vivadent)+TransbondXT. Specimens were stored in distilled water at 37°C for 24 h and thermocycled (5000x, 5°C to 55°C). The shear bond strength (SBS) test was performed using a universal testing machine (1 mm/min). Failure types were classified according to the Adhesive Remnant Index (ARI). Contact angles of adhesive resins were measured (n = 5 per adhesive) on ceramic surfaces. No significant difference in SBS was observed, implying no difference between combinations of adhesive resins and etching agents (p = 0.712; ANOVA). The Weibull distribution presented significantly lower Weibull modulus (m) of group 3 (m = 2.97) compared to other groups (m = 5.2 to 6.6) (p < 0.05). The mean SBS results (MPa) in descending order were as follows: group 4 (46.7 ± 10.3) > group 1 (45.4 ± 7.9) > group 2 (44.2 ± 10.6) > group 3 (42.6 ± 15.5). While in groups 1, 3, and 4 exclusively an ARI score of 0 (no adhesive left on tooth) was observed, in group 2, only one specimen demonstrated score 1 (less than half of adhesive left on tooth). Contact angle measurements were as follows: Icon (25.86 ± 3.81 degrees), Heliobond (31.98 ± 3.17 degrees), TransbondXT (35 ± 2.21 degrees). Icon can be safely used with the conventional adhesives tested on surfaces etched with either HCl or H3PO4.

  20. NAS Parallel Benchmark Results 11-96. 1.0

    NASA Technical Reports Server (NTRS)

    Bailey, David H.; Bailey, David; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    The NAS Parallel Benchmarks have been developed at NASA Ames Research Center to study the performance of parallel supercomputers. The eight benchmark problems are specified in a "pencil and paper" fashion. In other words, the complete details of the problem to be solved are given in a technical document, and except for a few restrictions, benchmarkers are free to select the language constructs and implementation techniques best suited for a particular system. These results represent the best results that have been reported to us by the vendors for the specific systems listed. In this report, we present new NPB (Version 1.0) performance results for the following systems: DEC Alpha Server 8400 5/440, Fujitsu VPP Series (VX, VPP300, and VPP700), HP/Convex Exemplar SPP2000, IBM RS/6000 SP P2SC node (120 MHz), NEC SX-4/32, SGI/CRAY T3E, SGI Origin200, and SGI Origin2000. We also report High Performance Fortran (HPF) based NPB results for IBM SP2 Wide Nodes, HP/Convex Exemplar SPP2000, and SGI/CRAY T3D. These results have been submitted by Applied Parallel Research (APR) and Portland Group Inc. (PGI). We also present sustained performance per dollar for Class B LU, SP and BT benchmarks.

  1. Performance Analysis of the NAS Y-MP Workload

    NASA Technical Reports Server (NTRS)

    Bergeron, Robert J.; Kutler, Paul (Technical Monitor)

    1997-01-01

    This paper describes the performance characteristics of the computational workloads on the NAS Cray Y-MP machines, a Y-MP 832 and later a Y-MP 8128. Hardware measurements indicated that the Y-MP workload performance matured over time, ultimately sustaining an average throughput of 0.8 GFLOPS and a vector operation fraction of 87%. The measurements also revealed an operation rate exceeding 1 per clock period, a well-balanced architecture featuring strong utilization of the vector functional units, and an efficient memory organization. Introduction of the larger-memory 8128 increased throughput by allowing more efficient utilization of the CPUs. Throughput also depended on the metering of the batch queues; low-idle Saturday workloads required a buffer of small jobs to prevent memory starvation of the CPU. UNICOS required about 7% of total CPU time to service the 832 workloads; this overhead decreased to 5% for the 8128 workloads. While most of the system time went to servicing I/O requests, efficient scheduling prevented excessive idle time due to I/O wait. System measurements disclosed no obvious bottlenecks in the response of the machine and UNICOS to the workloads. In most cases, Cray-provided software tools were quite sufficient for measuring the performance of both the machine and the operating system.

  2. Polishing and toothbrushing alters the surface roughness and gloss of composite resins.

    PubMed

    Kamonkhantikul, Krid; Arksornnukit, Mansuang; Takahashi, Hidekazu; Kanehira, Masafumi; Finger, Werner J

    2014-01-01

    This study aimed to investigate the surface roughness and gloss of composite resins after using two polishing systems and after toothbrushing. Six composite resins (Durafill VS, Filtek Z250, Filtek Z350 XT, Kalore, Venus Diamond, and Venus Pearl) were evaluated after polishing with two polishing systems (Sof-Lex, Venus Supra) and after toothbrushing up to 40,000 cycles. Surface roughness (Ra) and gloss were determined for each composite resin group (n=6) after silicon carbide paper grinding, polishing, and toothbrushing. Two-way ANOVA indicated significant differences in both Ra and gloss between measuring stages for the composite resins tested, except Venus Pearl, which showed significant differences only in gloss. After polishing, Filtek Z350 XT, Kalore, and Venus Diamond showed significant increases in Ra, while all composite resin groups except Filtek Z350 XT and Durafill VS with Sof-Lex showed increases in gloss. After toothbrushing, all composite resins demonstrated increases in Ra and decreases in gloss.

  3. ATLAS and LHC computing on CRAY

    NASA Astrophysics Data System (ADS)

    Sciacca, F. G.; Haug, S.; ATLAS Collaboration

    2017-10-01

    Access to and exploitation of large-scale computing resources, such as those offered by general-purpose HPC centres, is one important measure for ATLAS and the other Large Hadron Collider experiments in order to meet the challenge posed by the full exploitation of the future data within the constraints of flat budgets. We report on the effort of moving the Swiss WLCG T2 computing, serving ATLAS, CMS and LHCb, from a dedicated cluster to the large Cray systems at the Swiss National Supercomputing Centre CSCS. These systems not only offer very efficient hardware, cooling and highly competent operators, but also have large backfill potential due to their size and multidisciplinary usage, and potential gains from economies of scale. Technical solutions, performance, expected return and future plans are discussed.

  4. Multitasking domain decomposition fast Poisson solvers on the Cray Y-MP

    NASA Technical Reports Server (NTRS)

    Chan, Tony F.; Fatoohi, Rod A.

    1990-01-01

    The results of multitasking implementation of a domain decomposition fast Poisson solver on eight processors of the Cray Y-MP are presented. The object of this research is to study the performance of domain decomposition methods on a Cray supercomputer and to analyze the performance of different multitasking techniques using highly parallel algorithms. Two implementations of multitasking are considered: macrotasking (parallelism at the subroutine level) and microtasking (parallelism at the do-loop level). A conventional FFT-based fast Poisson solver is also multitasked. The results of different implementations are compared and analyzed. A speedup of over 7.4 on the Cray Y-MP running in a dedicated environment is achieved for all cases.
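
    Read through Amdahl's law (an assumption on our part; the paper itself does not report a serial fraction), the quoted speedup implies the solver is almost entirely parallel:

        E = \frac{S}{p} = \frac{7.4}{8} \approx 0.93, \qquad
        S = \frac{1}{(1-f) + f/p}
        \;\Rightarrow\;
        f = \frac{p}{p-1}\left(1 - \frac{1}{S}\right)
          = \frac{8}{7}\left(1 - \frac{1}{7.4}\right) \approx 0.99

    So a speedup of 7.4 on 8 processors corresponds to roughly 93% parallel efficiency and, under the Amdahl model, a parallel fraction near 99%.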

  5. INS3D - NUMERICAL SOLUTION OF THE INCOMPRESSIBLE NAVIER-STOKES EQUATIONS IN THREE-DIMENSIONAL GENERALIZED CURVILINEAR COORDINATES (CRAY VERSION)

    NASA Technical Reports Server (NTRS)

    Rogers, S. E.

    1994-01-01

    INS3D computes steady-state solutions to the incompressible Navier-Stokes equations. The INS3D approach utilizes pseudo-compressibility combined with an approximate factorization scheme. This computational fluid dynamics (CFD) code has been verified on problems such as flow through a channel, flow over a backward-facing step and flow over a circular cylinder. Three-dimensional cases include flow over an ogive cylinder, flow through a rectangular duct, wind tunnel inlet flow, cylinder-wall juncture flow and flow through multiple posts mounted between two plates. INS3D uses a pseudo-compressibility approach in which a time derivative of pressure is added to the continuity equation, which together with the momentum equations forms a set of four equations with pressure and velocity as the dependent variables. The equations' coordinates are transformed for general three-dimensional applications. The equations are advanced in time by the implicit, non-iterative, approximately-factored, finite-difference scheme of Beam and Warming. The numerical stability of the scheme depends on the use of higher-order smoothing terms to damp out higher-frequency oscillations caused by second-order central differencing. The artificial compressibility introduces pressure (sound) waves of finite speed (whereas the speed of sound would be infinite in an incompressible fluid). As the solution converges, these pressure waves die out, causing the derivative of pressure with respect to time to approach zero. Thus, continuity is satisfied for the incompressible fluid in the steady state. Computational efficiency is achieved using a diagonal algorithm. A block tri-diagonal option is also available. When a steady-state solution is reached, the modified continuity equation will satisfy the divergence-free velocity field condition. INS3D is capable of handling several different types of boundaries encountered in numerical simulations, including solid-surface, inflow and outflow, and far-field boundaries. Three machine versions of INS3D are available. INS3D for the CRAY is written in CRAY FORTRAN for execution on a CRAY X-MP under COS, INS3D for the IBM is written in FORTRAN 77 for execution on an IBM 3090 under the VM or MVS operating system, and INS3D for DEC RISC-based systems is written in RISC FORTRAN for execution on a DEC workstation running RISC ULTRIX 3.1 or later. The CRAY version has a central memory requirement of 730279 words. The central memory requirement for the IBM version is 150Mb. The memory requirement for the DEC RISC ULTRIX version is 3Mb of main memory. INS3D was developed in 1987. The port to the IBM was done in 1990. The port to the DECstation 3100 was done in 1991. CRAY is a registered trademark of Cray Research Inc. IBM is a registered trademark of International Business Machines. DEC, DECstation, and ULTRIX are trademarks of the Digital Equipment Corporation.
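
    The description of pseudo-compressibility above corresponds to the standard formulation in which an artificial pressure time derivative augments the continuity equation (the exact form used in INS3D is not given in the abstract; β below is the usual artificial compressibility parameter and τ the pseudo-time):

        \frac{1}{\beta}\,\frac{\partial p}{\partial \tau} + \nabla \cdot \mathbf{u} = 0

    As the iteration converges, the pseudo-time derivative of pressure vanishes and the divergence-free condition on the velocity field is recovered, exactly as the abstract describes.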

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mubarak, Misbah; Ross, Robert B.

    This technical report describes the experiments performed to validate the MPI performance measurements reported by the CODES dragonfly network simulation with the Theta Cray XC system at the Argonne Leadership Computing Facility (ALCF).

  7. Mirror Technology Development for the International X-ray Observatory Mission

    DTIC Science & Technology

    2010-06-06

    (Slide residue; no abstract survives.) The recoverable content concerns the IXO mirror assembly, extensible optical bench, and focal plane assembly (an ESA/JAXA/NASA collaboration, presented by Will Zhang at Mirror Tech Days); it contrasts state-of-the-art X-ray optics with the IXO requirement of 3 m2 of effective area at 5 arcsec resolution, and lists optics suppliers including QED Technologies (Rochester, NY), Rodriguez Precision Optics (Gonzales, LA), Dallas Optical Systems, Inc. (Rockwall, TX), and RAPT Industries, Inc. (Fremont, CA).

  8. Performance of a plasma fluid code on the Intel parallel computers

    NASA Technical Reports Server (NTRS)

    Lynch, V. E.; Carreras, B. A.; Drake, J. B.; Leboeuf, J. N.; Liewer, P.

    1992-01-01

    One approach to improving the real-time efficiency of plasma turbulence calculations is to use a parallel algorithm. A parallel algorithm for plasma turbulence calculations was tested on the Intel iPSC/860 hypercube and the Touchstone Delta machine. Using the 128 processors of the Intel iPSC/860 hypercube, a factor of 5 improvement over a single-processor CRAY-2 is obtained. For the Touchstone Delta machine, the corresponding improvement factor is 16. For plasma edge turbulence calculations, an extrapolation of the present results to the Intel Sigma machine gives an improvement factor close to 64 over the single-processor CRAY-2.

  9. Finite Element Flow Code Optimization on the Cray T3D,

    DTIC Science & Technology

    1997-04-01

    At the present time, the system is configured with 512 processing elements and 32.8 Gigabytes of memory. Through a gift of time from MSCI and other arrangements, the AHPCRC has limited access to this system.

  10. Long-time behavior for suspension bridge equations with time delay

    NASA Astrophysics Data System (ADS)

    Park, Sun-Hye

    2018-04-01

    In this paper, we consider suspension bridge equations with time delay of the form u_{tt}(x,t) + Δ^2 u(x,t) + k u^+(x,t) + a_0 u_t(x,t) + a_1 u_t(x, t-τ) + f(u(x,t)) = g(x). Many researchers have studied the well-posedness, decay rates of energy, and existence of attractors for suspension bridge equations without delay effects; as far as we know, however, there is no work on suspension bridge equations with time delay, and there are not many studies on attractors for other delayed systems. We therefore first establish well-posedness for suspension bridge equations with time delay, and then show the existence of global attractors and the finite dimensionality of the attractors by constructing energy functionals that are related to the norm of the phase space of our problem.

  11. Performance Assessment of the Spare Parts for the Activation of Relocated Systems (SPARES) Forecasting Model

    DTIC Science & Technology

    1991-09-01

    constant data into the gaining base’s computer records. Among the data elements to be loaded, the 1XT434 image contains the level detail effective date...the mission support effective date, and the PBR override (19:19-203). In conjunction with the 1XT434, the Mission Change Parameter Image (Constant...the gaining base (19:19-208). The level detail effective date establishes the date the MCDDFR and MCDDR "are considered by the requirements computation

  12. Theoretical research program to study chemical reactions in AOTV bow shock tubes

    NASA Technical Reports Server (NTRS)

    Taylor, P.

    1986-01-01

    Progress in the development of computational methods for the characterization of chemical reactions in aerobraking orbit transfer vehicle (AOTV) propulsive flows is reported. Two main areas of code development were undertaken: (1) the implementation of CASSCF (complete active space self-consistent field) and SCF (self-consistent field) analytical first derivatives on the CRAY X-MP; and (2) the installation of the complete set of electronic structure codes on the CRAY 2. In the area of application calculations the main effort was devoted to performing full configuration-interaction calculations and using these results to benchmark other methods. Preprints describing some of the systems studied are included.

  13. Optimal Full Information Synthesis for Flexible Structures Implemented on Cray Supercomputers

    NASA Technical Reports Server (NTRS)

    Lind, Rick; Balas, Gary J.

    1995-01-01

    This paper considers an algorithm for synthesis of optimal controllers for full-information feedback. The synthesis procedure reduces to a single linear matrix inequality (LMI) which may be solved via established convex optimization algorithms. The computational cost of the optimization is investigated, and it is demonstrated that the problem dimension and the corresponding matrices can become large for practical engineering problems, making the process impractical on standard workstations for large-order systems. A flexible structure is presented as a design example. Control synthesis requires several days on a workstation but may be solved in a reasonable amount of time using a Cray supercomputer.
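    As an illustration of the kind of computation involved, the sketch below solves a small Lyapunov-type LMI with CVXPY. It is a stand-in under stated assumptions, not the paper's full-information synthesis LMI; the matrix A, the size n, and the tolerance eps are invented for the example.

```python
# Illustrative only: a small Lyapunov-stability LMI solved with CVXPY.
# The full-information synthesis LMI in the paper is larger, but has the
# same structure: find a symmetric matrix subject to semidefinite
# constraints.
import cvxpy as cp
import numpy as np

n = 4
rng = np.random.default_rng(0)
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))  # a stable test matrix

P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [
    P >> eps * np.eye(n),                 # P positive definite
    A.T @ P + P @ A << -eps * np.eye(n),  # Lyapunov inequality
]
prob = cp.Problem(cp.Minimize(cp.trace(P)), constraints)
prob.solve()
print("LMI feasible:", prob.status)
```

    Since the unknown is an n x n symmetric matrix, the number of decision variables grows as n(n+1)/2, consistent with the paper's observation that problem dimension becomes the bottleneck for large-order systems.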

  14. Effects of Different Combinations of Er:YAG Laser-Adhesives on Enamel Demineralization and Bracket Bond Strength

    PubMed Central

    Nalçacı, Ruhi; Üşümez, Serdar; Malkoç, Sıddık

    2016-01-01

    Abstract Objective: The purpose of this study was to investigate the demineralization around brackets and shear bond strength (SBS) of brackets bonded to Er:YAG laser-irradiated enamel at different power settings with various adhesive system combinations. Methods: A total of 108 premolar teeth were used in this study. Teeth were assigned to three groups according to the etching procedure; each group was then divided into three subgroups based on the application of different adhesive systems. There were a total of nine groups as follows. Group 1: Acid + Transbond XT Primer; group 2: Er:YAG (100 mJ, 10 Hz) etching + Transbond XT Primer; group 3: Er:YAG (200 mJ, 10 Hz) etching + Transbond XT Primer; group 4: Transbond Plus self-etching primer (SEP); group 5: Er:YAG (100 mJ, 10 Hz) etching + Transbond Plus SEP; group 6: Er:YAG (200 mJ, 10 Hz) etching + Transbond Plus SEP; group 7: Clearfil Protect Bond; group 8: Er:YAG (100 mJ, 10 Hz) etching + Clearfil Protect Bond; group 9: Er:YAG (200 mJ, 10 Hz) etching + Clearfil Protect Bond. Brackets were bonded with Transbond XT Adhesive Paste in all groups. Teeth to be evaluated for demineralization and SBS were exposed to pH and thermal cycling, respectively. Then, demineralization samples were scanned with micro-CT to determine lesion depth values. For the SBS test, a universal testing machine was used, and the adhesive remnant index was scored after debonding. Data were analyzed statistically. Results: No significant differences were found among the lesion depth values of the various groups, except for G7 and G8, in which the lowest values were recorded. The lowest SBS values were in G7, whereas the highest were in G9. The differences between the other groups were not significant. Conclusions: The Er:YAG laser did not have a positive effect on prevention of enamel demineralization. When a two-step self-etch adhesive is preferred for bonding brackets, laser etching at 1 W (100 mJ, 10 Hz) is suggested to improve the SBS of brackets. PMID:26987047

  15. Effects of Different Combinations of Er:YAG Laser-Adhesives on Enamel Demineralization and Bracket Bond Strength.

    PubMed

    Çokakoğlu, Serpil; Nalçacı, Ruhi; Üşümez, Serdar; Malkoç, Sıddık

    2016-04-01

    The purpose of this study was to investigate the demineralization around brackets and shear bond strength (SBS) of brackets bonded to Er:YAG laser-irradiated enamel at different power settings with various adhesive system combinations. A total of 108 premolar teeth were used in this study. Teeth were assigned to three groups according to the etching procedure; each group was then divided into three subgroups based on the application of different adhesive systems. There were a total of nine groups as follows. Group 1: Acid + Transbond XT Primer; group 2: Er:YAG (100 mJ, 10 Hz) etching + Transbond XT Primer; group 3: Er:YAG (200 mJ, 10 Hz) etching + Transbond XT Primer; group 4: Transbond Plus self-etching primer (SEP); group 5: Er:YAG (100 mJ, 10 Hz) etching + Transbond Plus SEP; group 6: Er:YAG (200 mJ, 10 Hz) etching + Transbond Plus SEP; group 7: Clearfil Protect Bond; group 8: Er:YAG (100 mJ, 10 Hz) etching + Clearfil Protect Bond; group 9: Er:YAG (200 mJ, 10 Hz) etching + Clearfil Protect Bond. Brackets were bonded with Transbond XT Adhesive Paste in all groups. Teeth to be evaluated for demineralization and SBS were exposed to pH and thermal cycling, respectively. Then, demineralization samples were scanned with micro-CT to determine lesion depth values. For the SBS test, a universal testing machine was used, and the adhesive remnant index was scored after debonding. Data were analyzed statistically. No significant differences were found among the lesion depth values of the various groups, except for G7 and G8, in which the lowest values were recorded. The lowest SBS values were in G7, whereas the highest were in G9. The differences between the other groups were not significant. The Er:YAG laser did not have a positive effect on prevention of enamel demineralization. When a two-step self-etch adhesive is preferred for bonding brackets, laser etching at 1 W (100 mJ, 10 Hz) is suggested to improve the SBS of brackets.

  16. FAST: A multi-processed environment for visualization of computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Bancroft, Gordon V.; Merritt, Fergus J.; Plessel, Todd C.; Kelaita, Paul G.; Mccabe, R. Kevin

    1991-01-01

    Three-dimensional, unsteady, multi-zoned fluid dynamics simulations over full scale aircraft are typical of the problems being investigated at NASA Ames' Numerical Aerodynamic Simulation (NAS) facility on CRAY-2 and CRAY Y-MP supercomputers. With multiple processor workstations available in the 10-30 Mflop range, we feel that these new developments in scientific computing warrant a new approach to the design and implementation of analysis tools. These larger, more complex problems create a need for new visualization techniques not possible with the existing software or systems available as of this writing. The visualization techniques will change as the supercomputing environment, and hence the scientific methods employed, evolves even further. The Flow Analysis Software Toolkit (FAST), an implementation of a software system for fluid mechanics analysis, is discussed.

  17. Distributed Finite Element Analysis Using a Transputer Network

    NASA Technical Reports Server (NTRS)

    Watson, James; Favenesi, James; Danial, Albert; Tombrello, Joseph; Yang, Dabby; Reynolds, Brian; Turrentine, Ronald; Shephard, Mark; Baehmann, Peggy

    1989-01-01

    The principal objective of this research effort was to demonstrate the extraordinarily cost effective acceleration of finite element structural analysis problems using a transputer-based parallel processing network. This objective was accomplished in the form of a commercially viable parallel processing workstation. The workstation is a desktop size, low-maintenance computing unit capable of supercomputer performance yet costs two orders of magnitude less. To achieve the principal research objective, a transputer based structural analysis workstation termed XPFEM was implemented with linear static structural analysis capabilities resembling commercially available NASTRAN. Finite element model files, generated using the on-line preprocessing module or external preprocessing packages, are downloaded to a network of 32 transputers for accelerated solution. The system currently executes at about one third Cray X-MP24 speed but additional acceleration appears likely. For the NASA selected demonstration problem of a Space Shuttle main engine turbine blade model with about 1500 nodes and 4500 independent degrees of freedom, the Cray X-MP24 required 23.9 seconds to obtain a solution while the transputer network, operated from an IBM PC-AT compatible host computer, required 71.7 seconds. Consequently, the $80,000 transputer network demonstrated a cost-performance ratio about 60 times better than the $15,000,000 Cray X-MP24 system.

  18. Chemical calculations on Cray computers

    NASA Technical Reports Server (NTRS)

    Taylor, Peter R.; Bauschlicher, Charles W., Jr.; Schwenke, David W.

    1989-01-01

    The influence of recent developments in supercomputing on computational chemistry is discussed with particular reference to Cray computers and their pipelined vector/limited parallel architectures. After reviewing Cray hardware and software, the performance of different elementary program structures is examined, and effective methods for improving program performance are outlined. The computational strategies appropriate for obtaining optimum performance in applications to quantum chemistry and dynamics are discussed. Finally, some discussion is given of new developments and future hardware and software improvements.

  19. Gigaflop performance on a CRAY-2: Multitasking a computational fluid dynamics application

    NASA Technical Reports Server (NTRS)

    Tennille, Geoffrey M.; Overman, Andrea L.; Lambiotte, Jules J.; Streett, Craig L.

    1991-01-01

    The methodology is described for converting a large, long-running applications code that executed on a single processor of a CRAY-2 supercomputer to a version that executed efficiently on multiple processors. Although the conversion of every application is different, a discussion of the types of modification used to achieve gigaflop performance is included to assist others in the parallelization of applications for CRAY computers, especially those that were developed for other computers. An existing application, from the discipline of computational fluid dynamics, that had utilized over 2000 hrs of CPU time on CRAY-2 during the previous year was chosen as a test case to study the effectiveness of multitasking on a CRAY-2. The nature of dominant calculations within the application indicated that a sustained computational rate of 1 billion floating-point operations per second, or 1 gigaflop, might be achieved. The code was first analyzed and modified for optimal performance on a single processor in a batch environment. After optimal performance on a single CPU was achieved, the code was modified to use multiple processors in a dedicated environment. The results of these two efforts were merged into a single code that had a sustained computational rate of over 1 gigaflop on a CRAY-2. Timings and analysis of performance are given for both single- and multiple-processor runs.

  20. Corrosion behavior of aluminum-alumina composites in aerated 3.5 percent chloride solution

    NASA Astrophysics Data System (ADS)

    Acevedo Hurtado, Paul Omar

    Aluminum based metal matrix composites are finding many applications in engineering. Of these, Al-Al2O3 composites appear to have promise in a number of defense applications because of their mechanical properties. However, their corrosion behavior remains suspect, especially in marine environments. While efforts are being made to improve the corrosion resistance of Al-Al2O3 composites, the mechanism of corrosion is not well known. In this study, the corrosion behavior of powder metallurgy processed Al-Cu alloy reinforced with 10, 15, 20 and 25 vol.% Al2O3 particles (XT 1129, XT 2009, XT 2048, XT 2031) was evaluated in aerated 3.5% NaCl solution using microstructural and electrochemical measurements. AA1100-O and AA2024-T4 monolithic alloys were also studied for comparison purposes. The composites and unreinforced alloys were subjected to potentiodynamic polarization and Electrochemical Impedance Spectroscopy (EIS) testing. Addition of 25 vol.% Al2O3 to the base alloys was found to increase their corrosion resistance considerably. Microstructural studies revealed the presence of intermetallic Al2Cu particles in these composites that appeared to play an important role in the observations. Pitting potentials for these composites were near corrosion potential values, and repassivation potentials were below the corresponding corrosion potentials, indicating that these materials begin to corrode spontaneously as soon as they come in contact with the 3.5% NaCl solution. EIS measurements indicate the occurrence of adsorption/diffusion phenomena at the interface of the composites which ultimately initiate localized or pitting corrosion. Polarization resistance values were extracted from the EIS data for all the materials tested. Electrically equivalent circuits are proposed to describe and substantiate the corrosive processes occurring in these Al-Al2O3 composite materials.

  1. 2DRMP: A suite of two-dimensional R-matrix propagation codes

    NASA Astrophysics Data System (ADS)

    Scott, N. S.; Scott, M. P.; Burke, P. G.; Stitt, T.; Faro-Maza, V.; Denis, C.; Maniopoulou, A.

    2009-12-01

    The R-matrix method has proved to be a remarkably stable, robust and efficient technique for solving the close-coupling equations that arise in electron and photon collisions with atoms, ions and molecules. During the last thirty-four years a series of related R-matrix program packages have been published periodically in CPC. These packages are primarily concerned with low-energy scattering where the incident energy is insufficient to ionise the target. In this paper we describe 2DRMP, a suite of two-dimensional R-matrix propagation programs aimed at creating virtual experiments on high performance and grid architectures to enable the study of electron scattering from H-like atoms and ions at intermediate energies.
    Program summary
    Program title: 2DRMP
    Catalogue identifier: AEEA_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEA_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 196 717
    No. of bytes in distributed program, including test data, etc.: 3 819 727
    Distribution format: tar.gz
    Programming language: Fortran 95, MPI
    Computer: Tested on CRAY XT4 [1]; IBM eServer 575 [2]; Itanium II cluster [3]
    Operating system: Tested on UNICOS/lc [1]; IBM AIX [2]; Red Hat Linux Enterprise AS [3]
    Has the code been vectorised or parallelised?: Yes. 16 cores were used for the small test run
    Classification: 2.4
    External routines: BLAS, LAPACK, PBLAS, ScaLAPACK
    Subprograms used: ADAZ_v1_1
    Nature of problem: 2DRMP is a suite of programs aimed at creating virtual experiments on high performance architectures to enable the study of electron scattering from H-like atoms and ions at intermediate energies.
    Solution method: Two-dimensional R-matrix propagation theory. The (r1, r2) space of the internal region is subdivided into a number of subregions. Local R-matrices are constructed within each subregion and used to propagate a global R-matrix, ℜ, across the internal region (a schematic sketch of this propagation step follows the summary). On the boundary of the internal region ℜ is transformed onto the IERM target state basis. Thus, the two-dimensional R-matrix propagation technique transforms an intractable problem into a series of tractable problems, enabling the internal region to be extended far beyond what is possible with the standard one-sector codes. A distinctive feature of the method is that both electrons are treated identically and the R-matrix basis states are constructed to allow for both electrons to be in the continuum. The subregion size is flexible and can be adjusted to accommodate the number of cores available.
    Restrictions: The implementation is currently restricted to electron scattering from H-like atoms and ions.
    Additional comments: The programs have been designed to operate on serial computers and to exploit the distributed memory parallelism found on tightly coupled high performance clusters and supercomputers. 2DRMP has been systematically and comprehensively documented using ROBODoc [4], an API documentation tool that works by extracting specially formatted headers from the program source code and writing them to documentation files.
    Running time: The wall clock running time for the small test run using 16 cores and performed on [3] is as follows: bp (7 s); rint2 (34 s); newrd (32 s); diag (21 s); amps (11 s); prop (24 s).
    References:
    [1] HECToR, CRAY XT4 running UNICOS/lc, http://www.hector.ac.uk/, accessed 22 July, 2009.
    [2] HPCx, IBM eServer 575 running IBM AIX, http://www.hpcx.ac.uk/, accessed 22 July, 2009.
    [3] HP Cluster, Itanium II cluster running Red Hat Linux Enterprise AS, Queen's University Belfast, http://www.qub.ac.uk/directorates/InformationServices/Research/HighPerformanceComputing/Services/Hardware/HPResearch/, accessed 22 July, 2009.
    [4] Automating Software Documentation with ROBODoc, http://www.xs4all.nl/~rfsber/Robo/, accessed 22 July, 2009.
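    A schematic sketch of the propagation step described under "Solution method", reduced to one dimension for clarity. The block partition, the sign convention, and the random test data are assumptions for illustration (conventions vary between R-matrix propagation formulations), and 2DRMP propagates over subregions of the (r1, r2) plane rather than 1-D sectors:

```python
# Schematic one-dimensional R-matrix propagation across sectors.
# Each sector supplies a local R-matrix in block form
#   [[r_II, r_IO],
#    [r_OI, r_OO]]
# coupling its inner (I) and outer (O) boundaries; the global R-matrix
# is carried outward one sector at a time.
import numpy as np

def propagate(R_in, r_II, r_IO, r_OI, r_OO):
    """Return the global R-matrix on the sector's outer boundary."""
    return r_OO - r_OI @ np.linalg.solve(r_II + R_in, r_IO)

nchan = 8
rng = np.random.default_rng(1)
R = np.zeros((nchan, nchan))              # R-matrix at the innermost boundary
for _ in range(5):                        # loop over sectors
    m = rng.standard_normal((2 * nchan, 2 * nchan))
    local = m @ m.T + np.eye(2 * nchan)   # symmetric positive definite test blocks
    r_II, r_IO = local[:nchan, :nchan], local[:nchan, nchan:]
    r_OI, r_OO = local[nchan:, :nchan], local[nchan:, nchan:]
    R = propagate(R, r_II, r_IO, r_OI, r_OO)
print(R.shape)
```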

  2. Research in Parallel Algorithms and Software for Computational Aerosciences

    NASA Technical Reports Server (NTRS)

    Domel, Neal D.

    1996-01-01

    Phase I is complete for the development of a Computational Fluid Dynamics parallel code with automatic grid generation and adaptation for the Euler analysis of flow over complex geometries. SPLITFLOW, an unstructured Cartesian grid code developed at Lockheed Martin Tactical Aircraft Systems, has been modified for a distributed memory/massively parallel computing environment. The parallel code is operational on an SGI network, Cray J90 and C90 vector machines, SGI Power Challenge, and Cray T3D and IBM SP2 massively parallel machines. Parallel Virtual Machine (PVM) is the message passing protocol for portability to various architectures. A domain decomposition technique was developed which enforces dynamic load balancing to improve solution speed and memory requirements. A host/node algorithm distributes the tasks. The solver parallelizes very well, and scales with the number of processors. Partially parallelized and non-parallelized tasks consume most of the wall clock time in a very fine grain environment. Timing comparisons on a Cray C90 demonstrate that Parallel SPLITFLOW runs 2.4 times faster on 8 processors than its non-parallel counterpart autotasked over 8 processors.
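    A minimal sketch of the host/node, pull-based work-queue pattern that underlies dynamic load balancing of this kind. SPLITFLOW used PVM message passing across heterogeneous machines, whereas this stand-in uses Python multiprocessing, and the per-block workload is a placeholder, not SPLITFLOW's Euler solver:

```python
# Host/node work queue: the host enqueues grid blocks; each node pulls
# the next block as soon as it finishes the last one, so faster nodes
# naturally take on more work (dynamic load balancing).
import multiprocessing as mp

def solve_block(block_id):
    # Placeholder for relaxing one grid block; cost varies per block,
    # which is why a pull-based queue balances load dynamically.
    return block_id, sum(i * i for i in range(10_000 * (1 + block_id % 4)))

def node(tasks, results):
    for block_id in iter(tasks.get, None):   # None is the shutdown signal
        results.put(solve_block(block_id))

if __name__ == "__main__":
    tasks, results = mp.Queue(), mp.Queue()
    workers = [mp.Process(target=node, args=(tasks, results)) for _ in range(4)]
    for w in workers:
        w.start()
    for block_id in range(32):   # host enqueues all grid blocks
        tasks.put(block_id)
    for _ in workers:            # one shutdown signal per node
        tasks.put(None)
    done = [results.get() for _ in range(32)]
    for w in workers:
        w.join()
    print(f"completed {len(done)} blocks")
```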

  3. Research in Parallel Algorithms and Software for Computational Aerosciences

    NASA Technical Reports Server (NTRS)

    Domel, Neal D.

    1996-01-01

    Phase 1 is complete for the development of a computational fluid dynamics (CFD) parallel code with automatic grid generation and adaptation for the Euler analysis of flow over complex geometries. SPLITFLOW, an unstructured Cartesian grid code developed at Lockheed Martin Tactical Aircraft Systems, has been modified for a distributed memory/massively parallel computing environment. The parallel code is operational on an SGI network, Cray J90 and C90 vector machines, SGI Power Challenge, and Cray T3D and IBM SP2 massively parallel machines. Parallel Virtual Machine (PVM) is the message passing protocol for portability to various architectures. A domain decomposition technique was developed which enforces dynamic load balancing to improve solution speed and memory requirements. A host/node algorithm distributes the tasks. The solver parallelizes very well, and scales with the number of processors. Partially parallelized and non-parallelized tasks consume most of the wall clock time in a very fine grain environment. Timing comparisons on a Cray C90 demonstrate that Parallel SPLITFLOW runs 2.4 times faster on 8 processors than its non-parallel counterpart autotasked over 8 processors.

  4. Comparative evaluation of microleakage of lingual retainer wires bonded with three different lingual retainer composites: an in vitro study.

    PubMed

    Nimbalkar-Patil, Smita; Vaz, Anna; Patil, Pravinkumar G

    2014-11-01

    To evaluate microleakage when two types of retainer wires were bonded with two light-cured and one self-cured lingual retainer composites. A total of 120 freshly extracted human mandibular incisor teeth were collected and separated into six subgroups of 20 teeth each. Two different wires, a 0.036 inch hard round stainless steel (HRSS) wire sandblasted at the ends and a 0.0175 inch multistranded wire, were bonded onto the lingual surfaces of the incisors with three different types of composite resins from 3M: Concise Orthodontic (self-cure), Transbond XT (light-cure) and Transbond LR (light-cure). Specimens were then sealed with a nail varnish, stained with 0.5% basic fuchsine for 24 hours, sectioned, examined under a stereomicroscope, and scored for microleakage at the enamel-composite and wire-composite interfaces. Statistical analysis was performed with Kruskal-Wallis and Mann-Whitney U-tests. For the HRSS wire, at the enamel-composite interface, microleakage was least with Transbond LR, followed by Concise Orthodontic, and greatest with Transbond XT (p<0.05). At the wire-composite interface, too, microleakage was lowest with Transbond LR.

  5. Definition of propulsion system for V/STOL research and technology aircraft

    NASA Technical Reports Server (NTRS)

    1977-01-01

    Wind tunnel test support, aircraft contractor support, a propulsion system computer card deck, preliminary design studies, and a propulsion system development plan are reported. The propulsion system consists of two lift/cruise turbofan engines, one turboshaft engine and one lift fan connected together by shafting into a combiner gearbox. Distortion parameter levels from 40 x 80 test data were within the established XT701-AD-700 limits. The three engine-three fan system card deck calculates either vertical or conventional flight performance, installed or uninstalled. Design study results for XT701 engine modifications, bevel gear cross shaft location, fixed and tilt fan frames, and propulsion system controls are described. Optional water-alcohol injection increased total net thrust by 10.3% on a 90°F day. Engines have sufficient turbine life for 500 hours of the RTA duty cycle.

  6. Biocompatibility of orthodontic adhesives in rat subcutaneous tissue

    PubMed Central

    dos SANTOS, Rogério Lacerda; PITHON, Matheus Melo; FERNANDES, Alline Birra Nolasco; CABRAL, Márcia Grillo; RUELLAS, Antônio Carlos de Oliveira

    2010-01-01

    Objective The objective of the present study was to verify the hypothesis that no difference in biocompatibility exists between different orthodontic adhesives. Material and Methods Thirty male Wistar rats were used in this study and divided into five groups (n=6): Group 1 (control, distilled water), Group 2 (Concise), Group 3 (Xeno III), Group 4 (Transbond XT), and Group 5 (Transbond Plus Self-Etching Primer). Two cavities were made in the subcutaneous dorsum of each animal to place a polyvinyl sponge soaked with 2 drops of the respective adhesive in each surgical locus. Two animals of each group were sacrificed after 7, 15, and 30 days, and their tissues were analyzed using an optical microscope. Results At day 7, Groups 3 (Xeno III) and 4 (Transbond XT) showed intense mono- and polymorphonuclear inflammatory infiltrate with no differences between them, whereas Groups 1 (control) and 2 (Concise) showed moderate mononuclear inflammatory infiltrate. At day 15, severe inflammation was observed in Group 4 (Transbond XT) compared to the other groups. At day 30, the same group showed a more expressive mononuclear inflammatory infiltrate compared to the other groups. Conclusion Among the orthodontic adhesives analyzed, it may be concluded that Transbond XT exhibited the worst biocompatibility. However, one cannot interpret the specificity of the data generated in in vivo animal models as a human response. PMID:21085807

  7. Opening Remarks: SciDAC 2007

    NASA Astrophysics Data System (ADS)

    Strayer, Michael

    2007-09-01

    Good morning. Welcome to Boston, the home of the Red Sox, Celtics and Bruins, baked beans, tea parties, Robert Parker, and SciDAC 2007. A year ago I stood before you to share the legacy of the first SciDAC program and identify the challenges that we must address on the road to petascale computing, a road E. E. Cummings described as `. . . never traveled, gladly beyond any experience.' Today, I want to explore the preparations for the rapidly approaching extreme scale (X-scale) generation. These preparations are the first step propelling us along the road of burgeoning scientific discovery enabled by the application of X-scale computing. We look to petascale computing and beyond to open up a world of discovery that cuts across scientific fields and leads us to a greater understanding of not only our world, but our universe. As part of the President's American Competitiveness Initiative, the ASCR Office has been preparing a ten year vision for computing. As part of this planning, LBNL, together with ORNL and ANL, hosted three town hall meetings on Simulation and Modeling at the Exascale for Energy, Ecological Sustainability and Global Security (E3). The proposed E3 initiative is organized around four programmatic themes: engaging our top scientists, engineers, computer scientists and applied mathematicians; investing in pioneering large-scale science; developing scalable analysis algorithms and storage architectures to accelerate discovery; and accelerating the build-out and future development of the DOE open computing facilities. It is clear that we have only just started down the path to extreme scale computing. Plan to attend Thursday's session on the out-briefing and discussion of these meetings. The road to the petascale has been at best rocky. In FY07, the continuing resolution provided 12% less money for Advanced Scientific Computing than either the President, the Senate, or the House. As a consequence, many of you had to absorb a no-cost extension for your SciDAC work. I am pleased that the President's FY08 budget restores the funding for SciDAC. Quoting from the Advanced Scientific Computing Research description in the House Energy and Water Development Appropriations Bill for FY08, "Perhaps no other area of research at the Department is so critical to sustaining U.S. leadership in science and technology, revolutionizing the way science is done and improving research productivity." As a society we need to revolutionize our approaches to energy, environmental and global security challenges. As we go forward along the road to the X-scale generation, the use of computation will continue to be a critical tool, along with theory and experiment, in understanding the behavior of the fundamental components of nature as well as for fundamental discovery and exploration of the behavior of complex systems. The foundation to overcome these societal challenges will build from the experiences and knowledge gained as you, members of our SciDAC research teams, work together to attack problems at the tera- and petascale. If SciDAC is viewed as an experiment for revolutionizing scientific methodology, then a strategic goal of the ASCR program must be to broaden the intellectual base prepared to address the challenges of the new X-scale generation of computing. We must focus our computational science experiences gained over the past five years on the opportunities introduced with extreme scale computing. Our facilities are on a path to provide the resources needed to undertake the first part of our journey.
    Using the newly upgraded 119 teraflop Cray XT system at the Leadership Computing Facility, SciDAC research teams have in three days performed a 100-year study of the time evolution of the atmospheric CO2 concentration originating from the land surface. The simulation of the El Nino/Southern Oscillation which was part of this study has been characterized as `the most impressive new result in ten years'. SciDAC teams have also gained new insight into the behavior of superheated ionic gas in the ITER reactor as a result of an AORSA run on 22,500 processors that achieved over 87 trillion calculations per second (87 teraflops), which is 74% of the system's theoretical peak. Tomorrow, Argonne and IBM will announce that the first IBM Blue Gene/P, a 100 teraflop system, will be shipped to the Argonne Leadership Computing Facility later this fiscal year. By the end of FY2007, ASCR high performance and leadership computing resources will include the 114 teraflop IBM Blue Gene/P, a 102 teraflop Cray XT4 at NERSC, and a 119 teraflop Cray XT system at Oak Ridge. Before ringing in the New Year, Oak Ridge will upgrade to 250 teraflops with the replacement of the dual core processors with quad core processors, Argonne will upgrade to between 250-500 teraflops, and next year a petascale Cray Baker system is scheduled for delivery at Oak Ridge. The multidisciplinary teams in our SciDAC Centers for Enabling Technologies and our SciDAC Institutes must continue to work with our Scientific Application teams to overcome the barriers that prevent effective use of these new systems. These challenges include: the need for new algorithms as well as operating system and runtime software and tools which scale to parallel systems composed of hundreds of thousands of processors; program development environments and tools which scale effectively and provide ease of use for developers and scientific end users; and visualization and data management systems that support moving, storing, analyzing, manipulating and visualizing multi-petabytes of scientific data and objects. The SciDAC Centers, located primarily at our DOE national laboratories, will take the lead in ensuring that critical computer science and applied mathematics issues are addressed in a timely and comprehensive fashion, and will address issues associated with the research software lifecycle. In contrast, the SciDAC Institutes, which are university-led centers of excellence, will have more flexibility to pursue new research topics through a range of research collaborations. The Institutes will also work to broaden the intellectual and researcher base, conducting short courses and summer schools to take advantage of new high performance computing capabilities. The SciDAC Outreach Center at Lawrence Berkeley National Laboratory complements the outreach efforts of the SciDAC Institutes. The Outreach Center is our clearinghouse for SciDAC activities and resources and will communicate with the high performance computing community in part to understand their needs for workshops, summer schools and institutes. SciDAC is not ASCR's only effort to broaden the computational science community needed to meet the challenges of the new X-scale generation. I hope that you were able to attend the Computational Science Graduate Fellowship poster session last night. ASCR developed the fellowship in 1991 to meet the nation's growing need for scientists and technology professionals with advanced computer skills. CSGF, now jointly funded between ASCR and NNSA, is more than a traditional academic fellowship.
    It has provided more than 200 of the best and brightest graduate students with guidance, support and community in preparing them as computational scientists. Today CSGF alumni are bringing their diverse top-level skills and knowledge to research teams at DOE laboratories and in industries such as Procter & Gamble, Lockheed Martin and Intel. At universities they are working to train the next generation of computational scientists. To build on this success, we intend to develop a wholly new Early Career Principal Investigator's (ECPI) program. Our objective is to stimulate academic research in scientific areas within ASCR's purview, especially among faculty in the early stages of their academic careers. Last February, we lost Ken Kennedy, one of the leading lights of our community. As we move forward into the extreme computing generation, his vision and insight will be greatly missed. In memory of Ken Kennedy, we shall designate the ECPI grants to beginning faculty in Computer Science as the Ken Kennedy Fellowship. Watch the ASCR website for more information about ECPI and other early career programs in the computational sciences. We look to you, our scientists, researchers, and visionaries to take X-scale computing and use it to explode scientific discovery in your fields. We at SciDAC will work to ensure that this tool is the sharpest, most precise and most efficient instrument to carve away the unknown and reveal the most exciting secrets and stimulating scientific discoveries of our time. The partnership between research and computing is the marriage that will spur greater discovery, and as Spenser said to Susan in Robert Parker's novel `Sudden Mischief', `We stick together long enough, and we may get as smart as hell'. Michael Strayer

  8. Software Technology for Adaptable, Reliable Systems (STARS) (User Manual). Ada Command Environment (ACE) Version 8.0 Sun OS Implementation

    DTIC Science & Technology

    1990-10-29

    The manual excerpt describes Ada packages that map Xt toolkit type names onto the equivalent type names in the basic X library: Intrinsics (item 37) contains the type declarations common to all Xt toolkit routines, and Widget-Package (item 38) together with the package RenamedXlibTypes establishes the connection between type names used by Xt routines and the equivalent type names in the basic X library. Fragments of constant declarations (Memory-Size, MinInt as Integer'First, MaxInt as Integer'Last, Max-Digits, MaxMan) also appear in the excerpt.

  9. TRASYS - THERMAL RADIATION ANALYZER SYSTEM (CRAY VERSION WITH NASADIG)

    NASA Technical Reports Server (NTRS)

    Anderson, G. E.

    1994-01-01

    The Thermal Radiation Analyzer System, TRASYS, is a computer software system with generalized capability to solve the radiation related aspects of thermal analysis problems. TRASYS computes the total thermal radiation environment for a spacecraft in orbit. The software calculates internode radiation interchange data as well as incident and absorbed heat rate data originating from environmental radiant heat sources. TRASYS provides data of both types in a format directly usable by such thermal analyzer programs as SINDA/FLUINT (available from COSMIC, program number MSC-21528). One primary feature of TRASYS is that it allows users to write their own driver programs to organize and direct the preprocessor and processor library routines in solving specific thermal radiation problems. The preprocessor first reads and converts the user's geometry input data into the form used by the processor library routines. Then, the preprocessor accepts the user's driving logic, written in the TRASYS modified FORTRAN language. In many cases, the user has a choice of routines to solve a given problem. Users may also provide their own routines where desirable. In particular, the user may write output routines to provide for an interface between TRASYS and any thermal analyzer program using the R-C network concept. Input to the TRASYS program consists of Options and Edit data, Model data, and Logic Flow and Operations data. Options and Edit data provide for basic program control and user edit capability. The Model data describe the problem in terms of geometry and other properties. This information includes surface geometry data, documentation data, nodal data, block coordinate system data, form factor data, and flux data. Logic Flow and Operations data house the user's driver logic, including the sequence of subroutine calls and the subroutine library. Output from TRASYS consists of two basic types of data: internode radiation interchange data, and incident and absorbed heat rate data. The flexible structure of TRASYS allows considerable freedom in the definition and choice of solution method for a thermal radiation problem. The program's flexible structure has also allowed TRASYS to retain the same basic input structure as the authors update it in order to keep up with changing requirements. Among its other important features are the following: 1) up to 3200 node problem size capability with shadowing by intervening opaque or semi-transparent surfaces; 2) choice of diffuse, specular, or diffuse/specular radiant interchange solutions; 3) a restart capability that minimizes recomputing; 4) macroinstructions that automatically provide the executive logic for orbit generation that optimizes the use of previously completed computations; 5) a time variable geometry package that provides automatic pointing of the various parts of an articulated spacecraft and an automatic look-back feature that eliminates redundant form factor calculations; 6) capability to specify submodel names to identify sets of surfaces or components as an entity; and 7) subroutines to perform functions which save and recall the internodal and/or space form factors in subsequent steps for nodes with fixed geometry during a variable geometry run. There are two machine versions of TRASYS v27: a DEC VAX version and a Cray UNICOS version. Both versions require installation of the NASADIG library (MSC-21801 for DEC VAX or COS-10049 for CRAY), which is available from COSMIC either separately or bundled with TRASYS. 
The NASADIG (NASA Device Independent Graphics Library) plot package provides a pictorial representation of input geometry, orbital/orientation parameters, and heating rate output as a function of time. NASADIG supports Tektronix terminals. The CRAY version of TRASYS v27 is written in FORTRAN 77 for batch or interactive execution and has been implemented on CRAY X-MP and CRAY Y-MP series computers running UNICOS. The standard distribution medium for MSC-21959 (CRAY version without NASADIG) is a 1600 BPI 9-track magnetic tape in UNIX tar format. The standard distribution medium for COS-10040 (CRAY version with NASADIG) is a set of two 6250 BPI 9-track magnetic tapes in UNIX tar format. Alternate distribution media and formats are available upon request. The DEC VAX version of TRASYS v27 is written in FORTRAN 77 for batch execution (only the plotting driver program is interactive) and has been implemented on a DEC VAX 8650 computer under VMS. Since the source codes for MSC-21030 and COS-10026 are in VAX/VMS text library files and DEC Command Language files, COSMIC will only provide these programs in the following formats: MSC-21030, TRASYS (DEC VAX version without NASADIG) is available on a 1600 BPI 9-track magnetic tape in VAX BACKUP format (standard distribution medium) or in VAX BACKUP format on a TK50 tape cartridge; COS-10026, TRASYS (DEC VAX version with NASADIG), is available in VAX BACKUP format on a set of three 6250 BPI 9-track magnetic tapes (standard distribution medium) or a set of three TK50 tape cartridges in VAX BACKUP format. TRASYS was last updated in 1993.

  10. New tools using the hardware performance monitor to help users tune programs on the Cray X-MP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Engert, D.E.; Rudsinski, L.; Doak, J.

    1991-09-25

    The performance of a Cray system is highly dependent on the tuning techniques used by individuals on their codes. Many of our users were not taking advantage of the tuning tools that allow them to monitor their own programs by using the Hardware Performance Monitor (HPM). We therefore modified UNICOS to collect HPM data for all processes and to report Mflop ratings based on users, programs, and time used. Our tuning efforts are now being focused on the users and programs that have the best potential for performance improvements. These modifications and some of the more striking performance improvements are described.
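    A hedged sketch of the reporting side of such a facility: given per-process accounting records of floating-point operation counts and CPU time (the record layout below is hypothetical, not the UNICOS/HPM format), aggregate Mflop ratings by user and program to rank tuning candidates:

```python
# Aggregate per-process floating-point counts and CPU seconds into
# per-(user, program) Mflop ratings, mirroring the kind of report
# described above. The sample records are made up for illustration.
from collections import defaultdict

records = [
    # (user, program, flops, cpu_seconds)
    ("alice", "mhd3d",  4.2e11, 3600.0),
    ("alice", "mhd3d",  3.9e11, 3300.0),
    ("bob",   "qcdsim", 8.0e10, 2400.0),
]

totals = defaultdict(lambda: [0.0, 0.0])
for user, program, flops, secs in records:
    totals[(user, program)][0] += flops
    totals[(user, program)][1] += secs

for (user, program), (flops, secs) in sorted(totals.items()):
    print(f"{user:8s} {program:10s} {flops / secs / 1e6:8.1f} Mflops "
          f"over {secs:7.0f} CPU s")
```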

  11. Deploying Darter A Cray XC30 System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fahey, Mark R; Budiardja, Reuben D; Crosby, Lonnie D

    The University of Tennessee, Knoxville acquired a Cray XC30 supercomputer, called Darter, with a peak performance of 248.9 Teraflops. Darter was deployed in late March of 2013 with a very aggressive production timeline - the system was deployed, accepted, and placed into production in only 2 weeks. The Spring Experiment for the Center for Analysis and Prediction of Storms (CAPS) largely drove the accelerated timeline, as the experiment was scheduled to start in mid-April. The Consortium for Advanced Simulation of Light Water Reactors (CASL) project also needed access and was able to meet their tight deadlines on the newly acquired XC30. Darter's accelerated deployment and operations schedule resulted in substantial scientific impacts within the research community as well as immediate real-world impacts such as early severe tornado warnings.

  12. Chromosome Banding in Amphibia. XXXVI. Multimorphic Sex Chromosomes and an Enigmatic Sex Determination in Eleutherodactylus johnstonei (Anura, Eleutherodactylidae).

    PubMed

    Schmid, Michael; Steinlein, Claus

    2018-01-01

    A detailed cytogenetic study on the leaf litter frog Eleutherodactylus johnstonei from 14 different Caribbean islands and the mainlands of Venezuela and Guyana revealed the existence of multimorphic XY♂/XX♀ sex chromosomes 14. Their male sex determination and development depends either on the presence of 2 telocentric chromosomes 14 (XtYt), or on 1 submetacentric chromosome 14 (Xsm) plus 1 telocentric chromosome 14 (Yt), or on the presence of 2 submetacentric chromosomes 14 (XsmYsm). The female sex determination and development requires either the presence of 2 telocentric chromosomes 14 (XtXt) or 2 submetacentric chromosomes 14 (XsmXsm). In all individuals analyzed, the sex chromosomes 14 carry a prominent nucleolus organizer region in their long arms. An explanation is given for the origin of the (XtYt)♂, (XsmYt)♂, (XsmYsm)♂, (XtXt)♀, and (XsmXsm)♀ in the different populations of E. johnstonei. Furthermore, the present study gives detailed data on the chromosome banding patterns, in situ hybridization experiments, and the genome size of E. johnstonei. © 2018 S. Karger AG, Basel.

  13. Optic Nerve Sheath Tethering in Adduction Occurs in Esotropia and Hypertropia, But Not in Exotropia

    PubMed Central

    Suh, Soh Youn; Clark, Robert A.; Demer, Joseph L.

    2018-01-01

    Purpose Repetitive strain to the optic nerve (ON) due to tethering in adduction has been recently proposed as an intraocular pressure-independent mechanism of optic neuropathy in primary open-angle glaucoma. Since strabismus may alter adduction, we investigated whether gaze-related ON straightening and associated globe translation differ in horizontal and vertical strabismus. Methods High-resolution orbital magnetic resonance imaging was obtained in 2-mm thick quasi-coronal planes using surface coils in 25 subjects (49 orbits) with esotropia (ET, 19 ± 3.6Δ SEM), 11 (15 orbits) with exotropia (XT, 33.7 ± 7.3Δ), 7 (12 orbits) with hypertropia (HT, 14.6 ± 3.2Δ), and 31 normal controls (62 orbits) in target-controlled central gaze, and in maximum attainable abduction and adduction. Area centroids were used to determine ON path sinuosity and globe positions. Results Adduction angles achieved in ET (30.6° ± 0.9°) and HT (27.2° ± 2.3°) did not significantly differ from normal (28.3° ± 0.7°), but significantly less adduction was achieved in XT (19.0° ± 2.5°, P = 0.005). ON sheath tethering in adduction occurred in ET and HT similarly to normal, but did not in XT. The globe translated significantly less than normal, nasally in adduction in XT and temporally in abduction in ET and HT (P < 0.02, for all). Globe retraction did not occur during abduction or adduction in any group. Conclusions Similar to normal subjects, the ON and sheath become tethered without globe retraction in ET and HT. In XT, adduction tethering does not occur, possibly due to limited adduction angle. Thus, therapeutic limitation of adduction could be considered as a possible treatment for ON sheath tethering.

  14. Bacterial colonization of the implant-abutment interface of conical connection with an internal octagon: an in vitro study using real-time PCR.

    PubMed

    Baj, A; Beltramini, G A; Bolzoni, A; Cura, F; Palmieri, A; Scarano, A; Ottria, L; Giannì, A B

    2017-01-01

    Bacterial leakage at the implant-abutment connection of a two-piece implant system is considered the main cause of peri-implantitis. Prevention of bacterial leakage at the implant-abutment connection is mandatory for reducing the inflammation process around the implant neck and achieving bone stability. Micro-cavities at the implant-abutment connection level can favour bacterial leakage, even in modern two-piece implant systems. The conical connection with an internal octagon (CCIO) is considered to be more stable mechanically and allows a tighter link between implant and abutment. As P. gingivalis and T. forsythia penetration might have clinical relevance, it was the purpose of this investigation to evaluate molecular leakage of these two bacteria in a new two-piece implant system with an internal conical implant-abutment connection with internal octagon (Shiner XT, FMD Falappa Medical Devices S.p.A., Rome, Italy). To verify the ability of the implant to protect the internal space from the external environment, the passage of genetically modified Escherichia coli across the implant-abutment interface was evaluated. Four Shiner XT implants (FMD, Falappa Medical Devices®, Rome, Italy) were immersed in a bacterial culture for 24 h and the bacterial amount inside the implant-abutment interface was measured with real-time PCR. Bacteria were detected inside all studied implants, with a median percentage of 6% for P. gingivalis and 5% for T. forsythia. Other comparable studies about the tightness of the tested implant system reported similar results. The gap size at the implant-abutment connection of CCIOs was measured by other authors, who found a gap size of 1-2 μm for the AstraTech system and of 4 μm for the Ankylos system. Bacterial leakage along the implant-abutment connection of cylindrical and tapered Shiner XT implants (FMD Falappa Medical Devices S.p.A., Rome, Italy) showed better results compared to other implants. Additional studies are needed to explore the relationship in terms of microbiota of the CCIO. In addition, the dynamics of internal colonization need to be thoroughly documented in longitudinal in vivo studies.

  15. ORNL Cray X1 evaluation status report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agarwal, P.K.; Alexander, R.A.; Apra, E.

    2004-05-01

    On August 15, 2002 the Department of Energy (DOE) selected the Center for Computational Sciences (CCS) at Oak Ridge National Laboratory (ORNL) to deploy a new scalable vector supercomputer architecture for solving important scientific problems in climate, fusion, biology, nanoscale materials and astrophysics. "This program is one of the first steps in an initiative designed to provide U.S. scientists with the computational power that is essential to 21st century scientific leadership," said Dr. Raymond L. Orbach, director of the department's Office of Science. In FY03, CCS procured a 256-processor Cray X1 to evaluate the processors, memory subsystem, scalability of the architecture, and software environment, and to predict the expected sustained performance on key DOE application codes. The results of the micro-benchmarks and kernel benchmarks show the architecture of the Cray X1 to be exceptionally fast for most operations. The best results are shown on large problems, where it is not possible to fit the entire problem into the cache of the processors. These large problems are exactly the types of problems that are important for the DOE and ultra-scale simulation. Application performance is found to be markedly improved by this architecture: - Large-scale simulations of high-temperature superconductors run 25 times faster than on an IBM Power4 cluster using the same number of processors. - Best performance of the parallel ocean program (POP v1.4.3) is 50 percent higher than on Japan's Earth Simulator and 5 times higher than on an IBM Power4 cluster. - A fusion application, global GYRO transport, was found to be 16 times faster on the X1 than on an IBM Power3. The increased performance allowed simulations to fully resolve questions raised by a prior study. - The transport kernel in the AGILE-BOLTZTRAN astrophysics code runs 15 times faster than on an IBM Power4 cluster using the same number of processors. - Molecular dynamics simulations related to the phenomenon of photon echo run 8 times faster than previously achieved. Even at 256 processors, the Cray X1 system is already outperforming other supercomputers with thousands of processors for a certain class of applications such as climate modeling and some fusion applications. This evaluation is the outcome of a number of meetings with both high-performance computing (HPC) system vendors and application experts over the past 9 months and has received broad-based support from the scientific community and other agencies.

  16. The Secret Life of Quarks, Final Report for the University of North Carolina at Chapel Hill

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fowler, Robert J.

    This final report summarizes activities and results at the University of North Carolina as part of the SciDAC-2 project The Secret Life of Quarks: National Computational Infrastructure for Lattice Quantum Chromodynamics. The overall objective of the project is to construct the software needed to study quantum chromodynamics (QCD), the theory of the strong interactions of subatomic physics, and similar strongly coupled gauge theories anticipated to be of importance in the LHC era. It built upon the successful efforts of the SciDAC-1 project National Computational Infrastructure for Lattice Gauge Theory, in which a QCD Applications Programming Interface (QCD API) was developed that enables lattice gauge theorists to make effective use of a wide variety of massively parallel computers. In the SciDAC-2 project, optimized versions of the QCD API were created for the IBM BlueGene/L (BG/L) and BlueGene/P (BG/P), the Cray XT3/XT4 and its successors, and clusters based on multi-core processors and Infiniband communications networks. The QCD API is being used to enhance the performance of the major QCD community codes and to create new applications. Software libraries of physics tools have been expanded to contain sharable building blocks for inclusion in application codes, performance analysis and visualization tools, and software for automation of physics workflow. New software tools were designed for managing the large data sets generated in lattice QCD simulations, and for sharing them through the International Lattice Data Grid consortium. As part of the overall project, researchers at UNC were funded through ASCR to work in three general areas. The main thrust has been performance instrumentation and analysis in support of the SciDAC QCD code base as it evolved and as it moved to new computation platforms. In support of the performance activities, performance data was to be collected in a database for the purpose of broader analysis. Third, the UNC work was done at RENCI (Renaissance Computing Institute), which has extensive expertise and facilities for scientific data visualization, so we acted in an ongoing consulting and support role in that area.

  17. High performance computing applications in neurobiological research

    NASA Technical Reports Server (NTRS)

    Ross, Muriel D.; Cheng, Rei; Doshay, David G.; Linton, Samuel W.; Montgomery, Kevin; Parnas, Bruce R.

    1994-01-01

    The human nervous system is a massively parallel processor of information. The vast numbers of neurons, synapses and circuits are daunting to those seeking to understand the neural basis of consciousness and intellect. Pervading obstacles are the lack of knowledge of the detailed, three-dimensional (3-D) organization of even a simple neural system and the paucity of large scale, biologically relevant computer simulations. We use high performance graphics workstations and supercomputers to study the 3-D organization of gravity sensors as a prototype architecture foreshadowing more complex systems. Scaled-down simulations run on a Silicon Graphics workstation and scaled-up, three-dimensional versions run on the Cray Y-MP and CM5 supercomputers.

  18. Genetic Analysis of Hedgehog Signaling in Ventral Body Wall Development and the Onset of Omphalocele Formation

    PubMed Central

    Matsumaru, Daisuke; Haraguchi, Ryuma; Miyagawa, Shinichi; Motoyama, Jun; Nakagata, Naomi; Meijlink, Frits; Yamada, Gen

    2011-01-01

    Background An omphalocele is one of the major ventral body wall malformations and is characterized by abnormally herniated viscera from the body trunk. It has been frequently found to be associated with other structural malformations, such as genitourinary malformations and digit abnormalities. In spite of its clinical importance, the etiology of omphalocele formation is still controversial. Hedgehog (Hh) signaling is one of the essential growth factor signaling pathways involved in the formation of the limbs and urogenital system. However, the relationship between Hh signaling and ventral body wall formation remains unclear. Methodology/Principal Findings To gain insight into the roles of Hh signaling in ventral body wall formation and its malformation, we analyzed phenotypes of mouse mutants of Sonic hedgehog (Shh), GLI-Kruppel family member 3 (Gli3) and Aristaless-like homeobox 4 (Alx4). Introduction of additional Alx4Lst mutations into the Gli3Xt/Xt background resulted in various degrees of severe omphalocele and pubic diastasis. In addition, loss of a single Shh allele restored the omphalocele and pubic symphysis of Gli3Xt/+; Alx4Lst/Lst embryos. We also observed ectopic Hh activity in the ventral body wall region of Gli3Xt/Xt embryos. Moreover, tamoxifen-inducible gain-of-function experiments to induce ectopic Hh signaling revealed Hh signal dose-dependent formation of omphaloceles. Conclusions/Significance We suggest that one of the possible causes of omphalocele and pubic diastasis is ectopically-induced Hh signaling. To our knowledge, this would be the first demonstration of the involvement of Hh signaling in ventral body wall malformation and the genetic rescue of omphalocele phenotypes. PMID:21283718

  19. Documentation for the “XT3D” option in the Node Property Flow (NPF) Package of MODFLOW 6

    USGS Publications Warehouse

    Provost, Alden M.; Langevin, Christian D.; Hughes, Joseph D.

    2017-08-10

    This report describes the “XT3D” option in the Node Property Flow (NPF) Package of MODFLOW 6. The XT3D option extends the capabilities of MODFLOW by enabling simulation of fully three-dimensional anisotropy on regular or irregular grids in a way that properly takes into account the full, three-dimensional conductivity tensor. It can also improve the accuracy of groundwater-flow simulations in cases in which the model grid violates certain geometric requirements. Three example problems demonstrate the use of the XT3D option to simulate groundwater flow on irregular grids and through three-dimensional porous media with anisotropic hydraulic conductivity.Conceptually, the XT3D method of estimating flow between two MODFLOW 6 model cells can be viewed in terms of three main mathematical steps: construction of head-gradient estimates by interpolation; construction of fluid-flux estimates by application of the full, three-dimensional form of Darcy’s Law, in which the conductivity tensor can be heterogeneous and anisotropic; and construction of the flow expression by enforcement of continuity of flow across the cell interface. The resulting XT3D flow expression, which relates the flow across the cell interface to the values of heads computed at neighboring nodes, is the sum of terms in which conductance-like coefficients multiply head differences, as in the conductance-based flow expression the NPF Package uses by default. However, the XT3D flow expression contains terms that involve “neighbors of neighbors” of the two cells for which the flow is being calculated. These additional terms have no analog in the conductance-based formulation. When assembled into matrix form, the XT3D formulation results in a larger stencil than the conductance-based formulation; that is, each row of the coefficient matrix generally contains more nonzero elements. The “RHS” suboption can be used to avoid expanding the stencil by placing the additional terms on the right-hand side of the matrix equation and evaluating them at the previous iteration or time step.The XT3D option can be an alternative to the Ghost-Node Correction (GNC) Package. However, the XT3D formulation is typically more computationally intensive than the conductance-based formulation the NPF Package uses by default, either with or without ghost nodes. Before deciding whether to use the GNC Package or XT3D option for production runs, the user should consider whether the conductance-based formulation alone can provide acceptable accuracy for the particular problem being solved.
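    The conceptual core of the three steps above can be summarized as follows (symbols are generic rather than MODFLOW 6 variable names): Darcy's Law with the full conductivity tensor, and the resulting flow expression as a sum of conductance-like coefficients multiplying head differences over an extended stencil:

```latex
% Full-tensor Darcy's Law: q is specific discharge, K the (possibly
% anisotropic, heterogeneous) hydraulic-conductivity tensor, h the head.
\mathbf{q} = -\mathbf{K}\,\nabla h,
\qquad
\mathbf{K} =
\begin{pmatrix}
K_{xx} & K_{xy} & K_{xz}\\
K_{yx} & K_{yy} & K_{yz}\\
K_{zx} & K_{zy} & K_{zz}
\end{pmatrix}.
% XT3D flow expression between cells m and n: the index set S(m,n) is a
% generic stand-in for the cells involved, which includes "neighbors of
% neighbors" of m and n; this is what enlarges the matrix stencil
% relative to the default conductance-based formulation.
Q_{mn} = \sum_{j \in S(m,n)} c_{mnj}\,\bigl(h_{j} - h_{m}\bigr).
```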

  20. Using the K-25 C TD Common File System: A guide to CFSI (CFS Interface)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1989-12-01

    A CFS (Common File System) is a large, centralized file management and storage facility based on software developed at Los Alamos National Laboratory. This manual is a guide to use of the CFS available to users of the Cray UNICOS system at Martin Marietta Energy Systems, Inc., in Oak Ridge, Tennessee.

  1. Thermophilic fermentation of acetoin and 2,3-butanediol by a novel Geobacillus strain

    PubMed Central

    2012-01-01

    Background Acetoin and 2,3-butanediol are two important biorefinery platform chemicals. They are currently fermented below 40°C using mesophilic strains, but the processes often suffer from bacterial contamination. Results This work reports the isolation and identification of a novel aerobic Geobacillus strain XT15 capable of producing both of these chemicals under elevated temperatures, thus reducing the risk of bacterial contamination. The optimum growth temperature was found to be between 45 and 55°C and the optimum initial medium pH to be 8.0. In addition to glucose, galactose, mannitol, arabinose, and xylose were all acceptable substrates, enabling the potential use of cellulosic biomass as the feedstock. XT15 preferred organic nitrogen sources, including corn steep liquor powder, a cheap by-product from corn wet-milling. At 55°C, 7.7 g/L of acetoin and 14.5 g/L of 2,3-butanediol could be obtained using corn steep liquor powder as a nitrogen source. Thirteen volatile products from the cultivation broth of XT15 were identified by gas chromatography–mass spectrometry. Acetoin, 2,3-butanediol, and their derivatives, including a novel metabolite 2,3-dihydroxy-3-methylheptan-4-one, accounted for a total of about 96% of all the volatile products. In contrast, organic acids and other products were minor by-products. α-Acetolactate decarboxylase and acetoin:2,6-dichlorophenolindophenol oxidoreductase in XT15, the two key enzymes in the acetoin metabolic pathway, were both found to be moderately thermophilic, with an identical optimum temperature of 45°C. Conclusions Geobacillus sp. XT15 is the first naturally occurring thermophile known to excrete acetoin and/or 2,3-butanediol. This work has demonstrated the attractive prospect of developing it as an industrial strain for the thermophilic fermentation of acetoin and 2,3-butanediol with improved anti-contamination performance. The novel metabolites and enzymes identified in XT15 also indicate its strong promise as a valuable biological resource. Improving the yields and efficiencies of this thermophilic fermentation remains a core aim for future work. PMID:23217110

  2. A comparison of shear bond strength of orthodontic brackets bonded with four different orthodontic adhesives

    PubMed Central

    Sharma, Sudhir; Tandon, Pradeep; Nagar, Amit; Singh, Gyan P; Singh, Alka; Chugh, Vinay K

    2014-01-01

    Objectives: The objective of this study is to compare the shear bond strength (SBS) of stainless steel (SS) orthodontic brackets bonded with four different orthodontic adhesives. Materials and Methods: Eighty newly extracted premolars were bonded to 0.022 SS brackets (Ormco, Scafati, Italy) and equally divided into four groups based on the adhesive used: (1) Rely-a-Bond (self-cure adhesive, Reliance Orthodontic Product, Inc., Illinois, USA), (2) Transbond XT (light-cure adhesive, 3M Unitek, CA, USA), (3) Transbond Plus (sixth generation self-etch primer, 3M Unitek, CA, USA) with Transbond XT, and (4) Xeno V (seventh generation self-etch primer, Dentsply, Konstanz, Germany) with Xeno Ortho (light-cure adhesive, Dentsply, Konstanz, Germany) adhesive. Brackets were debonded with a universal testing machine (Model No. 3382, Instron Corp., Canton, Mass, USA). The adhesive remnant index (ARI) was recorded. In addition, the conditioned enamel surfaces were observed under a scanning electron microscope (SEM). Results: Transbond XT (15.49 MPa) attained the highest bond strength. Self-etching adhesives (Xeno V, 13.51 MPa; Transbond Plus, 11.57 MPa) showed clinically acceptable SBS values and an almost clean enamel surface after debonding. The analysis of variance (F = 11.85, P < 0.0001) and Chi-square (χ2 = 18.16, P < 0.05) tests revealed significant differences among groups. The ARI score of 3 (i.e., all adhesive left on the tooth) was found to be the most prevalent in Transbond XT (40%), followed by Rely-a-Bond (30%), Transbond Plus with Transbond XT (15%), and Xeno V with Xeno Ortho (10%). Under SEM, enamel surfaces after debonding of the brackets appeared porous when an acid-etching process had been performed on the surfaces (Rely-a-Bond and Transbond XT), whereas with self-etching primers the enamel presented smooth and almost clean surfaces (Transbond Plus and Xeno V groups). Conclusion: All adhesives yielded SBS values higher than the recommended bond strength (5.9–7.8 MPa). The seventh generation self-etching primer Xeno V with Xeno Ortho showed clinically acceptable SBS and the least amount of residual adhesive left on the enamel surface after debonding. PMID:24987660

  3. Implementation of an ADI method on parallel computers

    NASA Technical Reports Server (NTRS)

    Fatoohi, Raad A.; Grosch, Chester E.

    1987-01-01

    The implementation of an ADI method for solving the diffusion equation on three parallel/vector computers is discussed. The computers were chosen so as to encompass a variety of architectures. They are: the MPP, an SIMD machine with 16K bit-serial processors; FLEX/32, an MIMD machine with 20 processors; and CRAY/2, an MIMD machine with four vector processors. The Gaussian elimination algorithm is used to solve a set of tridiagonal systems on the FLEX/32 and CRAY/2 while the cyclic elimination algorithm is used to solve these systems on the MPP. The implementation of the method is discussed in relation to these architectures and measures of the performance on each machine are given. Simple performance models are used to describe the performance. These models highlight the bottlenecks and limiting factors for this algorithm on these architectures. Finally, conclusions are presented.

  4. Implementation of an ADI method on parallel computers

    NASA Technical Reports Server (NTRS)

    Fatoohi, Raad A.; Grosch, Chester E.

    1987-01-01

    In this paper the implementation of an ADI method for solving the diffusion equation on three parallel/vector computers is discussed. The computers were chosen so as to encompass a variety of architectures. They are the MPP, an SIMD machine with 16K bit-serial processors; Flex/32, an MIMD machine with 20 processors; and Cray/2, an MIMD machine with four vector processors. The Gaussian elimination algorithm is used to solve a set of tridiagonal systems on the Flex/32 and Cray/2 while the cyclic elimination algorithm is used to solve these systems on the MPP. The implementation of the method is discussed in relation to these architectures and measures of the performance on each machine are given. Simple performance models are used to describe the performance. These models highlight the bottlenecks and limiting factors for this algorithm on these architectures. Finally, conclusions are presented.

  5. NAS technical summaries: Numerical aerodynamic simulation program, March 1991 - February 1992

    NASA Technical Reports Server (NTRS)

    1992-01-01

    NASA created the Numerical Aerodynamic Simulation (NAS) Program in 1987 to focus resources on solving critical problems in aeroscience and related disciplines by utilizing the power of the most advanced supercomputers available. The NAS Program provides scientists with the necessary computing power to solve today's most demanding computational fluid dynamics problems and serves as a pathfinder in integrating leading-edge supercomputing technologies, thus benefiting other supercomputer centers in Government and industry. This report contains selected scientific results from the 1991-92 NAS Operational Year, March 4, 1991 to March 3, 1992, which is the fifth year of operation. During this year, the scientific community was given access to a Cray-2 and a Cray Y-MP. The Cray-2, the first generation supercomputer, has four processors, 256 megawords of central memory, and a total sustained speed of 250 million floating point operations per second. The Cray Y-MP, the second generation supercomputer, has eight processors and a total sustained speed of one billion floating point operations per second. Additional memory was installed this year, doubling capacity from 128 to 256 megawords of solid-state storage-device memory. Because of its higher performance, the Cray Y-MP delivered approximately 77 percent of the total number of supercomputer hours used during this year.

  6. NeXT Application Development Workshop. [Use and Design of Instructional Applications on the NeXT Computer.

    ERIC Educational Resources Information Center

    Kiel, Don; And Others

    Instructional applications for NeXT computers were developed by nine faculty members from the biology, mathematics and computer science, fine arts, chemistry, physics and astronomy, and geology departments as part of a grant awarded to the California State University at Los Angeles. These notes provide a schedule of events and reports from a 2-day…

  7. INS3D - NUMERICAL SOLUTION OF THE INCOMPRESSIBLE NAVIER-STOKES EQUATIONS IN THREE-DIMENSIONAL GENERALIZED CURVILINEAR COORDINATES (DEC RISC ULTRIX VERSION)

    NASA Technical Reports Server (NTRS)

    Biyabani, S. R.

    1994-01-01

    INS3D computes steady-state solutions to the incompressible Navier-Stokes equations. The INS3D approach utilizes pseudo-compressibility combined with an approximate factorization scheme. This computational fluid dynamics (CFD) code has been verified on problems such as flow through a channel, flow over a backward-facing step and flow over a circular cylinder. Three dimensional cases include flow over an ogive cylinder, flow through a rectangular duct, wind tunnel inlet flow, cylinder-wall juncture flow and flow through multiple posts mounted between two plates. INS3D uses a pseudo-compressibility approach in which a time derivative of pressure is added to the continuity equation, which together with the momentum equations form a set of four equations with pressure and velocity as the dependent variables. The equations' coordinates are transformed for general three dimensional applications. The equations are advanced in time by the implicit, non-iterative, approximately-factored, finite-difference scheme of Beam and Warming. The numerical stability of the scheme depends on the use of higher-order smoothing terms to damp out higher-frequency oscillations caused by second-order central differencing. The artificial compressibility introduces pressure (sound) waves of finite speed (whereas the speed of sound would be infinite in an incompressible fluid). As the solution converges, these pressure waves die out, causing the derivative of pressure with respect to time to approach zero. Thus, continuity is satisfied for the incompressible fluid in the steady state. Computational efficiency is achieved using a diagonal algorithm. A block tri-diagonal option is also available. When a steady-state solution is reached, the modified continuity equation will satisfy the divergence-free velocity field condition. INS3D is capable of handling several different types of boundaries encountered in numerical simulations, including solid-surface, inflow and outflow, and far-field boundaries. Three machine versions of INS3D are available. INS3D for the CRAY is written in CRAY FORTRAN for execution on a CRAY X-MP under COS, INS3D for the IBM is written in FORTRAN 77 for execution on an IBM 3090 under the VM or MVS operating system, and INS3D for DEC RISC-based systems is written in RISC FORTRAN for execution on a DEC workstation running RISC ULTRIX 3.1 or later. The CRAY version has a central memory requirement of 730279 words. The central memory requirement for the IBM is 150Mb. The memory requirement for the DEC RISC ULTRIX version is 3Mb of main memory. INS3D was developed in 1987. The port to the IBM was done in 1990. The port to the DECstation 3100 was done in 1991. CRAY is a registered trademark of Cray Research Inc. IBM is a registered trademark of International Business Machines. DEC, DECstation, and ULTRIX are trademarks of the Digital Equipment Corporation.
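
    The pseudo-compressibility modification described above can be written compactly; the display below is the standard Chorin-style formulation in our own notation (τ is pseudo-time and β the artificial compressibility parameter), a sketch of the idea rather than the exact equations of the program:

        \[
        \frac{\partial p}{\partial \tau} + \beta\,\nabla\!\cdot\mathbf{u} = 0,
        \qquad
        \frac{\partial \mathbf{u}}{\partial \tau} + (\mathbf{u}\cdot\nabla)\,\mathbf{u}
        = -\nabla p + \nu\,\nabla^{2}\mathbf{u},
        \]

    so that as the solution reaches steady state, ∂p/∂τ → 0 and the divergence-free condition ∇·u = 0 is recovered, exactly as the abstract describes.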

  8. Vectorized and multitasked solution of the few-group neutron diffusion equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zee, S.K.; Turinsky, P.J.; Shayer, Z.

    1989-03-01

    A numerical algorithm with parallelism was used to solve the two-group, multidimensional neutron diffusion equations on computers characterized by shared memory, vector pipeline, and multi-CPU architecture features. Specifically, solutions were obtained on the Cray X/MP-48, the IBM-3090 with vector facilities, and the FPS-164. The material-centered mesh finite difference method approximation and outer-inner iteration method were employed. Parallelism was introduced in the inner iterations using the cyclic line successive overrelaxation iterative method and solving in parallel across lines. The outer iterations were completed using the Chebyshev semi-iterative method that allows parallelism to be introduced in both space and energy groups. For the three-dimensional model, power, soluble boron, and transient fission product feedbacks were included. Concentrating on the pressurized water reactor (PWR), the thermal-hydraulic calculation of moderator density assumed single-phase flow and a closed flow channel, allowing parallelism to be introduced in the solution across the radial plane. Using a pinwise detail, quarter-core model of a typical PWR in cycle 1, for the two-dimensional model without feedback the measured million floating point operations per second (MFLOPS)/vector speedups were 83/11.7, 18/2.2, and 2.4/5.6 on the Cray, IBM, and FPS without multitasking, respectively. Lower performance was observed with a coarser mesh, i.e., shorter vector length, due to vector pipeline start-up. For an 18 x 18 x 30 (x-y-z) three-dimensional model with feedback of the same core, MFLOPS/vector speedups of approximately 61/6.7 and an execution time of 0.8 CPU seconds on the Cray without multitasking were measured. Finally, using two CPUs and the vector pipelines of the Cray, a multitasking efficiency of 81% was noted for the three-dimensional model.
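
    As a concrete illustration of "solving in parallel across lines," the sketch below implements red-black (cyclic) line SOR for a plain 2-D Poisson model problem in Python/NumPy. It is our own minimal sketch, not the reactor solver from the paper: the grid, boundary conditions, relaxation factor, and the use of SciPy's banded solver are all illustrative assumptions.

        # Minimal sketch (ours): red-black ("cyclic") line SOR for
        # -Laplace(u) = f on the unit square, zero Dirichlet boundaries.
        import numpy as np
        from scipy.linalg import solve_banded

        def line_sor(f, h, omega=1.7, sweeps=200):
            n = f.shape[0]                  # interior points per direction
            u = np.zeros_like(f)
            # Line operator tridiag(-1, 4, -1) in banded storage.
            ab = np.zeros((3, n))
            ab[0, 1:] = -1.0                # superdiagonal
            ab[1, :] = 4.0                  # main diagonal
            ab[2, :-1] = -1.0               # subdiagonal
            for _ in range(sweeps):
                for color in (0, 1):        # even-numbered lines, then odd
                    idx = np.arange(color, n, 2)
                    rhs = h * h * f[idx, :]
                    has_up = idx + 1 < n    # neighbor lines; zero outside grid
                    has_dn = idx - 1 >= 0
                    rhs[has_up] += u[idx[has_up] + 1, :]
                    rhs[has_dn] += u[idx[has_dn] - 1, :]
                    # All same-colored lines solved at once: one banded
                    # solve with many right-hand sides.
                    sol = solve_banded((1, 1), ab, rhs.T).T
                    u[idx, :] = (1.0 - omega) * u[idx, :] + omega * sol
            return u

        # Example: 63x63 interior grid, uniform source term.
        n = 63
        u = line_sor(np.ones((n, n)), h=1.0 / (n + 1))

    Because every line of one color depends only on lines of the other color, each colored half-sweep is a single multi-right-hand-side solve: long vectors for a pipelined machine, independent lines for a multi-CPU one.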

  9. TOP500 Sublist for November 2001

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack J.

    2001-11-09

    18th Edition of TOP500 List of World's Fastest Supercomputers Released MANNHEIM, GERMANY; KNOXVILLE, TENN.; BERKELEY, CALIF. In what has become a much-anticipated event in the world of high-performance computing, the 18th edition of the TOP500 list of the world's fastest supercomputers was released today (November 9, 2001). The latest edition of the twice-yearly ranking finds IBM as the leader in the field, with 32 percent in terms of installed systems and 37 percent in terms of total performance of all the installed systems. In a surprise move, Hewlett-Packard captured second place with 30 percent of the systems. Most of these systems are smaller in size, and as a consequence HP's share of installed performance is smaller, at 15 percent. This is still enough for second place in this category. SGI, Cray and Sun follow in the number of TOP500 systems with 41 (8 percent), 39 (8 percent), and 31 (6 percent) respectively. In the category of installed performance, Cray Inc. keeps the third position with 11 percent, ahead of SGI (8 percent) and Compaq (8 percent).

  10. Using IMSL Mathematical and Statistical Computer Subroutines in Physiological and Biomechanical Research

    DTIC Science & Technology

    1987-10-01

    [Abstract not recoverable: the record text is a garbled fragment of a FORTRAN listing demonstrating IMSL cubic-spline and random-number routines (CSCOEF, CSVAL, RNUNF, SDEV) applied to physiological and biomechanical data.]

  11. Multitasking a three-dimensional Navier-Stokes algorithm on the Cray-2

    NASA Technical Reports Server (NTRS)

    Swisshelm, Julie M.

    1989-01-01

    A three-dimensional computational aerodynamics algorithm has been multitasked for efficient parallel execution on the Cray-2. It provides a means for examining the multitasking performance of a complete CFD application code. An embedded zonal multigrid scheme is used to solve the Reynolds-averaged Navier-Stokes equations for an internal flow model problem. The explicit nature of each component of the method allows a spatial partitioning of the computational domain to achieve a well-balanced task load for MIMD computers with vector-processing capability. Experiments have been conducted with both two- and three-dimensional multitasked cases. The best speedup attained by an individual task group was 3.54 on four processors of the Cray-2, while the entire solver yielded a speedup of 2.67 on four processors for the three-dimensional case. The multiprocessing efficiency of various types of computational tasks is examined, performance on two Cray-2s with different memory access speeds is compared, and extrapolation to larger problems is discussed.
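
    For context, a quick Amdahl's-law estimate (our back-of-the-envelope arithmetic, not a calculation from the paper) of the serial fraction f implied by the whole-solver speedup of 2.67 on P = 4 processors:

        \[
        S = \frac{1}{f + (1 - f)/P}
        \quad\Longrightarrow\quad
        f = \frac{P/S - 1}{P - 1} = \frac{4/2.67 - 1}{3} \approx 0.17,
        \]

    i.e., under this simple model roughly 17 percent of the solver behaves as if serial, consistent with the gap between the best task-group speedup (3.54) and the overall figure (2.67).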

  12. SHABERTH - ANALYSIS OF A SHAFT BEARING SYSTEM (CRAY VERSION)

    NASA Technical Reports Server (NTRS)

    Coe, H. H.

    1994-01-01

    The SHABERTH computer program was developed to predict operating characteristics of bearings in a multibearing load support system. Lubricated and non-lubricated bearings can be modeled. SHABERTH calculates the loads, torques, temperatures, and fatigue life for ball and/or roller bearings on a single shaft. The program also allows for an analysis of the system reaction to the termination of lubricant supply to the bearings and other lubricated mechanical elements. SHABERTH has proven to be a valuable tool in the design and analysis of shaft bearing systems. The SHABERTH program is structured with four nested calculation schemes. The thermal scheme performs steady state and transient temperature calculations which predict system temperatures for a given operating state. The bearing dimensional equilibrium scheme uses the bearing temperatures, predicted by the temperature mapping subprograms, and the rolling element raceway load distribution, predicted by the bearing subprogram, to calculate bearing diametral clearance for a given operating state. The shaft-bearing system load equilibrium scheme calculates bearing inner ring positions relative to the respective outer rings such that the external loading applied to the shaft is brought into equilibrium by the rolling element loads which develop at each bearing inner ring for a given operating state. The bearing rolling element and cage load equilibrium scheme calculates the rolling element and cage equilibrium positions and rotational speeds based on the relative inner-outer ring positions, inertia effects, and friction conditions. The ball bearing subprograms in the current SHABERTH program have several model enhancements over similar programs. These enhancements include an elastohydrodynamic (EHD) film thickness model that accounts for thermal heating in the contact area and lubricant film starvation; a new model for traction combined with an asperity load sharing model; a model for the hydrodynamic rolling and shear forces in the inlet zone of lubricated contacts, which accounts for the degree of lubricant film starvation; modeling normal and friction forces between a ball and a cage pocket, which account for the transition between the hydrodynamic and elastohydrodynamic regimes of lubrication; and a model of the effect on fatigue life of the ratio of the EHD plateau film thickness to the composite surface roughness. SHABERTH is intended to be as general as possible. The models in SHABERTH allow for the complete mathematical simulation of real physical systems. Systems are limited to a maximum of five bearings supporting the shaft, a maximum of thirty rolling elements per bearing, and a maximum of one hundred temperature nodes. The SHABERTH program structure is modular and has been designed to permit refinement and replacement of various component models as the need and opportunities develop. A preprocessor is included in the IBM PC version of SHABERTH to provide a user friendly means of developing SHABERTH models and executing the resulting code. The preprocessor allows the user to create and modify data files with minimal effort and a reduced chance for errors. Data is utilized as it is entered; the preprocessor then decides what additional data is required to complete the model. Only this required information is requested. The preprocessor can accommodate data input for any SHABERTH compatible shaft bearing system model. The system may include ball bearings, roller bearings, and/or tapered roller bearings. 
SHABERTH is written in FORTRAN 77, and two machine versions are available from COSMIC. The CRAY version (LEW-14860) has a RAM requirement of 176K of 64 bit words. The IBM PC version (MFS-28818) is written for IBM PC series and compatible computers running MS-DOS, and includes a sample MS-DOS executable. For execution, the PC version requires at least 1Mb of RAM and an 80386 or 486 processor machine with an 80x87 math co-processor. The standard distribution medium for the IBM PC version is a set of two 5.25 inch 360K MS-DOS format diskettes. The contents of the diskettes are compressed using the PKWARE archiving tools. The utility to unarchive the files, PKUNZIP.EXE, is included. The standard distribution medium for the CRAY version is also a 5.25 inch 360K MS-DOS format diskette, but alternate distribution media and formats are available upon request. The original version of SHABERTH was developed in FORTRAN IV at Lewis Research Center for use on a UNIVAC 1100 series computer. The Cray version was released in 1988, and was updated in 1990 to incorporate fluid rheological data for Rocket Propellant 1 (RP-1), thereby allowing the analysis of bearings lubricated with RP-1. The PC version is a port of the 1990 CRAY version and was developed in 1992 by SRS Technologies under contract to NASA Marshall Space Flight Center.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Christoph, G.G; Jackson, K.A.; Neuman, M.C.

    An effective method for detecting computer misuse is the automatic auditing and analysis of on-line user activity. This activity is reflected in the system audit record, by changes in the vulnerability posture of the system configuration, and in other evidence found through active testing of the system. In 1989 we started developing an automatic misuse detection system for the Integrated Computing Network (ICN) at Los Alamos National Laboratory. Since 1990 this system has been operational, monitoring a variety of network systems and services. We call it the Network Anomaly Detection and Intrusion Reporter, or NADIR. During the last year and a half, we expanded NADIR to include processing of audit and activity records for the Cray UNICOS operating system. This new component is called the UNICOS Real-time NADIR, or UNICORN. UNICORN summarizes user activity and system configuration information in statistical profiles. In near real-time, it can compare current activity to historical profiles and test activity against expert rules that express our security policy and define improper or suspicious behavior. It reports suspicious behavior to security auditors and provides tools to aid in follow-up investigations. UNICORN is currently operational on four Crays in Los Alamos' main computing network, the ICN.

  14. Effect of Instrument Lubricants on the Surface Degree of Conversion and Crosslinking Density of Nanocomposites.

    PubMed

    de Paula, Felipe Costa; Valentin, Regis de Souza; Borges, Boniek Castillo Dutra; Medeiros, Maria Cristina Dos Santos; de Oliveira, Raiza Freitas; da Silva, Ademir Oliveira

    2016-01-01

    The surface degree of conversion and crosslink density of composites should not be affected by the use of instrument lubricants in order to provide long-lasting tooth restorations. This study aimed to analyze the effect of instrument lubricants on the degree of conversion and crosslink density of nanocomposites. Samples (N = 10) were fabricated according to the composites (Filtek Z350 XT, 3M ESPE, St. Paul, MN, USA; and IPS Empress Direct, Ivoclar Vivadent AG, Schaan, Liechtenstein) and lubricants used (Adper Single Bond 2 and Scotchbond Multi-Purpose bonding agent adhesive systems, 3M ESPE; 70% ethanol; absolute ethanol; and no lubricant). Single composite increments were inserted into a Teflon mold using the same dental instrument. The composite surface was then modeled using a brush wiped with each adhesive system and a spatula wiped with each ethanol. The control group was fabricated with no additional modeling. The surface degree of conversion and crosslink density were measured by Fourier transform infrared spectroscopy and the hardness decrease test, respectively. Data were analyzed using two-way analysis of variance and Tukey's test (p < 0.05). Filtek Z350 XT showed a statistically similar degree of conversion regardless of the lubricant used, whereas the use of adhesive systems and 70% ethanol decreased the degree of conversion for IPS Empress Direct. Only Scotchbond Multi-Purpose bonding agent decreased crosslink density for Filtek Z350 XT, whereas both adhesive systems decreased crosslink density for IPS Empress Direct. Filtek Z350 XT appeared to be less sensitive to the effects of lubricants, and absolute ethanol did not affect the degree of conversion and crosslink density of the nanocomposites tested. Although the use of lubricants may be recommended to minimize the stickiness of dental instruments and composite resin, dentists should choose materials that do not have a negative effect on the surface properties of composites. Only the use of absolute ethanol safely maintains the surface integrity of nanocomposites in comparison with adhesive systems and 70% ethanol. © 2015 Wiley Periodicals, Inc.
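
    The degree of conversion from FTIR spectra is conventionally computed from the ratio of aliphatic (≈1638 cm⁻¹) to aromatic (≈1608 cm⁻¹) C=C absorbance peaks before and after curing; the formula below is the standard one for methacrylate composites and is our addition rather than a quotation from the study:

        \[
        \mathrm{DC}\,(\%) =
        \left(1 -
        \frac{(A_{1638}/A_{1608})_{\text{cured}}}
             {(A_{1638}/A_{1608})_{\text{uncured}}}
        \right) \times 100 .
        \]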

  15. Post retention and post/core shear bond strength of four post systems.

    PubMed

    Stockton, L W; Williams, P T; Clarke, C T

    2000-01-01

    As clinicians we continue to search for a post system which will give us maximum retention while maximizing resistance to root fracture. The introduction of several new post systems, with claims of high retention and resistance to root fracture, requires that independent studies be performed to evaluate these claims. This study tested the tensile and shear dislodgment forces of four post designs that were luted into roots 10 mm apical of the CEJ. The Para Post Plus (P1) is a parallel-sided, passive design; the Para Post XT (P2) is a combination active/passive design; the Flexi-Post (F1) and the Flexi-Flange (F2) are active post designs. All systems tested were stainless steel. This study compared the test results of the four post designs for tensile and shear dislodgment. All mounted samples were loaded until failure occurred. The tensile load was applied parallel to the long axis of the root, while the shear load was applied at 45° to the long axis of the root. The Flexi-Post (F1) was significantly different from the other three in the tensile test; however, the Para Post XT (P2) was significantly different from the other three in the shear test and had a better probability of survival in the Kaplan-Meier survival function test. Based on the results of this study, our recommendation is for the Para Post XT (P2).

  16. The JPEG XT suite of standards: status and future plans

    NASA Astrophysics Data System (ADS)

    Richter, Thomas; Bruylants, Tim; Schelkens, Peter; Ebrahimi, Touradj

    2015-09-01

    The JPEG standard has seen enormous market adoption. Daily, billions of pictures are created, stored and exchanged in this format. The JPEG committee acknowledges this success and continues to invest effort in maintaining and expanding the standard's specifications. JPEG XT is a standardization effort targeting the extension of the JPEG features by enabling support for high dynamic range imaging, lossless and near-lossless coding, and alpha channel coding, while also guaranteeing backward and forward compatibility with the JPEG legacy format. This paper gives an overview of the current status of the JPEG XT standards suite. It discusses the JPEG legacy specification, and details how higher dynamic range support is facilitated both for integer and floating-point color representations. The paper shows how JPEG XT's support for lossless and near-lossless coding of low and high dynamic range images is achieved in combination with backward compatibility to JPEG legacy. In addition, the extensible boxed-based JPEG XT file format on which all following and future extensions of JPEG will be based is introduced. This paper also details how the lossy and lossless representations of alpha channels are supported to allow coding transparency information and arbitrarily shaped images. Finally, we conclude by giving prospects on the upcoming JPEG standardization initiative JPEG Privacy & Security, and a number of other possible extensions in JPEG XT.

  17. Beyond a Terabyte File System

    NASA Technical Reports Server (NTRS)

    Powers, Alan K.

    1994-01-01

    The Numerical Aerodynamics Simulation Facility's (NAS) CRAY C916/1024 accesses a "virtual" on-line file system, which is expanding beyond a terabyte of information. This paper will present some options for fine-tuning the Data Migration Facility (DMF) to stretch the on-line disk capacity, and will explore the transitions to newer devices (STK 4490, ER90, RAID).

  18. Comparison of acute elastic recoil between the SAPIEN-XT and SAPIEN valves in transfemoral-transcatheter aortic valve replacement.

    PubMed

    Garg, Aatish; Parashar, Akhil; Agarwal, Shikhar; Aksoy, Olcay; Hammadah, Muhammad; Poddar, Kanhaiya Lal; Puri, Rishi; Svensson, Lars G; Krishnaswamy, Amar; Tuzcu, E Murat; Kapadia, Samir R

    2015-02-15

    The SAPIEN-XT is a newer generation balloon-expandable valve constructed of a cobalt chromium frame, as opposed to the stainless steel frame used in the older generation SAPIEN valve. We sought to determine whether there was a difference in acute recoil between the two valves. All patients who underwent transfemoral-transcatheter aortic valve replacement using the SAPIEN-XT valve at the Cleveland Clinic were included. Recoil was measured using biplane cine-angiographic image analysis of valve deployment. Acute recoil was defined as [(valve diameter at maximal balloon inflation) - (valve diameter after deflation)]/valve diameter at maximal balloon inflation (reported as a percentage). Patients undergoing SAPIEN valve implantation were used as the comparison group. Among the 23 mm valves, the mean (standard deviation, SD) acute recoil was 2.77% (1.14) for the SAPIEN valve as compared to 3.75% (1.52) for the SAPIEN XT valve (P = 0.04). Among the 26 mm valves, the mean (SD) acute recoil was 2.85% (1.4) for the SAPIEN valve as compared to 4.32% (1.63) for the SAPIEN XT valve (P = 0.01). Multivariable linear regression analysis demonstrated significantly greater adjusted recoil in the SAPIEN XT valves as compared to the SAPIEN valves, by 1.43% [(95% CI: 0.69-2.17), P < 0.001]. However, the residual peak gradient was lower for the SAPIEN XT than for the SAPIEN valves [18.86 mm Hg versus 23.53 mm Hg (P = 0.01)]. Additionally, no difference in paravalvular leak was noted between the two valve types (P = 0.78). The SAPIEN XT valves had significantly greater acute recoil after deployment compared to the SAPIEN valves. Implications of this difference in acute recoil on valve performance need to be investigated in future studies. © 2014 Wiley Periodicals, Inc.
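
    Written out as a formula (with purely hypothetical numbers for illustration, since the abstract reports only group means):

        \[
        \text{recoil}\,(\%) =
        \frac{D_{\text{inflation}} - D_{\text{deflation}}}{D_{\text{inflation}}} \times 100 .
        \]

    For example, a valve measuring 23.0 mm at maximal balloon inflation and 22.1 mm after deflation would have acute recoil of (23.0 − 22.1)/23.0 × 100 ≈ 3.9%, in the range reported above for the 23 mm SAPIEN XT.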

  19. Identifiability and Problems of Model Selection for Time-Series Analysis in Econometrics.

    DTIC Science & Technology

    1980-01-01

    For continuous time, with time set T ⊆ R, a linear system is given by (2.1) dx(t)/dt = Fx(t) + Gu(t), y(t) = Hx(t), t ∈ R; for discrete time, that is, with the time set T = Z = integers, a system is given by (2.2) x(t + 1) = Fx(t) + Gu(t), y(t) = Hx(t), t ∈ Z. In (2.1)-(2.2), the real (or complex) vectors x, u, and y are called the state, input, and output, respectively.

  20. Effect of the Polishing Procedures on Color Stability and Surface Roughness of Composite Resins

    PubMed Central

    Schmitt, Vera Lucia; Puppin-Rontani, Regina Maria; Naufel, Fabiana Scarparo; Nahsan, Flávia Pardo Salata; Alexandre Coelho Sinhoreti, Mário; Baseggio, Wagner

    2011-01-01

    Objectives. To evaluate the effect of polishing procedures on the color stability and surface roughness of composite resins. Methods. Specimens were distributed into 6 groups: G1: Filtek Supreme XT + PoGo; G2: Filtek Supreme XT + Sof-Lex; G3: Filtek Supreme XT + no polishing; G4: Amelogen + PoGo; G5: Amelogen + Sof-Lex; G6: Amelogen + no polishing. Initial color values were evaluated using the CIELab scale. After polishing, surface roughness was evaluated and the specimens were stored in coffee solution at 37°C for 7 days. The final color measurement and roughness were then determined. Results. Sof-Lex resulted in lower staining. Amelogen showed higher roughness values than Filtek Supreme XT on the baseline and final evaluations, regardless of the polishing technique. Filtek Supreme XT polished with PoGo showed the lowest roughness values. All groups presented discoloration after storage in coffee solution, regardless of the polishing technique. Conclusion. The multiple-step polishing technique provided a lower degree of discoloration for both composite resins. The final surface texture is material and technique dependent. PMID:21991483
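
    Color change on the CIELab scale is conventionally quantified as the Euclidean distance ΔE between the coordinates measured before and after staining; the standard formula below is our addition, not stated in the abstract:

        \[
        \Delta E^{*}_{ab} = \sqrt{(\Delta L^{*})^{2} + (\Delta a^{*})^{2} + (\Delta b^{*})^{2}} .
        \]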

  1. Shear Bond Strength of Three Orthodontic Bonding Systems on Enamel and Restorative Materials.

    PubMed

    Hellak, Andreas; Ebeling, Jennifer; Schauseil, Michael; Stein, Steffen; Roggendorf, Matthias; Korbmacher-Steiner, Heike

    2016-01-01

    Objective. The aim of this in vitro study was to determine the shear bond strength (SBS) and adhesive remnant index (ARI) score of two self-etching no-mix adhesives (iBond™ and Scotchbond™) on different prosthetic surfaces and enamel, in comparison with the commonly used total etch system Transbond XT™. Materials and Methods. A total of 270 surfaces (1 enamel and 8 restorative surfaces, n = 30) were randomly divided into three adhesive groups. In group 1 (control) brackets were bonded with Transbond XT primer. In the experimental groups iBond adhesive (group 2) and Scotchbond Universal adhesive (group 3) were used. The SBS was measured using a Zwicki 1120™ testing machine. The ARI and SBS were compared statistically using the Kruskal-Wallis test (P ≤ 0.05). Results. Significant differences in SBS and ARI were found between the control group and the experimental groups. Conclusions. Transbond XT showed the highest SBS on human enamel. Scotchbond Universal on average provides the best bonding on all other types of surface (metal, composite, and porcelain), with no need for additional primers. It might therefore be helpful for simplifying bonding in orthodontic procedures on restorative materials in patients. If metal brackets have to be bonded to a metal surface, the use of a dual-curing resin is recommended.

  2. Scalability Analysis of Gleipnir: A Memory Tracing and Profiling Tool, on Titan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Janjusic, Tommy; Kartsaklis, Christos; Wang, Dali

    2013-01-01

    Application performance is hindered by a variety of factors, but most notably by the well-known CPU-memory speed gap (also known as the memory wall). Understanding an application's memory behavior is key if we are trying to optimize performance. Understanding application performance properties is facilitated by various performance profiling tools. The scope of profiling tools varies in complexity, ease of deployment, profiling performance, and the detail of profiled information. Specifically, using profiling tools for performance analysis is a common task when optimizing and understanding scientific applications on complex and large-scale systems such as the Cray XK7. This paper describes the performance characteristics of using Gleipnir, a memory tracing tool, on the Titan Cray XK7 system when instrumenting large applications such as the Community Earth System Model. Gleipnir is a memory tracing tool built as a plug-in for the Valgrind instrumentation framework. The goal of Gleipnir is to provide fine-grained trace information. The generated traces are a stream of executed memory transactions mapped to internal structures per process, thread, function, and finally the data structure or variable. Our focus was to expose tool performance characteristics when using Gleipnir in combination with external tools such as a cache simulator, Gl CSim, to characterize the tool's overall performance. In this paper we describe our experience with deploying Gleipnir on the Titan Cray XK7 system, report on the tool's ease of use, and analyze run-time performance characteristics under various workloads. While all performance aspects are important, we mainly focus on I/O characteristics analysis due to the emphasis on the tool's output, which consists of trace files. Moreover, the tool is dependent on the run-time system to provide the necessary infrastructure to expose low-level system detail; therefore, we also discuss any theoretical benefits that could be achieved if such modules were present.

  3. Large-Scale Parallel Viscous Flow Computations using an Unstructured Multigrid Algorithm

    NASA Technical Reports Server (NTRS)

    Mavriplis, Dimitri J.

    1999-01-01

    The development and testing of a parallel unstructured agglomeration multigrid algorithm for steady-state aerodynamic flows is discussed. The agglomeration multigrid strategy uses a graph algorithm to construct the coarse multigrid levels from the given fine grid, similar to an algebraic multigrid approach, but operates directly on the non-linear system using the FAS (Full Approximation Scheme) approach. The scalability and convergence rate of the multigrid algorithm are examined on the SGI Origin 2000 and the Cray T3E. An argument is given which indicates that the asymptotic scalability of the multigrid algorithm should be similar to that of its underlying single grid smoothing scheme. For medium size problems involving several million grid points, near perfect scalability is obtained for the single grid algorithm, while only a slight drop-off in parallel efficiency is observed for the multigrid V- and W-cycles, using up to 128 processors on the SGI Origin 2000, and up to 512 processors on the Cray T3E. For a large problem using 25 million grid points, good scalability is observed for the multigrid algorithm using up to 1450 processors on a Cray T3E, even when the coarsest grid level contains fewer points than the total number of processors.
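
    For reference, the FAS coarse-grid equation mentioned above takes the following standard form (our notation, with I_h^H and I_H^h generic restriction and prolongation operators):

        \[
        A_{H}(u_{H}) = A_{H}\!\left(I_{h}^{H} u_{h}\right)
                       + I_{h}^{H}\bigl(f_{h} - A_{h}(u_{h})\bigr),
        \qquad
        u_{h} \leftarrow u_{h} + I_{H}^{h}\!\left(u_{H} - I_{h}^{H} u_{h}\right),
        \]

    where h and H index the fine and agglomerated coarse levels; because the coarse problem carries the full solution rather than just an error correction, the cycle applies directly to the nonlinear system.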

  4. Well-posedness for a class of doubly nonlinear stochastic PDEs of divergence type

    NASA Astrophysics Data System (ADS)

    Scarpa, Luca

    2017-08-01

    We prove well-posedness for doubly nonlinear parabolic stochastic partial differential equations of the form dX_t − div γ(∇X_t) dt + β(X_t) dt ∋ B(t, X_t) dW_t, where γ and β are the two nonlinearities, assumed to be multivalued maximal monotone operators everywhere defined on R^d and R respectively, and W is a cylindrical Wiener process. Using variational techniques, suitable uniform estimates (both pathwise and in expectation) and some compactness results, well-posedness is proved under the classical Leray-Lions conditions on γ and with no restrictive smoothness or growth assumptions on β. The operator B is assumed to be Hilbert-Schmidt and to satisfy some classical Lipschitz conditions in the second variable.
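
    For reference, the classical Leray-Lions conditions on γ are usually stated as coercivity and growth bounds of the following form (our paraphrase of the standard conditions, for some p ∈ (1, ∞), constants c, C > 0, and every selection of the multivalued map):

        \[
        \gamma(\xi)\cdot\xi \ \ge\ c\,|\xi|^{p} - C,
        \qquad
        |\gamma(\xi)| \ \le\ C\left(1 + |\xi|^{p-1}\right)
        \qquad \text{for all } \xi \in \mathbb{R}^{d}.
        \]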

  5. Vectorization of a particle code used in the simulation of rarefied hypersonic flow

    NASA Technical Reports Server (NTRS)

    Baganoff, D.

    1990-01-01

    A limitation of the direct simulation Monte Carlo (DSMC) method is that it does not allow efficient use of vector architectures that predominate in current supercomputers. Consequently, the problems that can be handled are limited to those of one- and two-dimensional flows. This work focuses on a reformulation of the DSMC method with the objective of designing a procedure that is optimized to the vector architectures found on machines such as the Cray-2. In addition, it focuses on finding a better balance between algorithmic complexity and the total number of particles employed in a simulation so that the overall performance of a particle simulation scheme can be greatly improved. Simulations of the flow about a 3D blunt body are performed with 10^7 particles and 4 x 10^5 mesh cells. Good statistics are obtained with time averaging over 800 time steps using 4.5 h of Cray-2 single-processor CPU time.
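
    The heart of such a reformulation is a data layout that turns per-cell particle work into long contiguous array operations. The sketch below shows the sort-by-cell idea in Python/NumPy; it is our own illustration of the general technique, not the paper's algorithm, and the 1-D binning, particle counts, and statistics computed are arbitrary assumptions.

        # Illustrative sketch (ours): cell-sorted particle storage so that
        # per-cell work becomes contiguous, vectorizable array operations.
        import numpy as np

        rng = np.random.default_rng(0)
        n_part, n_cell = 100_000, 4_000

        pos = rng.random(n_part)                 # positions in [0, 1)
        vel = rng.normal(size=(n_part, 3))       # thermal velocities
        cell = np.minimum((pos * n_cell).astype(int), n_cell - 1)

        # Sort particles by cell index: every cell's particles now occupy
        # one contiguous slice, so gather/scatter is avoided.
        order = np.argsort(cell, kind="stable")
        pos, vel, cell = pos[order], vel[order], cell[order]
        start = np.searchsorted(cell, np.arange(n_cell + 1))

        # Vectorized per-cell statistics, no Python loop over particles:
        speed = np.linalg.norm(vel, axis=1)
        counts = np.diff(start)
        sums = np.bincount(cell, weights=speed, minlength=n_cell)
        mean_speed = np.where(counts > 0, sums / np.maximum(counts, 1), 0.0)

        # Any one cell is a contiguous slice, e.g. for collision selection:
        c = 1234
        cell_vel = vel[start[c]:start[c + 1]]

    After the sort, loops "over particles within a cell" become slices and reductions, which is the property a vector machine such as the Cray-2 can exploit.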

  6. The SGI/Cray T3E: Experiences and Insights

    NASA Technical Reports Server (NTRS)

    Bernard, Lisa Hamet

    1998-01-01

    The NASA Goddard Space Flight Center is home to the fifth most powerful supercomputer in the world, a 1024 processor SGI/Cray T3E-600. The original 512 processor system was placed at Goddard in March, 1997 as part of a cooperative agreement between the High Performance Computing and Communications Program's Earth and Space Sciences Project (ESS) and SGI/Cray Research. The goal of this system is to facilitate achievement of the Project milestones of 10, 50 and 100 GFLOPS sustained performance on selected Earth and space science application codes. The additional 512 processors were purchased in March, 1998 by the NASA Earth Science Enterprise for the NASA Seasonal to Interannual Prediction Project (NSIPP). These two "halves" still operate as a single system, and must satisfy the unique requirements of both aforementioned groups, as well as guest researchers from the Earth, space, microgravity, manned space flight and aeronautics communities. Few large scalable parallel systems are configured for capability computing, so models are hard to find. This unique environment has created a challenging system administration task, and has yielded some insights into the supercomputing needs of the various NASA Enterprises, as well as insights into the strengths and weaknesses of the T3E architecture and software. The T3E is a distributed memory system in which the processing elements (PE's) are connected by a low latency, high bandwidth bidirectional 3-D torus. Due to the focus on high speed communication between PE's, the T3E requires PE's to be allocated contiguously per job. Further, jobs will only execute on the user-specified number of PE's, and PE timesharing is possible but impractical. With a highly varied job mix in both size and runtime of jobs, the resulting scenario is PE fragmentation and an inability to achieve near 100% utilization. SGI/Cray has provided several scheduling and configuration tools to minimize the impact of fragmentation. These tools include PScheD (the political scheduler), GRM (the global resource manager) and NQE (the Network Queuing Environment). Features and impact of these tools will be discussed, as will resulting performance and utilization data. As a distributed memory system, the T3E is designed to be programmed through explicit message passing. Consequently, certain assumptions related to code design are made by the operating system (UNICOS/mk) and its scheduling tools. With the exception of HPF, which does run on the T3E, however poorly, alternative programming styles have the potential to impact the T3E in unexpected and undesirable ways. Several examples will be presented (preceded by the disclaimer, "Don't try this at home! Violators will be prosecuted!")

  7. Magnon modes and magnon-vortex scattering in two-dimensional easy-plane ferromagnets

    NASA Astrophysics Data System (ADS)

    Ivanov, B. A.; Schnitzer, H. J.; Mertens, F. G.; Wysin, G. M.

    1998-10-01

    We calculate the magnon modes in the presence of a vortex on a circular system, combining analytical calculations in the continuum limit with a numerical diagonalization of the discrete system. The magnon modes are expressed by the S matrix for magnon-vortex scattering, as a function of the parameters and the size of the system and for different boundary conditions. Certain quasilocal translational modes are identified with the frequencies which appear in the trajectory X(t) of the vortex center in recent molecular dynamics simulations of the full many-spin model. Using these quasilocal modes we calculate the two parameters of a third-order equation of motion for X(t). This equation was recently derived by a collective variable theory and describes very well the trajectories observed in the simulations. Both parameters, the vortex mass and the factor in front of the third time derivative of X(t), depend strongly on the boundary conditions.
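
    Schematically, a third-order collective-variable equation of motion of this kind has the form below (our generic notation, since the abstract itself does not display the equation; M is the vortex mass, A the coefficient of the third derivative, G the gyrovector, and F the net force on the vortex):

        \[
        A\,\dddot{\mathbf{X}} + M\,\ddot{\mathbf{X}} + \mathbf{G}\times\dot{\mathbf{X}} = \mathbf{F}(\mathbf{X}),
        \]

    with M and A being the two parameters determined from the quasilocal modes.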

  8. Performance evaluation of the Abbott CELL-DYN Ruby and the Sysmex XT-2000i haematology analysers.

    PubMed

    Leers, M P G; Goertz, H; Feller, A; Hoffmann, J J M L

    2011-02-01

    Two mid-range haematology analysers (Abbott CELL-DYN Ruby and Sysmex XT-2000i) were evaluated to determine their analytical performance and workflow efficiency in the haematology laboratory. In total 418 samples were processed for determining equivalence of complete blood count (CBC) measurements, and 100 for reticulocyte comparison. Blood smears served for assessing the agreement of the differential counts. Inter-instrument agreement for most parameters was good although small numbers of discrepancies were observed. Systematic biases were found for mean cell volume, reticulocytes, platelets and mean platelet volume. CELL-DYN Ruby WBC differentials were obtained with all samples while the XT-2000i suppressed differentials partially or completely in 13 samples (3.1%). WBC subpopulation counts were otherwise in good agreement with no major outliers. Following first-pass CBC/differential analysis, 88 (21%) of XT-2000i samples required further analyser processing compared to 18 (4.3%) for the CELL-DYN Ruby. Smear referrals for suspected WBC/nucleated red blood cells and platelet abnormalities were indicated for 106 (25.4%) and 95 (22.7%) of the XT-2000i and CELL-DYN Ruby samples respectively. Flagging efficiencies for both analysers were found to be similar. The Sysmex XT-2000i and Abbott CELL-DYN Ruby analysers have broadly comparable analytical performance, but the CELL-DYN Ruby showed superior first-pass efficiency. © 2010 Blackwell Publishing Ltd.

  9. 5nsec Dead time multichannel scaling system for Mössbauer spectrometer

    NASA Astrophysics Data System (ADS)

    Verrastro, C.; Trombetta, G.; Pita, A.; Saragovi, C.; Duhalde, S.

    1991-11-01

    A PC-programmable, fast multichannel scaling module has been designed for use with a commercial Mössbauer spectrometer. The module is based on a single-chip 8-bit microcomputer (MC6805) and on a fast ALU, which allows a high-performance, low-cost system. The module can operate in a stand-alone mode. Data analysis is performed with a real-time display on XT/AT IBM PCs or compatibles. The number of channels ranges between 256 and 4096, the maximum number of counts is 2^32 − 1 per channel, the dwell time is 3 μs, and the dead time between channels is 5 ns. A user-friendly software package displays the real-time spectrum and offers menus with different options in each state.
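
    As a quick sanity check (our arithmetic, not taken from the abstract), the quoted dead time is a negligible fraction of the dwell time:

        \[
        \frac{t_{\text{dead}}}{t_{\text{dwell}}} = \frac{5\ \text{ns}}{3\ \mu\text{s}} \approx 1.7 \times 10^{-3},
        \]

    so under 0.2 percent of each channel's counting window is lost to the transition between channels.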

  10. OpenSHMEM-UCX : Evaluation of UCX for implementing OpenSHMEM Programming Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, Matthew B; Gorentla Venkata, Manjunath; Aderholdt, William Ferrol

    2016-01-01

    The OpenSHMEM reference implementation was developed towards the goal of an open-source and high-performing OpenSHMEM implementation. To achieve portability and performance across various networks, the OpenSHMEM reference implementation uses GASNet and UCCS for network operations. Recently, new network layers have emerged with the promise of providing high performance, scalability, and portability for HPC applications. In this paper, we adapt the OpenSHMEM reference implementation to use the UCX framework for network operations. Then, we evaluate its performance and scalability on Cray XK systems to understand UCX's suitability for developing the OpenSHMEM programming model. Further, we develop a benchmark called SHOMS for evaluating the OpenSHMEM implementation. Our experimental results show that OpenSHMEM-UCX outperforms the vendor-supplied OpenSHMEM implementation in most cases on the Cray XK system, by up to 40% with respect to message rate and up to 70% for the execution of application kernels.

  11. Performance of the Cray T3D and Emerging Architectures on Canopy QCD Applications

    NASA Astrophysics Data System (ADS)

    Fischler, Mark; Uchima, Mike

    1996-03-01

    The Cray T3D, an MIMD system with NUMA shared memory capabilities and in principle very low communications latency, can support the Canopy framework for grid-oriented applications. Canopy has been ported to the T3D, with the intent of making it available to a spectrum of users. The performance of the T3D running Canopy has been benchmarked on five QCD applications extensively run on ACPMAPS at Fermilab, requiring a variety of data access patterns. The net performance and scaling behavior reveal an efficiency relative to peak Gflops almost identical to that achieved on ACPMAPS. Detailed studies of the major factors impacting performance are presented. Generalizations applying this analysis to the newly emerging crop of commercial systems reveal where their limitations will lie. On these applications, efficiencies of above 25% are not to be expected; eliminating overheads due to Canopy will improve matters, but by less than a factor of two.

  12. Performance of the Cray T3D and emerging architectures on canopy QCD applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fischler, M.; Uchima, M.

    1995-11-01

    The Cray T3D, an MIMD system with NUMA shared memory capabilities and in principle very low communications latency, can support the Canopy framework for grid-oriented applications. Canopy has been ported to the T3D, with the intent of making it available to a spectrum of users. The performance of the T3D running Canopy has been benchmarked on five QCD applications extensively run on ACPMAPS at Fermilab, requiring a variety of data access patterns. The net performance and scaling behavior reveal an efficiency relative to peak Gflops almost identical to that achieved on ACPMAPS. Detailed studies of the major factors impacting performance are presented. Generalizations applying this analysis to the newly emerging crop of commercial systems reveal where their limitations will lie. On these applications, efficiencies of above 25% are not to be expected; eliminating overheads due to Canopy will improve matters, but by less than a factor of two.

  13. TRASYS - THERMAL RADIATION ANALYZER SYSTEM (DEC VAX VERSION WITH NASADIG)

    NASA Technical Reports Server (NTRS)

    Anderson, G. E.

    1994-01-01

    The Thermal Radiation Analyzer System, TRASYS, is a computer software system with generalized capability to solve the radiation related aspects of thermal analysis problems. TRASYS computes the total thermal radiation environment for a spacecraft in orbit. The software calculates internode radiation interchange data as well as incident and absorbed heat rate data originating from environmental radiant heat sources. TRASYS provides data of both types in a format directly usable by such thermal analyzer programs as SINDA/FLUINT (available from COSMIC, program number MSC-21528). One primary feature of TRASYS is that it allows users to write their own driver programs to organize and direct the preprocessor and processor library routines in solving specific thermal radiation problems. The preprocessor first reads and converts the user's geometry input data into the form used by the processor library routines. Then, the preprocessor accepts the user's driving logic, written in the TRASYS modified FORTRAN language. In many cases, the user has a choice of routines to solve a given problem. Users may also provide their own routines where desirable. In particular, the user may write output routines to provide for an interface between TRASYS and any thermal analyzer program using the R-C network concept. Input to the TRASYS program consists of Options and Edit data, Model data, and Logic Flow and Operations data. Options and Edit data provide for basic program control and user edit capability. The Model data describe the problem in terms of geometry and other properties. This information includes surface geometry data, documentation data, nodal data, block coordinate system data, form factor data, and flux data. Logic Flow and Operations data house the user's driver logic, including the sequence of subroutine calls and the subroutine library. Output from TRASYS consists of two basic types of data: internode radiation interchange data, and incident and absorbed heat rate data. The flexible structure of TRASYS allows considerable freedom in the definition and choice of solution method for a thermal radiation problem. The program's flexible structure has also allowed TRASYS to retain the same basic input structure as the authors update it in order to keep up with changing requirements. Among its other important features are the following: 1) up to 3200 node problem size capability with shadowing by intervening opaque or semi-transparent surfaces; 2) choice of diffuse, specular, or diffuse/specular radiant interchange solutions; 3) a restart capability that minimizes recomputing; 4) macroinstructions that automatically provide the executive logic for orbit generation that optimizes the use of previously completed computations; 5) a time variable geometry package that provides automatic pointing of the various parts of an articulated spacecraft and an automatic look-back feature that eliminates redundant form factor calculations; 6) capability to specify submodel names to identify sets of surfaces or components as an entity; and 7) subroutines to perform functions which save and recall the internodal and/or space form factors in subsequent steps for nodes with fixed geometry during a variable geometry run. There are two machine versions of TRASYS v27: a DEC VAX version and a Cray UNICOS version. Both versions require installation of the NASADIG library (MSC-21801 for DEC VAX or COS-10049 for CRAY), which is available from COSMIC either separately or bundled with TRASYS. 
The NASADIG (NASA Device Independent Graphics Library) plot package provides a pictorial representation of input geometry, orbital/orientation parameters, and heating rate output as a function of time. NASADIG supports Tektronix terminals. The CRAY version of TRASYS v27 is written in FORTRAN 77 for batch or interactive execution and has been implemented on CRAY X-MP and CRAY Y-MP series computers running UNICOS. The standard distribution medium for MSC-21959 (CRAY version without NASADIG) is a 1600 BPI 9-track magnetic tape in UNIX tar format. The standard distribution medium for COS-10040 (CRAY version with NASADIG) is a set of two 6250 BPI 9-track magnetic tapes in UNIX tar format. Alternate distribution media and formats are available upon request. The DEC VAX version of TRASYS v27 is written in FORTRAN 77 for batch execution (only the plotting driver program is interactive) and has been implemented on a DEC VAX 8650 computer under VMS. Since the source codes for MSC-21030 and COS-10026 are in VAX/VMS text library files and DEC Command Language files, COSMIC will only provide these programs in the following formats: MSC-21030, TRASYS (DEC VAX version without NASADIG) is available on a 1600 BPI 9-track magnetic tape in VAX BACKUP format (standard distribution medium) or in VAX BACKUP format on a TK50 tape cartridge; COS-10026, TRASYS (DEC VAX version with NASADIG), is available in VAX BACKUP format on a set of three 6250 BPI 9-track magnetic tapes (standard distribution medium) or a set of three TK50 tape cartridges in VAX BACKUP format. TRASYS was last updated in 1993.

  14. TRASYS - THERMAL RADIATION ANALYZER SYSTEM (DEC VAX VERSION WITHOUT NASADIG)

    NASA Technical Reports Server (NTRS)

    Vogt, R. A.

    1994-01-01

    The Thermal Radiation Analyzer System, TRASYS, is a computer software system with generalized capability to solve the radiation related aspects of thermal analysis problems. TRASYS computes the total thermal radiation environment for a spacecraft in orbit. The software calculates internode radiation interchange data as well as incident and absorbed heat rate data originating from environmental radiant heat sources. TRASYS provides data of both types in a format directly usable by such thermal analyzer programs as SINDA/FLUINT (available from COSMIC, program number MSC-21528). One primary feature of TRASYS is that it allows users to write their own driver programs to organize and direct the preprocessor and processor library routines in solving specific thermal radiation problems. The preprocessor first reads and converts the user's geometry input data into the form used by the processor library routines. Then, the preprocessor accepts the user's driving logic, written in the TRASYS modified FORTRAN language. In many cases, the user has a choice of routines to solve a given problem. Users may also provide their own routines where desirable. In particular, the user may write output routines to provide for an interface between TRASYS and any thermal analyzer program using the R-C network concept. Input to the TRASYS program consists of Options and Edit data, Model data, and Logic Flow and Operations data. Options and Edit data provide for basic program control and user edit capability. The Model data describe the problem in terms of geometry and other properties. This information includes surface geometry data, documentation data, nodal data, block coordinate system data, form factor data, and flux data. Logic Flow and Operations data house the user's driver logic, including the sequence of subroutine calls and the subroutine library. Output from TRASYS consists of two basic types of data: internode radiation interchange data, and incident and absorbed heat rate data. The flexible structure of TRASYS allows considerable freedom in the definition and choice of solution method for a thermal radiation problem. The program's flexible structure has also allowed TRASYS to retain the same basic input structure as the authors update it in order to keep up with changing requirements. Among its other important features are the following: 1) up to 3200 node problem size capability with shadowing by intervening opaque or semi-transparent surfaces; 2) choice of diffuse, specular, or diffuse/specular radiant interchange solutions; 3) a restart capability that minimizes recomputing; 4) macroinstructions that automatically provide the executive logic for orbit generation that optimizes the use of previously completed computations; 5) a time variable geometry package that provides automatic pointing of the various parts of an articulated spacecraft and an automatic look-back feature that eliminates redundant form factor calculations; 6) capability to specify submodel names to identify sets of surfaces or components as an entity; and 7) subroutines to perform functions which save and recall the internodal and/or space form factors in subsequent steps for nodes with fixed geometry during a variable geometry run. There are two machine versions of TRASYS v27: a DEC VAX version and a Cray UNICOS version. Both versions require installation of the NASADIG library (MSC-21801 for DEC VAX or COS-10049 for CRAY), which is available from COSMIC either separately or bundled with TRASYS. 
The NASADIG (NASA Device Independent Graphics Library) plot package provides a pictorial representation of input geometry, orbital/orientation parameters, and heating rate output as a function of time. NASADIG supports Tektronix terminals. The CRAY version of TRASYS v27 is written in FORTRAN 77 for batch or interactive execution and has been implemented on CRAY X-MP and CRAY Y-MP series computers running UNICOS. The standard distribution medium for MSC-21959 (CRAY version without NASADIG) is a 1600 BPI 9-track magnetic tape in UNIX tar format. The standard distribution medium for COS-10040 (CRAY version with NASADIG) is a set of two 6250 BPI 9-track magnetic tapes in UNIX tar format. Alternate distribution media and formats are available upon request. The DEC VAX version of TRASYS v27 is written in FORTRAN 77 for batch execution (only the plotting driver program is interactive) and has been implemented on a DEC VAX 8650 computer under VMS. Since the source codes for MSC-21030 and COS-10026 are in VAX/VMS text library files and DEC Command Language files, COSMIC will only provide these programs in the following formats: MSC-21030, TRASYS (DEC VAX version without NASADIG) is available on a 1600 BPI 9-track magnetic tape in VAX BACKUP format (standard distribution medium) or in VAX BACKUP format on a TK50 tape cartridge; COS-10026, TRASYS (DEC VAX version with NASADIG), is available in VAX BACKUP format on a set of three 6250 BPI 9-track magnetic tapes (standard distribution medium) or a set of three TK50 tape cartridges in VAX BACKUP format. TRASYS was last updated in 1993.

  15. Sparse Bayesian Information Filters for Localization and Mapping

    DTIC Science & Technology

    2008-02-01

    a set of smaller, more manageable maps [76, 51, 139, 77, 12]. These appropriately-named submap algorithms greatly reduce the effects of map size on... An intuitive way of dealing with this limitation is to divide the world into numerous sub-environments, each comprised of a more manageable number of... $p(x_t, M \mid z^t, u^t) = p(M \mid x_t, z^t) \cdot p(x_t \mid z^t, u^t)$ (2.16) This assumes knowledge of the mean, which is necessary for observations that are

  16. The extent of aortic annulus calcification is a predictor of postprocedural eccentricity and paravalvular regurgitation: a pre- and postinterventional cardiac computed tomography angiography study.

    PubMed

    Bekeredjian, Raffi; Bodingbauer, Dorothea; Hofmann, Nina P; Greiner, Sebastian; Schuetz, Moritz; Geis, Nicolas A; Kauczor, Hans U; Bryant, Mark; Chorianopoulos, Emmanuel; Pleger, Sven T; Mereles, Derliz; Katus, Hugo A; Korosoglou, Grigorios

    2015-03-01

    To investigate whether the extent of aortic valve calcification is associated with postprocedural prosthesis eccentricity and paravalvular regurgitation (PAR) in patients undergoing transcatheter aortic valve implantation (TAVI). Cardiac computed tomography angiography (CCTA) was performed before and 3 months after TAVI in 46 patients who received the self-expanding CoreValve and in 22 patients who underwent balloon-expandable Edwards Sapien XT implantation. Aortic annulus calcification was measured with CCTA prior to TAVI and prosthesis eccentricity was assessed with post-TAVI CCTA. Standard echocardiography was also performed in all patients at the 3-month follow-up exam. Annulus eccentricity was reduced during TAVI using both implantation systems (from 0.23 ± 0.06 to 0.18 ± 0.07 using CoreValve and from 0.20 ± 0.07 to 0.05 ± 0.03 using Edwards Sapien XT; P<.001 for both). With Edwards Sapien XT, eccentricity reduction at the level of the aortic annulus was significantly higher compared with CoreValve (P<.001). Annulus eccentricity after CoreValve use was significantly related to absolute valve calcification and to valve calcification indexed to body surface area (BSA) (r = 0.48 and 0.50, respectively; P<.001 for both). Furthermore, a significant association was observed between aortic valve calcification and PAR (P<.01 by ANOVA) in patients who received CoreValve. Using ROC analysis, a cut-off value over 913 mm² aortic valve calcification predicted the occurrence of moderate or severe PAR with a sensitivity of 92% and a specificity of 63% (area under the curve = 0.75). Furthermore, multivariable analysis showed that aortic valve calcification was a robust predictor of postprocedural eccentricity and PAR, independent of the aortic annulus size and native valve eccentricity and of CoreValve prosthesis size (adjusted r = 0.46 and 0.50, respectively; P<.01 for both). Such associations were not present with the Edwards Sapien XT system. The extent of native aortic annulus calcification is predictive for postprocedural prosthesis eccentricity and PAR, which is an important marker for long-term mortality in patients undergoing TAVI. This observation applies to the CoreValve, but not to the Edwards Sapien XT valve.

  17. Demonstration of Cost-Effective, High-Performance Computing at Performance and Reliability Levels Equivalent to a 1994 Vector Supercomputer

    NASA Technical Reports Server (NTRS)

    Babrauckas, Theresa

    2000-01-01

    The Affordable High Performance Computing (AHPC) project demonstrated that high-performance computing based on a distributed network of computer workstations is a cost-effective alternative to vector supercomputers for running CPU- and memory-intensive design and analysis tools. The AHPC project created an integrated system called a Network Supercomputer. By connecting computer workstations through a network and utilizing the workstations when they are idle, the resulting distributed-workstation environment has the same performance and reliability levels as the Cray C90 vector supercomputer at less than 25 percent of the C90 cost. In fact, the cost comparison between a Cray C90 supercomputer and Sun workstations showed that the number of distributed networked workstations equivalent to a C90 costs approximately 8 percent of the C90.

  18. Prediction of the Ignition Phases in Aeronautical and Laboratory Burners using Large Eddy Simulations

    NASA Astrophysics Data System (ADS)

    Gicquel, L. Y. M.; Staffelbach, G.; Sanjose, M.; Boileau, M.

    2009-12-01

    Being able to ignite or reignite a gas turbine engine in a cold and rarefied atmosphere is a critical issue for many aeronautical gas turbine manufacturers. From a fundamental point of view, the ignition of the first burner and the flame propagation from one burner to another are two phenomena that are usually not studied. The present work presents ongoing and past Large Eddy Simulations (LES) of this specific subject as investigated at CERFACS (European Centre for Research and Advanced Training in Scientific Computation) located in Toulouse, France. Validation steps and potential difficulties are underlined to ensure reliability of LES for such problems. Preliminary LES results on simple burners are then presented, followed by simulations of a complete ignition sequence in an annular helicopter chamber. For all cases and when possible, two-phase or purely gaseous LES have been applied to the experimentally simplified or the full geometries. For the latter, massively parallel computing (700 processors on a Cray XT3 machine) was essential to perform the computation. Results show that liquid fuel injection has a strong influence on the ignition times and the rate at which the flame progresses from burner to burner. The propagation speed characteristic of these phenomena is much higher than the turbulent flame speed. Based on an in-depth analysis of the computational data, the difference in speed is mainly identified as being due to thermal expansion, and the flame speed is strongly modified by the main burner aerodynamics induced by the swirled injection.

  19. Dynamic overset grid communication on distributed memory parallel processors

    NASA Technical Reports Server (NTRS)

    Barszcz, Eric; Weeratunga, Sisira K.; Meakin, Robert L.

    1993-01-01

    A parallel distributed memory implementation of intergrid communication for dynamic overset grids is presented. Included are discussions of various options considered during development. Results are presented comparing an Intel iPSC/860 to a single processor Cray Y-MP. Results for grids in relative motion show the iPSC/860 implementation to be faster than the Cray implementation.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Painter, J.; McCormick, P.; Krogh, M.

    This paper presents the ACL (Advanced Computing Lab) Message Passing Library. It is a high throughput, low latency communications library, based on Thinking Machines Corp.'s CMMD, upon which message passing applications can be built. The library has been implemented on the Cray T3D, Thinking Machines CM-5, SGI workstations, and on top of PVM.

  1. Microcomputer-Based Genetics Office Database System

    PubMed Central

    Cutts, James H.; Mitchell, Joyce A.

    1985-01-01

    A database management system (Genetics Office Automation System, GOAS) has been developed for the Medical Genetics Unit of the University of Missouri. The system, which records patients' visits to the Unit's genetic and prenatal clinics, has been implemented on an IBM PC/XT microcomputer. A description of the system, the reasons for implementation, its databases, and uses are presented.

  2. Application of Strep-Tactin XT for affinity purification of Twin-Strep-tagged CB2, a G protein-coupled cannabinoid receptor

    PubMed Central

    Yeliseev, Alexei; Zoubak, Lioudmila; Schmidt, Thomas G.M.

    2017-01-01

    Human cannabinoid receptor CB2 belongs to class A of the G protein-coupled receptors (GPCRs). High resolution structural studies of CB2 require milligram quantities of purified, structurally intact protein. Here we describe an efficient protocol for purification of this protein using the Twin-Strep-tag/Strep-Tactin XT system. To improve the affinity of interaction of the recombinant CB2 with the resin, the double repeat of the Strep-tag was attached either to the N- or C-terminus of CB2 via a short linker. The CB2 was isolated at high purity from dilute solutions containing high concentrations of detergents, glycerol and salts, by capturing onto the Strep-Tactin XT resin, and was eluted from the resin under mild conditions upon addition of biotin. Surface plasmon resonance studies demonstrated the high affinity of interaction between the Twin-Strep-tag fused to the CB2 and Strep-Tactin XT, with an estimated Kd in the low nanomolar range. The affinity of binding did not vary significantly in response to the position of the tag at either the N- or C-terminus of the fusion. Variation in the length of the linker between the double repeats of the Strep-tag from 6 to 12 amino acid residues did not significantly affect the binding. The novel purification protocol reported here enables efficient isolation of a recombinant GPCR expressed at low titers in host cells. This procedure is suitable for preparation of milligram quantities of stable isotope-labelled receptor for high-resolution NMR studies. PMID:27867058

  3. Three-Dimensional High-Lift Analysis Using a Parallel Unstructured Multigrid Solver

    NASA Technical Reports Server (NTRS)

    Mavriplis, Dimitri J.

    1998-01-01

    A directional implicit unstructured agglomeration multigrid solver is ported to shared and distributed memory massively parallel machines using the explicit domain-decomposition and message-passing approach. Because the algorithm operates on local implicit lines in the unstructured mesh, special care is required in partitioning the problem for parallel computing. A weighted partitioning strategy is described which avoids breaking the implicit lines across processor boundaries, while incurring minimal additional communication overhead. Good scalability is demonstrated on a 128 processor SGI Origin 2000 machine and on a 512 processor CRAY T3E machine for reasonably fine grids. The feasibility of performing large-scale unstructured grid calculations with the parallel multigrid algorithm is demonstrated by computing the flow over a partial-span flap wing high-lift geometry on a highly resolved grid of 13.5 million points in approximately 4 hours of wall clock time on the CRAY T3E.

  4. Beyond the Face of Race: Emo-Cognitive Explorations of White Neurosis and Racial Cray-Cray

    ERIC Educational Resources Information Center

    Matias, Cheryl E.; DiAngelo, Robin

    2013-01-01

    In this article, the authors focus on the emotional and cognitive context that underlies whiteness. They employ interdisciplinary approaches of critical Whiteness studies and critical race theory to entertain how common White responses to racial material stem from the need for Whites to deny race, a traumatizing process that begins in childhood.…

  5. Implementing dense linear algebra algorithms using multitasking on the CRAY X-MP-4 (or approaching the gigaflop)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dongarra, J.J.; Hewitt, T.

    1985-08-01

    This note describes some experiments on simple, dense linear algebra algorithms. These experiments show that the CRAY X-MP is capable of small-grain multitasking arising from standard implementations of LU and Cholesky decomposition. The implementation described here provides the "fastest" execution rate for LU decomposition, 718 MFLOPS for a matrix of order 1000.
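    As a quick sanity check on that figure: dense LU factorization of an order-n matrix costs about 2n^3/3 floating-point operations, so the quoted 718 MFLOPS implies a factorization time just under one second. A minimal sketch of the arithmetic (the flop-count formula is the standard one; only the two numbers come from the abstract):

```python
# Estimate LU factorization time from the reported rate.
n = 1000                      # matrix order, as in the abstract
flops = 2 * n**3 / 3          # ~6.7e8 operations for dense LU
rate = 718e6                  # 718 MFLOPS, the reported execution rate
print(f"estimated LU time: {flops / rate:.2f} s")   # ~0.93 s
```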

  6. Early MIMD experience on the CRAY X-MP

    NASA Astrophysics Data System (ADS)

    Rhoades, Clifford E.; Stevens, K. G.

    1985-07-01

    This paper describes some early experience with converting four physics simulation programs to the CRAY X-MP, a current Multiple Instruction, Multiple Data (MIMD) computer consisting of two processors each with an architecture similar to that of the CRAY-1. As a multi-processor, the CRAY X-MP together with the high speed Solid-state Storage Device (SSD) is an ideal machine upon which to study MIMD algorithms for solving the equations of mathematical physics because it is fast enough to run real problems. The computer programs used in this study are all FORTRAN versions of original production codes. They range in sophistication from a one-dimensional numerical simulation of collisionless plasma to a two-dimensional hydrodynamics code with heat flow to a couple of three-dimensional fluid dynamics codes with varying degrees of viscous modeling. Early research with a dual processor configuration has shown speed-ups ranging from 1.55 to 1.98. It has been observed that a few simple extensions to FORTRAN allow a typical programmer to achieve a remarkable level of efficiency. These extensions involve the concept of memory local to a concurrent subprogram and memory common to all concurrent subprograms.

  7. System and method for constructing filters for detecting signals whose frequency content varies with time

    DOEpatents

    Qian, Shie; Dunham, Mark E.

    1996-01-01

    A system and method for constructing a bank of filters which detect the presence of signals whose frequency content varies with time. The present invention includes a novel system and method for developing one or more time templates designed to match the received signals of interest; the bank of matched filters uses the one or more time templates to detect the received signals. Each matched filter compares the received signal x(t) with a respective, unique time template that has been designed to approximate a form of the signals of interest. The robust time domain template is assumed to be of the form w(t) = A(t)cos{2πφ(t)}, and the present invention uses the trajectory of a joint time-frequency representation of x(t) as an approximation of the instantaneous frequency function φ'(t). First, numerous data samples of the received signal x(t) are collected. A joint time-frequency representation is then applied to represent the signal, preferably using the time-frequency distribution series (also known as the Gabor spectrogram). The joint time-frequency transformation represents the analyzed signal energy at time t and frequency f as P(t,f), a three-dimensional plot of time vs. frequency vs. signal energy. Then P(t,f) is reduced to a multivalued function f(t), a two-dimensional plot of time vs. frequency, using a thresholding process. Curve fitting steps are then performed on the time/frequency plot, preferably using Levenberg-Marquardt curve fitting techniques, to derive a general instantaneous frequency function φ'(t) which best fits the multivalued function f(t), the trajectory of the joint time-frequency domain representation of x(t). Integrating φ'(t) along t yields φ(t), which is then inserted into the form of the time template equation. A suitable amplitude A(t) is also preferably determined. Once the time template has been determined, one or more filters are developed which each use a version or form of the time template.
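    A hedged sketch of the template-construction pipeline the patent describes: estimate the instantaneous frequency φ'(t) from a joint time-frequency representation, fit a smooth curve to its ridge, integrate to obtain the phase φ(t), and form w(t) = A(t)cos(2πφ(t)). A plain STFT spectrogram stands in below for the Gabor spectrogram, and a polynomial fit for the Levenberg-Marquardt step; the chirp test signal, thresholds, and names are illustrative, not from the patent.

```python
import numpy as np
from scipy.signal import chirp, spectrogram

fs = 1024.0
t = np.arange(0, 2.0, 1.0 / fs)
x = chirp(t, f0=50.0, f1=200.0, t1=2.0)        # stand-in "signal of interest"

# Joint time-frequency representation P(t, f) of the received signal
f, tt, P = spectrogram(x, fs=fs, nperseg=128, noverlap=96)

# Threshold P(t, f) down to a single-valued ridge f(t): keep, per time slice,
# the peak-energy frequency bin wherever the slice carries significant energy.
mask = P.max(axis=0) > 0.1 * P.max()
ridge_t, ridge_f = tt[mask], f[P.argmax(axis=0)][mask]

# Curve-fit a smooth instantaneous-frequency function phi'(t) to the ridge.
phi_prime = np.polyval(np.polyfit(ridge_t, ridge_f, 2), t)

# Integrate phi'(t) to get the phase phi(t); take A(t) = 1; form the template.
phi = np.cumsum(phi_prime) / fs
w = np.ones_like(t) * np.cos(2.0 * np.pi * phi)

# Matched filtering: correlate the received signal against the template.
print("peak matched-filter output:", np.correlate(x, w, mode="valid").max())
```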

  8. Optimization of Supercomputer Use on EADS II System

    NASA Technical Reports Server (NTRS)

    Ahmed, Ardsher

    1998-01-01

    The main objective of this research was to optimize supercomputer use to achieve better throughput and utilization of supercomputers and to help facilitate the movement of non-supercomputing (inappropriate for supercomputer) codes to mid-range systems for better use of Government resources at Marshall Space Flight Center (MSFC). This work involved the survey of architectures available on EADS II and monitoring customer (user) applications running on a CRAY T90 system.

  9. First Detection of the Hatchett-McCray Effect in the High-Mass X-ray Binary

    NASA Technical Reports Server (NTRS)

    Sonneborn, G.; Iping, R. C.; Kaper, L.; Hammerschlag-Hensberge, G.; Hutchings, J. B.

    2004-01-01

    The orbital modulation of stellar wind UV resonance line profiles as a result of ionization of the wind by the X-ray source has been observed in the high-mass X-ray binary 4U1700-37/HD 153919 for the first time. Far-UV observations (905-1180 Angstroms, resolution 0.05 Angstroms) were made at the four quadrature points of the binary orbit with the Far Ultraviolet Spectroscopic Explorer (FUSE) in 2003 April and August. The O6.5 Iaf primary eclipses the X-ray source (neutron star or black hole) with a 3.41-day period. Orbital modulation of the UV resonance lines, resulting from X-ray photoionization of the dense stellar wind, the so-called Hatchett-McCray (HM) effect, was predicted for 4U1700-37/HD153919 (Hatchett & McCray 1977, ApJ, 211, 522) but was not seen in N V 1240, Si IV 1400, or C IV 1550 in IUE and HST spectra. The FUSE spectra show that the P V 1118-1128 and S IV 1063-1073 P-Cygni lines appear to vary as expected for the HM effect, weakest at phase 0.5 (X-ray source conjunction) and strongest at phase 0.0 (X-ray source eclipse). The phase modulation of the O VI 1032-1037 lines, however, is opposite to P V and S IV, implying that O VI may be a byproduct of the wind's ionization by the X-ray source. Such variations were not observed in N V, Si IV, and C IV because of their high optical depth. Due to their lower cosmic abundance, the P V and S IV wind lines are unsaturated, making them excellent tracers of the ionization conditions in the O star's wind.

  10. Experiences and results multitasking a hydrodynamics code on global and local memory machines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mandell, D.

    1987-01-01

    A one-dimensional, time-dependent Lagrangian hydrodynamics code using a Godunov solution method has been multitasked for the Cray X-MP/48, the Intel iPSC hypercube, the Alliant FX series and the IBM RP3 computers. Actual multitasking results have been obtained for the Cray, Intel and Alliant computers and simulated results were obtained for the Cray and RP3 machines. The differences in the methods required to multitask on each of the machines are discussed. Results are presented for a sample problem involving a shock wave moving down a channel. Comparisons are made between theoretical speedups, predicted by Amdahl's law, and the actual speedups obtained. The problems of debugging on the different machines are also described.
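    For reference, the Amdahl's-law ceiling that the report compares its measured speedups against is easy to reproduce. A minimal sketch (the parallel fractions and processor counts below are illustrative, not the report's measurements):

```python
def amdahl_speedup(p: float, n: int) -> float:
    """Theoretical speedup when a fraction p of the work runs on n processors."""
    return 1.0 / ((1.0 - p) + p / n)

# Illustrative: even a small serial fraction caps the attainable speedup.
for p in (0.90, 0.95, 0.99):
    print(f"parallel fraction {p:.2f}: "
          f"2 CPUs -> {amdahl_speedup(p, 2):.2f}x, "
          f"4 CPUs -> {amdahl_speedup(p, 4):.2f}x")
```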

  11. Role of TiF4 in Microleakage of Silorane and Methacrylate-based Composite Resins in Class V Cavities.

    PubMed

    Koohpeima, Fatemeh; Sharafeddin, Farahnaz; Jowkar, Zahra; Ahmadzadeh, Samaneh; Mokhtari, Mohammad Javad; Azarian, Babak

    2016-03-01

    This study investigated the effect of TiF4 solution pretreatment on microleakage of silorane and nanofilled methacrylate-based composites in class V cavities. Forty-eight intact premolar teeth were randomly allocated to four groups of 12 teeth. Restorative techniques after standard class V tooth preparations were as follows: Group 1, Filtek P90 composite; group 2, Filtek Z350 XT; group 3, TiF4 solution pretreatment and Filtek P90 composite; group 4, TiF4 solution pretreatment and Filtek Z350 XT. After storage of the specimens in distilled water at 37°C for 24 hours, followed by immersion in a 0.5% basic-fuchsin solution for 24 hours, they were sectioned buccolingually to obtain four surfaces per specimen for analysis of microleakage using a stereomicroscope. Data analysis was performed using the Kruskal-Wallis test to compare the four groups and the Mann-Whitney test for paired comparisons, with Statistical Package for the Social Sciences (SPSS) version 17 software. At the enamel margins, the microleakage score of the Filtek Z350 XT group was lower than those of Filtek P90 with and without the application of TiF4 (p = 0.009 and p = 0.031, respectively). At the dentin margins, groups 3 and 4 (TiF4+Filtek P90 and TiF4+Filtek Z350 XT, respectively) showed significantly lower microleakage than group 1 (Filtek P90). However, there was no significant difference between other groups (p > 0.05). At the enamel margins, the microleakage score of the silorane-based composite was higher than that of the nanofilled composite. No significant differences were observed between the other groups. At the dentin margins, for the silorane-based composite restorations, TiF4 solution pretreatment resulted in significantly lower microleakage. However, a similar result was not observed for Filtek Z350 XT. Also, no significant difference was observed between microleakage scores of Filtek P90 and Filtek Z350 XT with or without TiF4 pretreatment. Although modern composites have better mechanical and physical properties than earlier methacrylate-based composites, polymerization shrinkage remains one of their main shortcomings. Different methods, such as using new low-shrinkage resin composites and different dentin pretreatments, have been suggested to overcome this problem. This study evaluated the effect of TiF4 pretreatment on microleakage of class V tooth preparations restored with a nanocomposite and a silorane-based resin composite.
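    A hedged sketch of the nonparametric analysis described above, a Kruskal-Wallis test across the four restoration groups followed by pairwise Mann-Whitney comparisons, using SciPy in place of SPSS. The microleakage scores below are made-up placeholders, not the study's data.

```python
from itertools import combinations
from scipy.stats import kruskal, mannwhitneyu

# Placeholder ordinal microleakage scores for the four groups (illustrative).
groups = {
    "P90":       [2, 3, 2, 3, 1, 2],
    "Z350XT":    [1, 1, 2, 1, 0, 1],
    "TiF4+P90":  [1, 0, 1, 1, 0, 1],
    "TiF4+Z350": [1, 1, 0, 1, 1, 0],
}

h, p = kruskal(*groups.values())            # omnibus four-group comparison
print(f"Kruskal-Wallis: H = {h:.2f}, p = {p:.4f}")

for (name_a, a), (name_b, b) in combinations(groups.items(), 2):
    u, p = mannwhitneyu(a, b, alternative="two-sided")   # pairwise follow-up
    print(f"{name_a} vs {name_b}: U = {u:.1f}, p = {p:.4f}")
```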

  12. FORTRAN multitasking library for use on the ELXSI 6400 and the CRAY XMP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Montry, G.R.

    1985-07-16

    A library of FORTRAN-based multitasking routines has been written for the ELXSI 6400 and the CRAY XMP. This library is designed to make multitasking codes easily transportable between machines with different hardware configurations. The library provides enhanced error checking and diagnostics over vendor-supplied multitasking intrinsics. The library also contains multitasking control structures not normally supplied by the vendor.

  13. The open quantum Brownian motions

    NASA Astrophysics Data System (ADS)

    Bauer, Michel; Bernard, Denis; Tilloy, Antoine

    2014-09-01

    Using quantum parallelism on random walks as the original seed, we introduce new quantum stochastic processes, the open quantum Brownian motions. They describe the behaviors of quantum walkers—with internal degrees of freedom which serve as random gyroscopes—interacting with a series of probes which serve as quantum coins. These processes may also be viewed as the scaling limit of open quantum random walks, and we develop this approach along three different lines: the quantum trajectory, the quantum dynamical map and the quantum stochastic differential equation. We also present a study of the simplest case, with a two-level system as an internal gyroscope, illustrating the interplay between the ballistic and diffusive behaviors at work in these processes. Notation:
- $\mathcal{H}_z$: orbital (walker) Hilbert space, $\mathbb{C}^{\mathbb{Z}}$ in the discrete case, $L^2(\mathbb{R})$ in the continuum;
- $\mathcal{H}_c$: internal spin (or gyroscope) Hilbert space;
- $\mathcal{H}_{\mathrm{sys}} = \mathcal{H}_z \otimes \mathcal{H}_c$: system Hilbert space;
- $\mathcal{H}_p$: probe (or quantum coin) Hilbert space, $\mathcal{H}_p = \mathbb{C}^2$;
- $\rho^{\mathrm{tot}}_t$: density matrix for the total system (walker + internal spin + quantum coins);
- $\bar\rho_t$: reduced density matrix on $\mathcal{H}_{\mathrm{sys}}$: $\bar\rho_t = \int dx\,dy\; \bar\rho_t(x,y) \otimes |x\rangle_z\langle y|$;
- $\hat\rho_t$: system density matrix in a quantum trajectory: $\hat\rho_t = \int dx\,dy\; \hat\rho_t(x,y) \otimes |x\rangle_z\langle y|$; if diagonal and localized in position, $\hat\rho_t = \rho_t \otimes |X_t\rangle_z\langle X_t|$;
- $\rho_t$: internal density matrix in a simple quantum trajectory;
- $X_t$: walker position in a simple quantum trajectory;
- $B_t$: normalized Brownian motion;
- $\xi_t$, $\xi_t^\dagger$: quantum noises.

  14. System Engineering Concept Demonstration, Interface Standards Studies. Volume 4

    DTIC Science & Technology

    1992-12-01

    Xerox's Palo Alto Research Center (PARC) begat the Xerox Star; Steve Jobs visited PARC, saw the Star, went back to Apple, and begat the Mac. But... Author of Adobe Systems, PostScript Language Program Design, has left Adobe to join Steve Jobs' NeXT, Inc. Reid worked for Adobe Systems for four and a

  15. Gli3-mediated somitic Fgf10 expression gradients are required for the induction and patterning of mammary epithelium along the embryonic axes.

    PubMed

    Veltmaat, Jacqueline M; Relaix, Frédéric; Le, Lendy T; Kratochwil, Klaus; Sala, Frédéric G; van Veelen, Wendy; Rice, Ritva; Spencer-Dene, Bradley; Mailleux, Arnaud A; Rice, David P; Thiery, Jean Paul; Bellusci, Saverio

    2006-06-01

    Little is known about the regulation of cell fate decisions that lead to the formation of five pairs of mammary placodes in the surface ectoderm of the mouse embryo. We have previously shown that fibroblast growth factor 10 (FGF10) is required for the formation of mammary placodes 1, 2, 3 and 5. Here, we have found that Fgf10 is expressed only in the somites underlying placodes 2 and 3, in gradients across and within these somites. To test whether somitic FGF10 is required for the formation of these two placodes, we analyzed a number of mutants with different perturbations of somitic Fgf10 gradients for the presence of WNT signals and ectodermal multilayering, markers for mammary line and placode formation. The mammary line is displaced dorsally, and formation of placode 3 is impaired in Pax3ILZ/ILZ mutants, which do not form ventral somitic buds. Mammary line formation is impaired and placode 3 is absent in Gli3Xt-J/Xt-J and hypomorphic Fgf10 mutants, in which the somitic Fgf10 gradient is shortened dorsally and less overall Fgf10 is expressed, respectively. Recombinant FGF10 rescued mammogenesis in Fgf10(-/-) and Gli3Xt-J/Xt-J flanks. We correlate increasing levels of somitic FGF10 with progressive maturation of the surface ectoderm, and show that full expression of somitic Fgf10, co-regulated by GLI3, is required for the anteroposterior pattern in which the flank ectoderm acquires a mammary epithelial identity. We propose that the intra-somitic Fgf10 gradient, together with ventral elongation of the somites, determines the correct dorsoventral position of mammary epithelium along the flank.

  16. Efficient Iterative Methods Applied to the Solution of Transonic Flows

    NASA Astrophysics Data System (ADS)

    Wissink, Andrew M.; Lyrintzis, Anastasios S.; Chronopoulos, Anthony T.

    1996-02-01

    We investigate the use of an inexact Newton's method to solve the potential equations in the transonic regime. As a test case, we solve the two-dimensional steady transonic small disturbance equation. Approximate factorization/ADI techniques have traditionally been employed for implicit solutions of this nonlinear equation. Instead, we apply Newton's method using an exact analytical determination of the Jacobian, with preconditioned conjugate gradient-like iterative solvers for solution of the linear systems in each Newton iteration. Two iterative solvers are tested: a block s-step version of the classical Orthomin(k) algorithm called orthogonal s-step Orthomin (OSOmin) and the well-known GMRES method. The preconditioner is a vectorizable and parallelizable version of incomplete LU (ILU) factorization. Efficiency of the Newton-iterative method on vector and parallel computer architectures is the main issue addressed. In vectorized tests on a single processor of the Cray C-90, the performance of Newton-OSOmin is superior to Newton-GMRES and to a more traditional monotone AF/ADI method (MAF) for a variety of transonic Mach numbers and mesh sizes. Newton-GMRES is superior to MAF for some cases. The parallel performance of the Newton method is also found to be very good on multiple processors of the Cray C-90 and on the massively parallel Thinking Machines CM-5, where very fast execution rates (up to 9 Gflops) are found for large problems.
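    A hedged sketch of the Newton-iterative structure described above: each outer Newton step solves J(u)du = -F(u) with GMRES, preconditioned by an incomplete LU factorization. SciPy stands in for the paper's solvers, and a 1-D nonlinear model problem stands in for the 2-D transonic small disturbance equation; all names are illustrative.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, gmres, spilu

n, h = 200, 1.0 / 201

def residual(u):
    """Discrete residual of -u'' + u^3 = 1 with zero Dirichlet boundaries."""
    lap = np.roll(u, 1) + np.roll(u, -1) - 2.0 * u
    lap[0], lap[-1] = u[1] - 2.0 * u[0], u[-2] - 2.0 * u[-1]
    return -lap / h**2 + u**3 - 1.0

def jacobian(u):
    """Exact analytical Jacobian, as advocated in the paper."""
    off = -np.ones(n - 1) / h**2
    return sp.diags([off, 2.0 / h**2 + 3.0 * u**2, off], [-1, 0, 1],
                    format="csc")

u = np.zeros(n)
for it in range(20):
    J = jacobian(u)
    ilu = spilu(J, drop_tol=1e-4)                 # ILU preconditioner
    M = LinearOperator(J.shape, matvec=ilu.solve)
    du, info = gmres(J, -residual(u), M=M)        # inner Krylov solve
    u += du                                       # (inexact) Newton update
    if np.linalg.norm(residual(u)) < 1e-10:
        print(f"converged in {it + 1} Newton iterations")
        break
```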

  17. Approximating Smooth Step Functions Using Partial Fourier Series Sums

    DTIC Science & Technology

    2012-09-01

    interp1(xt(ii), smoothstepbez(t(ii), min(t(ii)), max(t(ii)), 'y'), t(ii), 'linear', 'extrap'); ii = find(abs(t - tau/2) <= epi); ... interp1(xt(ii), smoothstepbez(rt, min(rt), max(rt), 'y'), t(ii), 'linear', 'extrap'); % stepm(ii) = 1 - interp1(xt(ii), smoothstepbez(t(ii), min(t(ii)), max(t(ii)), 'y'), t(ii), 'linear', 'extrap'); In this case, because x is also defined as a function of the independent parameter

  18. Chlorhexidine stabilizes the adhesive interface: a 2 year in vitro study

    PubMed Central

    Breschi, Lorenzo; Mazzoni, Annalisa; Nato, Fernando; Carrilho, Marcela; Visintini, Erika; Tjäderhane, Leo; Ruggeri, Alessandra; Tay, Franklin R; De Stefano Dorigo, Elettra; Pashley, David H

    2013-01-01

    Objectives This study evaluated the role of endogenous dentin MMPs in auto-degradation of collagen fibrils within adhesive-bonded interfaces. The null hypotheses tested were that adhesive blends or chlorhexidine digluconate (CHX) application does not modify dentin MMP activity and that CHX used as a therapeutic primer does not improve the stability of adhesive interfaces over time. Methods Zymograms of protein extracts from human dentin powder incubated with Adper Scotchbond 1XT (SB1XT) on untreated or 0.2–2% CHX-treated dentin were obtained to assay dentin MMP activity. Microtensile bond strength and interfacial nanoleakage expression of SB1XT-bonded interfaces (with or without CHX pre-treatment for 30 s on the etched surface) were analyzed immediately and after 2 yr of storage in artificial saliva at 37°C. Results Zymograms showed that application of SB1XT to human dentin powder increases MMP-2 activity, while CHX pre-treatment inhibited all dentin gelatinolytic activity, irrespective of the tested concentration. CHX significantly lowered the loss of bond strength and nanoleakage seen in acid-etched resin-bonded dentin artificially aged for 2 yr. Significance The study demonstrates the active role of SB1XT in dentin MMP-2 activation and the efficacy of CHX inhibition of MMPs even if used at low concentration (0.2%). PMID:20045177

  19. Multi-Scale Correlative Tomography of a Li-Ion Battery Composite Cathode

    PubMed Central

    Moroni, Riko; Börner, Markus; Zielke, Lukas; Schroeder, Melanie; Nowak, Sascha; Winter, Martin; Manke, Ingo; Zengerle, Roland; Thiele, Simon

    2016-01-01

    Focused ion beam/scanning electron microscopy tomography (FIB/SEMt) and synchrotron X-ray tomography (Xt) are used to investigate the same lithium manganese oxide composite cathode at the same specific spot. This correlative approach allows the investigation of three central issues in the tomographic analysis of composite battery electrodes: (i) Validation of state-of-the-art binary active material (AM) segmentation: Although threshold segmentation by standard algorithms leads to very good segmentation results, limited Xt resolution results in an AM underestimation of 6 vol% and severe overestimation of AM connectivity. (ii) Carbon binder domain (CBD) segmentation in Xt data: While threshold segmentation cannot be applied for this purpose, a suitable classification method is introduced. Based on correlative tomography, it allows for reliable ternary segmentation of Xt data into the pore space, CBD, and AM. (iii) Pore space analysis in the micrometer regime: This segmentation technique is applied to an Xt reconstruction with several hundred microns edge length, thus validating the segmentation of pores within the micrometer regime for the first time. The analyzed cathode volume exhibits a bimodal pore size distribution in the ranges between 0–1 μm and 1–12 μm. These ranges can be attributed to different pore formation mechanisms. PMID:27456201

  20. Scalable Vector Media-processors for Embedded Systems

    DTIC Science & Technology

    2002-05-01

    Set Architecture for Multimedia “When you do the common things in life in an uncommon way, you will command the attention of the world.” George ... Bibliography [ABHS89] M. August, G. Brost, C. Hsiung, and C. Schiffleger. Cray X-MP: The Birth of a Supercomputer. IEEE Computer, 22(1):45–52, January

  1. Investigating the impact of the Cielo Cray XE6 architecture on scientific application codes.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rajan, Mahesh; Barrett, Richard; Pedretti, Kevin Thomas Tauke

    2010-12-01

    Cielo, a Cray XE6, is the Department of Energy NNSA Advanced Simulation and Computing (ASC) campaign's newest capability machine. Rated at 1.37 PFLOPS, it consists of 8,944 dual-socket oct-core AMD Magny-Cours compute nodes, linked using Cray's Gemini interconnect. Its primary mission objective is to enable a suite of the ASC applications implemented using MPI to scale to tens of thousands of cores. Cielo is an evolutionary improvement to a successful architecture previously available to many of our codes, providing a basis for understanding the capabilities of this new architecture. Using three codes strategically important to the ASC campaign, supplemented with some micro-benchmarks that expose the fundamental capabilities of the XE6, we report on the performance characteristics and capabilities of Cielo.

  2. Full speed ahead for software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wolfe, A.

    1986-03-10

    Supercomputing software is moving into high gear, spurred by the rapid spread of supercomputers into new applications. The critical challenge is how to develop tools that will make it easier for programmers to write applications that take advantage of vectorizing in the classical supercomputer and the parallelism that is emerging in supercomputers and minisupercomputers. Writing parallel software is a challenge that every programmer must face because parallel architectures are springing up across the range of computing. Cray is developing a host of tools for programmers. Tools to support multitasking (in supercomputer parlance, multitasking means dividing up a single program to run on multiple processors) are high on Cray's agenda. On tap for multitasking is Premult, dubbed a microtasking tool. As a preprocessor for Cray's CFT77 FORTRAN compiler, Premult will provide fine-grain multitasking.

  3. Experiences with Cray multi-tasking

    NASA Technical Reports Server (NTRS)

    Miya, E. N.

    1985-01-01

    The issues involved in modifying an existing code for multitasking are explored. They include Cray extensions to FORTRAN, an examination of the application code under study, designing workable modifications, specific code modifications to the VAX and Cray versions, and performance and efficiency results. The finished product is a faster, fully synchronous, parallel version of the original program. A production program is partitioned by hand to run on two CPUs. Loop splitting multitasks three key subroutines. Simply dividing subroutine data and control structure down the middle of a subroutine is not safe. Simple division produces results that are inconsistent with uniprocessor runs. The safest way to partition the code is to transfer one block of loops at a time and check the results of each on a test case. Other issues include debugging and performance. Task startup and maintenance (e.g., synchronization) are potentially expensive.
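    A hedged sketch of the loop-splitting idea: the iteration space of a key loop is divided between two workers, and the partial results must then be combined exactly as the serial loop would combine them, which is the consistency hazard with uniprocessor runs that the abstract notes. A Python thread pool stands in for Cray multitasking; the loop body is illustrative.

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

data = np.arange(1_000_000, dtype=np.float64)

def work(chunk: np.ndarray) -> float:
    """Stand-in for one half of the original loop body."""
    return float(np.sum(np.sqrt(chunk)))

halves = np.array_split(data, 2)        # split the loop down the middle
with ThreadPoolExecutor(max_workers=2) as pool:
    partials = list(pool.map(work, halves))

# Combine partial results in the same order as the serial loop would.
print(sum(partials))
```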

  4. Lightweight computational steering of very large scale molecular dynamics simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beazley, D.M.; Lomdahl, P.S.

    1996-09-01

    We present a computational steering approach for controlling, analyzing, and visualizing very large scale molecular dynamics simulations involving tens to hundreds of millions of atoms. Our approach relies on extensible scripting languages and an easy to use tool for building extensions and modules. The system is extremely easy to modify, works with existing C code, is memory efficient, and can be used from inexpensive workstations and networks. We demonstrate how we have used this system to manipulate data from production MD simulations involving as many as 104 million atoms running on the CM-5 and Cray T3D. We also show how this approach can be used to build systems that integrate common scripting languages (including Tcl/Tk, Perl, and Python), simulation code, user extensions, and commercial data analysis packages.
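    A hedged sketch of the steering idea: expose a running simulation's state and controls to an embedded scripting interpreter so a user can inspect and adjust it between timesteps. Python's standard library stands in for the scripting-language extension tooling the abstract mentions; the simulation class and its methods are invented for illustration.

```python
import code

class MDSimulation:
    """Toy stand-in for a molecular dynamics engine."""
    def __init__(self, n_atoms: int, dt: float = 1.0e-3):
        self.n_atoms, self.dt, self.step = n_atoms, dt, 0

    def advance(self, n_steps: int) -> None:
        self.step += n_steps        # a real engine would integrate forces

    def report(self) -> str:
        return f"step={self.step}, atoms={self.n_atoms}, dt={self.dt}"

sim = MDSimulation(n_atoms=1_000_000)
sim.advance(100)

# Drop into an interactive console sharing the simulation's namespace; the
# user can call sim.advance(...), change sim.dt, or inspect state, then
# press Ctrl-D to hand control back to the driver script.
code.interact(banner=sim.report(), local={"sim": sim})
```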

  5. The International Conference on Vector and Parallel Computing (2nd)

    DTIC Science & Technology

    1989-01-17

    "Computation of the SVD of Bidiagonal Matrices"; "Lattice QCD - As a Large Scale Scientific Computation"... vectorized for the IBM 3090 Vector Facility. In addition, elapsed times have been reduced by using 3090... benchmarked Lattice QCD on a large number of computers: Cray X-MP and Cray 2 (vector

  6. CDC to CRAY FORTRAN conversion manual

    NASA Technical Reports Server (NTRS)

    Mcgary, C.; Diebert, D.

    1983-01-01

    Documentation describing software differences between two general purpose computers for scientific applications is presented. Descriptions of the use of the FORTRAN and FORTRAN 77 high level programming language on a CDC 7600 under SCOPE and a CRAY XMP under COS are offered. Itemized differences of the FORTRAN language sets of the two machines are also included. The material is accompanied by numerous examples of preferred programming techniques for the two machines.

  7. Force user's manual: A portable, parallel FORTRAN

    NASA Technical Reports Server (NTRS)

    Jordan, Harry F.; Benten, Muhammad S.; Arenstorf, Norbert S.; Ramanan, Aruna V.

    1990-01-01

    The use of Force, a parallel, portable FORTRAN for shared-memory parallel computers, is described. Force simplifies writing code for parallel computers and, once the parallel code is written, it is easily ported to computers on which Force is installed. Although Force is nearly the same for all computers, specific details are included for the Cray-2, Cray Y-MP, Convex 220, Flex/32, Encore, Sequent, and Alliant computers on which it is installed.

  8. Anxa4 Genes are Expressed in Distinct Organ Systems in Xenopus laevis and tropicalis But are Functionally Conserved

    PubMed Central

    Massé, Karine L; Collins, Robert J; Bhamra, Surinder; Seville, Rachel A

    2007-01-01

    Anxa4 belongs to the multigenic annexin family of proteins which are characterized by their ability to interact with membranes in a calcium-dependent manner. Defined as a marker for polarized epithelial cells, Anxa4 is believed to be involved in many cellular processes but its functions in vivo are still poorly understood. Previously, we cloned Xanx4 in Xenopus laevis (now referred to as anxa4a) and demonstrated its role during organogenesis of the pronephros, providing the first evidence of a specific function for this protein during the development of a vertebrate. Here, we describe the strict conservation of protein sequence and functional domains of anxa4 during vertebrate evolution. We also identify the paralog of anxa4a, anxa4b and show its specific temporal and spatial expression pattern is different from anxa4a. We show that anxa4 orthologs in X. laevis and tropicalis display expression domains in different organ systems. Whilst the anxa4a gene is mainly expressed in the kidney, Xt anxa4 is expressed in the liver. Finally, we demonstrate Xt anxa4 and anxa4a can display conserved function during kidney organogenesis, despite the fact that Xt anxa4 transcripts are not expressed in this domain. This study highlights the divergence of expression of homologous genes during Xenopus evolution and raises the potential problems of using X. tropicalis promoters in X. laevis. PMID:19279706

  9. A Spectral Element Ocean Model on the Cray T3D: the interannual variability of the Mediterranean Sea general circulation

    NASA Astrophysics Data System (ADS)

    Molcard, A. J.; Pinardi, N.; Ansaloni, R.

    A new numerical model, SEOM (Spectral Element Ocean Model; Iskandarani et al., 1994), has been implemented in the Mediterranean Sea. Spectral element methods combine the geometric flexibility of finite element techniques with the rapid convergence rate of spectral schemes. The current version solves the shallow water equations with a fifth- (or sixth-) order accurate spectral scheme and about 50,000 nodes. The domain decomposition philosophy makes it possible to exploit the power of parallel machines. The original MIMD master/slave version of SEOM, written in F90 and PVM, has been ported to the Cray T3D. When critical for performance, Cray-specific high-performance one-sided communication routines (SHMEM) have been adopted to fully exploit the Cray T3D interprocessor network. Tests performed with highly unstructured and irregular grids, on up to 128 processors, show almost linear scalability even with unoptimized domain decomposition techniques. Results from various case studies on the Mediterranean Sea are shown, involving realistic coastline geometry and monthly mean 1000 mb winds from the ECMWF atmospheric model operational analysis for the period January 1987 to December 1994. The simulation results show that variability in the wind forcing considerably affects the circulation dynamics of the Mediterranean Sea.

  10. Taming parallel I/O complexity with auto-tuning

    DOE PAGES

    Behzad, Babak; Luu, Huong Vu Thanh; Huchette, Joseph; ...

    2013-11-17

    We present an auto-tuning system for optimizing I/O performance of HDF5 applications and demonstrate its value across platforms, applications, and at scale. The system uses a genetic algorithm to search a large space of tunable parameters and to identify effective settings at all layers of the parallel I/O stack. The parameter settings are applied transparently by the auto-tuning system via dynamically intercepted HDF5 calls. To validate our auto-tuning system, we applied it to three I/O benchmarks (VPIC, VORPAL, and GCRM) that replicate the I/O activity of their respective applications. We tested the system with different weak-scaling configurations (128, 2048, and 4096 CPU cores) that generate 30 GB to 1 TB of data, and executed these configurations on diverse HPC platforms (Cray XE6, IBM BG/P, and Dell Cluster). In all cases, the auto-tuning framework identified tunable parameters that substantially improved write performance over default system settings. In conclusion, we consistently demonstrate I/O write speedups between 2x and 100x for test configurations.
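    A hedged sketch of the genetic-algorithm search described above, over a toy space of parallel-I/O tunables. The parameter names, the search space, and the synthetic cost function are illustrative stand-ins; the real system measures actual HDF5 write times through intercepted calls.

```python
import random

SPACE = {
    "stripe_count":   [4, 8, 16, 32, 64],
    "stripe_size_mb": [1, 4, 16, 64],
    "buffer_size_mb": [1, 8, 32, 128],
}

def write_time(cfg):
    """Synthetic cost standing in for a measured write time."""
    return (abs(cfg["stripe_count"] - 32) + abs(cfg["stripe_size_mb"] - 16)
            + abs(cfg["buffer_size_mb"] - 32) + random.random())

def random_cfg():
    return {k: random.choice(v) for k, v in SPACE.items()}

def crossover(a, b):
    return {k: random.choice((a[k], b[k])) for k in SPACE}

def mutate(cfg, rate=0.2):
    return {k: random.choice(SPACE[k]) if random.random() < rate else v
            for k, v in cfg.items()}

population = [random_cfg() for _ in range(20)]
for generation in range(15):
    population.sort(key=write_time)
    elite = population[:5]                      # keep the fastest configs
    population = elite + [mutate(crossover(*random.sample(elite, 2)))
                          for _ in range(15)]

print("best configuration found:", min(population, key=write_time))
```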

  11. Polymerization shrinkage stresses in different restorative techniques for non-carious cervical lesions.

    PubMed

    de Oliveira Correia, Ayla Macyelle; Tribst, João Paulo Mendes; de Souza Matos, Felipe; Platt, Jeffrey A; Caneppele, Taciana Marco Ferraz; Borges, Alexandre Luiz Souto

    2018-06-20

    This study evaluated the effect of different restorative techniques for non-carious cervical lesions (NCCL) on polymerization shrinkage stress of resins using three-dimensional (3D) finite element analysis (FEA). 3D-models of a maxillary premolar with a NCCL restored with different filling techniques (bulk filling and incremental) were generated to be compared by nonlinear FEA. The bulk filling technique was used for groups B (NCCL restored with Filtek™ Bulk Fill) and C (Filtek™ Z350 XT). The incremental technique was subdivided according to mode of application: P (2 parallel increments of the Filtek™ Z350 XT), OI (2 oblique increments of the Filtek™ Z350 XT, with incisal first), OIV (2 oblique increments of the Filtek™ Z350 XT, with incisal first and increments with the same volume), OG (2 oblique increments of the Filtek™ Z350 XT, with gingival first) and OGV (2 oblique increments of the Filtek™ Z350 XT, with gingival first and increments with the same volume), resulting in 7 models. All materials were considered isotropic, elastic and linear. The results were expressed in maximum principal stress (MPS). The tension stress distribution was influenced by the restorative technique. The lowest stress concentration occurred in group B followed by OG, OGV, OI, OIV, P and C; the incisal interface was more affected than the gingival. The restoration of NCCLs with bulk fill composite resulted in lower shrinkage stress in the gingival and incisal areas, followed by incremental techniques with the initial increment placed on the gingival wall. The non-carious cervical lesions (NCCLs) restored with bulk fill composite have a more favorable biomechanical behavior. Copyright © 2018. Published by Elsevier Ltd.

  12. Perturbation theory for fractional Brownian motion in presence of absorbing boundaries.

    PubMed

    Wiese, Kay Jörg; Majumdar, Satya N; Rosso, Alberto

    2011-06-01

    Fractional Brownian motion is a Gaussian process x(t) with zero mean and two-time correlations $\langle x(t_1)\,x(t_2)\rangle = D\,(t_1^{2H} + t_2^{2H} - |t_1 - t_2|^{2H})$, where $H$, with $0 < H < 1$, is the Hurst exponent.
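    A hedged sketch that samples paths consistent with that covariance by Cholesky factorization, an illustrative generation method rather than the paper's perturbative calculation; the function and its parameters are made up.

```python
import numpy as np

def fbm_path(n: int, H: float, D: float = 0.5, T: float = 1.0) -> np.ndarray:
    """Sample fractional Brownian motion on n points of (0, T]."""
    t = np.linspace(T / n, T, n)
    t1, t2 = np.meshgrid(t, t, indexing="ij")
    # Two-time covariance <x(t1) x(t2)> = D (t1^2H + t2^2H - |t1 - t2|^2H)
    cov = D * (t1**(2 * H) + t2**(2 * H) - np.abs(t1 - t2)**(2 * H))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))   # jitter for stability
    return L @ np.random.standard_normal(n)

path = fbm_path(n=500, H=0.75)      # H > 1/2: positively correlated steps
print(path[:5])
```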

  13. Development of a Dynamic Time Sharing Scheduled Environment Final Report CRADA No. TC-824-94E

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jette, M.; Caliga, D.

    Massively parallel computers, such as the Cray T3D, have historically supported resource sharing solely with space sharing. In that method, multiple problems are solved by executing them on distinct processors. This project developed a dynamic time- and space-sharing scheduler to achieve greater interactivity and throughput than could be achieved with space-sharing alone. CRI and LLNL worked together on the design, testing, and review aspects of this project. There were separate software deliverables. CRI implemented a general purpose scheduling system as per the design specifications. LLNL ported the local gang scheduler software to the LLNL Cray T3D. In this approach, processors are allocated simultaneously to all components of a parallel program (in a "gang"). Program execution is preempted as needed to provide for interactivity. Programs are also relocated to different processors as needed to efficiently pack the computer's torus of processors. In phase one, CRI developed an interface specification after discussions with LLNL for system-level software supporting a time- and space-sharing environment on the LLNL T3D. The two parties also discussed interface specifications for external control tools (such as scheduling policy tools and system administration tools) and applications programs. CRI assumed responsibility for the writing and implementation of all the necessary system software in this phase. In phase two, CRI implemented job-rolling on the Cray T3D, a mechanism for preempting a program, saving its state to disk, and later restoring its state to memory for continued execution. LLNL ported its gang scheduler to the LLNL T3D utilizing the CRI interface implemented in phases one and two. During phase three, the functionality and effectiveness of the LLNL gang scheduler were assessed to provide input to CRI time- and space-sharing efforts. CRI will utilize this information in the development of general schedulers suitable for other sites and future architectures.
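    A hedged toy sketch of gang scheduling as described above: all tasks of a parallel job occupy their processors simultaneously, and jobs are preempted in round-robin time slices so interactive work is not starved. The 8-processor machine and job mix are invented for illustration; real job-rolling would also save and restore program state on disk.

```python
from collections import deque

N_PROCS = 8
# Each job: [name, processors required (the whole gang), remaining slices]
jobs = deque([["A", 4, 3], ["B", 8, 2], ["C", 2, 4]])

tick = 0
while jobs:
    scheduled, free = [], N_PROCS
    for job in list(jobs):              # admit whole gangs while they fit
        if job[1] <= free:
            scheduled.append(job)
            free -= job[1]
    print(f"t={tick}: running {[j[0] for j in scheduled]}, {free} procs idle")
    for job in scheduled:
        job[2] -= 1
        if job[2] == 0:
            jobs.remove(job)            # job finished
    jobs.rotate(-1)                     # time slice: rotate priorities
    tick += 1
```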

  14. Performance Analysis of the Unitree Central File

    NASA Technical Reports Server (NTRS)

    Pentakalos, Odysseas I.; Flater, David

    1994-01-01

    This report consists of two parts. The first part briefly comments on the documentation status of two major systems at NASA's Center for Computational Sciences, specifically the Cray C98 and the Convex C3830. The second part describes the work done on improving the performance of file transfers between the Unitree Mass Storage System running on the Convex file server and the users' workstations distributed over a large geographic area.

  15. Evaluation of a Fully 3-D BPF Method for Small Animal PET Images on MIMD Architectures

    NASA Astrophysics Data System (ADS)

    Bevilacqua, A.

    Positron Emission Tomography (PET) images can be reconstructed using Fourier transform methods. This paper describes the performance of a fully 3-D Backprojection-Then-Filter (BPF) algorithm on the Cray T3E machine and on a cluster of workstations. PET reconstruction of small animals is a class of problems characterized by poor counting statistics. The low-count nature of these studies necessitates 3-D reconstruction in order to improve the sensitivity of the PET system: by including axially oblique Lines Of Response (LORs), the sensitivity of the system can be significantly improved by the 3-D acquisition and reconstruction. The BPF method is widely used in clinical studies because of its speed and easy implementation. Moreover, the BPF method is suitable for on-line 3-D reconstruction as it does not need any sinogram or rearranged data. In order to investigate the possibility of on-line processing, we reconstruct a phantom using the data stored in the list-mode format by the data acquisition system. We show how the intrinsically parallel nature of the BPF method makes it suitable for on-line reconstruction on a MIMD system such as the Cray T3E. Lastly, we analyze the performance of this algorithm on a cluster of workstations.

  16. Near-Range Receiver Unit of Next Generation PollyXT Used with Koldewey Aerosol Raman Lidar in the Arctic

    NASA Astrophysics Data System (ADS)

    Stachlewska, Iwona S.; Markowicz, Krzysztof M.; Ritter, Christoph; Neuber, Roland; Heese, Birgit; Engelmann, Ronny; Linne, Holger

    2016-06-01

    The Near-range Aerosol Raman lidar (NARLa) receiver unit, designed to enhance the detection range of the next-generation PollyXT Aerosol-Depolarization-Raman (ADR) lidar of the University of Warsaw, was deployed next to the Koldewey Aerosol Raman Lidar (KARL) at the AWI-IPEV German-French station in the Arctic during Spring 2015. Here we briefly introduce the design of both lidars, the scheme of their installation next to each other, and preliminary results of observations aimed at investigating arctic haze with the lidars and with iCAP, a particle counter and aethalometer package installed under a tethered balloon.

  17. A parallel algorithm for generation and assembly of finite element stiffness and mass matrices

    NASA Technical Reports Server (NTRS)

    Storaasli, O. O.; Carmona, E. A.; Nguyen, D. T.; Baddourah, M. A.

    1991-01-01

    A new algorithm is proposed for parallel generation and assembly of the finite element stiffness and mass matrices. The proposed assembly algorithm is based on a node-by-node approach rather than the more conventional element-by-element approach. The new algorithm's generality and computation speed-up when using multiple processors are demonstrated for several practical applications on multi-processor Cray Y-MP and Cray 2 supercomputers.
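    A hedged sketch contrasting the two assembly strategies the abstract names, using 1-D linear bar elements. In element-by-element assembly, concurrently processed elements may update the same global entry (a write conflict in parallel); in node-by-node assembly each task owns one row of the global matrix, so writes never collide. The element stiffness and mesh are illustrative.

```python
import numpy as np

n_nodes = 6
elems = [(i, i + 1) for i in range(n_nodes - 1)]   # element connectivity
ke = np.array([[1.0, -1.0], [-1.0, 1.0]])          # element stiffness matrix

# Element-by-element: scatter each element matrix into K (races in parallel).
K1 = np.zeros((n_nodes, n_nodes))
for a, b in elems:
    K1[np.ix_([a, b], [a, b])] += ke

# Node-by-node: each node gathers contributions from its adjacent elements,
# writing only to its own row -- trivially parallel over nodes.
K2 = np.zeros((n_nodes, n_nodes))
for node in range(n_nodes):
    for a, b in elems:
        if node in (a, b):
            local = (a, b).index(node)
            K2[node, a] += ke[local, 0]
            K2[node, b] += ke[local, 1]

assert np.allclose(K1, K2)   # both strategies build the same stiffness matrix
print(K2)
```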

  18. Achieving High Performance on the i860 Microprocessor

    NASA Technical Reports Server (NTRS)

    Lee, King; Kutler, Paul (Technical Monitor)

    1998-01-01

    The i860 is a high performance microprocessor used in the Intel Touchstone project. This paper proposes a paradigm for programming the i860 that is modelled on the vector instructions of the Cray computers. Fortran callable assembler subroutines were written that mimic the concurrent vector instructions of the Cray. Cache takes the place of vector registers. Using this paradigm we have achieved twice the performance of compiled code on a traditional solve.
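    A hedged sketch of the paradigm the paper proposes: loops are strip-mined into fixed-length chunks sized so the working set stays in cache, which plays the role of the Cray's vector registers. NumPy stands in for the Fortran-callable assembler routines; the 64-element strip length and all names are illustrative.

```python
import numpy as np

VL = 64  # emulated vector register length

def vaxpy(a: float, x: np.ndarray, y: np.ndarray) -> np.ndarray:
    """y <- a*x + y, processed one cache-resident 'vector strip' at a time."""
    for i in range(0, len(x), VL):
        y[i:i + VL] += a * x[i:i + VL]   # one emulated vector instruction
    return y

x, y = np.ones(1000), np.zeros(1000)
print(vaxpy(2.0, x, y).sum())            # -> 2000.0
```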

  19. Parallel algorithms for modeling flow in permeable media. Annual report, February 15, 1995 - February 14, 1996

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    G.A. Pope; K. Sephernoori; D.C. McKinney

    1996-03-15

    This report describes the application of distributed-memory parallel programming techniques to a compositional simulator called UTCHEM. The University of Texas Chemical Flooding reservoir simulator (UTCHEM) is a general-purpose vectorized chemical flooding simulator that models the transport of chemical species in three-dimensional, multiphase flow through permeable media. The parallel version of UTCHEM addresses solving large-scale problems by reducing the amount of time required to obtain the solution as well as by providing a flexible and portable programming environment. In this work, the original parallel version of UTCHEM was modified and ported to the CRAY T3D and CRAY T3E distributed-memory multiprocessor computers using CRAY-PVM as the interprocessor communication library. Also, the data communication routines were modified such that portability of the original code across different computer architectures was made possible.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, Ann E; Barker, Ashley D; Bland, Arthur S Buddy

    Oak Ridge National Laboratory's Leadership Computing Facility (OLCF) continues to deliver the most powerful resources in the U.S. for open science. At 2.33 petaflops peak performance, the Cray XT Jaguar delivered more than 1.4 billion core hours in calendar year (CY) 2011 to researchers around the world for computational simulations relevant to national and energy security; advancing the frontiers of knowledge in physical sciences and areas of biological, medical, environmental, and computer sciences; and providing world-class research facilities for the nation's science enterprise. Users reported more than 670 publications this year arising from their use of OLCF resources. Of these we report the 300 in this review that are consistent with guidance provided. Scientific achievements by OLCF users cut across all scales, from atomic to molecular to large-scale structures. At the atomic scale, researchers discovered that the anomalously long half-life of Carbon-14 can be explained by calculating, for the first time, the very complex three-body interactions between all the neutrons and protons in the nucleus. At the molecular scale, researchers combined experimental results from LBL's light source and simulations on Jaguar to discover how DNA replication continues past a damaged site so a mutation can be repaired later. Other researchers combined experimental results from ORNL's Spallation Neutron Source and simulations on Jaguar to reveal the molecular structure of ligno-cellulosic material used in bioethanol production. This year, Jaguar was used for billion-cell CFD calculations to develop shock wave compression turbomachinery as a means to meet DOE goals for reducing carbon sequestration costs. General Electric used Jaguar to calculate the unsteady flow through turbomachinery to learn what efficiencies the traditional steady-flow assumption is hiding from designers. Even a 1% improvement in turbine design can save the nation billions of gallons of fuel.

  1. Evaluation of shear bond strength of orthodontic brackets bonded with nano-filled composites.

    PubMed

    Chalipa, Javad; Akhondi, Mohammad Sadegh Ahmad; Arab, Sepideh; Kharrazifard, Mohammad Javad; Ahmadyar, Maryam

    2013-09-01

    The purpose of this study was to evaluate the shear bond strength (SBS) of orthodontic brackets bonded with two types of nano-composites in comparison to a conventional orthodontic composite. Sixty extracted human first premolars were randomly divided into 3 groups each containing 20 teeth. In group I, a conventional orthodontic composite (Transbond XT) was used to bond the brackets, while two nano-composites (Filtek™ Supreme XT and AELITE Aesthetic Enamel) were used in groups II and III respectively. The teeth were stored in distilled water at 37°C for 24 hours, thermocycled in distilled water and debonded with a universal testing machine at a crosshead speed of 1 mm/min. The adhesive remnant index (ARI) was also evaluated using a stereomicroscope. AELITE Aesthetic Enamel nano-composite revealed a SBS value of 8.44±2.09 MPa, which was higher than Transbond XT (6.91±2.13) and Filtek™ Supreme XT (6.04±2.01). Statistical analysis revealed a significant difference between groups II and III (P < 0.05). No significant difference was found between groups I and III, and between groups I and II (P > 0.05). Evaluation of ARI showed that Transbond XT left fewer adhesive remains on teeth after debonding. Results of this study indicate that the aforementioned nano-composites can be successfully used for bonding orthodontic brackets.

  2. Sample-to-sample fluctuations of power spectrum of a random motion in a periodic Sinai model.

    PubMed

    Dean, David S; Iorio, Antonio; Marinari, Enzo; Oshanin, Gleb

    2016-09-01

    The Sinai model of a tracer diffusing in a quenched Brownian potential is a much-studied problem exhibiting a logarithmically slow anomalous diffusion due to the growth of energy barriers with the system size. However, if the potential is random but periodic, the regime of anomalous diffusion crosses over to one of normal diffusion once a tracer has diffused over a few periods of the system. Here we consider a system in which the potential is given by a Brownian bridge on a finite interval (0,L) and then periodically repeated over the whole real line, and we study the power spectrum S(f) of the diffusive process x(t) in such a potential. We show that for most realizations of x(t) in a given realization of the potential, the low-frequency behavior is S(f)∼A/f^{2}, i.e., the same as for standard Brownian motion, and the amplitude A is a disorder-dependent random variable with a finite support. Focusing on the statistical properties of this random variable, we determine the moments of A of arbitrary order k, negative or positive, and demonstrate that they exhibit a multifractal dependence on k and a rather unusual dependence on the temperature and on the periodicity L, which are supported by atypical realizations of the periodic disorder. We finally show that the distribution of A has a log-normal left tail and exhibits an essential singularity close to the right edge of the support, which is related to the Lifshitz singularity. Our findings are based both on analytic results and on extensive numerical simulations of the process x(t).
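
    The low-frequency behavior S(f)∼A/f^{2} quoted above can be checked numerically for plain Brownian motion. A minimal Python/NumPy sketch (free diffusion only; the quenched periodic potential that makes the amplitude A a random variable is omitted) estimates the periodogram of a random walk and fits the low-frequency log-log slope:

      import numpy as np

      rng = np.random.default_rng(1)
      n, dt = 2**16, 1.0
      # Random walk with diffusion constant D = 1: increments ~ N(0, 2*D*dt)
      x = np.cumsum(rng.normal(0.0, np.sqrt(2.0 * dt), n))

      xf = np.fft.rfft(x)
      f = np.fft.rfftfreq(n, dt)[1:]           # drop the zero frequency
      S = (np.abs(xf[1:]) ** 2) * dt / n       # periodogram estimate of S(f)

      lo = slice(0, 50)                        # lowest nonzero frequencies
      slope = np.polyfit(np.log(f[lo]), np.log(S[lo]), 1)[0]
      print(f"low-frequency log-log slope = {slope:.2f} (expected about -2)")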

  3. Enamel shear bond strength of two orthodontic self-etching bonding systems compared to Transbond™ XT.

    PubMed

    Hellak, Andreas; Rusdea, Patrick; Schauseil, Michael; Stein, Steffen; Korbmacher-Steiner, Heike Maria

    2016-11-01

    The aim of this in vitro study was to compare the shear bond strength (SBS) and Adhesive Remnant Index (ARI) scores of two self-etching no-mix adhesives (Prompt L-Pop™ and Scotchbond™) for orthodontic appliances to the commonly used total etch system Transbond XT™ (in combination with phosphoric acid). In all, 60 human premolars were randomly divided into three groups of 20 specimens each. In group 1 (control), brackets were bonded with Transbond™ XT primer. Prompt L-Pop™ (group 2) and Scotchbond™ Universal (group 3) were used in the experimental groups. Lower premolar brackets were bonded by light curing the adhesive. After 24 h of storage, the shear bond strength (SBS) was measured using a Zwicki 1120 testing machine. The adhesive remnant index (ARI) was determined under 10× magnification. The Kruskal-Wallis test was used to statistically compare the SBS and the ARI scores. No significant differences in the SBS between any of the experimental groups were detected (group 1: 15.49 ± 3.28 MPa; group 2: 13.89 ± 4.95 MPa; group 3: 14.35 ± 3.56 MPa; p = 0.489), nor were there any significant differences in the ARI scores (p = 0.368). Using the two self-etching no-mix adhesives (Prompt L-Pop™ and Scotchbond™) for orthodontic appliances does not affect either the SBS or ARI scores in comparison with the commonly used total-etch system Transbond™ XT. In addition, Scotchbond™ Universal supports bonding on all types of surfaces (enamel, metal, composite, and porcelain) with no need for additional primers. It might therefore be helpful for simplifying bonding in orthodontic procedures.

  4. On the role of adhesion in single-file dynamics

    NASA Astrophysics Data System (ADS)

    Fouad, Ahmed M.; Noel, John A.

    2017-08-01

    For a one-dimensional interacting system of Brownian particles with hard-core interactions (a single-file model), we study the effect of adhesion on both the collective diffusion (diffusion of the entire system with respect to its center of mass) and the tracer diffusion (diffusion of individual tagged particles). For the case with no adhesion, all properties of these particle systems that are independent of particle labeling (symmetric in all particle coordinates and velocities) are identical to those of non-interacting particles (Lebowitz and Percus, 1967). We verify this fact in two ways. First, we derive analytical predictions showing that the probability-density functions of single-file (ρ_sf) and ordinary (ρ_ord) diffusion are identical, ρ_sf = ρ_ord, predicting nonanomalous (ordinary) behavior for the collective single-file diffusion, where the average second moment with respect to the center of mass, <x(t)^{2}>, is calculated from ρ for both diffusion processes. Second, for single-file diffusion, we show, both analytically and through large-scale simulations, that <x(t)^{2}> grows linearly with time, confirming the nonanomalous behavior. This nonanomalous collective behavior stands in contrast to the well-known anomalous sub-diffusion of the individual tagged particles (Harris, 1965). We introduce adhesion to single-file dynamics as a second inter-particle interaction rule and, interestingly, show that adding adhesion reduces the magnitudes of both <x(t)^{2}> and the mean square displacement per particle Δx^{2}, but the diffusive behavior remains intact, independent of adhesion, in both cases. Moreover, we study the dependence of both the collective diffusion constant D and the tracer diffusion constant D_T on the adhesion coefficient α.
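
    The Lebowitz-Percus mapping invoked above also gives a compact way to simulate single-file dynamics (without adhesion): evolve non-interacting Brownian walkers and sort the positions at every step, so that the k-th sorted coordinate is the k-th hard-core particle. A sketch with made-up parameters, recovering the Harris sub-diffusion of tagged particles:

      import numpy as np

      rng = np.random.default_rng(2)
      N, T, dt, a = 400, 4000, 0.01, 1.0
      x0 = np.arange(N) * a                               # uniform initial spacing
      steps = rng.normal(0.0, np.sqrt(2.0 * dt), (T, N))  # D = 1 walkers
      x = x0 + np.cumsum(steps, axis=0)                   # non-interacting paths
      x_sf = np.sort(x, axis=1)                           # relabeling -> hard-core file

      mid = slice(N // 4, 3 * N // 4)                     # avoid edge particles
      msd_tag = ((x_sf[:, mid] - x0[mid]) ** 2).mean(axis=1)
      t = np.arange(1, T + 1) * dt
      exponent = np.polyfit(np.log(t[200:]), np.log(msd_tag[200:]), 1)[0]
      print(f"tagged-particle MSD exponent = {exponent:.2f} (sub-diffusive, about 0.5)")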

  5. The combined effect of food-simulating solutions, brushing and staining on color stability of composite resins

    PubMed Central

    Silva, Tânia Mara Da; Sales, Ana Luísa Leme Simões; Pucci, Cesar Rogerio; Borges, Alessandra Bühler; Torres, Carlos Rocha Gomes

    2017-01-01

    Abstract Objective: This study evaluated the effect of food-simulating media associated with brushing and coffee staining on the color stability of different composite resins. Materials and methods: Eighty specimens were prepared for each composite: Grandio SO (Voco), Amaris (Voco), Filtek Z350XT (3M/ESPE), Filtek P90 (3M/ESPE). They were divided into four groups according to the food-simulating media applied for 7 days: artificial saliva (control), heptane, citric acid and ethanol. The composite surface was submitted to 10,950 brushing cycles (200 g load) in an automatic toothbrushing machine. The specimens were then darkened with coffee solution at 37°C for 24 h. After each treatment, color measurements were assessed by spectrophotometry, using the CIE L*a*b* system. The overall color change (ΔE) was determined for each specimen at baseline (C1) and after the treatments (food-simulating media immersion/C2, brushing/C3 and dye solution/C4). Data were analyzed by two-way repeated-measures ANOVA and Tukey's tests (p < .05). Results: The results of RM-ANOVA showed significant differences for composites (p = .001), time (p = .001) and chemical degradation (p = .002). The mean ΔE values for the composites were: Z350XT (5.39)a, Amaris (3.89)b, Grandio (3.75)bc, P90 (3.36)c; according to food-simulating media: heptane (4.41)a, citric acid (4.24)a, ethanol (4.02)ab, artificial saliva (3.76)b; and for the treatments: dye solution (4.53)a, brushing (4.26)a, after food-simulating media (3.52)b. Conclusions: The composite resin Filtek Z350XT showed significantly higher staining than all the other composite resins tested. Immersion in heptane and citric acid produced higher color alteration than the other food-simulating media. The exposure of samples to brushing protocols and darkening in coffee solution resulted in significant color alteration of the composite resins. PMID:28642926
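
    For readers unfamiliar with the ΔE figures above: in the CIE L*a*b* system the overall color change between two measurements is conventionally the Euclidean distance between the coordinate triples (the CIE76 formula). A small sketch with made-up coordinates, not the study's data:

      import math

      def delta_e(lab1, lab2):
          """CIE76 color difference between two (L*, a*, b*) triples."""
          return math.sqrt(sum((c1 - c2) ** 2 for c1, c2 in zip(lab1, lab2)))

      baseline = (72.1, 1.5, 18.0)   # hypothetical C1 measurement
      stained = (68.9, 2.9, 21.4)    # hypothetical C4 measurement, after staining
      print(f"delta E = {delta_e(baseline, stained):.2f}")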

  6. Transcatheter aortic valve implantation transapical: step by step.

    PubMed

    Walther, Thomas; Möllmann, Helge; van Linden, Arnaud; Kempfert, Jörg

    2011-01-01

    Transcatheter aortic valve implantation (T-AVI) has been introduced into clinical practice to treat high-risk elderly patients with aortic stenosis. T-AVI can be performed by using a retrograde transfemoral (TF), transsubclavian, transaortic, and/or antegrade transapical (TA) approach. For TA-AVI, CE mark approval was granted in 2008 for the Edwards SAPIEN (Edwards Lifesciences, Irvine, CA) prosthesis with the Ascendra delivery system and in 2010 for the second-generation Edwards SAPIEN XT prosthesis and the Ascendra II delivery system, with 23-mm and 26-mm valves. In 2011, CE mark approval has been granted for TA-AVI by using the SAPIEN XT 29-mm prosthesis. Several other devices from different companies (Jenavalve, Jena Valve Inc, Munich, Germany; Embracer, Medtronic Inc, Guilford, CT; Accurate, Symetis Inc, Geneva, Switzerland) have passed "first in man trials" successfully and are being evaluated within multicenter pivotal studies. In this article we will focus on specific aspects of the TA technique for AVI. Copyright © 2011 Elsevier Inc. All rights reserved.

  7. Onboard data-processing architecture of the soft X-ray imager (SXI) on NeXT satellite

    NASA Astrophysics Data System (ADS)

    Ozaki, Masanobu; Dotani, Tadayasu; Tsunemi, Hiroshi; Hayashida, Kiyoshi; Tsuru, Takeshi G.

    2004-09-01

    NeXT is the X-ray satellite proposed for the next Japanese space science mission. While the satellite's total mass and launch vehicle are similar to those of the prior satellite Astro-E2, its sensitivity is much improved; this requires all the components to be lighter and faster than in the previous architecture. This paper presents the data processing architecture of the X-ray CCD camera system SXI (Soft X-ray Imager), which is the top half of the WXI (Wide-band X-ray Imager), whose sensitivity covers 0.2-80 keV. The system is basically a variation of the Astro-E2 XIS, but its event extraction is much faster, to fulfill the requirements arising from the large effective area and fast exposure period. At the same time, the data transfer lines between components are redesigned in order to reduce the number and mass of the wire harnesses that limit the flexibility of the component distribution.

  8. Stellar Inertial Navigation Workstation

    NASA Technical Reports Server (NTRS)

    Johnson, W.; Johnson, B.; Swaminathan, N.

    1989-01-01

    Software and hardware assembled to support specific engineering activities. The Stellar Inertial Navigation Workstation (SINW) is an integrated computer workstation providing systems and engineering support functions for Space Shuttle guidance and navigation-system logistics, repair, and procurement activities. Consists of personal-computer hardware, packaged software, and custom software integrated into a user-friendly, menu-driven system. Designed to operate on the IBM PC XT. Applied in business and industry to develop similar workstations.

  9. Automatic differentiation for design sensitivity analysis of structural systems using multiple processors

    NASA Technical Reports Server (NTRS)

    Nguyen, Duc T.; Storaasli, Olaf O.; Qin, Jiangning; Qamar, Ramzi

    1994-01-01

    An automatic differentiation tool (ADIFOR) is incorporated into a finite element based structural analysis program for shape and non-shape design sensitivity analysis of structural systems. The entire analysis and sensitivity procedures are parallelized and vectorized for high performance computation. Small scale examples to verify the accuracy of the proposed program and a medium scale example to demonstrate the parallel vector performance on multiple CRAY C90 processors are included.

  10. Proton spectral editing in the inhomogeneous radiofrequency field of a surface coil using modified stimulated echoes.

    PubMed

    Lahrech, H; Briguet, A

    1990-11-01

    It is shown that the modified stimulated echo sequence, θ(±x,±y) - t1 - θ(+x) - t2/2 - 2θ(+x) - t2/2 - θ(+x) - t1 - Acq(±x,±y), denoted MSTE[2θ]x according to the exciter phase of the 2θ pulse, is able to perform proton spectral editing without difference spectra. On the other hand, this sequence appears to be suitable for spatial localization. The sensitivity and spatial selectivity of MSTE and of the conventional stimulated echo sequence (STE) are briefly compared. MSTE is applied to editing lactate in the rat brain using the locally restricted excitation of a surface coil.

  11. Multitasking 3-D forward modeling using high-order finite difference methods on the Cray X-MP/416

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Terki-Hassaine, O.; Leiss, E.L.

    1988-01-01

    The CRAY X-MP/416 was used to multitask 3-D forward modeling by the high-order finite difference method. Flowtrace analysis reveals that the most expensive operation in the unitasked program is a matrix-vector multiplication. The in-core and out-of-core versions of a reentrant subroutine can perform any fraction of the matrix-vector multiplication independently, a pattern compatible with multitasking. The matrix-vector multiplication routine can be distributed over two to four processors. The rest of the program utilizes the microtasking feature that lets the system treat independent iterations of DO-loops as subtasks to be performed by any available processor. The availability of the Solid-State Storage Device (SSD) meant the I/O wait time was virtually zero. A performance study determined a theoretical speedup, taking into account the multitasking overhead. Multitasking programs utilizing both macrotasking and microtasking features obtained actual speedups that were approximately 80% of the ideal speedup.
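
    The kernel described above is multitaskable precisely because any fraction of the rows of a matrix-vector product can be computed independently. A minimal Python sketch of the row partitioning, with threads standing in for the X-MP's processors (illustrative only; the original code is Cray Fortran):

      import numpy as np
      from concurrent.futures import ThreadPoolExecutor

      def matvec_rows(A, x, lo, hi):
          """Rows lo..hi-1 of A @ x, independent of every other row block."""
          return A[lo:hi] @ x

      rng = np.random.default_rng(3)
      A, x = rng.random((2048, 2048)), rng.random(2048)

      n_tasks = 4
      bounds = np.linspace(0, A.shape[0], n_tasks + 1, dtype=int)
      with ThreadPoolExecutor(max_workers=n_tasks) as pool:
          parts = pool.map(matvec_rows, [A] * n_tasks, [x] * n_tasks,
                           bounds[:-1], bounds[1:])
      y = np.concatenate(list(parts))
      assert np.allclose(y, A @ x)   # identical to the serial product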

  12. Antenna pattern control using impedance surfaces

    NASA Technical Reports Server (NTRS)

    Balanis, Constantine A.; Liu, Kefeng

    1992-01-01

    During this research period, we have effectively transferred existing computer codes from the CRAY supercomputer to workstation-based systems. The workstation-based version of our code preserves the accuracy of the numerical computations while giving much better turn-around time than the CRAY supercomputer. This task relieved us of heavy dependence on the supercomputer account budget and made the codes developed in this research project more feasible for applications. The analysis of pyramidal horns with impedance surfaces was our major focus during this research period. Three different modeling algorithms for analyzing lossy impedance surfaces were investigated and compared with measured data. Through this investigation, we discovered that a hybrid Fourier transform technique, which uses the eigenmodes in the stepped waveguide section and the Fourier-transformed field distributions across the stepped discontinuities for lossy impedance coatings, gives better accuracy in analyzing lossy coatings. After further refinement of the present technique, we will perform an accurate radiation pattern synthesis in the coming reporting period.

  13. INNOVATIVE TECHNOLOGY VERIFICATION REPORT XRF TECHNOLOGIES FOR MEASURING TRACE ELEMENTS IN SOIL AND SEDIMENT XCALIBUR ELVAX XRF ANALYZER

    EPA Science Inventory

    The Innov-X XT400 Series (XT400) x-ray fluorescence (XRF) analyzer was demonstrated under the U.S. Environmental Protection Agency (EPA) Superfund Innovative Technology Evaluation (SITE) Program. The field portion of the demonstration was conducted in January 2005 at the Kenned...

  14. Crosstalk-aware virtual network embedding over inter-datacenter optical networks with few-mode fibers

    NASA Astrophysics Data System (ADS)

    Huang, Haibin; Guo, Bingli; Li, Xin; Yin, Shan; Zhou, Yu; Huang, Shanguo

    2017-12-01

    Virtualization of datacenter (DC) infrastructures enables infrastructure providers (InPs) to provide novel services such as virtual networks (VNs). Furthermore, optical networks have been employed to connect metro-scale, geographically distributed DCs. The synergistic virtualization of DC infrastructures and optical networks enables efficient VN service over inter-DC optical networks (inter-DCONs). However, the capacity of the standard single-mode fiber (SSMF) used is limited by its nonlinear characteristics. Thus, mode-division multiplexing (MDM) technology based on few-mode fibers (FMFs) could be employed to increase the capacity of optical networks, whereas modal crosstalk (XT) introduced by the optical fibers and components deployed in MDM optical networks impacts the performance of VN embedding (VNE) over inter-DCONs with FMFs. In this paper, we propose an XT-aware VNE mechanism over inter-DCONs with FMFs. The impact of XT is considered throughout the VNE procedures. The simulation results show that the proposed XT-aware VNE achieves better blocking probability and spectrum utilization than conventional VNE mechanisms.

  15. An Atmospheric General Circulation Model with Chemistry for the CRAY T3E: Design, Performance Optimization and Coupling to an Ocean Model

    NASA Technical Reports Server (NTRS)

    Farrara, John D.; Drummond, Leroy A.; Mechoso, Carlos R.; Spahr, Joseph A.

    1998-01-01

    The design, implementation and performance optimization on the CRAY T3E of an atmospheric general circulation model (AGCM) which includes the transport of, and chemical reactions among, an arbitrary number of constituents is reviewed. The parallel implementation is based on a two-dimensional (longitude and latitude) data domain decomposition. Initial optimization efforts centered on minimizing the impact of substantial static and weakly-dynamic load imbalances among processors through load redistribution schemes. Recent optimization efforts have centered on single-node optimization. Strategies employed include loop unrolling, both manually and through the compiler, the use of an optimized assembler-code library for special function calls, and restructuring of parts of the code to improve data locality. Data exchanges and synchronizations involved in coupling different data-distributed models can account for a significant fraction of the running time; therefore, the required scattering and gathering of data must be optimized. In systems such as the T3E, there is much more aggregate bandwidth in the total system than in any particular processor, which suggests a distributed design. The design and implementation of such a distributed 'Data Broker' as a means to efficiently couple the components of our climate system model are described, as sketched below.
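
    A minimal sketch of a two-dimensional (longitude x latitude) data decomposition of the kind described above, with made-up grid and processor counts (the actual AGCM decomposition and load-redistribution logic are more involved):

      import numpy as np

      n_lon, n_lat = 144, 96       # hypothetical global grid
      p_lon, p_lat = 4, 3          # hypothetical processor grid

      def patch(rank):
          """Contiguous (lon, lat) slab owned by a given processor rank."""
          i, j = rank % p_lon, rank // p_lon
          lon = np.array_split(np.arange(n_lon), p_lon)[i]
          lat = np.array_split(np.arange(n_lat), p_lat)[j]
          return lon[0], lon[-1], lat[0], lat[-1]

      # A coupler (the 'Data Broker' role) would gather/scatter these slabs
      # when exchanging fields with another data-distributed model.
      for r in range(p_lon * p_lat):
          lon0, lon1, lat0, lat1 = patch(r)
          print(f"rank {r}: lon {lon0}-{lon1}, lat {lat0}-{lat1}")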

  16. Evaluation of Shear Bond Strength of Orthodontic Brackets Bonded with Nano-Filled Composites

    PubMed Central

    Chalipa, Javad; Akhondi, Mohammad Sadegh Ahmad; Arab, Sepideh; Kharrazifard, Mohammad Javad; Ahmadyar, Maryam

    2013-01-01

    Objectives: The purpose of this study was to evaluate the shear bond strength (SBS) of orthodontic brackets bonded with two types of nano-composites in comparison to a conventional orthodontic composite. Materials and Methods: Sixty extracted human first premolars were randomly divided into 3 groups each containing 20 teeth. In group I, a conventional orthodontic composite (Transbond XT) was used to bond the brackets, while two nano-composites (Filtek™ Supreme XT and AELITE Aesthetic Enamel) were used in groups II and III respectively. The teeth were stored in distilled water at 37°C for 24 hours, thermocycled in distilled water and debonded with a universal testing machine at a crosshead speed of 1 mm/min. The adhesive remnant index (ARI) was also evaluated using a stereomicroscope. Results: AELITE Aesthetic Enamel nano-composite revealed a SBS value of 8.44±2.09 MPa, which was higher than Transbond XT (6.91±2.13) and Filtek™ Supreme XT (6.04±2.01). Statistical analysis revealed a significant difference between groups II and III (P < 0.05). No significant difference was found between groups I and III, and between groups I and II (P > 0.05). Evaluation of ARI showed that Transbond XT left fewer adhesive remains on teeth after debonding. Conclusion: Results of this study indicate that the aforementioned nano-composites can be successfully used for bonding orthodontic brackets. PMID:24910655

  17. Comparison of shear bond strengths of conventional orthodontic composite and nano-ceramic restorative composite: an in vitro study.

    PubMed

    Nagar, Namit; Vaz, Anna C

    2013-01-01

    To compare the shear bond strength of a nano-ceramic restorative composite, Ceram-X Mono™, with that of the traditional orthodontic composite Transbond XT™, and to evaluate the site of bond failure using the Adhesive Remnant Index. Sixty extracted human premolars were divided into two groups of 30 each. Stainless steel brackets were bonded using Transbond XT™ (Group I) and Ceram-X Mono™ (Group II) according to the manufacturers' protocols. Shear bond strength was measured on a universal testing machine at a crosshead speed of 1 mm/minute. Adhesive Remnant Index scores were assigned to the debonded brackets of each group. Data were analyzed using the unpaired t-test and chi-square test. The mean shear bond strength of Group I (Transbond XT™) was 12.89 ± 2.19 MPa and that of Group II (Ceram-X Mono™) was 7.29 ± 1.76 MPa. The unpaired t-test revealed statistically significant differences between the shear bond strengths of the samples measured. The chi-square test revealed statistically insignificant differences between the ARI scores of the samples measured. Ceram-X Mono™ had a lower mean shear bond strength than Transbond XT™, a statistically significant difference. However, the mean shear bond strength of Ceram-X Mono™ was within the clinically acceptable range for bonding. Ceram-X Mono™ and Transbond XT™ showed cohesive fracture of the adhesive in 72.6% and 66.6% of the specimens, respectively.

  18. Efficient iterative methods applied to the solution of transonic flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wissink, A.M.; Lyrintzis, A.S.; Chronopoulos, A.T.

    1996-02-01

    We investigate the use of an inexact Newton's method to solve the potential equations in the transonic regime. As a test case, we solve the two-dimensional steady transonic small disturbance equation. Approximate factorization/ADI techniques have traditionally been employed for implicit solutions of this nonlinear equation. Instead, we apply Newton's method using an exact analytical determination of the Jacobian, with preconditioned conjugate gradient-like iterative solvers for the solution of the linear systems in each Newton iteration. Two iterative solvers are tested: a block s-step version of the classical Orthomin(k) algorithm called orthogonal s-step Orthomin (OSOmin), and the well-known GMRES method. The preconditioner is a vectorizable and parallelizable version of incomplete LU (ILU) factorization. The efficiency of the Newton-iterative method on vector and parallel computer architectures is the main issue addressed. In vectorized tests on a single processor of the Cray C-90, the performance of Newton-OSOmin is superior to that of Newton-GMRES and of a more traditional monotone AF/ADI method (MAF) for a variety of transonic Mach numbers and mesh sizes. Newton-GMRES is superior to MAF in some cases. The parallel performance of the Newton method is also found to be very good on multiple processors of the Cray C-90 and on the massively parallel Thinking Machines CM-5, where very fast execution rates (up to 9 Gflops) are found for large problems. 38 refs., 14 figs., 7 tabs.
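
    The Newton-iterative (inexact Newton) idea above, an outer Newton loop whose linear systems are solved only approximately by a preconditioned Krylov method, can be sketched with SciPy's newton_krylov on a toy nonlinear boundary-value problem. This is an illustration only: it is not the transonic small disturbance equation, and SciPy offers GMRES-type inner solvers rather than the paper's OSOmin.

      import numpy as np
      from scipy.optimize import newton_krylov

      def residual(u):
          """Discrete 1-D nonlinear Poisson problem: u'' = exp(u), u(0) = u(1) = 0."""
          n = len(u)
          h = 1.0 / (n + 1)
          d2 = np.empty(n)
          d2[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / h**2
          d2[0] = (u[1] - 2 * u[0]) / h**2        # left boundary value is 0
          d2[-1] = (u[-2] - 2 * u[-1]) / h**2     # right boundary value is 0
          return d2 - np.exp(u)

      # Inner linear solves use GMRES; Newton stops when the residual is tiny.
      u = newton_krylov(residual, np.zeros(100), method='gmres', f_tol=1e-10)
      print("max |residual| =", np.abs(residual(u)).max())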

  19. Streptococcus mutans forms xylitol-resistant biofilm on excess adhesive flash in novel ex-vivo orthodontic bracket model.

    PubMed

    Ho, Cindy S F; Ming, Yue; Foong, Kelvin W C; Rosa, Vinicius; Thuyen, Truong; Seneviratne, Chaminda J

    2017-04-01

    During orthodontic bonding procedures, excess adhesive is invariably left on the tooth surface at the interface between the bracket and the enamel; this is called excess adhesive flash (EAF). We comparatively evaluated the biofilm formation of Streptococcus mutans on EAF produced by 2 adhesives and examined the therapeutic efficacy of xylitol on S mutans biofilm formed on EAF. First, we investigated the biofilm formation of S mutans on 3 orthodontic bracket types: stainless steel preadjusted edgewise, ceramic preadjusted edgewise, and stainless steel self-ligating. Subsequently, tooth-colored Transbond XT (3M Unitek, Monrovia, Calif) and green Grengloo (Ormco, Glendora, Calif) adhesives were used for bonding ceramic brackets to extracted teeth. S mutans biofilms on EAF produced by the adhesives were studied using the crystal violet assay and scanning electron microscopy. The surface roughness and surface energy of the EAF were examined. The therapeutic efficacies of different concentrations of xylitol were tested on S mutans biofilms. Significantly more biofilm was formed on the ceramic preadjusted edgewise brackets (P = 0.003). Transbond XT had significantly higher S mutans biofilms compared with Grengloo surfaces (P = 0.007). There was no significant difference in surface roughness between Transbond XT and Grengloo surfaces (P > 0.05). In terms of surface energy, Transbond XT had a considerably smaller contact angle than Grengloo, suggesting that Transbond XT is a more hydrophilic material. Xylitol at low concentrations had no significant effect on the reduction of S mutans biofilms on orthodontic adhesives (P = 0.016). Transbond XT orthodontic adhesive resulted in more S mutans biofilm than Grengloo adhesive on ceramic brackets. Surface energy seemed to play a more important role than surface roughness in the formation of S mutans biofilm on EAF. Xylitol does not appear to have a therapeutic effect on mature S mutans biofilm. Copyright © 2017 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.

  20. Fast and Accurate Simulation of the Cray XMT Multithreaded Supercomputer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Villa, Oreste; Tumeo, Antonino; Secchi, Simone

    Irregular applications, such as data mining and analysis or graph-based computations, show unpredictable memory/network access patterns and control structures. Highly multithreaded architectures with large processor counts, like the Cray MTA-1, MTA-2 and XMT, appear to address their requirements better than commodity clusters. However, research on highly multithreaded systems is currently limited by the lack of adequate architectural simulation infrastructures, due to issues such as the size of the machines, memory footprint, simulation speed, accuracy and customization. At the same time, shared-memory multiprocessors (SMPs) with multi-core processors have become an attractive platform for simulating large scale machines. In this paper, we introduce a cycle-level simulator of the highly multithreaded Cray XMT supercomputer. The simulator runs unmodified XMT applications. We discuss how we tackled the challenges posed by its development, detailing the techniques introduced to make the simulation as fast as possible while maintaining high accuracy. By mapping XMT processors (ThreadStorm with 128 hardware threads) to host computing cores, the simulation speed remains constant as the number of simulated processors increases, up to the number of available host cores. The simulator supports zero-overhead switching among different accuracy levels at run-time and includes a network model that takes contention into account. On a modern 48-core SMP host, our infrastructure simulates a large set of irregular applications 500 to 2000 times slower than real time when compared to a 128-processor XMT, while remaining within 10% accuracy. Emulation is only 25 to 200 times slower than real time.

  1. Solving large-scale dynamic systems using band Lanczos method in Rockwell NASTRAN on CRAY X-MP

    NASA Technical Reports Server (NTRS)

    Gupta, V. K.; Zillmer, S. D.; Allison, R. E.

    1986-01-01

    The improved cost-effectiveness of using better models, more accurate and faster algorithms, and large-scale computing offers more representative dynamic analyses. The band Lanczos eigensolution method was implemented in Rockwell's version of the 1984 COSMIC-released NASTRAN finite element structural analysis computer program to effectively solve for structural vibration modes, including those of large complex systems exceeding 10,000 degrees of freedom. The Lanczos vectors were re-orthogonalized locally using the Lanczos method and globally using the modified Gram-Schmidt method to sweep out rigid-body modes, previously generated modes, and Lanczos vectors. The truncated band matrix was solved for vibration frequencies and mode shapes using Givens rotations. Numerical examples are included to demonstrate the cost-effectiveness and accuracy of the method as implemented in Rockwell NASTRAN. The CRAY version is based on RPK's COSMIC/NASTRAN. The band Lanczos method was more reliable and accurate, and converged faster, than the single-vector Lanczos method. The band Lanczos method was comparable to the subspace iteration method, which is a block version of the inverse power method; however, the subspace matrix tended to be fully populated in the case of subspace iteration, rather than sparse like a band matrix.
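
    For orientation, the core of any Lanczos eigensolver is the projection of a symmetric matrix onto a Krylov basis, after which a small tridiagonal eigenproblem yields Ritz approximations to the extreme eigenvalues. The sketch below is the single-vector variant with full re-orthogonalization on a toy matrix; the band/block variant and its NASTRAN integration described above are not reproduced.

      import numpy as np

      def lanczos_eigs(A, m, seed=4):
          """Ritz values from m Lanczos steps on symmetric A (full re-orthogonalization)."""
          n = A.shape[0]
          Q = np.zeros((n, m + 1))
          alpha, beta = np.zeros(m), np.zeros(m)
          q0 = np.random.default_rng(seed).random(n)
          Q[:, 0] = q0 / np.linalg.norm(q0)
          for j in range(m):
              w = A @ Q[:, j]
              alpha[j] = Q[:, j] @ w
              w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)   # re-orthogonalize against all q's
              beta[j] = np.linalg.norm(w)
              Q[:, j + 1] = w / beta[j]
          T = np.diag(alpha) + np.diag(beta[:-1], 1) + np.diag(beta[:-1], -1)
          return np.linalg.eigvalsh(T)

      A = np.diag(np.arange(1.0, 201.0))     # toy symmetric "stiffness" matrix
      print(lanczos_eigs(A, 30)[:3])         # Ritz approximations to the lowest eigenvalues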

  2. Performance measurements and operational characteristics of the Storage Tek ACS 4400 tape library with the Cray Y-MP EL

    NASA Technical Reports Server (NTRS)

    Hull, Gary; Ranade, Sanjay

    1993-01-01

    With over 5000 units sold, the Storage Tek Automated Cartridge System (ACS) 4400 tape library is currently the most popular large automated tape library. Based on 3480/90 tape technology, the library is used as the migration device ('nearline' storage) in high-performance mass storage systems. In its maximum configuration, one ACS 4400 tape library houses sixteen 3480/3490 tape drives and is capable of holding approximately 6000 cartridge tapes. The maximum storage capacity of one library using 3480 tapes is 1.2 TB, and the advertised aggregate I/O rate is about 24 MB/s. This paper reports on an extensive set of tests designed to accurately assess the performance capabilities and operational characteristics of one STK ACS 4400 tape library holding approximately 5200 cartridge tapes and configured with eight 3480 tape drives. A Cray Y-MP EL2-256 was configured as its host machine. More than 40,000 tape jobs were run in a variety of conditions to gather data in the areas of channel speed characteristics, robotics motion, timed tape mounts, and timed tape reads and writes.

  3. 37 CFR 1.824 - Form and format for nucleotide and/or amino acid sequence submissions in computer readable form.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... which the data were recorded on the computer readable form, the operating system used, a reference... in a self-extracting format that will decompress on one of the systems described in paragraph (b) of... these format requirements: (1) Computer Compatibility: IBM PC/XT/AT or Apple Macintosh; (2) Operating...

  4. The USL NASA PC R and D project: General specifications of objectives

    NASA Technical Reports Server (NTRS)

    Dominick, Wayne D. (Editor)

    1984-01-01

    Given here are the general specifications of the objectives of the University of Southwestern Louisiana Data Base Management System (USL/DBMS) NASA PC R and D Project, a project initiated to address future R and D issues related to PC-based processing environments acquired pursuant to the NASA contract work; namely, the IBM PC/XT systems.

  5. Testudinibacter aquarius gen. nov., sp. nov., a member of the family Pasteurellaceae isolated from the oral cavity of freshwater turtles.

    PubMed

    Hansen, Mie Johanne; Pennanen, Elin Anna Erica; Bojesen, Anders Miki; Christensen, Henrik; Bertelsen, Mads Frost

    2016-02-01

    A total of 13 Pasteurellaceae isolates from healthy freshwater turtles were characterized by genotypic and phenotypic tests. Phylogenetic analysis of partial 16S rRNA and rpoB gene sequences showed that the isolates investigated formed a monophyletic group. The closest related species based on 16S rRNA gene sequencing was Chelonobacter oris CCUG 55632T with 94.4 % similarity, and the closest related species based on rpoB gene sequence comparison was [Pasteurella] testudinis CCUG 19802T with 91.5 % similarity. All the investigated isolates exhibited phenotypic characteristics of the family Pasteurellaceae. However, they could be separated from existing genera of the Pasteurellaceae by the following test results: indole, ornithine decarboxylase and Voges-Proskauer positive; and methyl red, urease and PNPG (α-glucosidase) negative. No X- or V-factor requirement was observed. A zone of β-haemolysis surrounded the colonies after 24 h of incubation on bovine blood agar at 37 °C. Acid was produced from l-arabinose, dulcitol, d-mannitol, sucrose and trehalose. Representative strain ELNT2xT had a fatty acid profile characteristic of members of the Pasteurellaceae. ELNT2xT expressed only one respiratory quinone, ubiquinone-8 (100 %). The DNA G+C content of strain ELNT2xT was 42.8 mol%. On the basis of both phylogenetic and phenotypic evidence, it is proposed that the strains should be classified as representatives of a novel species of a new genus, Testudinibacter aquarius gen. nov., sp. nov. The type strain of Testudinibacter aquarius is ELNT2xT (= CCUG 65146T = DSM 28140T), which was isolated from the oral cavity of a captive eastern long-necked turtle (Chelodina longicollis) in Denmark in 2012.

  6. Dietary supplementation of young broiler chickens with Capsicum and turmeric oleoresins increases resistance to necrotic enteritis

    USDA-ARS?s Scientific Manuscript database

    The Clostridium-related poultry disease, necrotic enteritis (NE), causes substantial economic losses on a global scale. In this study, a mixture of two plant-derived phytonutrients, Capsicum oleoresin and turmeric oleoresin (XT), was evaluated for its effects on local and systemic immune responses ...

  7. Dietary supplementation of young broiler chickens with capsicum and turmeric oleoresin increases resistance to necrotic enteritis

    USDA-ARS?s Scientific Manuscript database

    The Clostridium-related poultry disease, necrotic enteritis (NE), causes substantial economic losses on a global scale. In this study, a mixture of two plant-derived phytonutrients, Capsicum oleoresin and turmeric oleoresin (XT), was evaluated for its effects on local and systemic immune responses ...

  8. Computing Operating Characteristics Of Bearing/Shaft Systems

    NASA Technical Reports Server (NTRS)

    Moore, James D.

    1996-01-01

    SHABERTH computer program predicts operating characteristics of bearings in a multibearing load-support system. Lubricated and nonlubricated bearings are modeled. Calculates loads, torques, temperatures, and fatigue lives of ball and/or roller bearings on a single shaft. Provides for analysis of the reaction of the system to termination of the supply of lubricant to bearings and other lubricated mechanical elements. Valuable in design and analysis of shaft/bearing systems. Two versions of SHABERTH are available: the Cray version (LEW-14860), "Computing Thermal Performances Of Shafts and Bearings", and the IBM PC version (MFS-28818), written for IBM PC-series and compatible computers running MS-DOS.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haynes, R.A.

    The Network File System (NFS) is used in UNIX-based networks to provide transparent file sharing between heterogeneous systems. Although NFS is well-known for being weak in security, it is widely used and has become a de facto standard. This paper examines the user authentication shortcomings of NFS and the approach Sandia National Laboratories has taken to strengthen it with Kerberos. The implementation on a Cray Y-MP8/864 running UNICOS is described and resource/performance issues are discussed. 4 refs., 4 figs.

  10. Parallel processing a three-dimensional free-lagrange code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mandell, D.A.; Trease, H.E.

    1989-01-01

    A three-dimensional, time-dependent free-Lagrange hydrodynamics code has been multitasked and autotasked on a CRAY X-MP/416. The multitasking was done by using the Los Alamos Multitasking Control Library, which is a superset of the CRAY multitasking library. Autotasking is done by using constructs which are only comment cards if the source code is not run through a preprocessor. The three-dimensional algorithm has presented a number of problems that simpler algorithms, such as those for one-dimensional hydrodynamics, did not exhibit. Problems in converting the serial code, originally written for a CRAY-1, to a multitasking code are discussed. Autotasking of a rewritten version of the code is discussed. Timing results for subroutines and hot spots in the serial code are presented, and suggestions for additional tools and debugging aids are given. Theoretical speedup results obtained from Amdahl's law and actual speedup results obtained on a dedicated machine are presented. Suggestions for designing large parallel codes are given.
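
    The Amdahl's-law estimate referred to above is simple arithmetic: if a fraction p of the work parallelizes perfectly over N processors, the ideal speedup is 1 / ((1 - p) + p/N). A generic illustration (not the paper's measured numbers):

      def amdahl(p, n):
          """Ideal speedup with parallel fraction p on n processors."""
          return 1.0 / ((1.0 - p) + p / n)

      for p in (0.90, 0.95, 0.99):
          print(f"p = {p:.2f}: 4 processors -> {amdahl(p, 4):.2f}x,"
                f" asymptotic limit -> {1.0 / (1.0 - p):.0f}x")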

  11. Parallel processing a real code: A case history

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mandell, D.A.; Trease, H.E.

    1988-01-01

    A three-dimensional, time-dependent Free-Lagrange hydrodynamics code has been multitasked and autotasked on a Cray X-MP/416. The multitasking was done by using the Los Alamos Multitasking Control Library, which is a superset of the Cray multitasking library. Autotasking is done by using constructs which are only comment cards if the source code is not run through a preprocessor. The 3-D algorithm has presented a number of problems that simpler algorithms, such as 1-D hydrodynamics, did not exhibit. Problems in converting the serial code, originally written for a Cray 1, to a multitasking code are discussed. Autotasking of a rewritten version of the code is discussed. Timing results for subroutines and hot spots in the serial code are presented, and suggestions for additional tools and debugging aids are given. Theoretical speedup results obtained from Amdahl's law and actual speedup results obtained on a dedicated machine are presented. Suggestions for designing large parallel codes are given. 8 refs., 13 figs.

  12. A Pacific Ocean general circulation model for satellite data assimilation

    NASA Technical Reports Server (NTRS)

    Chao, Y.; Halpern, D.; Mechoso, C. R.

    1991-01-01

    A tropical Pacific Ocean General Circulation Model (OGCM) to be used in satellite data assimilation studies is described. The transfer of the OGCM from a CYBER-205 at NOAA's Geophysical Fluid Dynamics Laboratory to a CRAY-2 at NASA's Ames Research Center is documented. Two 3-year model integrations from identical initial conditions, performed on those two computers, are compared. The model simulations are very similar to each other, as expected, but the simulation performed on the higher-precision CRAY-2 is smoother than that on the lower-precision CYBER-205. The CYBER-205 and CRAY-2 use 32- and 64-bit mantissa arithmetic, respectively. The major features of the oceanic circulation in the tropical Pacific, namely the North Equatorial Current, the North Equatorial Countercurrent, the South Equatorial Current, and the Equatorial Undercurrent, are realistically reproduced, and their seasonal cycles are described. The OGCM provides a powerful tool for the study of tropical oceans and for the assimilation of satellite altimetry data.

  13. 75 FR 17434 - In the Matter of Certain Personal Data and Mobile Communications Devices and Related Software...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-06

    ... Mobile Communications Devices and Related Software; Notice of Investigation AGENCY: U.S. International... Apple Computer, Inc. of Cupertino, California and NeXT Software, Inc. f/k/a NeXT Computer, Inc. of... certain personal data and mobile communications devices and related software by reason of infringement of...

  14. One-year multicentre outcomes of transapical aortic valve implantation using the SAPIEN XT™ valve: the PREVAIL transapical study.

    PubMed

    Walther, Thomas; Thielmann, Matthias; Kempfert, Joerg; Schroefel, Holger; Wimmer-Greinecker, Gerhard; Treede, Hendrik; Wahlers, Thorsten; Wendler, Olaf

    2013-05-01

    The study aimed to evaluate 1-year outcomes of the multicentre PREVAIL transapical (TA) study of TA aortic valve implantation (AVI) in high-risk patients. From September 2009 to August 2010, a total of 150 patients, aged 81.6 ± 5.8 years, 40.7% female, were included at 12 European TA-AVI experienced sites. Patients received 23 (n = 36), 26 (n = 57) and 29 mm (n = 57) second-generation SAPIEN XT™ (Edwards Lifesciences, Irvine, CA, USA) valves. The mean logistic EuroSCORE was 24.3 ± 7.0, and the mean Society of Thoracic Surgeons score was 7.5 ± 4.4%. Survival was 91.3% at 30 days and 77.9% at 1 year. Subgroup analysis revealed survival rates of 91.7/88.9%, 86.0/70.2% and 96.55/91.2% for patients receiving 23-, 26- and 29-mm valves at 30 days and at 1 year, respectively. Transthoracic echocardiography revealed preserved left ventricular ejection fraction and low gradients. Aortic incompetence was none in 41/48%, trace in 30/36%, mild in 22/12% and moderate in 7/4% of patients at discharge and at 1 year, respectively. Walking distance increased from 221 m (postimplant) to 284 m (at 1 year, P = 0.0004). Three patients required reoperation due to increasing aortic incompetence during follow-up. Causes of mortality at 1 year were cardiac (n = 7), stroke (n = 1) and other (n = 5). The European PREVAIL multicentre trial demonstrates good functionality and good outcomes for TA-AVI using the second-generation SAPIEN XT prosthesis and the ASCENDRA-II delivery system. The 29-mm SAPIEN XT valve was successfully introduced and showed excellent results.

  15. Effect of different surface treatments on the shear bond strength of nanofilled composite repairs

    PubMed Central

    Ahmadizenouz, Ghazaleh; Esmaeili, Behnaz; Taghvaei, Arnica; Jamali, Zahra; Jafari, Toloo; Amiri Daneshvar, Farshid; Khafri, Soraya

    2016-01-01

    Background. Repairing aged composite resin is a challenging process. Many surface treatment options have been proposed to this end. This study evaluated the effect of different surface treatments on the shear bond strength (SBS) of nano-filled composite resin repairs. Methods. Seventy-five cylindrical specimens of a Filtek Z350XT composite resin were fabricated and stored in 37°C distilled water for 24 hours. After thermocycling, the specimens were divided into 5 groups according to the following surface treatments: no treatment (group 1); air abrasion with 50-μm aluminum oxide particles (group 2); irradiation with Er:YAG laser beams (group 3); roughening with coarse-grit diamond bur + 35% phosphoric acid (group 4); and etching with 9% hydrofluoric acid for 120 s (group 5). Another group of Filtek Z350XT composite resin samples (4×6 mm) was fabricated for the measurement of cohesive strength (group 6). A silane coupling agent and an adhesive system were applied after each surface treatment. The specimens were restored with the same composite resin and thermocycled again. A shearing force was applied to the interface in a universal testing machine. Data were analyzed using one-way ANOVA and post hoc Tukey tests (P < 0.05). Results. One-way ANOVA indicated significant differences between the groups (P < 0.05). SBS of controls was significantly lower than the other groups; differences between groups 2, 3, 4, 5 and 6 were not significant. Surface treatment with diamond bur + 35% phosphoric acid resulted in the highest bond strength. Conclusion. All the surface treatments used in this study improved the shear bond strength of nanofilled composite resin used. PMID:27092209

  16. Performance evaluation of the Abbott CELL-DYN Emerald for use as a bench-top analyzer in a research setting.

    PubMed

    Khoo, T-L; Xiros, N; Guan, F; Orellana, D; Holst, J; Joshua, D E; Rasko, J E J

    2013-08-01

    The CELL-DYN Emerald is a compact bench-top hematology analyzer that can be used for a three-part white cell differential analysis. To determine its utility for analysis of human and mouse samples, we evaluated this machine against the larger CELL-DYN Sapphire and Sysmex XT2000iV hematology analyzers. 120 human (normal and abnormal) and 30 mouse (normal and abnormal) samples were analyzed on both the CELL-DYN Emerald and CELL-DYN Sapphire or Sysmex XT2000iV analyzers. For mouse samples, the CELL-DYN Emerald analyzer required manual recalibration based on the histogram populations. Analysis of the CELL-DYN Emerald showed excellent precision, within accepted ranges (white cell count CV% = 2.09%; hemoglobin CV% = 1.68%; platelets CV% = 4.13%). Linearity was excellent (R² ≥ 0.99), carryover was minimal (<1%), and overall interinstrument agreement was acceptable for both human and mouse samples. Comparison between the CELL-DYN Emerald and Sapphire analyzers for human samples or Sysmex XT2000iV analyzer for mouse samples showed excellent correlation for all parameters. The CELL-DYN Emerald was generally comparable to the larger reference analyzer for both human and mouse samples. It would be suitable for use in satellite research laboratories or as a backup system in larger laboratories. © 2012 John Wiley & Sons Ltd.
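
    As a generic note on the precision figures quoted above (not the study's raw data): the coefficient of variation is the sample standard deviation expressed as a percentage of the mean over replicate runs.

      import numpy as np

      replicates = np.array([7.9, 8.1, 8.0, 8.2, 7.8])   # made-up WBC counts, 10^9/L
      cv = 100.0 * replicates.std(ddof=1) / replicates.mean()
      print(f"CV% = {cv:.2f}")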

  17. A comparison between the shear bond strength of brackets bonded to glazed and deglazed porcelain surfaces with resin-reinforced glass-ionomer cement and a bis-GMA resin adhesive.

    PubMed

    Lifshitz, Abraham B; Cárdenas, Marianela

    2006-01-01

    This study compared the shear bond strength of a light-cure resin-reinforced glass-ionomer cement with that of a bis-GMA light-cure resin system in the bonding of stainless steel brackets to glazed and deglazed porcelain surfaces. Porcelain surfaces were divided into 4 groups: group 1, deglazed porcelain surfaces with Transbond XT; group 2, glazed porcelain surfaces with Transbond XT; group 3, deglazed porcelain surfaces with Fuji Ortho LC; and group 4, glazed porcelain surfaces with Fuji Ortho LC. Microetching with 50-μm aluminum oxide for 2 seconds at a distance of 5 mm deglazed the porcelain surfaces in groups 1 and 3. All brackets were bonded to the porcelain surfaces using the same procedure and light-cured for 40 seconds with a visible light. All samples were thermocycled between 5°C and 55°C for 300 cycles before testing for shear bond strength with a universal testing machine. The analysis of variance showed no significant difference (P < .05) among the 4 groups: group 1, 10.12 MPa; group 2, 7.00 MPa; group 3, 6.78 MPa; and group 4, 11.15 MPa. The F test also failed to demonstrate any statistical difference among the groups. Conditioning the porcelain surfaces with 37% phosphoric acid immediately followed by a nonhydrolyzed silane coupling agent resulted in clinically adequate bond strength when using either a composite resin or a resin-reinforced glass-ionomer cement. Microetching of these porcelain surfaces apparently offers no bonding advantage.

  18. The Effect of a Combination of Implant Controller and Handpiece from Different Manufacturers on the Torque Value.

    PubMed

    Lee, Du-Hyeong; Kim, Yong-Gun; Lee, Jong-Ho; Hong, Sam-Pyo; Lim, Young-Jun; Lee, Kyu-Bok

    2015-01-01

    To determine the accuracy of the applied torque of different implant controller and handpiece combinations by using an electronic torque gauge. Four combinations of the following devices were tested: Surgic XT controller (NSK), XIP10 controller (Saeshin), X-SG20L handpiece (NSK), and CRB26LX handpiece (Saeshin). For five torque settings, 30 measurements were recorded at 30 revolutions per minute by using an electronic torque gauge fixed to jigs, and means were calculated. Applied torques were generally higher than the set torques of 10 and 20 Ncm and lower than the set values of 40 and 50 Ncm. The average torque deviations differed significantly among the combinations (P < .05). At 10 and 20 Ncm, the Surgic XT/X-SG20L combination yielded the closest value to the intended torque, followed by the XIP10/X-SG20L combination. At 30 Ncm, the XIP10/X-SG20L combination showed the nearest value. At 40 Ncm, the Surgic XT/X-SG20L, XIP10/CRB26LX, and XIP10/X-SG20L combinations showed deviations within 10%. At 50 Ncm, all the combinations showed lower applied torque than the set value. Large standard deviations were observed in the Surgic XT/CRB26LX (13.288) and Surgic XT/X-SG20L (7.858) combinations. Different combinations of implant controllers and handpieces do not generate significant variations in applied torque; however, the actual torque varies according to the torque setting. It is necessary to calibrate devices before use to reduce potentially problematic torque.

  19. [Effects of surface treatment and adhesive application on shear bond strength between zirconia and enamel].

    PubMed

    Li, Yinghui; Wu, Buling; Sun, Fengyang

    2013-03-01

    To evaluate the effects of sandblasting and different orthodontic adhesives on the shear bond strength between zirconia and enamel. Zirconia ceramic samples were designed and manufactured for 40 extracted human maxillary first premolars with a CAD/CAM system. The samples were randomized into 4 groups according to surface treatment (sandblasted or untreated) and adhesive (3M Transbond XT or Jingjin dental enamel bonding resin). After 24 h of bonded fixation, the shear bond strengths were measured with a universal mechanical testing machine and analyzed with factorial variance analysis. The shear bond strength was significantly higher in the sandblasting groups than in the untreated groups (P < 0.05) and comparable between the two adhesives, Transbond XT and dental enamel bonding resin (P > 0.05). The shear bond strength between zirconia and enamel is sufficient after sandblasting, regardless of which adhesive is applied.

  20. Bisphenol A release from orthodontic adhesives measured in vitro and in vivo with gas chromatography.

    PubMed

    Moreira, Marília Rodrigues; Matos, Leonardo Gontijo; de Souza, Israel Donizeti; Brigante, Tamires Amabile Valim; Queiroz, Maria Eugênia Costa; Romano, Fábio Lourenço; Nelson-Filho, Paulo; Matsumoto, Mírian Aiko Nakane

    2017-03-01

    The objectives of this study were to quantify in vitro the bisphenol A (BPA) release from 5 orthodontic composites and to assess in vivo the BPA level in patients' saliva and urine after bracket bonding with an orthodontic adhesive system. For the in-vitro portion of this study, 5 orthodontic composites were evaluated: Eagle Spectrum (American Orthodontics, Sheboygan, Wis), Enlight (Ormco, Orange, Calif), Light Bond (Reliance Orthodontic Products, Itasca, Ill), Mono Lok II (Rocky Mountain Orthodontics, Denver, Colo), and Transbond XT (3M Unitek, Monrovia, Calif). Simulating intraoral conditions, the specimens were immersed in a water/ethanol solution, and the BPA liberation (ng·g^{-1}) was measured after 30 minutes, 24 hours, 1 day, 1 week, and 1 month by gas chromatography coupled with mass spectrometry. Twenty patients indicated for fixed orthodontic treatment participated in the in-vivo study. Saliva samples were collected before bracket bonding and then 30 minutes, 24 hours, 1 day, 1 week, and 1 month after bonding the brackets. Urine samples were collected before bonding and then at 1 day, 1 week, and 1 month after bonding. The results were analyzed statistically using analysis of variance and the Tukey posttest, with a significance level of 5%. All composites evaluated in vitro released small amounts of BPA; Enlight showed the greatest release, at 1 month. Regarding the in-vivo study, the mean BPA level in saliva increased significantly only at 30 minutes after bonding in comparison with measurements recorded before bonding. All orthodontic composites released BPA in vitro. Enlight and Light Bond had, respectively, the highest and lowest BPA releases in vitro. The in-vivo experiment showed that bracket bonding with the Transbond XT orthodontic adhesive system resulted in increased BPA levels in saliva and urine. The levels were significant but still lower than the reference dose for daily ingestion. Copyright © 2016 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.

  1. The cascade high productivity language

    NASA Technical Reports Server (NTRS)

    Callahan, David; Chamberlain, Bradford L.; Zima, Hans P.

    2004-01-01

    This paper describes the design of Chapel, the Cascade High Productivity Language, which is being developed in the DARPA-funded HPCS project Cascade led by Cray Inc. Chapel pushes the state of the art in languages for HEC system programming by focusing on productivity, in particular by combining the goal of highest possible object-code performance with the programmability offered by a high-level user interface.

  2. Spectrophotometric Evaluation of Colour Stability of Nano Hybrid Composite Resin in Commonly Used Food Colourants in Asian Countries.

    PubMed

    Chittem, Jyothi; Sajjan, Girija S; Varma Kanumuri, Madhu

    2017-01-01

    There is growing interest in the colour stability of aesthetic restorations, but few studies have been reported so far. This study was designed to investigate the effects of different common food colourants consumed by patients in Asian countries, i.e., turmeric and carmoisine (an orange-red dye), on a recent nano hybrid composite resin. A total of sixty disk-shaped specimens measuring 10 mm in diameter and 2 mm in thickness were prepared. The samples were divided into two groups: Z100 (a microhybrid dental restorative composite) and Filtek Z250 XT (a nanohybrid universal restorative). Baseline colour measurements of all specimens were made using a reflectance spectrophotometer with the CIE L*a*b* system. Specimens were immersed in artificial saliva and in experimental solutions containing the food colourants (carmoisine solution and turmeric solution) for three hours per day at 37°C. Colour measurements were repeated after 15 days, and the colour difference (ΔE*) was calculated. Mean values were compared by one-way analysis of variance (ANOVA), and Tukey's post-hoc multiple range test was employed to identify the significant groups at the 5% level. Z100 showed the minimum staining when compared to Z250 XT in both colourant solutions. The nanohybrid composite resin containing TEGDMA showed significant colour change compared with the microhybrid composite resin as a result of staining in the turmeric and carmoisine solutions.
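
    For reference, the colour difference in the CIE L*a*b* system is presumably the standard CIE76 metric, computed from the shifts in the colour coordinates between the baseline and post-immersion measurements: $\Delta E^{*} = \sqrt{(\Delta L^{*})^{2} + (\Delta a^{*})^{2} + (\Delta b^{*})^{2}}$.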

  3. Structural transition in Mg-doped LiMn2O4: a comparison with other M-doped Li-Mn spinels

    NASA Astrophysics Data System (ADS)

    Capsoni, Doretta; Bini, Marcella; Chiodelli, Gaetano; Massarotti, Vincenzo; Mozzati, Maria Cristina; Azzoni, Carlo B.

    2003-01-01

    The charge distribution in the Mg-doped lithium manganese spinel $\mathrm{Li}_{1.02}\mathrm{Mg}_{x}\mathrm{Mn}_{1.98-x}\mathrm{O}_{4}$ with $0.00 < x \le 0.20$ is discussed and compared to those pertinent to other M-doped samples (M = Ni$^{2+}$, Co$^{3+}$, Cr$^{3+}$, Al$^{3+}$ and Ga$^{3+}$). EPR spectra, low-temperature X-ray diffraction and conductivity data are related to the cooperative Jahn-Teller (J-T) transition occurring at about 280 K in the undoped sample. The sensitivity of the cationic sublattice in displaying electronic and magnetic changes after substitution is remarked. The inhibition of the J-T transition is related to the ratio $r = [\mathrm{Mn}^{4+}]/[\mathrm{Mn}^{3+}]$ as deduced from the charge distribution model $[\mathrm{Li}^{+}_{1-x_t}\mathrm{Mg}^{2+}_{x_t}]_{\mathrm{tetr}}[\mathrm{Li}^{+}_{y+x_t}\mathrm{Mg}^{2+}_{x_o}\mathrm{Mn}^{3+}_{1-3y-2x}\mathrm{Mn}^{4+}_{1+2y+x}]_{\mathrm{octa}}$, where $x = x_o + x_t$. For $y = 0.02$ and $x = 0.02$, a value $r = 1.177$ is obtained, very close to $r_{\mathrm{lim}} = 1.18$, the limit value beyond which the transition is inhibited.
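
    As a quick consistency check on the quoted numbers: with $y = 0.02$ and $x = 0.02$, the octahedral Mn populations of the model above give $r = \frac{1 + 2y + x}{1 - 3y - 2x} = \frac{1.06}{0.90} \approx 1.178$, in agreement with the reported $r = 1.177$ and just below $r_{\mathrm{lim}} = 1.18$.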

  4. ARCGRAPH SYSTEM - AMES RESEARCH GRAPHICS SYSTEM

    NASA Technical Reports Server (NTRS)

    Hibbard, E. A.

    1994-01-01

    Ames Research Graphics System, ARCGRAPH, is a collection of libraries and utilities which assist researchers in generating, manipulating, and visualizing graphical data. In addition, ARCGRAPH defines a metafile format that contains device-independent graphical data. This file format is used with various computer graphics manipulation and animation packages at Ames, including SURF (COSMIC Program ARC-12381) and GAS (COSMIC Program ARC-12379). In its full configuration, the ARCGRAPH system consists of a two-stage pipeline which may be used to output graphical primitives. Stage one is associated with the graphical primitives (i.e. moves, draws, color, etc.) along with the creation and manipulation of the metafiles. Five distinct data filters make up stage one. They are: 1) PLO which handles all 2D vector primitives, 2) POL which handles all 3D polygonal primitives, 3) RAS which handles all 2D raster primitives, 4) VEC which handles all 3D vector primitives, and 5) PO2 which handles all 2D polygonal primitives. Stage two is associated with the process of displaying graphical primitives on a device. To generate the various graphical primitives, create and reprocess ARCGRAPH metafiles, and access the device drivers in the VDI (Video Device Interface) library, users link their applications to ARCGRAPH's GRAFIX library routines. Both FORTRAN and C language versions of the GRAFIX and VDI libraries exist for enhanced portability within these respective programming environments. The ARCGRAPH libraries were developed on a VAX running VMS. Minor documented modification of various routines, however, allows the system to run on the following computers: Cray X-MP running COS (no C version); Cray 2 running UNICOS; DEC VAX running BSD 4.3 UNIX, or Ultrix; SGI IRIS Turbo running GL2-W3.5 and GL2-W3.6; Convex C1 running UNIX; Amdahl 5840 running UTS; Alliant FX8 running UNIX; Sun 3/160 running UNIX (no native device driver); Stellar GS1000 running Stellex (no native device driver); and an SGI IRIS 4D running IRIX (no native device driver). Currently with version 7.0 of ARCGRAPH, the VDI library supports the following output devices: a VT100 terminal with a RETRO-GRAPHICS board installed, a VT240 using the Tektronix 4010 emulation capability, an SGI IRIS turbo using the native GL2 library, a Tektronix 4010, a Tektronix 4105, and the Tektronix 4014. ARCGRAPH version 7.0 was developed in 1988.

  5. IBM PC/IX operating system evaluation plan

    NASA Technical Reports Server (NTRS)

    Dominick, Wayne D. (Editor); Granier, Martin; Hall, Philip P.; Triantafyllopoulos, Spiros

    1984-01-01

    An evaluation plan for the IBM PC/IX Operating System designed for IBM PC/XT computers is discussed. The evaluation plan covers the areas of performance measurement and evaluation, software facilities available, man-machine interface considerations, networking, and the suitability of PC/IX as a development environment within the University of Southwestern Louisiana NASA PC Research and Development project. In order to compare and evaluate the PC/IX system, comparisons with other available UNIX-based systems are also included.

  6. A 4-year clinical evaluation of direct composite build-ups for space closure after orthodontic treatment.

    PubMed

    Demirci, Mustafa; Tuncer, Safa; Öztaş, Evren; Tekçe, Neslihan; Uysal, Ömer

    2015-12-01

    To evaluate the medium-term clinical performance of direct composite build-ups for diastema closures and teeth recontouring using a nano and a nanohybrid composite in combination with three- or two-step etch-and-rinse adhesives following treatment with fixed orthodontic appliances. A total of 30 patients (mean age, 19.5 years) received 147 direct composite additions for teeth recontouring and diastema closures. A nano and a nanohybrid composite (Filtek Supreme XT and CeramX Duo) were bonded to tooth structure by using a three-step (Scotchbond Multipurpose) or a two-step (XP Bond) etch-and-rinse adhesive. Ten out of 147 composite build-ups (composite addition) constituted tooth recontouring cases, and the remaining 137 constituted diastema closure cases. The restorations were evaluated by two experienced, calibrated examiners according to modified Ryge criteria at the following time intervals: baseline, 1, 2, 3, and 4 years. The 4-year survival rates were 92.8 % for Filtek Supreme XT/Scotchbond Multi-Purpose Plus and 93 % for CeramX Duo/XP Bond. Only ten restorations failed (5 Filtek Supreme XT and 5 CeramX Duo). Statistical analysis revealed no significant differences between the two composite-adhesive combinations with respect to color match, marginal discoloration, wear/loss of anatomical form, caries formation, marginal adaptation, and surface texture on comparing the five time periods (baseline, 1, 2, 3, and 4 years). The 4-year survival rates in the present study were favorable. The restorations exhibited excellent scores with regard to color match, marginal adaptation, surface texture, marginal discoloration, wear/loss of anatomical form, and caries formation, after 4 years of clinical evaluation. Clinical relevance: An alternative clinical approach for correcting discrepancies in tooth size and form, such as performing direct composite restorations following fixed orthodontic treatment, may be an excellent and minimally invasive treatment.

  7. Accuracy Progressive Calculation of Lagrangian Trajectories from Gridded Velocity Field

    DTIC Science & Technology

    2014-01-01

    traced (Vries and Doos 2001). The two types of velocity are convertible. Routine ocean data assimilation systems (Galanis et al. 2006; Lozano et...) ... The position of each fluid particle, R(t) = [x(t), y(t), z(t)], is specified in the Lagrangian system. The connection... coordinate system at the southwest corner (Fig. 6). The x*- and y*-axes point eastward and northward, respectively. Here, the superscript...
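
    The fragments above concern tracing particle positions R(t) through a gridded velocity field. A minimal sketch of the generic technique (not the report's code; the grid, field, and step sizes are assumptions): bilinearly interpolate a steady 2-D velocity field at the particle position, then advance the particle with a classical fourth-order Runge-Kutta step.

      # Lagrangian trajectory from a gridded 2-D velocity field (illustrative).
      import numpy as np

      def interp_velocity(x, y, u, v, dx, dy):
          """Bilinearly interpolate gridded velocities (u, v) at point (x, y)."""
          i = min(max(int(x // dx), 0), u.shape[1] - 2)
          j = min(max(int(y // dy), 0), u.shape[0] - 2)
          fx, fy = x / dx - i, y / dy - j
          def bilin(f):
              return ((1 - fx) * (1 - fy) * f[j, i] + fx * (1 - fy) * f[j, i + 1]
                      + (1 - fx) * fy * f[j + 1, i] + fx * fy * f[j + 1, i + 1])
          return bilin(u), bilin(v)

      def rk4_step(x, y, u, v, dx, dy, dt):
          """Advance one particle by one time step with classical RK4."""
          k1 = interp_velocity(x, y, u, v, dx, dy)
          k2 = interp_velocity(x + 0.5 * dt * k1[0], y + 0.5 * dt * k1[1], u, v, dx, dy)
          k3 = interp_velocity(x + 0.5 * dt * k2[0], y + 0.5 * dt * k2[1], u, v, dx, dy)
          k4 = interp_velocity(x + dt * k3[0], y + dt * k3[1], u, v, dx, dy)
          x += dt / 6.0 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
          y += dt / 6.0 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
          return x, y

      # Solid-body rotation test: the particle should return close to its
      # starting point after one revolution (period 2*pi for this field).
      n, dx, dy, dt = 64, 1.0, 1.0, 0.05
      yy, xx = np.mgrid[0:n, 0:n].astype(float)
      u, v = -(yy - n / 2), (xx - n / 2)
      x, y = n / 2 + 10.0, n / 2
      for _ in range(int(round(2 * np.pi / dt))):
          x, y = rk4_step(x, y, u, v, dx, dy, dt)
      print(f"final position: ({x:.2f}, {y:.2f}); started at (42.00, 32.00)")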

  8. Exploiting Thread Parallelism for Ocean Modeling on Cray XC Supercomputers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sarje, Abhinav; Jacobsen, Douglas W.; Williams, Samuel W.

    The incorporation of increasing core counts in modern processors used to build state-of-the-art supercomputers is driving application development towards exploitation of thread parallelism, in addition to distributed memory parallelism, with the goal of delivering efficient high-performance codes. In this work we describe the exploitation of threading, and our experiences with it, in a real-world ocean modeling application code, MPAS-Ocean. We present a detailed performance analysis and comparisons of various threading approaches and configurations on Cray XC series supercomputers.

  9. NAS (Numerical Aerodynamic Simulation Program) technical summaries, March 1989 - February 1990

    NASA Technical Reports Server (NTRS)

    1990-01-01

    Given here are selected scientific results from the Numerical Aerodynamic Simulation (NAS) Program's third year of operation. During this year, the scientific community was given access to a Cray-2 and a Cray Y-MP supercomputer. Topics covered include flow field analysis of fighter wing configurations, large-scale ocean modeling, the Space Shuttle flow field, advanced computational fluid dynamics (CFD) codes for rotary-wing airloads and performance prediction, turbulence modeling of separated flows, airloads and acoustics of rotorcraft, vortex-induced nonlinearities on submarines, and standing oblique detonation waves.

  10. The neXtProt peptide uniqueness checker: a tool for the proteomics community.

    PubMed

    Schaeffer, Mathieu; Gateau, Alain; Teixeira, Daniel; Michel, Pierre-André; Zahn-Zabal, Monique; Lane, Lydie

    2017-11-01

    The neXtProt peptide uniqueness checker allows scientists to define which peptides can be used to validate the existence of human proteins, i.e. map uniquely versus multiply to human protein sequences taking into account isobaric substitutions, alternative splicing and single amino acid variants. The pepx program is available at https://github.com/calipho-sib/pepx and can be launched from the command line or through a cgi web interface. Indexing requires a sequence file in FASTA format. The peptide uniqueness checker tool is freely available on the web at https://www.nextprot.org/tools/peptide-uniqueness-checker and from the neXtProt API at https://api.nextprot.org/. lydie.lane@sib.swiss. © The Author(s) 2017. Published by Oxford University Press.
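
    To make the mapping idea concrete: a peptide validates a protein entry only if it maps to exactly one sequence once isobaric residues are conflated. A minimal sketch (illustrative only, not the pepx implementation) that handles just the Leu/Ile substitution and ignores the alternative splicing and single amino acid variants that the real tool also considers:

      # Toy peptide-uniqueness check; only the isobaric Leu/Ile pair is handled.
      def canon(seq):
          """Collapse isobaric residues: Leu (L) and Ile (I) share the same mass."""
          return seq.upper().replace("I", "L")

      def matching_proteins(peptide, proteome):
          """Accessions of all proteins whose sequence contains the peptide."""
          p = canon(peptide)
          return [acc for acc, seq in proteome.items() if p in canon(seq)]

      # Toy proteome: P1 and P2 differ only by I/L swaps (hypothetical sequences).
      proteome = {
          "P1": "MKTAYIAKQRQISFVK",
          "P2": "MKTAYLAKQRQLSFVK",
          "P3": "GGGSSSPPPA",
      }
      hits = matching_proteins("TAYIAKQR", proteome)
      print(hits, "-> unique" if len(hits) == 1 else "-> maps multiply")
      # ['P1', 'P2'] -> maps multiply, because I and L are indistinguishable.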

  11. High-Performance Design Patterns for Modern Fortran

    DOE PAGES

    Haveraaen, Magne; Morris, Karla; Rouson, Damian; ...

    2015-01-01

    This paper presents ideas for using coordinate-free numerics in modern Fortran to achieve code flexibility in the partial differential equation (PDE) domain. We also show how Fortran, over the last few decades, has changed to become a language well-suited for state-of-the-art software development. Fortran's new coarray distributed data structure, the language's class mechanism, and its side-effect-free, pure procedure capability provide the scaffolding on which we implement HPC software. These features empower compilers to organize parallel computations with efficient communication. We present some programming patterns that support asynchronous evaluation of expressions comprised of parallel operations on distributed data. We implemented these patterns using coarrays and the message passing interface (MPI). We compared the codes' complexity and performance. The MPI code is much more complex and depends on external libraries. The MPI code on Cray hardware using the Cray compiler is 1.5–2 times faster than the coarray code on the same hardware. The Intel compiler implements coarrays atop Intel's MPI library, with the result apparently being 2–2.5 times slower than manually coded MPI despite exhibiting nearly linear scaling efficiency. As compilers mature and further improvements to coarrays come in Fortran 2015, we expect this performance gap to narrow.

  12. Quantifying phase synchronization using instances of Hilbert phase slips

    NASA Astrophysics Data System (ADS)

    Govindan, R. B.

    2018-07-01

    We propose to quantify phase synchronization between two signals, x(t) and y(t), by calculating the variance in the Hilbert phase of y(t) at instances of phase slips exhibited by x(t). The proposed approach is tested on numerically simulated coupled chaotic Roessler systems and second-order autoregressive processes. Furthermore, we compare the performance of the proposed and original approaches using uterine electromyogram signals and show that both approaches yield consistent results. A standard phase synchronization approach, which involves unwrapping the Hilbert phases $\phi_1(t)$ and $\phi_2(t)$ of the two signals and analyzing the variance in $|n\,\phi_1(t) - m\,\phi_2(t)| \bmod 2\pi$ (n and m are integers), was used for comparison. The synchronization indexes obtained from the proposed approach and the standard approach agree reasonably well in all of the systems studied in this work. Our results indicate that the proposed approach, unlike the traditional approach, does not require the non-invertible transformations (unwrapping of the phases and calculation of the modulus 2π) and can be used reliably to quantify phase synchrony between two signals.
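
    A minimal sketch of the standard comparison index in the 1:1 case (n = m = 1), using SciPy's Hilbert transform. The index below is the mean resultant length of the wrapped phase difference, which carries the same locking information as the variance-based index; the proposed method would instead examine the phase of y(t) only at the instants where x(t) exhibits phase slips. All signals and parameters are assumptions for illustration.

      # Hilbert-phase synchronization index for the 1:1 case (illustrative).
      import numpy as np
      from scipy.signal import hilbert

      def sync_index(x, y):
          """Phase-locking index in [0, 1] from the wrapped phase difference."""
          phi1 = np.angle(hilbert(x))          # instantaneous phase of x(t)
          phi2 = np.angle(hilbert(y))          # instantaneous phase of y(t)
          dphi = np.mod(phi1 - phi2, 2 * np.pi)
          return np.abs(np.mean(np.exp(1j * dphi)))   # 1 = locked, 0 = unlocked

      t = np.linspace(0, 10, 5000)
      x = np.sin(2 * np.pi * 3.0 * t)
      y_locked = np.sin(2 * np.pi * 3.0 * t + 0.7)    # constant phase lag
      y_free = np.sin(2 * np.pi * 3.7 * t)            # drifting relative phase
      print(sync_index(x, y_locked))   # close to 1
      print(sync_index(x, y_free))     # markedly smaller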

  13. GPU acceleration of a petascale application for turbulent mixing at high Schmidt number using OpenMP 4.5

    NASA Astrophysics Data System (ADS)

    Clay, M. P.; Buaria, D.; Yeung, P. K.; Gotoh, T.

    2018-07-01

    This paper reports on the successful implementation of a massively parallel GPU-accelerated algorithm for the direct numerical simulation of turbulent mixing at high Schmidt number. The work stems from a recent development (Comput. Phys. Commun., vol. 219, 2017, 313-328), in which a low-communication algorithm was shown to attain high degrees of scalability on the Cray XE6 architecture when overlapping communication and computation via dedicated communication threads. An even higher level of performance has now been achieved using OpenMP 4.5 on the Cray XK7 architecture, where on each node the 16 integer cores of an AMD Interlagos processor share a single Nvidia K20X GPU accelerator. In the new algorithm, data movements are minimized by performing virtually all of the intensive scalar field computations in the form of combined compact finite difference (CCD) operations on the GPUs. A memory layout in departure from usual practices is found to provide much better performance for a specific kernel required to apply the CCD scheme. Asynchronous execution enabled by adding the OpenMP 4.5 NOWAIT clause to TARGET constructs improves scalability when used to overlap computation on the GPUs with computation and communication on the CPUs. On the 27-petaflops supercomputer Titan at Oak Ridge National Laboratory, USA, a GPU-to-CPU speedup factor of approximately 5 is consistently observed at the largest problem size of 8192^3 grid points for the scalar field computed with 8192 XK7 nodes.

  14. The SGI/CRAY T3E: Experiences and Insights

    NASA Technical Reports Server (NTRS)

    Bernard, Lisa Hamet

    1999-01-01

    The focus of the HPCC Earth and Space Sciences (ESS) Project is capability computing - pushing highly scalable computing testbeds to their performance limits. The drivers of this focus are the Grand Challenge problems in Earth and space science: those that could not be addressed in a capacity computing environment where large jobs must continually compete for resources. These Grand Challenge codes require a high degree of communication, large memory, and very large I/O (throughout the duration of the processing, not just in loading initial conditions and saving final results). This set of parameters led to the selection of an SGI/Cray T3E as the current ESS Computing Testbed. The T3E at the Goddard Space Flight Center is a unique computational resource within NASA. As such, it must be managed to effectively support the diverse research efforts across the NASA research community yet still enable the ESS Grand Challenge Investigator teams to achieve their performance milestones, for which the system was intended. To date, all Grand Challenge Investigator teams have achieved the 10 GFLOPS milestone, eight of nine have achieved the 50 GFLOPS milestone, and three have achieved the 100 GFLOPS milestone. In addition, many technical papers have been published highlighting results achieved on the NASA T3E, including some at this Workshop. The successes enabled by the NASA T3E computing environment are best illustrated by the 512 PE upgrade funded by the NASA Earth Science Enterprise earlier this year. Never before has an HPCC computing testbed been so well received by the general NASA science community that it was deemed critical to the success of a core NASA science effort. NASA looks forward to many more success stories before the conclusion of the NASA-SGI/Cray cooperative agreement in June 1999.

  15. Comparison of microleakage in Class II cavities restored with silorane-based and methacrylate-based composite resins using different restorative techniques over time.

    PubMed

    Khosravi, Kazem; Mousavinasab, Seyed-Mostafa; Samani, Mahsa Sahraneshin

    2015-01-01

    Despite the growing tendency toward tooth-colored restorations in dentistry, polymerization shrinkage and subsequent marginal microleakage remain a problem. The aim of this in vitro study was to compare microleakage between silorane-based and methacrylate-based composite resins at different time intervals and with different restorative techniques. In this in vitro study, 108 sound extracted human molar teeth were used. Mesial and distal proximal Class II boxes with dimensions of 1.5 mm depth and 4 mm width were prepared. The gingival margins of all cavities were 1 mm below the cemento-enamel junction. The teeth were randomly divided into three groups based on test materials. In the first group, the teeth were restored with a nanocomposite (Filtek Z350XT, 3M ESPE) and SE Bond adhesive (Kuraray, Japan); in the second group, with a silorane-based composite (Filtek P90, 3M ESPE) and Filtek P90 Adhesive (3M ESPE, USA); and in the third group, with a microhybrid posterior composite resin (Filtek P60, 3M ESPE) and SE Bond adhesive (Kuraray, Japan). Half of the proximal cavities in each of these three groups were restored in two horizontal layers and the other half in four horizontal layers. After a period of aging (24 hours, 3 months, or 6 months) in water and then application of 500 thermal cycles, the teeth were immersed for 24 hours in 0.5% fuchsin and evaluated under a stereomicroscope at ×36 magnification to assess leakage at the gingival margin. Data were statistically analyzed using Kruskal-Wallis and Mann-Whitney U-tests; P ≤ 0.05 was considered significant. For Z350XT, microleakage was significantly higher at the 6-month interval than at 24 hours (P = 0.01). The difference in microleakage between the P90 and P60 composite resins was also statistically significant, with less microleakage in P90. Microleakage was not significantly different between P90 and Z350XT at 24 hours; however, the difference was significant at the 3-month and 6-month intervals. Differences in microleakage between the P60 and Z350XT composite resins were not statistically significant at any interval (P = 0.38). P90 showed the lowest microleakage during storage in water. Z350XT had microleakage similar to P90 at 24 hours, but after 6 months of storage in water it showed the highest microleakage among all the groups. The number of layers (2 vs. 4) did not result in any differences in the microleakage scores of the composite resins (P = 0.42). Water storage times did not have any significant effect on the microleakage of P90 and P60.

  16. Apple founder targets healthcare as NeXT market. Interview by Carolyn Dunbar and Michael L. Laughlin.

    PubMed

    Jobs, S

    1992-12-01

    Cofounder and former chairman of the board of Apple Computer Steven Jobs looks beyond the 1980s image of a petulant, embittered young man, fighting with all who failed to share his vision, and many who did. Today, as a founder, president and chairman of NeXT, Inc., he looks to more high-minded applications of his computer genius.

  17. Impact of rheology on probabilistic forecasts of sea ice trajectories: application for search and rescue operations in the Arctic

    NASA Astrophysics Data System (ADS)

    Rabatel, Matthias; Rampal, Pierre; Carrassi, Alberto; Bertino, Laurent; Jones, Christopher K. R. T.

    2018-03-01

    We present a sensitivity analysis and discuss the probabilistic forecast capabilities of the novel sea ice model neXtSIM used in hindcast mode. The study pertains to the response of the model to the uncertainty on winds using probabilistic forecasts of ice trajectories. neXtSIM is a continuous Lagrangian numerical model that uses an elasto-brittle rheology to simulate the ice response to external forces. The sensitivity analysis is based on a Monte Carlo sampling of 12 members. The response of the model to the uncertainties is evaluated in terms of simulated ice drift distances from their initial positions, and from the mean position of the ensemble, over the mid-term forecast horizon of 10 days. The simulated ice drift is decomposed into advective and diffusive parts that are characterised separately both spatially and temporally and compared to what is obtained with a free-drift model, that is, when the ice rheology does not play any role in the modelled physics of the ice. The seasonal variability of the model sensitivity is presented and shows the role of the ice compactness and rheology in the ice drift response at both local and regional scales in the Arctic. Indeed, the ice drift simulated by neXtSIM in summer is close to the one obtained with the free-drift model, while the more compact and solid ice pack shows a significantly different mechanical and drift behaviour in winter. For the winter period analysed in this study, we also show that, in contrast to the free-drift model, neXtSIM reproduces the sea ice Lagrangian diffusion regimes as found from observed trajectories. The forecast capability of neXtSIM is also evaluated using a large set of real buoy trajectories and compared to the capability of the free-drift model. We found that neXtSIM performs significantly better in simulating sea ice drift, both in terms of forecast error and as a tool to assist search and rescue operations, although the sources of uncertainties assumed for the present experiment are not sufficient for complete coverage of the observed IABP positions.

  18. Shear bond strength of different adhesives tested in accordance with DIN 13990-1/-2 and using various methods of enamel conditioning.

    PubMed

    Richter, C; Jost-Brinkmann, P-G

    2015-03-01

    The purpose of this work was to analyze the shear bond strength (SBS) of different adhesives for orthodontic brackets in accordance with DIN 13990-1/-2, also taking into consideration potential effects arising from different scenarios of enamel conditioning and specimen storage. A total of 390 experiments were performed, with groups of 10 specimens subjected to identical treatments. Three adhesives were tested: Transbond™ XT (3M Unitek, Monrovia, USA), Beauty Ortho Bond (Shofu, Kyoto, Japan), and Fuji Ortho LC (GC Europe, Leuven, Belgium). SBS was evaluated separately at the bracket-adhesive and adhesive-enamel interfaces, as well as the total (enamel-adhesive-bracket) interface. The brackets were metal brackets for upper right central incisors (Discovery® from Dentaurum, Ispringen, Germany). A universal testing machine (Zwick Z010, Ulm, Germany) was used for testing the SBS after 15 min, or after storage in distilled water at 37 °C for 24 h, or after 24 h followed by 500 thermocycles alternating between 5 and 55 °C. Transbond™ XT produced the highest levels of SBS. The least favorable performance was observed with Fuji Ortho LC after enamel conditioning with 10 % polyacrylic acid. Thermocycling did not have a significant influence. Transbond™ XT and Beauty Ortho Bond (but not Fuji Ortho LC) yielded levels of SBS adequate for clinical application (≥ 7 MPa).

  19. System and method for constructing filters for detecting signals whose frequency content varies with time

    DOEpatents

    Qian, S.; Dunham, M.E.

    1996-11-12

    A system and method are disclosed for constructing a bank of filters which detect the presence of signals whose frequency content varies with time. The present invention includes a novel system and method for developing one or more time templates designed to match the received signals of interest, and the bank of matched filters uses the one or more time templates to detect the received signals. Each matched filter compares the received signal x(t) with a respective, unique time template that has been designed to approximate a form of the signals of interest. The robust time domain template is assumed to be of the form $w(t) = A(t)\cos(2\pi\phi(t))$, and the present invention uses the trajectory of a joint time-frequency representation of x(t) as an approximation of the instantaneous frequency function $\phi'(t)$. First, numerous data samples of the received signal x(t) are collected. A joint time-frequency representation is then applied to represent the signal, preferably using the time-frequency distribution series. The joint time-frequency transformation represents the analyzed signal energy at time t and frequency f, P(t,f), which is a three-dimensional plot of time vs. frequency vs. signal energy. Then P(t,f) is reduced to a multivalued function f(t), a two-dimensional plot of time vs. frequency, using a thresholding process. Curve fitting steps are then performed on the time/frequency plot, preferably using Levenberg-Marquardt curve fitting techniques, to derive a general instantaneous frequency function $\phi'(t)$ which best fits the multivalued function f(t). Integrating $\phi'(t)$ along t yields $\phi(t)$, which is then inserted into the form of the time template equation. A suitable amplitude A(t) is also preferably determined. Once the time template has been determined, one or more filters are developed which each use a version or form of the time template. 7 figs.
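
    A minimal sketch of the detection step under assumed parameters: a hand-picked chirp template of the form w(t) = A(t)cos(2πφ(t)) is correlated against a noisy, delayed copy of itself. In the patent, φ'(t) is estimated from the time-frequency distribution rather than chosen by hand; the names and numbers below are illustrative.

      # Matched-filter sketch for a time-varying-frequency signal (illustrative;
      # the chirp parameters, delay, and noise level are assumptions).
      import numpy as np

      fs = 1000.0                              # sample rate, Hz
      t = np.arange(0, 1.0, 1 / fs)
      phi = 50 * t + 20 * t**2                 # phi(t): 50 Hz sweeping upward
      A = np.ones_like(t)                      # flat amplitude envelope A(t)
      template = A * np.cos(2 * np.pi * phi)   # w(t) = A(t) cos(2*pi*phi(t))

      rng = np.random.default_rng(0)
      received = np.concatenate([np.zeros(300), template])[:t.size]
      received += rng.normal(0, 2.0, t.size)   # bury the delayed signal in noise

      # Cross-correlate; the peak lag recovers the template's arrival time.
      out = np.correlate(received, template, mode="full")
      delay = int(out.argmax()) - (t.size - 1)
      print("estimated delay:", delay, "samples (true: 300)")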

  20. History of Canaveral District: 1950 - 1971

    DTIC Science & Technology

    1971-07-01

    world, the Canaveral sites had from the beginning required special precautions to alleviate corrosion of exposed items. Special finishes were required...

  1. Designing Next Generation Massively Multithreaded Architectures for Irregular Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tumeo, Antonino; Secchi, Simone; Villa, Oreste

    Irregular applications, such as data mining or graph-based computations, show unpredictable memory/network access patterns and control structures. Massively multi-threaded architectures with large node count, like the Cray XMT, have been shown to address their requirements better than commodity clusters. In this paper we present the approaches that we are currently pursuing to design future generations of these architectures. First, we introduce the Cray XMT and compare it to other multithreaded architectures. We then propose an evolution of the architecture, integrating multiple cores per node and next generation network interconnect. We advocate the use of hardware support for remote memory reference aggregation to optimize network utilization. For this evaluation we developed a highly parallel, custom simulation infrastructure for multi-threaded systems. Our simulator executes unmodified XMT binaries with very large datasets, capturing effects due to contention and hot-spotting, while predicting execution times with greater than 90% accuracy. We also discuss the FPGA prototyping approach that we are employing to study efficient support for irregular applications in next generation manycore processors.

  2. General specifications for the development of a PC-based simulator of the NASA RECON system

    NASA Technical Reports Server (NTRS)

    Dominick, Wayne D. (Editor); Triantafyllopoulos, Spiros

    1984-01-01

    The general specifications for the design and implementation of an IBM PC/XT-based simulator of the NASA RECON system, including record designs, file structure designs, command language analysis, program design issues, error recovery considerations, and usage monitoring facilities are discussed. Once implemented, such a simulator will be utilized to evaluate the effectiveness of simulated information system access in addition to actual system usage as part of the total educational programs being developed within the NASA contract.

  3. Evaluation and improvement of sticky traps as monitoring tools for Glossina austeni and G. brevipalpis (Diptera: Glossinidae) in north-eastern KwaZulu-Natal, South Africa.

    PubMed

    Green, K Kappmeier; Venter, G J

    2007-12-01

    The attractiveness of various colours, colour combinations and sizes of sticky traps of the 3-dimensional trap (3DT), cross-shaped target (XT), rectangular screen (RT) and monopanels were evaluated for their efficacy to capture Glossina austeni Newstead and G. brevipalpis Newstead in north-eastern KwaZulu-Natal, South Africa. The 3-dimensional shapes of the XT and 3DT in light blue (l.blue) and white were significantly (ca. 3.1-6.9 times) better than the RT for G. austeni. On bicoloured XTs, G. austeni landed preferentially on electric blue (e.blue) (58%) and black (63%) surfaces when used with white; while for G. brevipalpis, significantly more landed on e.blue (60-66%) surfaces when used with l.blue, black or white surfaces. Increased trap size increased the catches of G. brevipalpis females and both sexes of G. austeni significantly. Temoocid and polybutene sticky materials were equally effective and remained durable for 2-3 weeks. The glossy shine of trap surfaces did not have any significant effect on the attraction and landing responses of the two species. The overall trap efficiency of the e.blue/l.blue XT was 23% for G. brevipalpis and 28% for G. austeni, and that of the e.blue/black XT was 16% for G. brevipalpis and 51% for G. austeni. Larger monopanels, painted e.blue/black on both sides, increased the catches of G. austeni females significantly by up to four times compared to the standard e.blue/black XT. This monopanel would be recommended for use as a simple and cost effective survey tool for both species in South Africa.

  4. Novel nano-particles as fillers for an experimental resin-based restorative material.

    PubMed

    Rüttermann, S; Wandrey, C; Raab, W H-M; Janda, R

    2008-11-01

    The purpose of this study is to compare the properties of two experimental materials, nano-material (Nano) and Microhybrid, and two trade products, Clearfil AP-X and Filtek Supreme XT. The flexural strength and modulus after 24 h of water storage and 5000 thermocycles, water sorption, solubility and X-ray opacity were determined according to ISO 4049. The volumetric behavior (ΔV) after curing and after water storage was investigated with the Archimedes principle. ANOVA was calculated with p<0.05. Clearfil AP-X showed the highest flexural strength (154 ± 14 MPa) and flexural modulus (11,600 ± 550 MPa) prior to and after thermocycling (117 ± 14 MPa and 13,000 ± 300 MPa). The flexural strength of all materials decreased after thermocycling, but the flexural modulus decreased only for Filtek Supreme XT. After thermocycling, there were no significant differences in flexural strength and modulus between Filtek Supreme XT, Microhybrid and Nano. Clearfil AP-X had the lowest water sorption (22 ± 1.1 μg·mm⁻³) and Nano had the highest water sorption (82 ± 2.6 μg·mm⁻³) and solubility (27 ± 2.9 μg·mm⁻³) of all the materials. No significant differences occurred between the solubility of Clearfil AP-X, Filtek Supreme XT and Microhybrid. Microhybrid and Nano provided the highest X-ray opacity. Owing to the lower filler content, Nano showed higher shrinkage than the commercial materials. Nano had the highest expansion after water storage. After thermocycling, Nano performed as well as Filtek Supreme XT for flexural strength, even better for X-ray opacity, but significantly worse for flexural modulus, water sorption and solubility. The performances of the microhybrids were superior to those of the nano-materials.

  5. Autosomal dominant frontonasal dysplasia (atypical Greig syndrome): Lessons from the Xt mutant mouse

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cunningham, M.L.; Nunes, M.E.

    1994-09-01

    Greig syndrome is the autosomal dominant association of mild hypertelorism, variable polysyndactyly, and normal intelligence. Several families have been found to have translocations or deletions of 7p13 interrupting the normal expression of GLI3 (a zinc finger, DNA binding, transcription repressor). Recently, a mutation in the mouse homologue of GLI3 was found in the extra-toes mutant mouse (Xt). The phenotypic features of this mouse model include mild hypertelorism, postaxial polydactyly of the forelimbs, preaxial polydactyly of the hindlimbs, and variable tibial hemimelia. Homozygous Xt/Xt mutants have severe frontonasal dysplasia (FND), polysyndactyly of fore- and hindlimbs, and invariable tibial hemimelia. We have recently evaluated a child with severe (type D) frontonasal dysplasia, fifth finger camptodactyly, preaxial polydactyly of one foot, and ipsilateral tibial hemimelia. His father was born with a bifid nose, broad columella, broad feet, and a two-centimeter leg length discrepancy. The paternal grandmother of the proband is phenotypically normal; however, her fraternal twin died at birth with severe facial anomalies. The paternal great-grandmother of the proband is phenotypically normal; however, her niece was born with moderate ocular hypertelorism. This pedigree is suggestive of an autosomal dominant form of frontonasal dysplasia with variable expressivity. The phenotypic features of our case more closely resemble the Xt mouse than the previously defined features of Greig syndrome in humans. This suggests that a mutation in GLI3 may be responsible for FND in this family. We are currently using polymorphic dinucleotide repeat markers flanking GLI3 in an attempt to demonstrate linkage in this pedigree. Demonstration of a GLI3 mutation in this family would broaden our view of the spectrum of phenotypes possible in Greig syndrome and could provide insight into genotype/phenotype correlation in FND.

  6. ATLAS computing on CSCS HPC

    NASA Astrophysics Data System (ADS)

    Filipcic, A.; Haug, S.; Hostettler, M.; Walker, R.; Weber, M.

    2015-12-01

    The Piz Daint Cray XC30 HPC system at CSCS, the Swiss National Supercomputing centre, was the highest ranked European system on TOP500 in 2014, also featuring GPU accelerators. Event generation and detector simulation for the ATLAS experiment have been enabled for this machine. We report on the technical solutions, performance, HPC policy challenges and possible future opportunities for HEP on extreme HPC systems. In particular a custom made integration to the ATLAS job submission system has been developed via the Advanced Resource Connector (ARC) middleware. Furthermore, a partial GPU acceleration of the Geant4 detector simulations has been implemented.

  7. Modeling high-temperature superconductors and metallic alloys on the Intel IPSC/860

    NASA Astrophysics Data System (ADS)

    Geist, G. A.; Peyton, B. W.; Shelton, W. A.; Stocks, G. M.

    Oak Ridge National Laboratory has embarked on several computational Grand Challenges, which require the close cooperation of physicists, mathematicians, and computer scientists. One of these projects is the determination of the material properties of alloys from first principles and, in particular, the electronic structure of high-temperature superconductors. While the present focus of the project is on superconductivity, the approach is general enough to permit study of other properties of metallic alloys such as strength and magnetic properties. This paper describes the progress to date on this project. We include a description of a self-consistent KKR-CPA method, parallelization of the model, and the incorporation of a dynamic load balancing scheme into the algorithm. We also describe the development and performance of a consolidated KKR-CPA code capable of running on CRAYs, workstations, and several parallel computers without source code modification. Performance of this code on the Intel iPSC/860 is also compared to a CRAY 2, CRAY YMP, and several workstations. Finally, some density-of-states calculations of two perovskite superconductors are given.

  8. Adaptation of MSC/NASTRAN to a supercomputer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gloudeman, J.F.; Hodge, J.C.

    1982-01-01

    MSC/NASTRAN is a large-scale general purpose digital computer program which solves a wide variety of engineering analysis problems by the finite element method. The program capabilities include static and dynamic structural analysis (linear and nonlinear), heat transfer, acoustics, electromagnetism and other types of field problems. It is used worldwide by large and small companies in such diverse fields as automotive, aerospace, civil engineering, shipbuilding, offshore oil, industrial equipment, chemical engineering, biomedical research, optics and government research. The paper presents the significant aspects of the adaptation of MSC/NASTRAN to the Cray-1. First, the general architecture and predominant functional use of MSC/NASTRAN are discussed to help explain the imperatives and the challenges of this undertaking. The key characteristics of the Cray-1 which influenced the decision to undertake this effort are then reviewed to help identify performance targets. An overview of the MSC/NASTRAN adaptation effort is then given to help define the scope of the project. Finally, some measures of MSC/NASTRAN's operational performance on the Cray-1 are given, along with a few guidelines to help avoid improper interpretation. 17 references.

  9. Optimization strategies for molecular dynamics programs on Cray computers and scalar work stations

    NASA Astrophysics Data System (ADS)

    Unekis, Michael J.; Rice, Betsy M.

    1994-12-01

    We present results of timing runs and different optimization strategies for a prototype molecular dynamics program that simulates shock waves in a two-dimensional (2-D) model of a reactive energetic solid. The performance of the program may be improved substantially by simple changes to the Fortran or by employing various vendor-supplied compiler optimizations. The optimum strategy varies among the machines used and will vary depending upon the details of the program. The effect of various compiler options and vendor-supplied subroutine calls is demonstrated. Comparison is made between two scalar workstations (IBM RS/6000 Model 370 and Model 530) and several Cray supercomputers (X-MP/48, Y-MP8/128, and C-90/16256). We find that for a scientific application program dominated by sequential, scalar statements, a relatively inexpensive high-end workstation such as the IBM RS/6000 RISC series will outperform single-processor performance of the Cray X-MP/48 and perform competitively with single-processor performance of the Y-MP8/128 and C-90/16256.

  10. Solution of matrix equations using sparse techniques

    NASA Technical Reports Server (NTRS)

    Baddourah, Majdi

    1994-01-01

    The solution of large systems of matrix equations is key to the solution of a large number of scientific and engineering problems. This talk describes the sparse matrix solver developed at Langley which can routinely solve in excess of 263,000 equations in 40 seconds on one Cray C-90 processor. It appears that for large scale structural analysis applications, sparse matrix methods have a significant performance advantage over other methods.
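
    To illustrate the operation being benchmarked (a sketch with SciPy's sparse direct solver, not the Langley code, and on a far smaller system than the 263,000-equation example):

      # Sparse direct solve of a large, mostly-zero linear system (illustrative).
      import numpy as np
      import scipy.sparse as sp
      import scipy.sparse.linalg as spla

      n = 10_000
      # 1-D Laplacian: symmetric positive-definite and tridiagonal, so about
      # 3n nonzeros are stored instead of n^2 dense entries.
      main = 2.0 * np.ones(n)
      off = -np.ones(n - 1)
      A = sp.diags([off, main, off], offsets=[-1, 0, 1], format="csc")
      b = np.ones(n)

      x = spla.spsolve(A, b)            # sparse LU factorization and solve
      print(np.linalg.norm(A @ x - b))  # residual, ~1e-10 or smaller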

  11. Applications of CFD and visualization techniques

    NASA Technical Reports Server (NTRS)

    Saunders, James H.; Brown, Susan T.; Crisafulli, Jeffrey J.; Southern, Leslie A.

    1992-01-01

    In this paper, three applications are presented to illustrate current techniques for flow calculation and visualization. The first two applications use a commercial computational fluid dynamics (CFD) code, FLUENT, performed on a Cray Y-MP. The results are animated with the aid of data visualization software, apE. The third application simulates a particulate deposition pattern using techniques inspired by developments in nonlinear dynamical systems. These computations were performed on personal computers.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ibrahim, Khaled Z.; Epifanovsky, Evgeny; Williams, Samuel W.

    Coupled-cluster methods provide highly accurate models of molecular structure by explicit numerical calculation of tensors representing the correlation between electrons. These calculations are dominated by a sequence of tensor contractions, motivating the development of numerical libraries for such operations. While based on matrix-matrix multiplication, these libraries are specialized to exploit symmetries in the molecular structure and in electronic interactions, and thus reduce the size of the tensor representation and the complexity of contractions. The resulting algorithms are irregular and their parallelization has been previously achieved via the use of dynamic scheduling or specialized data decompositions. We introduce our efforts to extend the Libtensor framework to work in the distributed memory environment in a scalable and energy efficient manner. We achieve up to 240× speedup compared with the best optimized shared memory implementation. We attain scalability to hundreds of thousands of compute cores on three distributed-memory architectures (Cray XC30 & XC40, BlueGene/Q), and on a heterogeneous GPU-CPU system (Cray XK7). As the bottlenecks shift from compute-bound DGEMMs to communication-bound collectives as the size of the molecular system scales, we adopt two radically different parallelization approaches for handling load imbalance. Nevertheless, we preserve a unified interface to both programming models to maintain the productivity of computational quantum chemists.
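
    The dominant kernel described above is a tensor contraction that lowers to matrix-matrix multiplication. A minimal dense, single-node sketch of one coupled-cluster-style contraction (toy sizes and random data; the symmetry blocking and distribution that Libtensor adds are omitted):

      # A coupled-cluster-style tensor contraction via einsum (illustrative).
      import numpy as np

      no, nv = 8, 20                    # occupied / virtual orbital counts (toy)
      rng = np.random.default_rng(1)
      t2 = rng.standard_normal((no, no, nv, nv))    # amplitudes t[i,j,c,d]
      eri = rng.standard_normal((nv, nv, nv, nv))   # integrals v[a,b,c,d]

      # t_new[i,j,a,b] = sum_{c,d} v[a,b,c,d] * t[i,j,c,d]; einsum reduces this
      # to a (no*no, nv*nv) x (nv*nv, nv*nv) matrix multiplication (a DGEMM).
      t_new = np.einsum("abcd,ijcd->ijab", eri, t2)
      print(t_new.shape)                # (8, 8, 20, 20)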

  13. A three-dimensional application with the numerical grid generation code: EAGLE (utilizing an externally generated surface)

    NASA Technical Reports Server (NTRS)

    Houston, Johnny L.

    1990-01-01

    Program EAGLE (Eglin Arbitrary Geometry Implicit Euler) is a multiblock grid generation and steady-state flow solver system. This system combines a boundary conforming surface generation, a composite block structure grid generation scheme, and a multiblock implicit Euler flow solver algorithm. The three codes are intended to be used sequentially from the definition of the configuration under study to the flow solution about the configuration. EAGLE was specifically designed to aid in the analysis of both freestream and interference flow field configurations. These configurations can be comprised of single or multiple bodies ranging from simple axisymmetric airframes to complex aircraft shapes with external weapons. Each body can be arbitrarily shaped with or without multiple lifting surfaces. Program EAGLE is written to compile and execute efficiently on any CRAY machine with or without Solid State Disk (SSD) devices. Also, the code uses namelist inputs which are supported by all CRAY machines using the FORTRAN compiler CFT77. The use of namelist inputs makes it easier for the user to understand the inputs and to operate Program EAGLE. Recently, the code was modified to operate on other computers, especially the Sun SPARC workstation. Several two-dimensional grid configurations were completely and successfully developed using EAGLE. Currently, EAGLE is being used for three-dimensional grid applications.

  14. Ada Compiler Validation Summary Report: NATO SWG on APSE Compiler for VAX/VMS to MC68020 Version VCM1.82-02, VAX 8350 under VMS 5.4-1 with CAIS 5.5E Host Motorola MVME 133XT (MC68020 bare machine) Target

    DTIC Science & Technology

    1992-03-06

    and their respective values. Macro parameters and values (partially recovered from the OCR): $ACC_SIZE 32; $ALIGNMENT 4; $COUNT_LAST 2 147 483 647; $DEFAULT...SIZE 2 147 483 648; $DEFAULT_STOR... The subprogram RAISE_EXCEPTION raises the exception described by the information record supplied as parameter. In addition to the subprogram

  15. Finite-Size Scaling Analysis of Binary Stochastic Processes and Universality Classes of Information Cascade Phase Transition

    NASA Astrophysics Data System (ADS)

    Mori, Shintaro; Hisakado, Masato

    2015-05-01

    We propose a finite-size scaling analysis method for binary stochastic processes $X(t) \in \{0, 1\}$ based on the second-moment correlation length ξ of the autocorrelation function C(t). The purpose is to clarify the critical properties and provide a new data analysis method for information cascades. As a simple model to represent the different behaviors of subjects in information cascade experiments, we assume that X(t) is a mixture of an independent random variable that takes 1 with probability q and a random variable that depends on the ratio z of the variables taking 1 among the most recent r variables. We consider two types of the probability f(z) that the latter takes 1: (i) analog [f(z) = z] and (ii) digital [f(z) = θ(z - 1/2)]. We study the universal scaling functions for ξ and the integrated correlation time τ. For finite r, C(t) decays exponentially as a function of t, and there is only one stable renormalization group (RG) fixed point. In the limit $r \to \infty$, where X(t) depends on all the previous variables, C(t) in model (i) obeys a power law, and the system becomes scale invariant. In model (ii) with q ≠ 1/2, there are two stable RG fixed points, which correspond to the ordered and disordered phases of the information cascade phase transition, with the critical exponents $\beta = 1$ and $\nu_{\parallel} = 2$.
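
    A minimal simulation sketch of the analog model, f(z) = z (the mixture weight p and all sizes below are assumptions for illustration; the paper's analysis concerns the scaling behaviour of C(t) for exactly such sequences):

      # Simulate the binary process X(t) and estimate its autocorrelation C(t).
      import numpy as np

      def simulate(T=20000, r=50, q=0.5, p=0.3, seed=0):
          """With prob. p draw an independent Bernoulli(q); otherwise draw
          Bernoulli(f(z)), where z is the mean of the last r values and f(z) = z."""
          rng = np.random.default_rng(seed)
          x = list(rng.integers(0, 2, r))          # arbitrary initial window
          for _ in range(T):
              z = np.mean(x[-r:])
              prob = q if rng.random() < p else z  # mixture of the two mechanisms
              x.append(int(rng.random() < prob))
          return np.array(x[r:], dtype=float)

      def autocorr(x, tmax=200):
          """Sample autocorrelation C(t) for t = 0 .. tmax-1."""
          x = x - x.mean()
          c = np.correlate(x, x, mode="full")[x.size - 1:]
          return c[:tmax] / c[0]

      C = autocorr(simulate())
      print(C[:5])   # C(t) decays exponentially for finite r, per the paper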

  16. INNOVATIVE TECHNOLOGY VERIFICATION REPORT XRF ...

    EPA Pesticide Factsheets

    The Innov-X XT400 Series (XT400) x-ray fluorescence (XRF) analyzer was demonstrated under the U.S. Environmental Protection Agency (EPA) Superfund Innovative Technology Evaluation (SITE) Program. The field portion of the demonstration was conducted in January 2005 at the Kennedy Athletic, Recreational and Social Park (KARS) at Kennedy Space Center on Merritt Island, Florida. The demonstration was designed to collect reliable performance and cost data for the XT400 analyzer and seven other commercially available XRF instruments for measuring trace elements in soil and sediment. The performance and cost data were evaluated to document the relative performance of each XRF instrument. This innovative technology verification report describes the objectives and the results of that evaluation and serves to verify the performance and cost of the XT400 analyzer. Separate reports have been prepared for the other XRF instruments that were evaluated as part of the demonstration. The objectives of the evaluation included determining each XRF instrument’s accuracy, precision, sample throughput, and tendency for matrix effects. To fulfill these objectives, the field demonstration incorporated the analysis of 326 prepared samples of soil and sediment that contained 13 target elements. The prepared samples included blends of environmental samples from nine different sample collection sites as well as spiked samples with certified element concentrations. Accuracy was as

  17. Testing of PVODE, a parallel ODE solver

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wittman, M.R.

    1996-08-09

    The purpose of this paper is to discuss the issues involved with, and the results from, testing of two example programs that use PVODE: pvkx and pvnx. These two programs are intended to provide a template for users to follow when writing their own code. However, we also used them (primarily pvkx) to do performance testing and visualization. This work was done on a Cray T3D, a Sparc 10, and a Sparc 5.

  18. MAGNA (Materially and Geometrically Nonlinear Analysis). Part I. Finite Element Analysis Manual.

    DTIC Science & Technology

    1982-12-01

    provided for operating the program, modifying storage capacity, preparing input data, estimating computer run times, and interpreting the output... (Table-of-contents fragments: Reserved File Names; Typical Execution Times on CDC Computers; Cray Program Version — Job Control Language, Modification of Storage Capacity, Execution Times on the CRAY-1 Computer; VAX Program Version; Input Data.)

  19. Construction of a fast, inexpensive rapid-scanning diode-array detector and spectrometer.

    PubMed

    Carter, T P; Baek, H K; Bonninghausen, L; Morris, R J; van Wart, H E

    1990-10-01

    A 512-element diode-array spectroscopic detection system capable of acquiring multiple spectra at a rate of 5 ms per spectrum with an effective scan rate of 102.9 kHz has been constructed. Spectra with fewer diode elements can also be acquired at scan rates up to 128 kHz. The detector utilizes a Hamamatsu silicon photodiode-array sensor that is interfaced to Hamamatsu driver/amplifier and clock generator boards and a DRA laboratories 12-bit 160-kHz analog-to-digital converter. These are standard, commercially available devices which cost approximately $3500. The system is interfaced to and controlled by an IBM XT microcomputer. Detailed descriptions of the home-built detector housing and control/interface circuitry are presented and its application to the study of the reaction of horseradish peroxidase with hydrogen peroxide is demonstrated.

  20. Novel orthodontic cement containing dimethylaminohexadecyl methacrylate with strong antibacterial capability.

    PubMed

    Feng, Xiaodong; Zhang, Ning; Xu, Hockin H K; Weir, Michael D; Melo, Mary Anne S; Bai, Yuxing; Zhang, Ke

    2017-09-26

    Orthodontic treatments increase the incidence of white spot lesions. The objectives of this study were to develop an antibacterial orthodontic cement to inhibit demineralization, and to evaluate its enamel shear bond strength and anti-biofilm properties. The novel antibacterial monomer dimethylaminohexadecyl methacrylate (DMAHDM) was synthesized and incorporated into Transbond XT at 0, 1.5 and 3% by mass. Anti-biofilm activity was assessed using a human dental plaque microcosm biofilm model. Shear bond strength and adhesive remnant index were also tested. Biofilm activity dropped precipitously on contact with the DMAHDM-containing orthodontic cement. Orthodontic cement containing 3% DMAHDM significantly reduced biofilm metabolic activity and lactic acid production (p<0.05), and decreased biofilm colony-forming units (CFUs) by two logs. Water-aging for 90 days had no adverse influence on enamel shear bond strength (p>0.1). By incorporating DMAHDM into Transbond XT for the first time, the modified orthodontic cement obtained a strong antibacterial capability without compromising the enamel bond strength.

  1. Does Undersizing of Transcatheter Aortic Valve Bioprostheses during Valve-in-Valve Implantation Avoid Coronary Obstruction? An In Vitro Study.

    PubMed

    Stock, Sina; Scharfschwerdt, Michael; Meyer-Saraei, Roza; Richardt, Doreen; Charitos, Efstratios I; Sievers, Hans-Hinrich; Hanke, Thorsten

    2017-04-01

    Background: The transcatheter aortic valve-in-valve implantation (TAViVI) is an evolving treatment strategy for degenerated surgical aortic valve bioprostheses (SAVBs) in patients with high operative risk. Although hemodynamics is excellent, there is some concern regarding coronary obstruction, especially in SAVB with externally mounted leaflet tissue, such as the Trifecta (St. Jude Medical Inc., St. Paul, Minnesota, United States). We investigated coronary flow and hydrodynamics before and after TAViVI in a SAVB with externally mounted leaflet tissue (St. Jude Medical, Trifecta) with an undersized transcatheter aortic valve bioprosthesis (Edwards Sapien XT; Edwards Lifesciences LLC, Irvine, California, United States) in an in vitro study. Materials and Methods: An aortic root model was constructed incorporating geometric dimensions known as risk factors for coronary obstruction. Investigating the validity of this model, we primarily performed recommended TAViVI with the Sapien XT (size 26 mm) in a Trifecta (size 25 mm) in a mock circulation. Thereafter, hydrodynamic performance and coronary flow (left/right coronary diastolic flow [lCF/rCF]) after TAViVI with an undersized Sapien XT (size 23 mm) in a Trifecta (size 25 mm) were investigated at two different coronary ostia heights (COHs, 8 and 10 mm). Results: Validation of the model led to significant coronary obstruction (p < 0.001). Undersized TAViVI showed no significant reduction with respect to coronary flow (lCF: COH 8 mm, 0.90-0.87 mL/stroke; COH 10 mm, 0.89-0.82 mL/stroke and rCF: COH 8 mm, 0.64-0.60 mL/stroke; COH 10 mm, 0.62-0.58 mL/stroke). Mean transvalvular gradients (4-5 mm Hg, p < 0.001) increased significantly after TAViVI. Conclusions: In our in vitro model, undersized TAViVI with the balloon-expandable Sapien XT into a modern generation SAVB (Trifecta) successfully avoided coronary flow obstruction. Georg Thieme Verlag KG Stuttgart · New York.

  2. Nonparametric Statistics Test Software Package.

    DTIC Science & Technology

    1983-09-01

    statistics because of their acceptance in the academic world, the availability of computer support, and flexibility in model building. Nonparametric...

  3. Research on Spectroscopy, Opacity, and Atmospheres

    NASA Technical Reports Server (NTRS)

    Kurucz, Robert L.

    1999-01-01

    To make my calculations more readily accessible I have set up a web site cfaku5.harvard.edu that can also be accessed by FTP. It has five 9-GB disks that hold all of my atomic and diatomic molecular data, my tables of distribution function opacities, my grids of model atmospheres, colors, fluxes, etc., my programs that are ready for distribution, and most of my recent papers. Atlases and computed spectra will be added as they are completed. New atomic and molecular calculations will be added as they are completed. I got my atomic programs that had been running on a Cray at the San Diego Supercomputer Center to run on my Vaxes and Alpha. I started with Ni and Co because there were new laboratory analyses that included isotopic and hyperfine splitting. Those calculations are described in the appended abstract for the 6th Atomic Spectroscopy and Oscillator Strengths meeting in Victoria last summer. A surprising finding is that quadrupole transitions have been grossly in error because mixing with higher levels has not been included. I now have enough memory in my Alpha to treat 3000 x 3000 matrices. I now include all levels up through n=9 for Fe I and Fe II, the spectra for which the most information is available. I am finishing those calculations right now. After Fe I and Fe II, all other spectra are "easy", and I will be in mass production. ATLAS12, my opacity sampling program for computing models with arbitrary abundances, has been put on the web server. I wrote a new distribution function opacity program for workstations that replaces the one I used on the Cray at the San Diego Supercomputer Center. Each set of abundances would take 100 Cray hours costing $100,000. I ran 25 cases. Each of my opacity CDs contains three abundances. I have a new program running on the Alpha that takes about a week. I am going to have to get a faster processor or I will have to dedicate a whole workstation just to opacities.

  4. Physical-Mechanical Properties of a Fiber-Reinforced Composite Based on an ELUR-P Carbon Tape and XT-118 Binder

    NASA Astrophysics Data System (ADS)

    Paimushin, V. N.; Kholmogorov, S. A.

    2018-03-01

    A series of tests to identify the physical-mechanical properties of a unidirectional carbon-fiber-reinforced composite based on an ELUR-P carbon tape and an XT-118 epoxy binder was performed. The form of the stress-strain diagrams of specimens loaded in tension in the longitudinal, transverse, and ±45° directions and in compression in the longitudinal and ±45° directions was examined. Tensile diagrams were also determined for the XT-118 binder alone. The relation between the tangential shear modulus and shear strains of the composite was highly nonlinear from the very beginning of loading and depended on the loading type. Such a nonlinear response of the carbon-fiber-reinforced composite in shear cannot be the result of plastic deformation of the binder, but can be explained only by structural changes caused by the inner buckling instability of the composite at micro- and mesolevels.

  5. The Hopper System: How the Largest XE6 in the World Went From Requirements to Reality.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Antypas, Katie; Butler, Tina; Carter, Jonathan

    This paper will discuss the entire process of acquiring and deploying Hopper, from the first vendor market surveys to providing 3.8 million hours of production cycles per day for NERSC users. Installing the latest system at NERSC has been both a logistical and technical adventure. Balancing compute requirements with power, cooling, and space limitations drove the initial choice and configuration of the XE6, and a number of first-of-a-kind features implemented in collaboration with Cray have resulted in a high-performance, usable, and reliable system.

  6. Supercomputer analysis of purine and pyrimidine metabolism leading to DNA synthesis.

    PubMed

    Heinmets, F

    1989-06-01

    A model system is established to analyze purine and pyrimidine metabolism leading to DNA synthesis. The principal aim is to explore the flow and regulation of terminal deoxynucleoside triphosphates (dNTPs) under various input and parametric conditions. A series of flow equations is established and subsequently converted to differential equations. These are programmed (Fortran) and analyzed on a Cray X-MP/48 supercomputer. The pool concentrations are presented as a function of time under conditions in which various pertinent parameters of the system are modified. The system is formulated by 100 differential equations.
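    A minimal sketch of the flow-equation approach, assuming a hypothetical two-pool reduction (precursor pool feeding a dNTP pool): each pool's rate of change is inflow minus outflow, and the coupled system is integrated numerically. The pools, rate constants, and values below are illustrative, not from the paper; the full model couples 100 such equations.

        from scipy.integrate import solve_ivp

        def rhs(t, y, k_in=1.0, k_conv=0.5, k_use=0.3):
            # hypothetical two-pool reduction: precursor -> dNTP -> DNA
            precursor, dntp = y
            d_precursor = k_in - k_conv * precursor      # synthesis in, conversion out
            d_dntp = k_conv * precursor - k_use * dntp   # conversion in, use in DNA synthesis
            return [d_precursor, d_dntp]

        sol = solve_ivp(rhs, (0.0, 50.0), [0.0, 0.0])
        print(sol.y[:, -1])   # pool concentrations approaching steady state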

  7. Do Some X-ray Stars Have White Dwarf Companions?

    NASA Technical Reports Server (NTRS)

    McCollum, Bruce

    1995-01-01

    Some Be stars which are intermittent X-ray sources may have white dwarf companions rather than neutron stars. It is not possible to prove or rule out the existence of Be+WD systems using X-ray or optical data. However, the presence of a white dwarf could be established by the detection of its EUV continuum shortward of the Be star's continuum turnover at 1000 Å. Either the detection or the nondetection of Be+WD systems would have implications for models of Be star variability, models of Be binary system formation and evolution, and models of wind-fed accretion.

  8. Deployment of an Advanced Electrocardiographic Analysis (A-ECG) to Detect Cardiovascular Risk in Career Firefighters

    NASA Technical Reports Server (NTRS)

    Dolezal, B. A.; Storer, T. W.; Abrazado, M.; Watne, R.; Schlegel, T. T.; Batalin, M.; Kaiser, W.; Smith, D. L.; Cooper, C. B.

    2011-01-01

    INTRODUCTION Sudden cardiac death is the leading cause of line-of-duty death among firefighters, accounting for approximately 45% of fatalities annually. Firefighters perform strenuous muscular work while wearing heavy, encapsulating personal protective equipment in high ambient temperatures, under chaotic and emotionally stressful conditions. These factors can precipitate sudden cardiac events like myocardial infarction, serious dysrhythmias, or cerebrovascular accidents in firefighters with underlying cardiovascular disease. Screening for cardiovascular risk factors is recommended but not always followed in this population. PHASER is a project charged with identifying and prioritizing risk factors in emergency responders. We have deployed an advanced ECG (A-ECG) system developed at NASA for improved sensitivity and specificity in the detection of cardiac risk. METHODS Forty-four professional firefighters were recruited to perform comprehensive baseline assessments including tests of aerobic performance and laboratory tests for fasting lipid profiles and glucose. Heart rate and conventional 12-lead ECG were obtained at rest and during incremental treadmill exercise testing (XT). In addition, a 5-min resting 12-lead A-ECG was obtained in a subset of firefighters (n=18) and transmitted over a secure networked system to a physician collaborator at NASA for advanced-ECG analysis. This A-ECG system has been proven, using myocardial perfusion and other imaging, to accurately identify a number of cardiac pathologies including coronary artery disease (CAD), left ventricular hypertrophy, hypertrophic cardiomyopathy, non-ischemic cardiomyopathy, and ischemic cardiomyopathy. RESULTS Subjects' mean (SD) age was 43 (8) years, weight 91 (13) kg, and BMI 28 (3) kg/m². Maximum oxygen uptake (VO2max) was 39 (9) mL/kg/min, which corresponds to the 45th percentile of healthy reference values and falls below the recommended standard of 42 mL/kg/min for firefighters. The metabolic threshold (VO2theta) above which lactate accumulates was 23 (8) mL/kg/min. The chronotropic index, a measure of cardiovascular strain during XT, was 35 (8) /L compared with a reference value for men of 40 /L. Total cholesterol, LDL-C, and HDL-C were 202 (34), 126 (29), and 55 (15) mg/dL, respectively. Fifty-one percent of subjects had ≥3 cardiovascular risk factors, 2 subjects had resting hypertension (BP ≥140/90), and 23 had pre-hypertension (≥120/80 but <140/90). Seven had exaggerated exercise-induced hypertension, but only one had ST depression on the XT ECG, at least one positive A-ECG score for CAD, and documented CAD based on cardiology referral. While all other subjects, including those with fewer risk factors, higher aerobic fitness, and normal exercise ECGs, were classified as healthy by A-ECG, there was no trend for association between risk factors and any of 20 A-ECG parameters in the grouped data. CONCLUSIONS A-ECG screening correctly identified the individual with CAD, although there was no trend for A-ECG parameters to distinguish those with elevated BP or multiple risk factors but normal XT ECG. We have demonstrated that a new technology, advanced ECG, can be introduced for remote firefighter risk assessment. This simple, time- and cost-effective approach to risk identification, which can be acquired remotely and transmitted securely, can detect individuals potentially at risk for line-of-duty death. Additional research is needed to further document its value.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barton, G.W. Jr.

    In UCID-19588, Communicating between the Apple and the Wang, we described how to take Apple DOS text files and send them to the Wang, and how to return Wang files to the Apple. It is also possible to use your Apple as an Octopus terminal, and to exchange files with Octopus 7600's. Presumably, you can also talk to the Crays, or any other part of the system. This connection has another virtue. It eliminates one of the terminals in your office.

  10. LAMPS software

    NASA Technical Reports Server (NTRS)

    Perkey, D. J.; Kreitzberg, C. W.

    1984-01-01

    The dynamic prediction model, along with its macro-processor capability and data flow system, from the Drexel Limited-Area and Mesoscale Prediction System (LAMPS) was converted and recoded for the Perkin-Elmer 3220. The previous version of this model was written for the Control Data Corporation 7600 and CRAY-1a computer environment which existed until recently at the National Center for Atmospheric Research. The purpose of this conversion was to prepare LAMPS for porting to computer environments other than that encountered at NCAR. The emphasis was then shifted from programming tasks to model simulation and evaluation tests.

  11. Diffusion in random networks

    DOE PAGES

    Zhang, Duan Z.; Padrino, Juan C.

    2017-06-01

    The ensemble averaging technique is applied to model mass transport by diffusion in random networks. The system consists of an ensemble of random networks, where each network is made of pockets connected by tortuous channels. Inside a channel, fluid transport is assumed to be governed by the one-dimensional diffusion equation. Mass balance leads to an integro-differential equation for the pocket mass density. The so-called dual-porosity model is found to be equivalent to the leading order approximation of the integration kernel when the diffusion time scale inside the channels is small compared to the macroscopic time scale. As a test problem, we consider the one-dimensional mass diffusion in a semi-infinite domain. Because of the time required to establish the linear concentration profile inside a channel, for early times the similarity variable is xt^(-1/4) rather than xt^(-1/2) as in the traditional theory. We found this early-time similarity can be explained by random walk theory through the network.
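    For intuition on what a similarity variable means, here is a small sketch assuming only the classical baseline (nothing from the paper's network model): for ordinary diffusion in a semi-infinite domain the analytic profile c = erfc(x / (2 sqrt(D t))) collapses onto a single curve in x * t^(-1/2); the network model above replaces this with x * t^(-1/4) at early times.

        import numpy as np
        from scipy.special import erfc

        D = 1.0
        x = np.linspace(0.0, 10.0, 200)
        for t in (0.5, 1.0, 2.0):
            eta = x * t**-0.5                      # classical similarity variable x * t**(-1/2)
            c = erfc(x / (2.0 * np.sqrt(D * t)))   # analytic semi-infinite diffusion profile
            # the profile depends on x and t only through eta:
            assert np.allclose(c, erfc(eta / (2.0 * np.sqrt(D))))
        print("profiles at all times collapse onto one curve in x * t**(-1/2)")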

  12. Parallel implementation of a Lagrangian-based model on an adaptive mesh in C++: Application to sea-ice

    NASA Astrophysics Data System (ADS)

    Samaké, Abdoulaye; Rampal, Pierre; Bouillon, Sylvain; Ólason, Einar

    2017-12-01

    We present a parallel implementation framework for a new dynamic/thermodynamic sea-ice model, called neXtSIM, based on the Elasto-Brittle rheology and using an adaptive mesh. The spatial discretisation of the model is done using the finite-element method. The temporal discretisation is semi-implicit and the advection is achieved using either a pure Lagrangian scheme or an Arbitrary Lagrangian Eulerian scheme (ALE). The parallel implementation presented here focuses on the distributed-memory approach using the message-passing library MPI. The efficiency and the scalability of the parallel algorithms are illustrated by numerical experiments performed using up to 500 processor cores of a cluster computing system. The performance obtained by the proposed parallel implementation of the neXtSIM code is shown to be sufficient for performing simulations for state-of-the-art sea-ice forecasting and geophysical process studies over geographical domains of several million square kilometers, such as the Arctic region.

  13. x-y-recording in transmission electron microscopy. A versatile and inexpensive interface to personal computers with application to stereology.

    PubMed

    Rickmann, M; Siklós, L; Joó, F; Wolff, J R

    1990-09-01

    An interface for IBM XT/AT-compatible computers is described which has been designed to read the actual specimen stage position of electron microscopes. The complete system consists of (i) optical incremental encoders attached to the x- and y-stage drivers of the microscope, (ii) two keypads for operator input, (iii) an interface card fitted to the bus of the personal computer, (iv) a standard configuration IBM XT (or compatible) personal computer optionally equipped with a (v) HP Graphic Language controllable colour plotter. The small size of the encoders and their connection to the stage drivers by simple ribbed belts allows an easy adaptation of the system to most electron microscopes. Operation of the interface card itself is supported by any high-level language available for personal computers. By the modular concept of these languages, the system can be customized to various applications, and no computer expertise is needed for actual operation. The present configuration offers an inexpensive attachment, which covers a wide range of applications from a simple notebook to high-resolution (200-nm) mapping of tissue. Since section coordinates can be processed in real-time, stereological estimations can be derived directly "on microscope". This is exemplified by an application in which particle numbers were determined by the disector method.

  14. System and method for detection of dispersed broadband signals

    DOEpatents

    Qian, S.; Dunham, M.E.

    1999-06-08

    A system and method for detecting the presence of dispersed broadband signals in real time are disclosed. The present invention utilizes a bank of matched filters for detecting the received dispersed broadband signals. Each matched filter uses a respective robust time template that has been designed to approximate the dispersed broadband signals of interest, and each time template varies across a spectrum of possible dispersed broadband signal time templates. The received dispersed broadband signal x(t) is received by each of the matched filters, and if one or more matches occurs, then the received data is determined to have signal data of interest. This signal data can then be analyzed and/or transmitted to Earth for analysis, as desired. The system and method of the present invention will prove extremely useful in many fields, including satellite communications, plasma physics, and interstellar research. The varying time templates used in the bank of matched filters are determined as follows. The robust time domain template is assumed to take the form w(t) = A(t)cos{2φ(t)}. Since the instantaneous frequency f(t) is known to be equal to the derivative of the phase φ(t), the trajectory of a joint time-frequency representation of x(t) is used as an approximation of φ′(t). 10 figs.
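    A minimal sketch of the matched-filter-bank idea with illustrative linear-chirp templates (the patent derives its templates from time-frequency trajectories; everything below, including parameters, is assumed for demonstration): correlate the received x(t) against each template and flag a detection when a normalized correlation peak stands out.

        import numpy as np

        fs, T = 1000.0, 1.0
        t = np.arange(0.0, T, 1.0 / fs)

        def chirp_template(f0, f1):
            # linear-chirp stand-in for w(t) = A(t)cos{2*phi(t)} with A(t) = 1;
            # instantaneous frequency sweeps from f0 to f1 over the record
            return np.cos(2 * np.pi * (f0 * t + 0.5 * (f1 - f0) / T * t**2))

        bank = [chirp_template(f0, f0 - 40.0) for f0 in (60.0, 80.0, 100.0)]

        rng = np.random.default_rng(0)
        x = bank[1] + 0.5 * rng.standard_normal(t.size)   # simulated received signal

        for i, w in enumerate(bank):
            peak = np.abs(np.correlate(x, w, mode="full")).max() / (w @ w)
            print(f"template {i}: normalized peak = {peak:.2f}")  # highest for the true template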

  16. Treating childhood intermittent distance exotropia: a qualitative study of decision making.

    PubMed

    Lecouturier, Jan; Clarke, Michael P; Errington, Gail; Hallowell, Nina; Murtagh, Madeleine J; Thomson, Richard

    2015-08-22

    Engaging patients (parents/families) in treatment decisions is increasingly recognised as important and beneficial. Yet where the evidence base for treatment options is limited, as with intermittent distance exotropia (X(T)), this presents a challenge for families and clinicians. The purpose of this study was to explore how decisions are made in the management and treatment of X(T) and what can be done to support decision-making for clinicians, parents and children. This was a qualitative study using face-to-face interviews with consultant ophthalmologists and orthoptists, and parents of children with X(T). Interview data were analysed using the constant comparative method. The drivers for clinicians in treatment decision-making for X(T) were the proportion of time the strabismus is manifest and parents' views. For parents, decisions were influenced by fear of bullying and, to a lesser degree, concerns around the impact of the strabismus on their child's vision. Uncertainty around the effectiveness of treatment options caused difficulties for some clinicians when communicating with parents. Parental understanding of the nature of X(T) and the rationale for treatment often differed from that of the clinicians, and this affected their involvement in decision-making. Though there were good examples of shared decision-making and parent and child engagement, some parents said the process felt rushed and they felt excluded. Parents reported that clinicians provided sufficient information in consultations but they had difficulties in retaining verbal information to convey to other family members. Overall, parents were happy with the care their child received, but there is scope for better parent and (where appropriate) child engagement in decision-making. There was an expressed need for written information about X(T) to reinforce what was given verbally in consultations and to share with other family members. Access could be via the hospital website, along with videos or blogs from parents and children who have undergone the various management options. A method of assisting clinicians to explain the treatment options, together with the uncertainties, in a clear and concise way could be of particular benefit to orthoptists, who have the most regular contact with parents and children and are more likely to suggest conservative treatments such as occlusion and minus lenses.

  17. Sustained Effects of Ecstasy on the Human Brain: A Prospective Neuroimaging Study in Novel Users

    ERIC Educational Resources Information Center

    de Win, Maartje M. L.; Jager, Gerry; Booij, Jan; Reneman, Liesbeth; Schilt, Thelma; Lavini, Christina; Olabarriaga, Silvia D.; den Heeten, Gerard J.; van den Brink, Wim

    2008-01-01

    Previous studies have suggested toxic effects of recreational ecstasy use on the serotonin system of the brain. However, it cannot be excluded that observed differences between users and non-users are the cause rather than the consequence of ecstasy use. As part of the Netherlands XTC Toxicity (NeXT) study, we prospectively assessed sustained…

  18. The USL NASA PC R and D development environment standards

    NASA Technical Reports Server (NTRS)

    Dominick, Wayne D. (Editor); Moreau, Dennis R.

    1984-01-01

    The development environment standards which have been established in order to control usage of the IBM PC/XT development systems and to prevent interference between projects currently being developed on the PCs are discussed. The standards address the following areas: scheduling PC resources; login/logout procedures; training; file naming conventions; hard disk organization; diskette care; backup procedures; and copying policies.

  19. New superfield extension of Boussinesq and its (x,t) interchanged equation from odd Poisson bracket

    NASA Astrophysics Data System (ADS)

    Palit, S.; Chowdhury, A. Roy

    1995-08-01

    A new superfield extension of the Boussinesq equation and its corresponding (x,t)-interchanged variant are deduced from the odd Poisson bracket formalism, which is similar to the antibracket of Batalin and Vilkovisky. In the former case we obtain the equation deduced by Figueroa-O'Farrill et al. from a different approach. In each case we have deduced the bi-Hamiltonian structure and some basic symmetries associated with them.

  20. Effect of Nanoscale Fillers on the Local Mechanical Behavior of Polymer Nanocomposites

    DTIC Science & Technology

    2009-12-01

    the interparticle spacing can be related to the particle volume fraction, vp, and the particle diameter, d, by [2]... (equation garbled in source) ...VGCNFs), namely as-fabricated (PR-24-XT-PS), high-temperature heat-treated (PR-24-XT-HHT-LD), and high-temperature heat-treated and oxidatively... Therefore, heat treatment had a severe effect on the fracture toughness of composites with embedded VGCNFs. This agrees with composite-level

  1. Expert systems identify fossils and manage large paleontological databases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beightol, D.S.; Conrad, M.A.

    EXPAL is a computer program permitting creation and maintenance of comprehensive databases in marine paleontology. It is designed to assist specialists and non-specialists. EXPAL includes a powerful expert system based on the morphological descriptors specific to a given group of fossils. The expert system may be used, for example, to describe and automatically identify an unknown specimen. EXPAL was first applied to Dasycladales (Calcareous green algae). Projects are under way for corresponding expert systems and databases on planktonic foraminifers and calpionellids. EXPAL runs on an IBM XT or compatible microcomputer.

  2. Personal Computer System for Automatic Coronary Venous Flow Measurement

    PubMed Central

    Dew, Robert B.

    1985-01-01

    We developed an automated system based on an IBM PC/XT personal computer to measure coronary venous blood flow during cardiac catheterization. Flow is determined by a thermodilution technique in which a cold saline solution is infused through a catheter into the coronary venous system. Regional temperature fluctuations sensed by the catheter are used to determine great cardiac vein and coronary sinus blood flow. The computer system replaces manual methods of acquiring and analyzing temperature data related to flow measurement, thereby increasing the speed and accuracy with which repetitive flow determinations can be made.
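    The abstract does not reproduce the flow equation the system automates; as a hedged sketch, a classic Ganz-type continuous-thermodilution relation (an assumption here, with illustrative values) computes flow from the infusion rate and the blood, injectate, and mixed temperatures:

        def coronary_sinus_flow(q_infusate, t_blood, t_injectate, t_mixed):
            # Ganz-type relation: Q_b = Q_i * ((T_b - T_i)/(T_b - T_m) - 1) * 1.08,
            # where 1.08 corrects for the density and specific-heat difference
            # between saline and blood (assumed, not from this paper)
            return q_infusate * ((t_blood - t_injectate) / (t_blood - t_mixed) - 1.0) * 1.08

        flow = coronary_sinus_flow(q_infusate=40.0,   # mL/min saline infusion
                                   t_blood=37.0,      # deg C
                                   t_injectate=22.0,  # deg C
                                   t_mixed=33.0)      # deg C mixed downstream temperature
        print(round(flow, 1), "mL/min")               # about 118.8 mL/min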

  3. Parallel performance of TORT on the CRAY J90: Model and measurement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barnett, A.; Azmy, Y.Y.

    1997-10-01

    A limitation on the parallel performance of TORT on the CRAY J90 is the amount of extra work introduced by the multitasking algorithm itself. The extra work beyond that of the serial version of the code, called overhead, arises from the synchronization of the parallel tasks and the accumulation of results by the master task. The goal of recent updates to TORT was to reduce the time consumed by these activities. To help understand which components of the multitasking algorithm contribute significantly to the overhead, a parallel performance model was constructed and compared to measurements of actual timings of the code.
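    A toy version of such a performance model, assuming (purely for illustration) that synchronization and master-task accumulation costs grow linearly with the number of tasks, shows how overhead caps the attainable speedup:

        def wall_time(t_serial, n_tasks, t_sync=0.05, t_accum=0.02):
            # parallel work + synchronization + master-task accumulation;
            # coefficients are illustrative placeholders, not measured values
            return t_serial / n_tasks + (t_sync + t_accum) * n_tasks

        for n in (1, 2, 4, 8, 16, 32, 64):
            t = wall_time(100.0, n)
            print(f"{n:3d} tasks: {t:7.2f} s, speedup {100.0 / t:5.2f}")
        # speedup saturates, then degrades, once the overhead term dominates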

  4. Effect of Energy Drinks on Discoloration of Silorane and Dimethacrylate-Based Composite Resins.

    PubMed

    Ahmadizenouz, Ghazaleh; Esmaeili, Behnaz; Ahangari, Zohreh; Khafri, Soraya; Rahmani, Aghil

    2016-08-01

    This study aimed to assess the effects of two energy drinks on the color change (ΔE) of two methacrylate-based and one silorane-based composite resins after one week and one month. Thirty cubic samples were fabricated from Filtek P90, Filtek Z250 and Filtek Z350XT composite resins. All the specimens were stored in distilled water at 37°C for 24 hours. Baseline color values (L*a*b*) of each specimen were measured using a spectrophotometer according to the CIEL*a*b* color system. Ten randomly selected specimens from each composite were then immersed in the two energy drinks (Hype, Red Bull) and artificial saliva (control) for one week and one month. Color was re-assessed after each storage period and ΔE values were calculated. The data were analyzed using the Kruskal-Wallis and Mann-Whitney U tests. Filtek Z250 composite showed the highest ΔE irrespective of the solutions at both time points. After seven days and one month, the lowest ΔE values were observed in Filtek Z350XT and Filtek P90 composites immersed in artificial saliva, respectively. The ΔE values of Filtek Z250 and Z350XT composites induced by Red Bull and Hype energy drinks were not significantly different. Discoloration of Filtek P90 was higher in Red Bull energy drink at both time points. Prolonged immersion time in all three solutions increased the ΔE values of all composites. However, the ΔE values were within the clinically acceptable range (<3.3) at both time points.
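    The color difference in the CIEL*a*b* system is, in its common CIE76 form, the Euclidean distance between two measurements; values below the 3.3 acceptability threshold used above count as clinically acceptable. The L*a*b* values below are illustrative, not from the study:

        import math

        def delta_e(lab1, lab2):
            # CIE76 color difference: Euclidean distance in L*a*b* space
            return math.sqrt(sum((p - q) ** 2 for p, q in zip(lab1, lab2)))

        baseline = (70.0, 1.5, 20.0)   # illustrative L*, a*, b* at baseline
        after = (68.5, 2.0, 22.0)      # after immersion
        de = delta_e(baseline, after)
        print(f"dE = {de:.2f}:", "acceptable" if de < 3.3 else "clinically noticeable")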

  5. Standardized methodology for transfemoral transcatheter aortic valve replacement with the Edwards Sapien XT valve under fluoroscopy guidance.

    PubMed

    Kasel, Albert M; Shivaraju, Anupama; Schneider, Stephan; Krapf, Stephan; Oertel, Frank; Burgdorf, Christof; Ott, Ilka; Sumer, Christian; Kastrati, Adnan; von Scheidt, Wolfgang; Thilo, Christian

    2014-09-01

    To provide a simplified, standardized methodology for a successful transfemoral transcatheter aortic valve replacement (TAVR) procedure with the Sapien XT valve in patients with severe aortic stenosis (AS). TAVR is currently reserved for patients with severe, symptomatic AS who are inoperable or at high operative risk. In many institutions, TAVR is performed under general anesthesia with intubation or with conscious sedation. In addition, many institutions still use transesophageal echo (TEE) during the procedure for aortic root angulations and positioning of the valve prior to implantation. Methods. We enrolled 100 consecutive patients (mean age, 80 ± 7 years; range, 50-94 years; female n=59) with severe symptomatic AS. Annulus measurements were based on computed tomography angiograms. All patients underwent fluoroscopy-guided transfemoral TAVR with little to no sedation and without simultaneous TEE. TAVR was predominantly performed with the use of local and central analgesics; only 36% of our cohort received conscious sedation. Procedural success of TAVR was 99%. Transthoracic echocardiography before discharge excluded aortic regurgitation (AR) >2 in all patients (AR >1; n=6). In-hospital stroke rate was 6%. The vessel closure system was successfully employed in 96%. Major vascular complication rate was 1%. The 30-day mortality was 2%. Fluoroscopy-guided TAVR with the use of just analgesics with or without conscious sedation is safe and effective, and this potentially enables a more time-effective and cost-effective procedure. This paper provides simplified, stepwise guidance on how to perform transfemoral TAVR with the Sapien XT valve.

  6. Parallel Flux Tensor Analysis for Efficient Moving Object Detection

    DTIC Science & Technology

    2011-07-01

    computing as well as parallelization to enable real time performance in analyzing complex video [3, 4]. There are a number of challenging computer vision... We use the trace of the flux tensor matrix, referred to as Tr(JF), defined as Tr(JF) = ∫Ω W(x − y) (Ixt²(y) + Iyt²(y) + Itt²(y)) dy (4)
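    A small sketch of Eq. (4), assuming simple finite differences for the mixed derivatives and a uniform box filter as a stand-in for the spatial weighting W (all of these choices are illustrative, not the report's implementation):

        import numpy as np
        from scipy.ndimage import uniform_filter

        def flux_tensor_trace(frames, window=5):
            # frames: (T, H, W) grayscale video block
            It = np.gradient(frames, axis=0)    # dI/dt
            Ixt = np.gradient(It, axis=2)       # d2I/(dx dt)
            Iyt = np.gradient(It, axis=1)       # d2I/(dy dt)
            Itt = np.gradient(It, axis=0)       # d2I/dt2
            mid = frames.shape[0] // 2          # evaluate at the central frame
            energy = Ixt[mid]**2 + Iyt[mid]**2 + Itt[mid]**2
            return uniform_filter(energy, size=window)   # spatial average, i.e. W(x - y)

        trace = flux_tensor_trace(np.random.rand(7, 64, 64))
        motion_mask = trace > trace.mean() + 2.0 * trace.std()   # crude moving-object mask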

  7. Investigation of Differences Between Measured and Predicted Pressures in AEDC/VKF Hypersonic Tunnel B

    DTIC Science & Technology

    1997-01-01

    coordinates are presented in Fig. 4b. The primary calibration data used in this paper is derived from the rake. The 42 pitot probes covered a range... the lateral (YT) direction. Figures 5 and 6 show examples of the pitot pressure and total temperature rake data from a lateral survey... [Figure 5: rake pitot measurements at XT = 16 in.; panels show local Mach number and total and static temperature contours (°R).]

  8. Application of Deep Learning Architectures for Accurate and Rapid Detection of Internal Mechanical Damage of Blueberry Using Hyperspectral Transmittance Data.

    PubMed

    Wang, Zhaodi; Hu, Menghan; Zhai, Guangtao

    2018-04-07

    Deep learning has become a widely used and powerful tool in many research fields, although not yet so much in agricultural technologies. In this work, two deep convolutional neural networks (CNN), viz. Residual Network (ResNet) and its improved version named ResNeXt, are used to detect internal mechanical damage of blueberries using hyperspectral transmittance data. The original structure and size of the hypercubes are adapted for deep CNN training. To ensure that the models are applicable to hypercubes, we adjust the number of filters in the convolutional layers. Moreover, a total of 5 traditional machine learning algorithms, viz. Sequential Minimal Optimization (SMO), Linear Regression (LR), Random Forest (RF), Bagging and Multilayer Perceptron (MLP), are run as comparison experiments. For model assessment, k-fold cross validation is used to show that model performance does not vary with different partitions of the dataset. In real-world application, selling damaged berries leads to a greater loss than discarding sound ones. Thus, precision, recall, and F1-score are also used as evaluation indicators alongside accuracy to quantify the false positive rate. The first three indicators are seldom used by investigators in the agricultural engineering domain. Furthermore, ROC curves and Precision-Recall curves are plotted to visualize the performance of the classifiers. The fine-tuned ResNet/ResNeXt achieve average accuracy and F1-score of 0.8844/0.8784 and 0.8952/0.8905, respectively. The classifiers SMO/LR/RF/Bagging/MLP obtain average accuracy and F1-score of 0.8082/0.7606/0.7314/0.7113/0.7827 and 0.8268/0.7796/0.7529/0.7339/0.7971, respectively. The two deep learning models achieve better classification performance than the traditional machine learning methods. Classification of each testing sample takes only 5.2 ms and 6.5 ms for ResNet and ResNeXt, respectively, indicating that the deep learning framework has great potential for online fruit sorting. The results of this study demonstrate the potential of deep CNN application for analyzing internal mechanical damage of fruit.
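    The evaluation indicators emphasized above follow directly from confusion-matrix counts, here with "damaged" as the positive class (the counts are illustrative, not the paper's):

        def precision_recall_f1(tp, fp, fn):
            # precision penalizes false alarms; recall penalizes missed damage;
            # F1 is their harmonic mean
            precision = tp / (tp + fp)
            recall = tp / (tp + fn)
            f1 = 2.0 * precision * recall / (precision + recall)
            return precision, recall, f1

        p, r, f1 = precision_recall_f1(tp=88, fp=10, fn=12)
        print(f"precision={p:.3f} recall={r:.3f} F1={f1:.3f}")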

  10. Transcatheter Heart Valve Selection and Permanent Pacemaker Implantation in Patients With Pre-Existent Right Bundle Branch Block.

    PubMed

    van Gils, Lennart; Tchetche, Didier; Lhermusier, Thibault; Abawi, Masieh; Dumonteil, Nicolas; Rodriguez Olivares, Ramón; Molina-Martin de Nicolas, Javier; Stella, Pieter R; Carrié, Didier; De Jaegere, Peter P; Van Mieghem, Nicolas M

    2017-03-03

    Right bundle branch block is an established predictor of new conduction disturbances and need for a permanent pacemaker (PPM) after transcatheter aortic valve replacement. The aim of the study was to evaluate the absolute rates of transcatheter aortic valve replacement-related PPM implantations in patients with pre-existent right bundle branch block and to categorize them by transcatheter heart valve. We pooled data on 306 transcatheter aortic valve replacement patients from 4 high-volume centers in Europe and selected those with right bundle branch block at baseline without a previously implanted PPM. Logistic regression was used to evaluate whether PPM rate differed among transcatheter heart valves after adjustment for confounders. Mean age was 83±7 years and 63% were male. Median Society of Thoracic Surgeons score was 6.3 (interquartile range, 4.1-10.2). The following transcatheter valve designs were used: Medtronic CoreValve (n=130; Medtronic, Minneapolis, MN); Edwards Sapien XT (ES-XT; n=124) and Edwards Sapien 3 (ES-3; n=32; Edwards Lifesciences, Irvine, CA); and Boston Scientific Lotus (n=20; Boston Scientific Corporation, Marlborough, MA). The overall permanent pacemaker implantation rate post-transcatheter aortic valve replacement was 41%, and per valve design: 75% with Lotus, 46% with CoreValve, 32% with ES-XT, and 34% with ES-3. The indication for PPM implantation was total atrioventricular block in 98% of the cases. Lotus was associated with a higher PPM rate than all other valves. PPM rate did not differ between ES-XT and ES-3. Ventricular paced rhythm at 30-day and 1-year follow-up was present in 81% and 89%, respectively. Right bundle branch block at baseline is associated with a high incidence of PPM implantation for all transcatheter heart valves. PPM rate was highest for Lotus and lowest for ES-XT and ES-3. Pacemaker dependency remained high during follow-up.

  11. Contention Modeling for Multithreaded Distributed Shared Memory Machines: The Cray XMT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Secchi, Simone; Tumeo, Antonino; Villa, Oreste

    Distributed Shared Memory (DSM) machines are a wide class of multi-processor computing systems where a large virtually-shared address space is mapped on a network of physically distributed memories. High memory latency and network contention are two of the main factors that limit performance scaling of such architectures. Modern high-performance computing DSM systems have evolved toward exploitation of massive hardware multi-threading and fine-grained memory hashing to tolerate irregular latencies, avoid network hot-spots and enable high scaling. In order to model the performance of such large-scale machines, parallel simulation has been proved to be a promising approach to achieve good accuracy in reasonable times. One of the most critical factors in solving the simulation speed-accuracy trade-off is network modeling. The Cray XMT is a massively multi-threaded supercomputing architecture that belongs to the DSM class, since it implements a globally-shared address space abstraction on top of a physically distributed memory substrate. In this paper, we discuss the development of a contention-aware network model intended to be integrated in a full-system XMT simulator. We start by measuring the effects of network contention in a 128-processor XMT machine and then investigate the trade-off that exists between simulation accuracy and speed, by comparing three network models which operate at different levels of accuracy. The comparison and model validation is performed by executing a string-matching algorithm on the full-system simulator and on the XMT, using three datasets that generate noticeably different contention patterns.

  12. Deciding alternative left turn signal phases using expert systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, E.C.P.

    1988-01-01

    The Texas Transportation Institute (TTI) conducted a study to investigate the feasibility of applying artificial intelligence (AI) technology and expert systems (ES) design concepts to a traffic engineering problem. Prototype systems were developed to analyze user input, evaluate various reasoning, and suggest suitable left turn phase treatment. These systems were developed using AI programming tools on IBM PC/XT/AT-compatible microcomputers. Two slightly different systems were designed using AI languages; another was built with a knowledge engineering tool. These systems include the PD PROLOG and TURBO PROLOG AI programs, as well as the INSIGHT Production Rule Language.

  13. Exploring Accelerating Science Applications with FPGAs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Storaasli, Olaf O; Strenski, Dave

    2007-01-01

    FPGA hardware and tools (VHDL, Viva, MitrionC and CHiMPS) are described. FPGA performance is evaluated on two Cray XD1 systems (Virtex-II Pro 50 and Virtex-4 LX160) for human genome (DNA and protein) sequence comparisons for a computational biology code (FASTA). Scalable FPGA speedups of 50X (Virtex-II) and 100X (Virtex-4) over a 2.2 GHz Opteron were achieved. Coding and IO issues faced for human genome data are described.

  14. Cross-scale efficient tensor contractions for coupled cluster computations through multiple programming model backends

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ibrahim, Khaled Z.; Epifanovsky, Evgeny; Williams, Samuel

    Coupled-cluster methods provide highly accurate models of molecular structure through explicit numerical calculation of tensors representing the correlation between electrons. These calculations are dominated by a sequence of tensor contractions, motivating the development of numerical libraries for such operations. While based on matrix–matrix multiplication, these libraries are specialized to exploit symmetries in the molecular structure and in electronic interactions, and thus reduce the size of the tensor representation and the complexity of contractions. The resulting algorithms are irregular and their parallelization has been previously achieved via the use of dynamic scheduling or specialized data decompositions. We introduce our efforts to extend the Libtensor framework to work in the distributed memory environment in a scalable and energy-efficient manner. We achieve up to 240× speedup compared with the optimized shared memory implementation of Libtensor. We attain scalability to hundreds of thousands of compute cores on three distributed-memory architectures (Cray XC30 and XC40, and IBM Blue Gene/Q), and on a heterogeneous GPU-CPU system (Cray XK7). As the bottlenecks shift from being compute-bound DGEMM's to communication-bound collectives as the size of the molecular system scales, we adopt two radically different parallelization approaches for handling load-imbalance, tasking and bulk synchronous models. Nevertheless, we preserve a unified interface to both programming models to maintain the productivity of computational quantum chemists.
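    To illustrate the matrix-multiplication reduction mentioned above, here is a hedged sketch (the dimensions and the specific contraction are invented for the example, not taken from Libtensor): a four-index contraction becomes a single GEMM once the free and summed index pairs are grouped.

        import numpy as np

        no, nv = 8, 16                      # occupied / virtual orbital counts (illustrative)
        T = np.random.rand(no, no, nv, nv)  # amplitude-like tensor T[i,k,a,c]
        V = np.random.rand(no, nv, no, nv)  # integral-like tensor V[k,c,j,b]

        # contraction R[i,j,a,b] = sum over k,c of T[i,k,a,c] * V[k,c,j,b]
        R = np.einsum("ikac,kcjb->ijab", T, V)

        # the same contraction as an explicit matrix multiply over grouped indices
        Tm = T.transpose(0, 2, 1, 3).reshape(no * nv, no * nv)   # rows (i,a), cols (k,c)
        Vm = V.reshape(no * nv, no * nv)                          # rows (k,c), cols (j,b)
        Rm = (Tm @ Vm).reshape(no, nv, no, nv).transpose(0, 2, 1, 3)
        assert np.allclose(R, Rm)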

  16. A new milling machine for computer-aided, in-office restorations.

    PubMed

    Kurbad, Andreas

    Chairside computer-aided design/computer-aided manufacturing (CAD/CAM) technology requires an effective technical basis to obtain dental restorations with optimal marginal accuracy, esthetics, and longevity in as short a timeframe as possible. This article describes a compact, 5-axis milling machine based on an innovative milling technology (5XT - five-axis turn-milling technique), which is capable of achieving high-precision milling results within a very short processing time. Furthermore, the device's compact dimensioning and state-of-the-art mode of operation facilitate its use in the dental office. This model is also an option to be considered for use in smaller dental laboratories, especially as the open input format enables it to be quickly and simply integrated into digital processing systems already in use. The possibility of using ceramic and polymer materials with varying properties enables the manufacture of restorations covering all conceivable indications in the field of fixed dental prosthetics.

  17. Influence of Er:YAG and Ti:sapphire laser irradiation on the microtensile bond strength of several adhesives to dentin.

    PubMed

    Portillo, M; Lorenzo, M C; Moreno, P; García, A; Montero, J; Ceballos, L; Fuentes, M V; Albaladejo, A

    2015-02-01

    The aim of the present study was to evaluate the influence of erbium:yttrium-aluminum-garnet (Er:YAG) and Ti:sapphire laser irradiation on the microtensile bond strength (MTBS) of three different adhesive systems to dentin. Flat dentin surfaces from 27 molars were divided into three groups according to laser irradiation: control, Er:YAG (2,940 nm, 100 μs, 2.7 W, 9 Hz) and Ti:sapphire laser (795 nm, 120 fs, 1 W, 1 kHz). Each group was divided into three subgroups according to the adhesive system used: a two-step total-etching adhesive (Adper Scotchbond 1 XT, from now on XT), a two-step self-etching adhesive (Clearfil SE Bond, from now on CSE), and an all-in-one self-etching adhesive (Optibond All-in-One, from now on OAO). After 24 h of water storage, beams with a cross-section of 1 mm² were longitudinally cut from the samples. Each beam underwent a traction test in an Instron machine. Fifteen polished dentin specimens were used for surface morphology analysis by scanning electron microscopy (SEM). Failure modes of representative debonded microbars were SEM-assessed. Data were analyzed by ANOVA, the chi-square test, and multiple linear regression (p < 0.05). In the control group, XT obtained higher MTBS than in the laser groups, which performed equally. CSE showed higher MTBS without laser than in the laser groups, where the Er:YAG attained higher MTBS than the ultrashort laser. When OAO was used, MTBS values were equal across the three treatments. CSE obtained the highest MTBS regardless of the surface treatment applied. Er:YAG and ultrashort laser irradiation reduce bonding effectiveness when a two-step total-etching adhesive or a two-step self-etching adhesive is used and do not affect effectiveness when an all-in-one self-etching adhesive is applied.

  18. GUMICS4 Synthetic and Dynamic Simulations of the ECLAT Project

    NASA Astrophysics Data System (ADS)

    Facsko, G.; Palmroth, M. M.; Gordeev, E.; Hakkinen, L. V.; Honkonen, I. J.; Janhunen, P.; Sergeev, V. A.; Kauristie, K.; Milan, S. E.

    2012-12-01

    The European Commission funded the European Cluster Assimilation Techniques (ECLAT) project as a collaboration of five leading European universities and research institutes. A main contribution of the Finnish Meteorological Institute (FMI) is to provide a wide range of global MHD runs with the Grand Unified Magnetosphere Ionosphere Coupling simulation (GUMICS). The runs are divided into two categories: synthetic runs investigating the extent of solar wind drivers that can influence magnetospheric dynamics, and dynamic runs using measured solar wind data as input. Here we consider the first set of runs, with synthetic solar wind input. The solar wind density, velocity, and interplanetary magnetic field had different magnitudes and orientations; furthermore, two F10.7 flux values were selected for solar radiation minimum and maximum. The solar wind parameter values were held constant so that a constant, stable solution was achieved. All configurations were run several times with three different tilt angles (-15°, 0°, +15°) in the GSE X-Z plane. The Cray XT supercomputer of the FMI provides a unique opportunity in global magnetohydrodynamic simulation: running GUMICS-4 on one year of real solar wind data. Solar wind magnetic field, density, temperature, and velocity data based on Advanced Composition Explorer (ACE) and WIND measurements are downloaded from the OMNIWeb open database, and a special input file is created for each Cluster orbit. All data gaps are replaced with linear interpolations between the last and first valid data values before and after the gap. A minimum variance transformation is applied to the interplanetary magnetic field data to clean it and to avoid divergence errors in the code. The Cluster orbits are divided into slices, allowing parallel computation; each slice has an average tilt angle value. The file timestamps start one hour before perigee to provide time for building up a magnetosphere in the simulation space. The real measurements were interpolated to one-minute intervals by the database, and the time steps of the simulation results are shifted by 20-30 minutes, calculated from the spacecraft position and the actual solar wind velocity. All simulation results are saved every 5 minutes (in calculation time). The results of the 162 simulations, the so-called "synthetic run library", were visualized and uploaded to the FMI homepage after validation, as were the year-run results. Here we present details of these runs.

  19. Nonzero solutions of nonlinear integral equations modeling infectious disease

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, L.R.; Leggett, R.W.

    1982-01-01

    Sufficient conditions to insure the existence of periodic solutions to the nonlinear integral equation x(t) = ∫_{t−τ}^{t} f(s, x(s)) ds are given in terms of simple product and product integral inequalities. The equation can be interpreted as a model for the spread of infectious diseases (e.g., gonorrhea or any of the rhinovirus viruses) if x(t) is the proportion of infectives at time t and f(t,x(t)) is the proportion of new infectives per unit time.
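    A hedged numerical sketch of this model, assuming the common choice f(t, x) = a(t)·x·(1 − x) with a periodic contact rate a(t) (the functional form, parameters, and discretization are illustrative, not from the paper): stepping the integral forward in time settles onto a nonzero periodic solution once the contact rate is large enough (a·τ > 1 in the constant-rate case).

        import numpy as np

        tau, dt = 1.0, 0.01
        m = int(tau / dt)                  # grid points spanning one infectious period

        a = lambda t: 2.2 + 0.8 * np.cos(2 * np.pi * t)   # seasonal contact rate (assumed)
        f = lambda t, x: a(t) * x * (1.0 - x)             # new infectives per unit time

        steps = 5000
        times = np.arange(steps) * dt
        x = np.empty(steps)
        x[:m] = 0.1                        # constant initial history on [t0 - tau, t0]
        for n in range(m, steps):
            window = slice(n - m, n)
            x[n] = dt * f(times[window], x[window]).sum()   # x(t) = integral over (t - tau, t)
        print(x[-5:])                      # oscillates with the period of a(t)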

  20. a Study of High Transition Temperature Superconductors: Mercury-Copper Oxide Systems

    NASA Astrophysics Data System (ADS)

    Kirven, Paul Douglas

    1995-01-01

    The Hg-based copper oxides, viz. HgBa2Ca(n-1)CunO(2n+2+δ), were discovered in 1993. A system consisting of many different, but related, compounds can be synthesized by including or substituting one or more elements in the original compound (e.g., Hg(1-x)Pb(x)). In this thesis, the superconducting and normal-state properties of several of these compounds were investigated. In the normal state the electrical resistivity ρ(T) is a linear function of temperature (T) and the magnetic susceptibility, χ(T), is weakly paramagnetic. Many were observed to superconduct at very high temperatures. At 5 K, a diamagnetic χ(T) of up to 80% of the perfect value was measured. The onset transition temperature (Tc), where a specimen starts to superconduct, is observed to be as high as 135 K. Although Tc is about 10 K higher than that of any previously known material, in many respects the properties of this new system are similar to those of other type II superconductors. Flux flow behavior and the nature of these type II superconductors were investigated via SQUID measurements and high-field longitudinal magnetoresistance R(T,H) as a function of field and temperature. The study of flux motion allows one to observe Anderson-Kim-type logarithmic flux creep at low temperature and field (T < 80 K and B < 2 T) and giant flux flow at high temperature and field (80 K < T < 130 K; B < 17 T). Key parameters were determined, including the reversibility temperature T*(H), the critical field Hc, and the pinning potential Uo. Normal-state properties which were also measured include the Curie constant, the Curie-Weiss temperature (15-25 K), the temperature-independent susceptibility, and the Sommerfeld constant (10-25 mJ/mol Cu K²). The values of these parameters for the Hg-based superconductors were compared to those of other superconductors. The results of this investigation are expected to yield a better understanding of this newest family of high-temperature superconductors.

  1. TECA: Petascale pattern recognition for climate science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prabhat; Byna, Surendra; Vishwanath, Venkatram

    Climate change is one of the most pressing challenges facing humanity in the 21st century. Climate simulations provide us with a unique opportunity to examine the effects of anthropogenic emissions. High-resolution climate simulations produce "Big Data": contemporary climate archives are ≈5 PB in size and we expect future archives to measure on the order of exabytes. In this work, we present the successful application of the TECA (Toolkit for Extreme Climate Analysis) framework for extracting extreme weather patterns such as tropical cyclones, atmospheric rivers, and extra-tropical cyclones from TB-sized simulation datasets. TECA has been run at full scale on Cray XE6 and IBM BG/Q systems, and has reduced the runtime for pattern detection tasks from years to hours. TECA has been utilized to evaluate the performance of various computational models in reproducing the statistics of extreme weather events, and for characterizing the change in frequency of storm systems in the future.

  2. Effects of self-etching primer on shear bond strength of orthodontic brackets at different debond times.

    PubMed

    Turk, Tamer; Elekdag-Turk, Selma; Isci, Devrim

    2007-01-01

    To evaluate the effect of a self-etching primer on shear bond strengths (SBS) at the different debond times of 5, 15, 30, and 60 minutes and 24 hours. Brackets were bonded to human premolars with different etching protocols. In the control group (conventional method [CM]) teeth were etched with 37% phosphoric acid. In the study group, a self-etching primer (SEP; Transbond Plus Self Etching Primer; 3M Unitek, Monrovia, Calif) was applied as recommended by the manufacturer. Brackets were bonded with light-cure adhesive paste (Transbond XT; 3M Unitek) and light-cured for 20 seconds in both groups. The shear bond test was performed at the different debond times of 5, 15, 30 and 60 minutes and 24 hours. Lowest SBS was attained with a debond time of 5 minutes for the CM group (9.51 MPa) and the SEP group (8.97 MPa). Highest SBS was obtained with a debond time of 24 hours for the CM group (16.82 MPa) and the SEP group (19.11 MPa). Statistically significant differences between the two groups were not observed for debond times of 5, 15, 30, or 60 minutes. However, the SBS values obtained at 24 hours were significantly different (P < .001). Adequate SBS was obtained with self-etching primer during the first 60 minutes (5, 15, 30 and 60 minutes) when compared with the conventional method. It is reliable to load the bracket 5 minutes after bonding using self-etching primer (Transbond Plus) with the light-cure adhesive (Transbond XT).

  3. Optimal Control of a Brownian Storage System

    DTIC Science & Technology

    1976-09-01

    subject to the constraint that W(t) = X(t) + Y(t) − Z(t) > 0 for all t > 0 (almost surely). It is the hypothesized structure of costs and rewards that... (or a bank account) whose content evolves as the Brownian motion X in the absence of any control. In particular, X(0) represents the initial content... however, the controller is then obliged to inject material into the system so as to keep the net content positive, and he incurs a cost of k > 1

  4. Input/output behavior of supercomputing applications

    NASA Technical Reports Server (NTRS)

    Miller, Ethan L.

    1991-01-01

    The collection and analysis of supercomputer I/O traces and their use in a collection of buffering and caching simulations are described. This serves two purposes. First, it gives a model of how individual applications running on supercomputers request file system I/O, allowing system designers to optimize I/O hardware and file system algorithms to that model. Second, the buffering simulations show what resources are needed to maximize the CPU utilization of a supercomputer given a very bursty I/O request rate. By using read-ahead and write-behind in a large solid state disk, one or two applications were sufficient to fully utilize a Cray Y-MP CPU.

  5. Supercomputers for engineering analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goudreau, G.L.; Benson, D.J.; Hallquist, J.O.

    1986-07-01

    The Cray-1 and Cray X-MP/48 experience in engineering computations at the Lawrence Livermore National Laboratory is surveyed. The fully vectorized explicit DYNA and implicit NIKE finite element codes are discussed with respect to solid and structural mechanics. The main efficiencies for production analyses are currently obtained by simple CFT compiler exploitation of pipeline architecture for inner do-loop optimization. Current development of outer-loop multitasking is also discussed. Applications emphasis will be on 3D examples spanning earth penetrator loads analysis, target lethality assessment, and crashworthiness. The use of a vectorized large-deformation shell element in both DYNA and NIKE has substantially expanded 3D nonlinear capability. 25 refs., 7 figs.

  6. A vectorized Lanczos eigensolver for high-performance computers

    NASA Technical Reports Server (NTRS)

    Bostic, Susan W.

    1990-01-01

    The computational strategies used to implement a Lanczos-based-method eigensolver on the latest generation of supercomputers are described. Several examples of structural vibration and buckling problems are presented that show the effects of using optimization techniques to increase the vectorization of the computational steps. The data storage and access schemes and the tools and strategies that best exploit the computer resources are presented. The method is implemented on the Convex C220, the Cray 2, and the Cray Y-MP computers. Results show that very good computation rates are achieved for the most computationally intensive steps of the Lanczos algorithm and that the Lanczos algorithm is many times faster than other methods extensively used in the past.
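    As background for the method named above, here is a bare-bones Lanczos iteration (a sketch only, without the reorthogonalization, shifting, and restart machinery a production structural eigensolver needs; the test matrix is arbitrary): k steps reduce a symmetric matrix A to a small tridiagonal T whose extreme eigenvalues approximate those of A.

        import numpy as np

        def lanczos(A, k, rng=np.random.default_rng(0)):
            n = A.shape[0]
            Q = np.zeros((n, k + 1))
            alpha, beta = np.zeros(k), np.zeros(k)
            q = rng.standard_normal(n)
            Q[:, 0] = q / np.linalg.norm(q)
            for j in range(k):
                # three-term recurrence: w = A q_j - beta_{j-1} q_{j-1}
                w = A @ Q[:, j] - (beta[j - 1] * Q[:, j - 1] if j > 0 else 0.0)
                alpha[j] = Q[:, j] @ w
                w -= alpha[j] * Q[:, j]
                beta[j] = np.linalg.norm(w)
                Q[:, j + 1] = w / beta[j]
            T = np.diag(alpha) + np.diag(beta[:-1], 1) + np.diag(beta[:-1], -1)
            return np.linalg.eigvalsh(T)   # Ritz values approximating A's spectrum

        A = np.random.rand(200, 200); A = A + A.T     # symmetric test matrix
        print(lanczos(A, 30)[-3:])                     # approximate largest eigenvalues
        print(np.linalg.eigvalsh(A)[-3:])              # exact, for comparison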

  7. Optimization of large matrix calculations for execution on the Cray X-MP vector supercomputer

    NASA Technical Reports Server (NTRS)

    Hornfeck, William A.

    1988-01-01

    A considerable volume of large computational codes was developed for NASA over the past twenty-five years. This code represents algorithms developed for machines of an earlier generation. With the emergence of the vector supercomputer as a viable, commercially available machine, an opportunity exists to evaluate optimization strategies to improve the efficiency of existing software. This opportunity arises primarily from architectural differences between the latest generation of large-scale machines and the earlier, mostly uniprocessor, machines. A software package being used by NASA to perform computations on large matrices is described, and a strategy for conversion to the Cray X-MP vector supercomputer is also described.

  8. Some ethical issues regarding xenotransfusion.

    PubMed

    Roux, Françoise A; Saï, Pierre; Deschamps, Jack-Yves

    2007-05-01

    The use of porcine red blood cells has recently been proposed as a possible solution to the shortage of blood for human transfusion. The purpose of this paper is to compare some ethical issues regarding xenotransfusion (XTF) with those relating to xenotransplantation (XT) of organs, tissues and cells. Various ethical concerns and viewpoints relating to XTF are discussed. The main ethical obstacles to XT do not apply to XTF. It is much more ethically acceptable to raise pigs for regular blood collection as it doesn't damage the health of the animal. Porcine endogenous retrovirus infection, the major concern associated with XT, does not apply to XTF, since red blood cells have no DNA and have a very short lifespan. Clinical trials will be possible in humans once XTF has been demonstrated to be effective and harmless in non-human primates. Transgenesis is acceptable for pig blood donors because only a limited number of genes are involved, and these animals will never enter into the livestock gene pool or the food chain. Because the need for blood is less pressing than that for organs, tissues or cells, the use of animal blood for human transfusion is not an absolute necessity. However, it represents a real opportunity. The ability to gain access to an unlimited quantity of blood is a reasonable justification for XTF. Because its technical and ethical hurdles are less stringent, XTF could be the first large-scale clinical application of XT.

  9. On the parallel solution of parabolic equations

    NASA Technical Reports Server (NTRS)

    Gallopoulos, E.; Saad, Youcef

    1989-01-01

    Parallel algorithms for the solution of linear parabolic problems are proposed. The first of these methods is based on using polynomial approximation to the exponential. It does not require solving any linear systems and is highly parallelizable. The two other methods proposed are based on Pade and Chebyshev approximations to the matrix exponential. The parallelization of these methods is achieved by using partial fraction decomposition techniques to solve the resulting systems and thus offers the potential for increased time parallelism in time dependent problems. Experimental results from the Alliant FX/8 and the Cray Y-MP/832 vector multiprocessors are also presented.
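
    The partial-fraction idea can be illustrated with the diagonal Padé [2/2] approximant of e^z; the decomposition below is computed numerically, and the grid, step size and test problem are invented for this sketch. Each shifted solve is independent of the others, which is where the extra parallelism comes from.

      import numpy as np
      from scipy.linalg import expm
      from scipy.signal import residue

      # Diagonal Pade [2/2] approximant of exp(z): (12 + 6z + z^2)/(12 - 6z + z^2),
      # expanded into partial fractions exp(z) ~ k + sum_i r_i / (z - p_i).
      r, p, k = residue([1.0, 6.0, 12.0], [1.0, -6.0, 12.0])

      def expm_times_v(A, v, tau):
          # Each shifted solve below is independent of the others, so they
          # could run on different processors; that is the parallelism.
          out = k[0] * v.astype(complex)
          for ri, pi in zip(r, p):
              out += ri * np.linalg.solve(tau * A - pi * np.eye(len(v)), v)
          return out.real  # poles come in conjugate pairs, so the result is real

      # Test on a 1-D heat equation u_t = u_xx (central differences), one step
      n, h, tau = 64, 1.0 / 65, 1e-3
      A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
           + np.diag(np.ones(n - 1), -1)) / h**2
      v = np.sin(np.pi * np.arange(1, n + 1) * h)
      print(np.linalg.norm(expm_times_v(A, v, tau) - expm(tau * A) @ v))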

  10. Utilization of parallel processing in solving the inviscid form of the average-passage equation system for multistage turbomachinery

    NASA Technical Reports Server (NTRS)

    Mulac, Richard A.; Celestina, Mark L.; Adamczyk, John J.; Misegades, Kent P.; Dawson, Jef M.

    1987-01-01

    A procedure is outlined which utilizes parallel processing to solve the inviscid form of the average-passage equation system for multistage turbomachinery along with a description of its implementation in a FORTRAN computer code, MSTAGE. A scheme to reduce the central memory requirements of the program is also detailed. Both the multitasking and I/O routines referred to are specific to the Cray X-MP line of computers and its associated SSD (Solid-State Disk). Results are presented for a simulation of a two-stage rocket engine fuel pump turbine.

  11. Scaling of data communications for an advanced supercomputer network

    NASA Technical Reports Server (NTRS)

    Levin, E.; Eaton, C. K.; Young, Bruce

    1986-01-01

    The goal of NASA's Numerical Aerodynamic Simulation (NAS) Program is to provide a powerful computational environment for advanced research and development in aeronautics and related disciplines. The present NAS system consists of a Cray 2 supercomputer connected by a data network to a large mass storage system, to sophisticated local graphics workstations and by remote communication to researchers throughout the United States. The program plan is to continue acquiring the most powerful supercomputers as they become available. The implications of a projected 20-fold increase in processing power on the data communications requirements are described.

  12. Researchers Mine Information from Next-Generation Subsurface Flow Simulations

    DOE PAGES

    Gedenk, Eric D.

    2015-12-01

    A research team based at Virginia Tech University leveraged computing resources at the US Department of Energy's (DOE's) Oak Ridge National Laboratory to explore subsurface multiphase flow phenomena that can't be experimentally observed. Using the Cray XK7 Titan supercomputer at the Oak Ridge Leadership Computing Facility, the team took Micro-CT images of subsurface geologic systems and created two-phase flow simulations. The team's model development has implications for computational research pertaining to carbon sequestration, oil recovery, and contaminant transport.

  13. Space Shuttle Main Engine structural analysis and data reduction/evaluation. Volume 3A: High pressure oxidizer turbo-pump preburner pump housing stress analysis report

    NASA Technical Reports Server (NTRS)

    Shannon, Robert V., Jr.

    1989-01-01

    The model generation and structural analysis performed for the High Pressure Oxidizer Turbopump (HPOTP) preburner pump volute housing located on the main pump end of the HPOTP in the space shuttle main engine are summarized. An ANSYS finite element model of the volute housing was built and executed. A static structural analysis was performed on the Engineering Analysis and Data System (EADS) Cray X-MP supercomputer.

  14. A Portable Parallel Implementation of the U.S. Navy Layered Ocean Model

    DTIC Science & Technology

    1995-01-01

    Wallcraft, A. J., PhD (I.C. 1981), Planning Systems Inc., and Moore, P. R., PhD (Camb. 1971), IC Dept. of Mathematics. [Garbled slide excerpt from a presentation at the 1st Meeting on Numerical Methods for Partial Differential Equations (1° Encontro de Métodos Numéricos para Equações de Derivadas Parciais); recoverable fragments name target platforms and chips: Kendall Square, Hypercube, DEC Alpha, Cray T3D/E, SUN Sparc, Fujitsu AP1000, Intel 860 Paragon.]

  15. Research in Computational Aeroscience Applications Implemented on Advanced Parallel Computing Systems

    NASA Technical Reports Server (NTRS)

    Wigton, Larry

    1996-01-01

    Improving the numerical linear algebra routines for use in new Navier-Stokes codes, specifically Tim Barth's unstructured grid code, with spin-offs to TRANAIR is reported. A fast distance calculation routine for Navier-Stokes codes using the new one-equation turbulence models is written. The primary focus of this work was devoted to improving matrix-iterative methods. New algorithms have been developed which activate the full potential of classical Cray-class computers as well as distributed-memory parallel computers.

  16. Proceedings of the Scientific Conference on Obscuration and Aerosol Research Held in Aberdeen Maryland on 27-30 June 1989

    DTIC Science & Technology

    1990-08-01

    corneal structure for both normal and swollen corneas. Other problems of future interest are the understanding of the structure of scarred and dystrophied... METHOD AND RESULTS: The system of equations is solved numerically on a Cray X-MP by a finite element method with 9-node Lagrange quadrilaterals (Becker... Appl. Math., 42, 430. Becker, E. B., G. F. Carey, and J. T. Oden, 1981. Finite Elements: An Introduction (Vol. 1), Prentice-Hall, Englewood Cliffs, New Jersey

  17. SINDA'85/FLUINT - SYSTEMS IMPROVED NUMERICAL DIFFERENCING ANALYZER AND FLUID INTEGRATOR (CONVEX VERSION)

    NASA Technical Reports Server (NTRS)

    Cullimore, B.

    1994-01-01

    SINDA, the Systems Improved Numerical Differencing Analyzer, is a software system for solving lumped parameter representations of physical problems governed by diffusion-type equations. SINDA was originally designed for analyzing thermal systems represented in electrical analog, lumped parameter form, although its use may be extended to include other classes of physical systems which can be modeled in this form. As a thermal analyzer, SINDA can handle such interrelated phenomena as sublimation, diffuse radiation within enclosures, transport delay effects, and sensitivity analysis. FLUINT, the FLUid INTegrator, is an advanced one-dimensional fluid analysis program that solves arbitrary fluid flow networks. The working fluids can be single phase vapor, single phase liquid, or two phase. The SINDA'85/FLUINT system permits the mutual influences of thermal and fluid problems to be analyzed. The SINDA system consists of a programming language, a preprocessor, and a subroutine library. The SINDA language is designed for working with lumped parameter representations and finite difference solution techniques. The preprocessor accepts programs written in the SINDA language and converts them into standard FORTRAN. The SINDA library consists of a large number of FORTRAN subroutines that perform a variety of commonly needed actions. The use of these subroutines can greatly reduce the programming effort required to solve many problems. A complete run of a SINDA'85/FLUINT model is a four step process. First, the user's desired model is run through the preprocessor, which writes out data files for the processor to read and translates the user's program code. Second, the translated code is compiled. The third step requires linking the user's code with the processor library. Finally, the processor is executed. SINDA'85/FLUINT program features include 20,000 nodes, 100,000 conductors, 100 thermal submodels, and 10 fluid submodels. SINDA'85/FLUINT can also model two phase flow, capillary devices, user defined fluids, gravity and acceleration body forces on a fluid, and variable volumes. SINDA'85/FLUINT offers the following numerical solution techniques: the explicit method uses the forward-difference approximation, and the implicit method uses the Crank-Nicolson approximation. The program allows simulation of non-uniform heating and facilitates modeling thin-walled heat exchangers. The ability to model non-equilibrium behavior within two-phase volumes is included. Recent improvements to the program were made in modeling real evaporator-pumps and other capillary-assist evaporators. SINDA'85/FLUINT is available by license for a period of ten (10) years to approved licensees. The licensed program product includes the source code and one copy of the supporting documentation. Additional copies of the documentation may be purchased separately at any time. SINDA'85/FLUINT is written in FORTRAN 77. Version 2.3 has been implemented on Cray series computers running UNICOS, CONVEX computers running CONVEX OS, and DEC RISC computers running ULTRIX. Binaries are included with the Cray version only. The Cray version of SINDA'85/FLUINT also contains SINGE, an additional graphics program developed at Johnson Space Center. Both source and executable code are provided for SINGE. Users wishing to create their own SINGE executable will also need the NASA Device Independent Graphics Library (NASADIG, previously known as SMDDIG; UNIX version, MSC-22001).
The Cray and CONVEX versions of SINDA'85/FLUINT are available on 9-track 1600 BPI UNIX tar format magnetic tapes. The CONVEX version is also available on a .25 inch streaming magnetic tape cartridge in UNIX tar format. The DEC RISC ULTRIX version is available on a TK50 magnetic tape cartridge in UNIX tar format. SINDA was developed in 1971, and first had fluid capability added in 1975. SINDA'85/FLUINT version 2.3 was released in 1990.

  18. Discolouration of orthodontic adhesives caused by food dyes and ultraviolet light.

    PubMed

    Faltermeier, Andreas; Rosentritt, Martin; Reicheneder, Claudia; Behr, Michael

    2008-02-01

    Enamel discolouration after debonding of orthodontic attachments could occur because of irreversible penetration of resin tags into the enamel structure. Adhesives could discolour because of food dyes or ultraviolet irradiation. The aim of this study was to investigate the colour stability of adhesives during ultraviolet irradiation and exposure to food colourants. Four different adhesives were exposed in a Suntest CPS+ ageing device to a xenon lamp to simulate natural daylight (Transbond XT, Enlight, RelyX Unicem, and Meron Plus AC). Tomato ketchup, Coca Cola, and tea were chosen as the food colourants. After 72 hours of exposure, colour measurements were performed by means of a spectrophotometer according to the Commission Internationale de l'Eclairage L*a*b* system and colour changes (ΔE*) were computed. Statistical differences were investigated using two-way analysis of variance (ANOVA) and the Friedman test. Unsatisfactory colour stability after in vitro exposure to food colourants and ultraviolet light was observed for the conventional adhesive systems, Transbond XT and Enlight. RelyX Unicem showed the least colour change and the resin-reinforced glass-ionomer cement (GIC), Meron Plus AC, the greatest colour change. The investigated adhesives seem to be susceptible to both internal and external discolouration. These in vitro findings indicate that the tested conventional adhesive systems reveal unsatisfactory colour stability, which should be improved to avoid enamel discolouration.

  19. Effect of Energy Drinks on Discoloration of Silorane and Dimethacrylate-Based Composite Resins

    PubMed Central

    Ahmadizenouz, Ghazaleh; Esmaeili, Behnaz; Ahangari, Zohreh; Khafri, Soraya; Rahmani, Aghil

    2016-01-01

    Objectives: This study aimed to assess the effects of two energy drinks on color change (ΔE) of two methacrylate-based and a silorane-based composite resin after one week and one month. Materials and Methods: Thirty cubic samples were fabricated from Filtek P90, Filtek Z250 and Filtek Z350XT composite resins. All the specimens were stored in distilled water at 37°C for 24 hours. Baseline color values (L*a*b*) of each specimen were measured using a spectrophotometer according to the CIEL*a*b* color system. Ten randomly selected specimens from each composite were then immersed in the two energy drinks (Hype, Red Bull) and artificial saliva (control) for one week and one month. Color was re-assessed after each storage period and ΔE values were calculated. The data were analyzed using the Kruskal Wallis and Mann–Whitney U tests. Results: Filtek Z250 composite showed the highest ΔE irrespective of the solutions at both time points. After seven days and one month, the lowest ΔE values were observed in Filtek Z350XT and Filtek P90 composites immersed in artificial saliva, respectively. The ΔE values of Filtek Z250 and Z350XT composites induced by Red Bull and Hype energy drinks were not significantly different. Discoloration of Filtek P90 was higher in Red Bull energy drink at both time points. Conclusions: Prolonged immersion time in all three solutions increased ΔE values of all composites. However, the ΔE values were within the clinically acceptable range (<3.3) at both time points. PMID:28127318

  20. Military Off-the-Shelf: A Discussion on Combat Ship Acquisition

    DTIC Science & Technology

    2014-08-01

    Layton... [Garbled table excerpt on degrees of design commonality in ship acquisition: 'clean sheet' designs; interior/exterior design identical; near-identical design with minor modifications; similarity in design but with unique external structure and internal systems relative to the lead ship.]

  1. GPU acceleration of the Locally Selfconsistent Multiple Scattering code for first principles calculation of the ground state and statistical physics of materials

    NASA Astrophysics Data System (ADS)

    Eisenbach, Markus; Larkin, Jeff; Lutjens, Justin; Rennich, Steven; Rogers, James H.

    2017-02-01

    The Locally Self-consistent Multiple Scattering (LSMS) code solves the first principles Density Functional theory Kohn-Sham equation for a wide range of materials with a special focus on metals, alloys and metallic nano-structures. It has traditionally exhibited near perfect scalability on massively parallel high performance computer architectures. We present our efforts to exploit GPUs to accelerate the LSMS code to enable first principles calculations of O(100,000) atoms and statistical physics sampling of finite temperature properties. We reimplement the scattering matrix calculation for GPUs with a block matrix inversion algorithm that only uses accelerator memory. Using the Cray XK7 system Titan at the Oak Ridge Leadership Computing Facility we achieve a sustained performance of 14.5PFlop/s and a speedup of 8.6 compared to the CPU only code.
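
    A hedged NumPy sketch (not the LSMS implementation) of the key linear-algebra trick: only a local diagonal block of the inverse of the scattering matrix is needed per atom, and it can be obtained from a Schur complement rather than a full inverse.

      import numpy as np

      def leading_block_of_inverse(A, b):
          """Return the top-left b-by-b block of inv(A) without forming the
          whole inverse, via the Schur complement:
              (A^-1)_11 = inv(A11 - A12 @ inv(A22) @ A21).
          The bulk of the work is dense solves, which map well onto GPUs."""
          A11, A12 = A[:b, :b], A[:b, b:]
          A21, A22 = A[b:, :b], A[b:, b:]
          S = A11 - A12 @ np.linalg.solve(A22, A21)   # Schur complement
          return np.linalg.inv(S)

      rng = np.random.default_rng(0)
      n, b = 200, 16
      A = rng.standard_normal((n, n)) + n * np.eye(n)   # well conditioned
      print(np.allclose(leading_block_of_inverse(A, b),
                        np.linalg.inv(A)[:b, :b]))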

  2. Massive Social Network Analysis: Mining Twitter for Social Good

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ediger, David; Jiang, Karl; Riedy, Edward J.

    Social networks produce an enormous quantity of data. Facebook consists of over 400 million active users sharing over 5 billion pieces of information each month. Analyzing this vast quantity of unstructured data presents challenges for software and hardware. We present GraphCT, a Graph Characterization Toolkit for massive graphs representing social network data. On a 128-processor Cray XMT, GraphCT estimates the betweenness centrality of an artificially generated (R-MAT) 537 million vertex, 8.6 billion edge graph in 55 minutes. We use GraphCT to analyze public data from Twitter, a microblogging network. Twitter's message connections appear primarily tree-structured as a news dissemination system. Within the public data, however, are clusters of conversations. Using GraphCT, we can rank actors within these conversations and help analysts focus attention on a much smaller data subset.
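
    A small sketch of the same estimation idea using networkx rather than GraphCT (graph size, sample count and seeds are arbitrary): betweenness is approximated by accumulating shortest-path dependencies from k sampled sources instead of all vertices, the trade-off that makes graphs of this scale tractable.

      import networkx as nx

      # Approximate betweenness from 100 sampled sources instead of all |V|
      G = nx.gnm_random_graph(5000, 40000, seed=42)
      approx = nx.betweenness_centrality(G, k=100, seed=1)
      top = sorted(approx, key=approx.get, reverse=True)[:5]
      print([(v, round(approx[v], 4)) for v in top])   # highest-ranked actors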

  3. Shear Bond Strength of Three Orthodontic Bonding Systems on Enamel and Restorative Materials

    PubMed Central

    Ebeling, Jennifer; Schauseil, Michael; Stein, Steffen; Roggendorf, Matthias; Korbmacher-Steiner, Heike

    2016-01-01

    Objective. The aim of this in vitro study was to determine the shear bond strength (SBS) and adhesive remnant index (ARI) score of two self-etching no-mix adhesives (iBond™ and Scotchbond™) on different prosthetic surfaces and enamel, in comparison with the commonly used total etch system Transbond XT™. Materials and Methods. A total of 270 surfaces (1 enamel and 8 restorative surfaces, n = 30) were randomly divided into three adhesive groups. In group 1 (control) brackets were bonded with Transbond XT primer. In the experimental groups iBond adhesive (group 2) and Scotchbond Universal adhesive (group 3) were used. The SBS was measured using a Zwicki 1120™ testing machine. The ARI and SBS were compared statistically using the Kruskal–Wallis test (P ≤ 0.05). Results. Significant differences in SBS and ARI were found between the control group and experimental groups. Conclusions. Transbond XT showed the highest SBS on human enamel. Scotchbond Universal on average provides the best bonding on all other types of surface (metal, composite, and porcelain), with no need for additional primers. It might therefore be helpful for simplifying bonding in orthodontic procedures on restorative materials in patients. If metal brackets have to be bonded to a metal surface, the use of a dual-curing resin is recommended. PMID:27738633

  4. Separation of components from a scale mixture of Gaussian white noises

    NASA Astrophysics Data System (ADS)

    Vamoş, Călin; Crăciun, Maria

    2010-05-01

    The time evolution of a physical quantity associated with a thermodynamic system whose equilibrium fluctuations are modulated in amplitude by a slowly varying phenomenon can be modeled as the product of a Gaussian white noise {Zt} and a stochastic process with strictly positive values {Vt} referred to as volatility. The probability density function (pdf) of the process Xt=VtZt is a scale mixture of Gaussian white noises expressed as a time average of Gaussian distributions weighted by the pdf of the volatility. The separation of the two components of {Xt} can be achieved by imposing the condition that the absolute values of the estimated white noise be uncorrelated. We apply this method to the time series of the returns of the daily S&P500 index, which has also been analyzed by means of the superstatistics method that imposes the condition that the estimated white noise be Gaussian. The advantage of our method is that this financial time series is processed without partitioning or removal of the extreme events, and the estimated white noise becomes almost Gaussian only as a result of the uncorrelation condition.
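
    A rough NumPy sketch of the separation procedure as described (the window length and the synthetic test signal are invented; the paper's actual estimator may differ): estimate the volatility by a moving RMS, divide it out, and check the lag-1 autocorrelation of the absolute values of the estimated noise.

      import numpy as np

      def separate_volatility(x, window=21):
          """Crude split of x_t = v_t * z_t: estimate the slowly varying
          volatility v_t by a moving RMS, then report the lag-1
          autocorrelation of |z_t|, the quantity the method drives to zero."""
          x = np.asarray(x, float)
          v = np.sqrt(np.convolve(x**2, np.ones(window) / window, mode="same"))
          z = x / v
          a = np.abs(z) - np.abs(z).mean()
          lag1 = (a[:-1] * a[1:]).mean() / (a * a).mean()
          return v, z, lag1

      # Synthetic test: Gaussian noise modulated by a slow sinusoidal volatility
      rng = np.random.default_rng(0)
      t = np.arange(10000)
      x = (1.0 + 0.5 * np.sin(2 * np.pi * t / 2000)) * rng.standard_normal(t.size)
      v, z, lag1 = separate_volatility(x)
      print(f"lag-1 autocorrelation of |z|: {lag1:.4f}")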

  5. Application of Strep-Tactin XT for affinity purification of Twin-Strep-tagged CB2, a G protein-coupled cannabinoid receptor.

    PubMed

    Yeliseev, Alexei; Zoubak, Lioudmila; Schmidt, Thomas G M

    2017-03-01

    Human cannabinoid receptor CB2 belongs to class A of the G protein-coupled receptors (GPCR). CB2 is predominantly expressed in membranes of cells of immune origin and is implicated in regulation of metabolic pathways of inflammation, neurodegenerative disorders and pain sensing. High resolution structural studies of CB2 require milligram quantities of purified, structurally intact protein. While we previously reported on the methodology for expression of the recombinant CB2 and its stabilization in a functional state, here we describe an efficient protocol for purification of this protein using the Twin-Strep-tag/Strep-Tactin XT system. To improve the affinity of interaction of the recombinant CB2 with the resin, the double repeat of the Strep-tag (a sequence of eight amino acids, WSHPQFEK), named the Twin-Strep-tag, was attached either to the N- or C-terminus of CB2 via a short linker, and the recombinant protein was expressed in cytoplasmic membranes of E. coli as a fusion with the N-terminal maltose binding protein (MBP). The CB2 was isolated at high purity from dilute solutions containing high concentrations of detergents, glycerol and salts, by capturing onto the Strep-Tactin XT resin, and was eluted from the resin under mild conditions upon addition of biotin. Surface plasmon resonance studies performed on the purified protein demonstrate the high affinity of interaction between the Twin-Strep-tag fused to the CB2 and Strep-Tactin XT, with an estimated Kd in the low nanomolar range. The affinity of binding did not vary significantly in response to the position of the tag at either the N- or C-terminus of the fusion. The binding capacity of the resin was several-fold higher for the tag located at the N-terminus of the protein as opposed to the C-terminus or middle of the fusion. The variation in the length of the linker between the double repeats of the Strep-tag from 6 to 12 amino acid residues did not significantly affect the binding. The novel purification protocol reported here enables efficient isolation of a recombinant GPCR expressed at low titers in host cells. This procedure is suitable for preparation of milligram quantities of stable isotope-labelled receptor for high-resolution NMR studies. Published by Elsevier Inc.

  6. A multithreaded and GPU-optimized compact finite difference algorithm for turbulent mixing at high Schmidt number using petascale computing

    NASA Astrophysics Data System (ADS)

    Clay, M. P.; Yeung, P. K.; Buaria, D.; Gotoh, T.

    2017-11-01

    Turbulent mixing at high Schmidt number is a multiscale problem which places demanding requirements on direct numerical simulations to resolve fluctuations down to the Batchelor scale. We use a dual-grid, dual-scheme and dual-communicator approach where velocity and scalar fields are computed by separate groups of parallel processes, the latter using a combined compact finite difference (CCD) scheme on a finer grid with a static 3-D domain decomposition free of the communication overhead of memory transposes. A high degree of scalability is achieved for an 8192^3 scalar field at Schmidt number 512 in turbulence with a modest inertial range, by overlapping communication with computation whenever possible. On the Cray XE6 partition of Blue Waters, use of a dedicated thread for communication combined with OpenMP locks and nested parallelism reduces CCD timings by 34% compared to an MPI baseline. The code has been further optimized for the 27-petaflops Cray XK7 machine Titan using GPUs as accelerators with the latest OpenMP 4.5 directives, giving 2.7X speedup compared to CPU-only execution at the largest problem size. Supported by NSF Grant ACI-1036170, the NCSA Blue Waters Project with subaward via UIUC, and a DOE INCITE allocation at ORNL.
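
    The CCD scheme used in the work above is more elaborate, but the flavor of a compact finite difference can be shown with the classical fourth-order Padé first derivative, which couples the unknowns through a tridiagonal solve (the grid and test function here are arbitrary):

      import numpy as np
      from scipy.linalg import solve_banded

      def compact_derivative(f, h):
          """Classical 4th-order compact (Pade) first derivative:
              (1/4) f'_{i-1} + f'_i + (1/4) f'_{i+1} = 3/(4h) (f_{i+1} - f_{i-1}),
          closed at the boundaries with one-sided 2nd-order differences."""
          n = len(f)
          ab = np.zeros((3, n))        # banded LHS: super-, main, sub-diagonal
          ab[0, 2:] = 0.25             # a[i, i+1] for interior rows
          ab[1, :] = 1.0               # diagonal (boundary rows are identity)
          ab[2, :-2] = 0.25            # a[i, i-1] for interior rows
          rhs = np.zeros(n)
          rhs[1:-1] = 3.0 / (4.0 * h) * (f[2:] - f[:-2])
          rhs[0] = (-3 * f[0] + 4 * f[1] - f[2]) / (2 * h)
          rhs[-1] = (3 * f[-1] - 4 * f[-2] + f[-3]) / (2 * h)
          return solve_banded((1, 1), ab, rhs)

      x = np.linspace(0.0, 1.0, 201)
      d = compact_derivative(np.sin(4 * x), x[1] - x[0])
      print(f"max error: {np.max(np.abs(d - 4 * np.cos(4 * x))):.2e}")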

  7. Approximate Solutions for Certain Optimal Stopping Problems

    DTIC Science & Technology

    1978-01-05

    one-armed bandit problem) has arisen in a number of statistical applications (Chernoff and Ray (1965), Chernoff (19??), Mallik (1971)): Let X(t... Mallik (1971) and Chernoff (1972). These previous approximations were determined without the benefit of the "correction for continuity" given in (5.1... Vol. 1, 3rd edition, John Wiley and Sons, Inc., New York. 7. Mallik, A.K. (1971), "Sequential estimation of the common mean of two normal

  8. Parallel Visualization of Large-Scale Aerodynamics Calculations: A Case Study on the Cray T3E

    NASA Technical Reports Server (NTRS)

    Ma, Kwan-Liu; Crockett, Thomas W.

    1999-01-01

    This paper reports the performance of a parallel volume rendering algorithm for visualizing a large-scale, unstructured-grid dataset produced by a three-dimensional aerodynamics simulation. This dataset, containing over 18 million tetrahedra, allows us to extend our performance results to a problem which is more than 30 times larger than the one we examined previously. This high resolution dataset also allows us to see fine, three-dimensional features in the flow field. All our tests were performed on the Silicon Graphics Inc. (SGI)/Cray T3E operated by NASA's Goddard Space Flight Center. Using 511 processors, a rendering rate of almost 9 million tetrahedra/second was achieved with a parallel overhead of 26%.

  9. Climate Data Assimilation on a Massively Parallel Supercomputer

    NASA Technical Reports Server (NTRS)

    Ding, Hong Q.; Ferraro, Robert D.

    1996-01-01

    We have designed and implemented a set of highly efficient and highly scalable algorithms for an unstructured computational package, the PSAS data assimilation package, as demonstrated by detailed performance analysis of systematic runs on up to 512 nodes of an Intel Paragon. The preconditioned Conjugate Gradient solver achieves a sustained 18 Gflops performance. Consequently, we achieve an unprecedented 100-fold reduction in time to solution on the Intel Paragon over a single head of a Cray C90. This not only exceeds the daily performance requirement of the Data Assimilation Office at NASA's Goddard Space Flight Center, but also makes it possible to explore much larger and more challenging data assimilation problems which are unthinkable on a traditional computer platform such as the Cray C90.
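
    A textbook sketch of the preconditioned conjugate gradient kernel named above (a generic Jacobi preconditioner and a random SPD test matrix; nothing here is specific to PSAS):

      import numpy as np

      def pcg(A, b, M_inv, tol=1e-8, max_iter=500):
          """Preconditioned conjugate gradients for s.p.d. A; M_inv applies
          the preconditioner to a residual vector."""
          x = np.zeros_like(b)
          r = b - A @ x
          z = M_inv(r)
          p = z.copy()
          rz = r @ z
          for i in range(max_iter):
              Ap = A @ p
              alpha = rz / (p @ Ap)
              x += alpha * p
              r -= alpha * Ap
              if np.linalg.norm(r) < tol * np.linalg.norm(b):
                  return x, i + 1
              z = M_inv(r)
              rz_new = r @ z
              p = z + (rz_new / rz) * p   # update search direction
              rz = rz_new
          return x, max_iter

      rng = np.random.default_rng(3)
      n = 400
      Q = rng.standard_normal((n, n))
      A = Q @ Q.T + n * np.eye(n)             # symmetric positive definite
      b = rng.standard_normal(n)
      d = np.diag(A)
      x, iters = pcg(A, b, lambda r: r / d)   # Jacobi preconditioner
      print(iters, np.linalg.norm(A @ x - b))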

  10. Xenon ventilation computed tomography and the management of asthma in the elderly.

    PubMed

    Park, Heung-Woo; Jung, Jae-Woo; Kim, Kyung-Mook; Kim, Tae-Wan; Lee, So-Hee; Lee, Chang Hyun; Goo, Jin Mo; Min, Kyung-Up; Cho, Sang-Heon

    2014-04-01

    Xenon ventilation computed tomography (CT) has shown potential in assessing the regional ventilation status in subjects with asthma. The purpose of this study was to evaluate the usefulness of xenon ventilation CT in the management of asthma in the elderly. Treatment-naïve asthmatics aged 65 years or older were recruited. Before initiation of medication, spirometry with bronchodilator (BD) reversibility, questionnaires to assess the severity of symptoms including a visual analogue scale (VAS), tests to evaluate cognitive function and mood, and xenon ventilation CT were performed. Xenon gas trapping (XT) on xenon ventilation CT represents an area where inhaled xenon gas was not expired and was trapped. Symptoms and lung functions were measured again after the 12-week treatment. A total of 30 elderly asthmatics were enrolled. The severity of dyspnoea measured by the VAS showed a significant correlation with the total number of areas of XT on the xenon ventilation CT taken in the pre-BD wash-out phase (r = -0.723, P < 0.001). The total number of areas of XT significantly decreased after BD inhalation, and differences in the total number of areas of XT (between the pre- and post-BD wash-out phases) at baseline showed significant correlations with the per cent increases in forced expiratory volume in 1 s after subsequent anti-asthma treatment (r = -0.775, P < 0.001). Xenon ventilation CT may be an objective and promising tool in the measurement of dyspnoea and prediction of the treatment response in elderly asthmatics. © 2014 The Authors. Respirology © 2014 Asian Pacific Society of Respirology.

  11. Polishing mechanism of light-initiated dental composite: Geometric optics approach.

    PubMed

    Chiang, Yu-Chih; Lai, Eddie Hsiang-Hua; Kunzelmann, Karl-Heinz

    2016-12-01

    For light-initiated dental hybrid composites, reinforcing particles are much stiffer than the matrix, which makes the surface rugged after inadequate polish and favors bacterial adhesion and biofilm redevelopment. The aim of the study was to investigate the polishing mechanism via the geometric optics approach. We defined the polishing abilities of six instruments using the obtained gloss values through the geometric optics approach (micro-Tri-gloss with 20°, 60°, and 85° measurement angles). The surface texture was validated using a field emission scanning electron microscope (FE-SEM). Based on the gloss values, we sorted polishing tools into three abrasive levels, and proposed polishing sequences to test the hypothesis that similar abrasive levels would leave equivalent gloss levels on dental composites. The three proposed, tested polishing sequences included: S1, Sof-Lex XT coarse disc, Sof-Lex XT fine disc, and OccluBrush; S2, Sof-Lex XT coarse disc, Prisma Gloss polishing paste, and OccluBrush; and S3, Sof-Lex XT coarse disc, Enhance finishing cups, and OccluBrush. S1 demonstrated significantly higher surface gloss than the other procedures (p < 0.05). The surface textures (FE-SEM micrographs) correlated well with the obtained gloss values. Nominally similar abrasive abilities did not result in equivalent polish levels, indicating that the polishing tools must be evaluated and cannot be judged based on their compositions or abrasive sizes. The geometric optic approach is an efficient and nondestructive method to characterize the polished surface of dental composites. Copyright © 2015. Published by Elsevier B.V.

  12. Data Elevator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    BYNA, SUNRENDRA; DONG, BIN; WU, KESHENG

    Data Elevator: Efficient Asynchronous Data Movement in Hierarchical Storage Systems. Multi-layer storage subsystems, including SSD-based burst buffers and disk-based parallel file systems (PFS), are becoming part of HPC systems. However, software for this storage hierarchy is still in its infancy. Applications may have to explicitly move data among the storage layers. We propose Data Elevator for transparently and efficiently moving data between a burst buffer and a PFS. Users specify the final destination for their data, typically on the PFS; Data Elevator intercepts the I/O calls, stages data on the burst buffer, and then asynchronously transfers the data to their final destination in the background. This system allows extensive optimizations, such as overlapping read and write operations, choosing I/O modes, and aligning buffer boundaries. In tests with large-scale scientific applications, Data Elevator is as much as 4.2X faster than Cray DataWarp, the state-of-the-art software for burst buffers, and 4X faster than directly writing to the PFS. The Data Elevator library uses HDF5's Virtual Object Layer (VOL) for intercepting parallel I/O calls that write data to the PFS. The intercepted calls are redirected to the Data Elevator, which provides a handle to write the file in a faster, intermediate burst buffer system. Once the application finishes writing the data to the burst buffer, the Data Elevator job uses HDF5 to move the data to the final destination in an asynchronous manner. Hence, using the Data Elevator library is currently useful for applications that call HDF5 for writing data files. Also, the Data Elevator depends on the HDF5 VOL functionality.
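
    A toy single-node analogue of the stage-then-drain pattern (plain Python threads and directories; this is not the Data Elevator API, and all names are invented): callers write to a fast staging directory while a background thread moves finished files to the slow destination.

      import os, queue, shutil, tempfile, threading

      class StagedWriter:
          """Callers write to a fast staging directory; a background thread
          drains finished files to their slow final destination."""
          def __init__(self, stage_dir, dest_dir):
              self.stage_dir, self.dest_dir = stage_dir, dest_dir
              self.q = queue.Queue()
              threading.Thread(target=self._drain, daemon=True).start()

          def write(self, name, data):
              path = os.path.join(self.stage_dir, name)
              with open(path, "wb") as f:      # fast write to the staging area
                  f.write(data)
              self.q.put(path)                 # schedule the slow move

          def _drain(self):
              while True:
                  path = self.q.get()
                  shutil.move(path, os.path.join(self.dest_dir,
                                                 os.path.basename(path)))
                  self.q.task_done()

          def flush(self):
              self.q.join()                    # wait until everything landed

      stage, dest = tempfile.mkdtemp(), tempfile.mkdtemp()
      w = StagedWriter(stage, dest)
      w.write("step0001.dat", b"\x00" * 1024)
      w.flush()
      print(os.listdir(dest))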

  13. Complex trajectories in a classical periodic potential

    NASA Astrophysics Data System (ADS)

    Anderson, Alexander G.; Bender, Carl M.

    2012-11-01

    This paper examines the complex trajectories of a classical particle in the potential V(x) = -cos (x). Almost all the trajectories describe a particle that hops from one well to another in an erratic fashion. However, it is shown analytically that there are two special classes of trajectories x(t) determined only by the energy of the particle and not by the initial position of the particle. The first class consists of periodic trajectories; that is, trajectories that return to their initial position x(0) after some real time T. The second class consists of trajectories for which there exists a real time T such that x(t + T) = x(t) ± 2π. These two classes of classical trajectories are analogous to valence and conduction bands in quantum mechanics, where the quantum particle either remains localized or else tunnels resonantly (conducts) through a crystal lattice. These two special types of trajectories are associated with sets of energies of measure 0. For other energies, it is shown that for long times the average velocity of the particle becomes a fractal-like function of energy.
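
    A minimal sketch of how such complex trajectories can be explored numerically (a hand-written fixed-step RK4 for the complexified dynamics of H = p^2/2 - cos(x); the initial condition and step size are arbitrary, and this is our illustration, not the paper's code):

      import numpy as np

      def trajectory(x0, p0, dt=1e-3, steps=20000):
          """Integrate xdot = p, pdot = -sin(x) with complex x, p via RK4;
          return the complex positions x(t)."""
          def f(y):                      # y = [x, p]
              return np.array([y[1], -np.sin(y[0])], dtype=complex)
          y = np.array([x0, p0], dtype=complex)
          xs = np.empty(steps, dtype=complex)
          for i in range(steps):
              k1 = f(y)
              k2 = f(y + 0.5 * dt * k1)
              k3 = f(y + 0.5 * dt * k2)
              k4 = f(y + dt * k3)
              y = y + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
              xs[i] = y[0]
          return xs

      # Complex initial position at fixed energy E = p^2/2 - cos(x)
      E = 0.5
      x0 = 0.3 + 0.4j
      p0 = np.sqrt(2 * (E + np.cos(x0)))   # complex momentum from the energy
      xs = trajectory(x0, p0)
      print(xs[-1], np.round(xs[-1].real / (2 * np.pi)))  # final well index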

  14. Equivalence of interest rate models and lattice gases.

    PubMed

    Pirjol, Dan

    2012-04-01

    We consider the class of short rate interest rate models for which the short rate is proportional to the exponential of a Gaussian Markov process x(t) in the terminal measure, r(t) = a(t)exp[x(t)]. These models include the Black-Derman-Toy and Black-Karasinski models in the terminal measure. We show that such interest rate models are equivalent to lattice gases with attractive two-body interaction, V(t1,t2) = -Cov[x(t1),x(t2)]. We consider in some detail the Black-Karasinski model with x(t) as an Ornstein-Uhlenbeck process, and show that it is similar to a lattice gas model considered by Kac and Helfand, with attractive long-range two-body interactions, V(x,y) = -α(e^(-γ|x-y|) - e^(-γ(x+y))). An explicit solution for the model is given as a sum over the states of the lattice gas, which is used to show that the model has a phase transition similar to that found previously in the Black-Derman-Toy model in the terminal measure.

  15. Equivalence of interest rate models and lattice gases

    NASA Astrophysics Data System (ADS)

    Pirjol, Dan

    2012-04-01

    We consider the class of short rate interest rate models for which the short rate is proportional to the exponential of a Gaussian Markov process x(t) in the terminal measure, r(t) = a(t)exp[x(t)]. These models include the Black-Derman-Toy and Black-Karasinski models in the terminal measure. We show that such interest rate models are equivalent to lattice gases with attractive two-body interaction, V(t1,t2) = -Cov[x(t1),x(t2)]. We consider in some detail the Black-Karasinski model with x(t) as an Ornstein-Uhlenbeck process, and show that it is similar to a lattice gas model considered by Kac and Helfand, with attractive long-range two-body interactions, V(x,y) = -α(e^(-γ|x-y|) - e^(-γ(x+y))). An explicit solution for the model is given as a sum over the states of the lattice gas, which is used to show that the model has a phase transition similar to that found previously in the Black-Derman-Toy model in the terminal measure.
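
    The Ornstein-Uhlenbeck covariance underlying the quoted interaction is easy to check by simulation. The sketch below (Euler-Maruyama with invented parameters) compares the Monte Carlo covariance against sigma^2/(2*gamma) * (exp(-gamma|t1-t2|) - exp(-gamma(t1+t2))) for a process started at x(0) = 0, which is exactly the Kac-Helfand form above with alpha = sigma^2/(2*gamma).

      import numpy as np

      rng = np.random.default_rng(7)
      gamma, sigma, dt = 1.0, 0.8, 1e-3
      paths, steps = 50000, 1500
      x = np.zeros(paths)
      snap = {}
      for n in range(1, steps + 1):
          # Euler-Maruyama step for dx = -gamma*x dt + sigma dW, x(0) = 0
          x += -gamma * x * dt + sigma * np.sqrt(dt) * rng.standard_normal(paths)
          if n in (500, 1500):             # record x(0.5) and x(1.5)
              snap[n] = x.copy()

      t1, t2 = 500 * dt, 1500 * dt
      mc = np.mean(snap[500] * snap[1500]) - snap[500].mean() * snap[1500].mean()
      exact = sigma**2 / (2 * gamma) * (np.exp(-gamma * abs(t1 - t2))
                                        - np.exp(-gamma * (t1 + t2)))
      print(f"Monte Carlo: {mc:.5f}   exact: {exact:.5f}")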

  16. Effects of cold light bleaching on the color stability of composite resins

    PubMed Central

    Cao, Liqun; Huang, Lijuan; Wu, Meisheng; Wei, Hua; Zhao, Shouliang

    2015-01-01

    To evaluate the effects of cold light bleaching on the color stability of four restorations using a thermocycling stain challenge. 160 specimens (10 mm in diameter and 2 mm thick) were fabricated from 4 composite resins (Gradia Direct-A, Z350XT, Premisa, and Précis) and divided into 4 subgroups. Color was assessed according to the CIEL*a*b* color scale at baseline, after the first cycle of bleaching, after thermocycling stain challenges, and after the second cycle of bleaching. Mean values were compared using three-way analysis of variance, and multiple comparisons of the mean values were performed using the Tukey-Kramer test. All groups showed significant color changes after stain challenge, the color change was more significant in Gradia Direct and Z350XT than in Premisa and Précis. After the second cycle of bleaching, color mostly recovered to its original values. The color stability of Gradia Direct and Z350XT was inferior to that of Premisa and Précis. The discoloration of composite resin materials can be partly removed after cold light bleaching. PMID:26309549

  17. Comparison of the BD MAX MRSA XT to the Cepheid™ Xpert® MRSA assay for the molecular detection of methicillin-resistant Staphylococcus aureus from nasal swabs.

    PubMed

    Mehta, Sanjay R; Estrada, Jasmine; Ybarra, Juan; Fierer, Joshua

    2017-04-01

    Variation in MRSA genotypes may affect the sensitivity of molecular assays to detect this organism. We compared 2 commonly used screening assays, the Cepheid™ Xpert® MRSA and the BD MAX™ MRSA XT on consecutively obtained nasal swabs from 479 subjects. Specimens giving discordant results were subjected to additional microbiologic and molecular testing. Six hundred forty-two (97.6%) of the 658 test results were concordant. Of the 16 discordant results from 12 subjects, additional results suggested that 9 (60%) of the 15 MRSA XT assays were likely correct, and 6 (40%) of the 15 Xpert® assays were likely correct. One discordant result could not be resolved. A mecA dropout and novel mec right-extremity junction (MREJ) sites led to false-positive and negative results by Xpert®. While both assays performed well, continued vigilance is needed to monitor for Staphylococcus aureus with novel MREJ sites, mecA dropouts, and mecC, leading to inaccurate results in screening assays. Published by Elsevier Inc.

  18. Climate Ocean Modeling on a Beowulf Class System

    NASA Technical Reports Server (NTRS)

    Cheng, B. N.; Chao, Y.; Wang, P.; Bondarenko, M.

    2000-01-01

    With the growing power and shrinking cost of personal computers, the availability of fast Ethernet interconnections, and public domain software packages, it is now possible to combine them to build desktop parallel computers (named Beowulf or PC clusters) at a fraction of what it would cost to buy systems of comparable power from supercomputer companies. This led us to build and assemble our own system, specifically for climate ocean modeling. In this article, we present our experience with such a system, discuss its network performance, and provide some performance comparison data with both the HP SPP2000 and Cray T3E for an ocean model used in present-day oceanographic research.

  19. Neutron and X-ray Tomography (NeXT) system for simultaneous, dual modality tomography.

    PubMed

    LaManna, J M; Hussey, D S; Baltic, E; Jacobson, D L

    2017-11-01

    Dual mode tomography using neutrons and X-rays offers the potential of improved estimation of the composition of a sample from the complementary interaction of the two probes with the sample. We have developed a simultaneous neutron and 90 keV X-ray tomography system that is well suited to the study of porous media systems such as fuel cells, concrete, unconventional reservoir geologies, limestones, and other geological media. We present the characteristic performance of both the neutron and X-ray modalities. We illustrate the use of the simultaneous acquisition through improved phase identification in a concrete core.

  20. Neutron and X-ray Tomography (NeXT) system for simultaneous, dual modality tomography

    NASA Astrophysics Data System (ADS)

    LaManna, J. M.; Hussey, D. S.; Baltic, E.; Jacobson, D. L.

    2017-11-01

    Dual mode tomography using neutrons and X-rays offers the potential of improved estimation of the composition of a sample from the complementary interaction of the two probes with the sample. We have developed a simultaneous neutron and 90 keV X-ray tomography system that is well suited to the study of porous media systems such as fuel cells, concrete, unconventional reservoir geologies, limestones, and other geological media. We present the characteristic performance of both the neutron and X-ray modalities. We illustrate the use of the simultaneous acquisition through improved phase identification in a concrete core.

  1. A microcomputer interface for a digital audio processor-based data recording system.

    PubMed

    Croxton, T L; Stump, S J; Armstrong, W M

    1987-10-01

    An inexpensive interface is described that performs direct transfer of digitized data from the digital audio processor and video cassette recorder based data acquisition system designed by Bezanilla (1985, Biophys. J., 47:437-441) to an IBM PC/XT microcomputer. The FORTRAN callable software that drives this interface is capable of controlling the video cassette recorder and starting data collection immediately after recognition of a segment of previously collected data. This permits piecewise analysis of long intervals of data that would otherwise exceed the memory capability of the microcomputer.

  2. A microcomputer interface for a digital audio processor-based data recording system.

    PubMed Central

    Croxton, T L; Stump, S J; Armstrong, W M

    1987-01-01

    An inexpensive interface is described that performs direct transfer of digitized data from the digital audio processor and video cassette recorder based data acquisition system designed by Bezanilla (1985, Biophys. J., 47:437-441) to an IBM PC/XT microcomputer. The FORTRAN callable software that drives this interface is capable of controlling the video cassette recorder and starting data collection immediately after recognition of a segment of previously collected data. This permits piecewise analysis of long intervals of data that would otherwise exceed the memory capability of the microcomputer. PMID:3676444

  3. Vector fields and nilpotent Lie algebras

    NASA Technical Reports Server (NTRS)

    Grayson, Matthew; Grossman, Robert

    1987-01-01

    An infinite-dimensional family of flows E is described with the property that the associated dynamical system, ẋ(t) = E(x(t)), where x(0) ∈ R^N, is explicitly integrable in closed form. These flows E are of the form E = E1 + E2, where E1 and E2 are the generators of a nilpotent Lie algebra, which is either free, or satisfies some relations at a point. These flows can then be used to approximate the flows of more general types of dynamical systems.

  4. Proceedings of the Annual Conference of the Military Testing Association (27th) Held in San Diego, California on 21-25 October 1985. Volume 2

    DTIC Science & Technology

    1985-10-25

    supports. Study II was intended as a replication of Study I. Study III was designed to follow up on the unexpected outcomes of Studies I and II. In... cadre varied widely. Some responded to increased contact with more support for this study, some developed a vested interest in their own cadets... usable results applicable to both the San Diego and Washington areas. SYSTEM DESIGN: The system components consisted of IBM PC ATs and XTs with specially

  5. Fusion of Imaging and Inertial Sensors for Navigation

    DTIC Science & Technology

    2006-09-01

    combat operations. The Global Positioning System (GPS) was fielded in the 1980s and first used for precision navigation and targeting in combat... equations [37]. Consider the homogeneous nonlinear differential equation ẋ(t) = f[x(t), u(t), t]; x(t0) = x0 (2.4). For a given input function, u0(t... differential equation is a time-varying probability density function. The Kalman filter derivation assumes Gaussian distributions for all random

  6. DDN (Defence Data Network) Protocol Implementations and Vendors Guide

    DTIC Science & Technology

    1988-08-01

    Artificial Intelligence Laboratory, Room NE43-723, 545 Technology Square, Cambridge, MA 02139, (617) 253-8843... John Wroclawski (JTW@AI.AJ.MIT.EDU...), Massachusetts Institute of Technology, Artificial Intelligence Laboratory, Room NE43-743, 545 Technology Square, Cambridge, MA 02139, (617) 253-7885. ORDERING... TCP/IP Network Software for PC-DOS Systems. CPU: IBM-PC/XT/AT/compatible in conjunction with EXOS 205 Intelligent Ethernet Controller for PC bus. O/S

  7. The utilization of parallel processing in solving the inviscid form of the average-passage equation system for multistage turbomachinery

    NASA Technical Reports Server (NTRS)

    Mulac, Richard A.; Celestina, Mark L.; Adamczyk, John J.; Misegades, Kent P.; Dawson, Jef M.

    1987-01-01

    A procedure is outlined which utilizes parallel processing to solve the inviscid form of the average-passage equation system for multistage turbomachinery along with a description of its implementation in a FORTRAN computer code, MSTAGE. A scheme to reduce the central memory requirements of the program is also detailed. Both the multitasking and I/O routines referred to in this paper are specific to the Cray X-MP line of computers and its associated SSD (Solid-state Storage Device). Results are presented for a simulation of a two-stage rocket engine fuel pump turbine.

  8. Valve-in-valve using an Edwards Sapien XT into a JenaValve in a patient with a low originating left coronary artery and a heavily calcified aorta.

    PubMed

    Fujita, Buntaro; Scholtz, Smita; Ensminger, Stephan

    2016-04-01

    Coronary obstruction during transcatheter aortic valve implantation is a potentially life-threatening complication. Most of the widely used transcatheter heart valves require a certain distance between the basal aortic annular plane and the origins of the coronary arteries. We report the case of a successful valve-in-valve procedure with an Edwards SAPIEN XT valve into a JenaValve as a bail-out procedure in a patient with a low originating left coronary artery and a heavily calcified aorta. © 2015 Wiley Periodicals, Inc.

  9. RISC Processors and High Performance Computing

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Bailey, David H.; Lasinski, T. A. (Technical Monitor)

    1995-01-01

    In this tutorial, we will discuss the top five current RISC microprocessors: the IBM Power2, which is used in the IBM RS6000/590 workstation and in the IBM SP2 parallel supercomputer; the DEC Alpha, which is in the DEC Alpha workstation and in the Cray T3D; the MIPS R8000, which is used in the SGI Power Challenge; the HP PA-RISC 7100, which is used in the HP 700 series workstations and in the Convex Exemplar; and the Cray proprietary processor, which is used in the new Cray J916. The architecture of these microprocessors will first be presented. The effective performance of these processors will then be compared, both by citing standard benchmarks and also in the context of implementing real applications. In the process, different programming models such as data parallel (CM Fortran and HPF) and message passing (PVM and MPI) will be introduced and compared. The latest NAS Parallel Benchmark (NPB) absolute performance and performance per dollar figures will be presented. The next generation of the NPB will also be described. The tutorial will conclude with a discussion of general trends in the field of high performance computing, including likely future developments in hardware and software technology, and the relative roles of vector supercomputers, tightly coupled parallel computers, and clusters of workstations. This tutorial will provide a unique cross-machine comparison not available elsewhere.

  10. Decomposability of P-Cylindrical Martingales.

    DTIC Science & Technology

    1982-10-01

    ≤ K_p(S) ∫ ||(Xn - Xm)f||^p dμ(f). By using the Banach-Steinhaus theorem we observe that the right side of the last inequality converges to zero as n,m... ||Yt||_p ≤ K_p(S) ∫ ||X_{t_m}f' - X_{t_n}f'||^p dν(f'). By the assumption (X_t f')_{t∈T} converges in L^p for each f' ∈ F' and by the Banach-Steinhaus

  11. Personalized Medicine in Veterans with Traumatic Brain Injuries

    DTIC Science & Technology

    2012-05-01

    UPGMA) based on cosine correlation of row mean centered log2 signal values; this was the top 50%-tile, 3) In the DA top 50%-tile, selected probe sets... GeneMaths XT following row mean centering of log2 transformed MAS5.0 signal values; probe set clustering was performed by the UPGMA method using... hierarchical clustering analysis using the UPGMA algorithm with cosine correlation as the similarity metric. Results are presented as a heat map (left

  12. Networking and Information Technology Research and Development. Advanced Foundations for American Innovation. Supplement to the President’s FY 2004 Budget

    DTIC Science & Technology

    2003-09-01

    sensors – now generating more empirical data annually than existed in the field of astronomy before 1980 – and the ability of researchers to make use of it... 9701, cray@hpcmo.hpc.mil. David W. Hislop, Ph.D., Program Manager, Software and Knowledge Based Systems, U.S. Army Research Office, P.O. Box 12211, Research Triangle Park, NC 27709, (919) 549-4255, FAX: (919) 549-4354, hislop@aro-emh1.army.mil. Rodger Johnson, Program Manager, Defense Research and Engineering

  13. A Block-LU Update for Large-Scale Linear Programming

    DTIC Science & Technology

    1990-01-01

    linear programming problems. Results are given from runs on the Cray Y-MP. 1. Introduction. We wish to use the simplex method [Dan63] to solve the standard linear program: minimize c^T x subject to Ax = b, l ≤ x ≤ u, where A is an m by n matrix and c, x, l, u, and b are of appropriate dimension. The simplex... the identity matrix. The basis is used to solve for the search direction y and the dual variables π in the following linear systems: B_k y = a_q (1.2) and
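
    A small SciPy sketch of the two basis solves in (1.2) and its companion dual system (the block-LU update itself is not reproduced here; this version simply refactorizes B_k, and the test data are random):

      import numpy as np
      from scipy.linalg import lu_factor, lu_solve

      # With an LU factorization of the current basis B_k, get the search
      # direction y from B_k y = a_q and the dual variables pi from
      # B_k^T pi = c_B. A block-LU update would reuse these factors as
      # columns enter and leave the basis instead of refactorizing.
      rng = np.random.default_rng(5)
      m = 50
      B = rng.standard_normal((m, m)) + m * np.eye(m)   # nonsingular basis
      a_q = rng.standard_normal(m)                      # entering column
      c_B = rng.standard_normal(m)                      # basic costs

      lu, piv = lu_factor(B)
      y = lu_solve((lu, piv), a_q)               # B y = a_q
      pi = lu_solve((lu, piv), c_B, trans=1)     # B^T pi = c_B
      print(np.linalg.norm(B @ y - a_q), np.linalg.norm(B.T @ pi - c_B))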

  14. Comparison of Self-Etch Primers with Conventional Acid Etching System on Orthodontic Brackets

    PubMed Central

    Zope, Amit; Zope-Khalekar, Yogita; Chitko, Shrikant S.; Kerudi, Veerendra V.; Patil, Harshal Ashok; Jaltare, Pratik; Dolas, Siddhesh G

    2016-01-01

    Introduction: The self-etching primer system consists of etchant and primer dispersed in a single unit. Etching and priming are merged into a single step, leading to fewer stages in the bonding procedure; reducing the number of steps also reduces the chance of introducing error and saves time for the clinician. It also results in less enamel decalcification. Aim: To compare the shear bond strength (SBS) of orthodontic brackets bonded with self-etch primers (SEP) and a conventional acid etching system, and to study the surface appearance of teeth after debonding, etched with conventional acid etch or self-etch priming, using a stereomicroscope. Materials and Methods: Five groups (n=20) were created randomly from a total of 100 extracted premolars. In the control group (Group A), enamel was etched with 37% phosphoric acid and stainless steel brackets were bonded with Transbond XT (3M Unitek, Monrovia, California). Enamel conditioning in the remaining four groups was done with self-etching primers and adhesives as follows: Group B, Transbond Plus (3M Unitek); Group C, Xeno V+ (Dentsply); Group D, G-Bond (GC); Group E, One-Coat (Coltene). The Adhesive Remnant Index (ARI) score was also evaluated. Additionally, surface roughness was measured using a profilometer. Results: Mean SBS was 18.26±7.5 MPa in Group A, 10.93±4.02 MPa in Group B, 6.88±2.91 MPa in Group C, 7.78±4.13 MPa in Group D, and 10.39±5.22 MPa in Group E. In the conventional group, ARI scores showed that over half of the adhesive remained on the tooth surface (scores 1 to 3). In the self-etching primer groups, ARI scores showed little or no adhesive remaining on the tooth surface (scores 4 and 5). SEP produces less surface roughness on the enamel than conventional etching. However, statistical analysis showed a significant correlation (p<0.001) of bond strength with surface roughness of enamel. Conclusion: All groups showed SBS values that may be clinically useful, and Transbond XT can be successfully used for bracket bonding after enamel conditioning with any of the SEPs tested. The SEPs used in Groups C (Xeno V+) and D (G-Bond) had significantly lower SBS, although the values might still be clinically acceptable. PMID:28208997

  15. Comparing dynamic hyperinflation and associated dyspnea induced by metronome-paced tachypnea versus incremental exercise.

    PubMed

    Calligaro, Gregory L; Raine, Richard I; Bateman, Mary E; Bateman, Eric D; Cooper, Christopher B

    2014-02-01

    Dynamic hyperinflation (DH) during exercise is associated with both dyspnea and exercise limitation in COPD. Metronome-paced tachypnoea (MPT) is a simple alternative for studying DH. We compared MPT with exercise testing (XT) as methods of provoking DH, and assessed their relationship with dyspnea. We studied 24 patients with moderate COPD (FEV1 59 ± 9% predicted) after inhalation of ipratropium/salbutamol combination or placebo in a double-blind, crossover design. Inspiratory capacity (IC) was measured at baseline and after 30 seconds of MPT with breathing frequencies (fR) of 20, 30 and 40 breaths/min and metronome-defined I:E ratios of 1:1 and 1:2, in random sequence, followed by incremental cycle ergometry with interval determinations of IC. DH was defined as a decline in IC from baseline (∆IC) for both methods. Dyspnea was assessed using a Borg CR-10 scale. ∆IC during MPT was greater with higher fR and I:E ratio of 1:1 versus 1:2, and less when patients were treated with bronchodilator rather than placebo (P = 0.032). DH occurred during 19 (40%) XTs, and during 35 (73%) tests using MPT. Eleven of 18 (61%) non-congruent XTs (where DH occurred on MPT but not XT) terminated before fR of 40 breaths/min was reached. Although greater during XT, the intensity of dyspnea bore no relationship to DH during either MPT and XT. MPT at 40 breaths/min and I:E of 1:1 elicits the greatest ∆IC, and is a more sensitive method for demonstrating DH. The relationship between DH and dyspnea is complex and not determined by DH alone.

  16. Activity-dependent branching ratios in stocks, solar x-ray flux, and the Bak-Tang-Wiesenfeld sandpile model

    NASA Astrophysics Data System (ADS)

    Martin, Elliot; Shreim, Amer; Paczuski, Maya

    2010-01-01

    We define an activity-dependent branching ratio that allows comparison of different time series Xt. The branching ratio bx is defined as bx = E[ξx/x]. The random variable ξx is the value of the next signal given that the previous one is equal to x, so ξx = {Xt+1 | Xt = x}. If bx > 1, the process is on average supercritical when the signal is equal to x, while if bx < 1, it is subcritical. For stock prices we find bx = 1 within statistical uncertainty, for all x, consistent with an “efficient market hypothesis.” For stock volumes, solar x-ray flux intensities, and the Bak-Tang-Wiesenfeld (BTW) sandpile model, bx is supercritical for small values of activity and subcritical for the largest ones, indicating a tendency to return to a typical value. For stock volumes this tendency has an approximate power-law behavior. For solar x-ray flux and the BTW model, there is a broad regime of activity where bx ≃ 1, which we interpret as an indicator of critical behavior. This is true despite different underlying probability distributions for Xt and for ξx. For the BTW model the distribution of ξx is Gaussian, for x sufficiently larger than 1, and its variance grows linearly with x. Hence, the activity in the BTW model obeys a central limit theorem when sampling over past histories. The broad region of activity where bx is close to one disappears once bulk dissipation is introduced in the BTW model, supporting our hypothesis that it is an indicator of criticality.
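
    A quick NumPy sketch of estimating bx from a time series by binning the activity (the bins, the AR(1) test series, and its parameters are all invented for illustration). For this mean-reverting test series the estimate crosses 1 near the typical value, the same signature the abstract describes for stock volumes and x-ray flux.

      import numpy as np
      from collections import defaultdict

      def branching_ratio(series, bins):
          """Estimate b_x ~ E[X_{t+1} | X_t in bin] / mean(X_t in bin).
          b_x > 1 flags on-average supercritical activity levels,
          b_x < 1 subcritical ones."""
          series = np.asarray(series, float)
          idx = np.digitize(series[:-1], bins)
          groups = defaultdict(list)
          for i, nxt in zip(idx, series[1:]):
              groups[i].append(nxt)
          out = {}
          for i, vals in sorted(groups.items()):
              x_mid = series[:-1][idx == i].mean()   # typical activity in bin
              if x_mid > 0:
                  out[x_mid] = np.mean(vals) / x_mid
          return out

      # AR(1) toy series: mean-reverting, so b_x should cross 1 near x = 10
      rng = np.random.default_rng(2)
      x = np.empty(200_000); x[0] = 10.0
      for t in range(1, x.size):
          x[t] = max(0.1, 10.0 + 0.8 * (x[t - 1] - 10.0) + rng.standard_normal())
      for xm, b in branching_ratio(x, bins=np.linspace(5, 15, 11)).items():
          print(f"x ~ {xm:5.2f}  b_x = {b:.3f}")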

  17. Effect of Quaternary Ammonium Salt on Shear Bond Strength of Orthodontic Brackets to Enamel

    PubMed Central

    Ghadirian, Hannaneh; Geramy, Allahyar; Najafi, Farhood; Heidari, Soolmaz

    2017-01-01

    Objectives: This study sought to assess the effect of quaternary ammonium salt (QAS) on the shear bond strength of orthodontic brackets to enamel. Materials and Methods: In this in vitro experimental study, 0, 10, 20 and 30% concentrations of QAS were added to Transbond XT primer. Brackets were bonded to 60 premolar teeth using the aforementioned adhesive mixtures, and the shear bond strength of the four groups (n=15) was measured using a universal testing machine. After debonding, the adhesive remnant index (ARI) score was determined under a stereomicroscope. Data were analyzed using one-way ANOVA. Results: The mean and standard deviation of shear bond strength of the control and 10%, 20% and 30% groups were 23.54±6.31, 21.81±2.82, 20.83±8.35 and 22.91±5.66 MPa, respectively. No significant difference was noted in shear bond strength of the groups (P=0.83). Study groups were not different in terms of ARI scores (P=0.80). Conclusions: The results showed that addition of QAS to Transbond XT primer had no adverse effect on the shear bond strength of orthodontic brackets. PMID:29167688

  18. Electron stimulated desorption of anions from native and brominated single stranded oligonucleotide trimers

    PubMed Central

    Polska, Katarzyna; Rak, Janusz; Bass, Andrew D.; Cloutier, Pierre; Sanche, Léon

    2013-01-01

    We measured the low energy electron stimulated desorption (ESD) of anions from thin films of native (TXT) and bromine monosubstituted (TBrXT) oligonucleotide trimers deposited on a gold surface (T = thymidine, X = T, deoxycytidine (C), deoxyadenosine (A) or deoxyguanosine (G), Br = bromine). The desorption of H−, CH3−/NH−, O−/NH2−, OH−, CN−, and Br− was induced by 0 to 20 eV electrons. Dissociative electron attachment, below 12 eV, and dipolar dissociation, above 12 eV, are responsible for the formation of these anions. The comparison of the results obtained for the native and brominated trimers suggests that the main pathways of TBrXT degradation correspond to the release of the hydride and bromide anions. Significantly, the presence of bromine in oligonucleotide trimers blocks the electron-induced degradation of nucleobases as evidenced by a dramatic decrease in CN− desorption. An increase in the yields of OH− is also observed. The debromination yield of particular oligonucleotides diminishes in the following order: BrdU > BrdA > BrdG > BrdC. Based on these results, 5-bromo-2′-deoxyuridine appears to be the best radiosensitizer among the studied bromonucleosides. PMID:22360262

  19. Electron stimulated desorption of anions from native and brominated single stranded oligonucleotide trimers.

    PubMed

    Polska, Katarzyna; Rak, Janusz; Bass, Andrew D; Cloutier, Pierre; Sanche, Léon

    2012-02-21

    We measured the low energy electron stimulated desorption (ESD) of anions from thin films of native (TXT) and bromine monosubstituted (TBrXT) oligonucleotide trimers deposited on a gold surface (T = thymidine, X = T, deoxycytidine (C), deoxyadenosine (A) or deoxyguanosine (G), Br = bromine). The desorption of H−, CH3−/NH−, O−/NH2−, OH−, CN−, and Br− was induced by 0 to 20 eV electrons. Dissociative electron attachment, below 12 eV, and dipolar dissociation, above 12 eV, are responsible for the formation of these anions. The comparison of the results obtained for the native and brominated trimers suggests that the main pathways of TBrXT degradation correspond to the release of the hydride and bromide anions. Significantly, the presence of bromine in oligonucleotide trimers blocks the electron-induced degradation of nucleobases as evidenced by a dramatic decrease in CN− desorption. An increase in the yields of OH− is also observed. The debromination yield of particular oligonucleotides diminishes in the following order: BrdU > BrdA > BrdG > BrdC. Based on these results, 5-bromo-2′-deoxyuridine appears to be the best radiosensitizer among the studied bromonucleosides. © 2012 American Institute of Physics.

  20. Electron stimulated desorption of anions from native and brominated single stranded oligonucleotide trimers

    NASA Astrophysics Data System (ADS)

    Polska, Katarzyna; Rak, Janusz; Bass, Andrew D.; Cloutier, Pierre; Sanche, Léon

    2012-02-01

    We measured the low energy electron stimulated desorption (ESD) of anions from thin films of native (TXT) and bromine monosubstituted (TBrXT) oligonucleotide trimers deposited on a gold surface (T = thymidine, X = T, deoxycytidine (C), deoxyadenosine (A) or deoxyguanosine (G), Br = bromine). The desorption of H−, CH3−/NH−, O−/NH2−, OH−, CN−, and Br− was induced by 0 to 20 eV electrons. Dissociative electron attachment, below 12 eV, and dipolar dissociation, above 12 eV, are responsible for the formation of these anions. The comparison of the results obtained for the native and brominated trimers suggests that the main pathways of TBrXT degradation correspond to the release of the hydride and bromide anions. Significantly, the presence of bromine in oligonucleotide trimers blocks the electron-induced degradation of nucleobases as evidenced by a dramatic decrease in CN− desorption. An increase in the yields of OH− is also observed. The debromination yield of particular oligonucleotides diminishes in the following order: BrdU > BrdA > BrdG > BrdC. Based on these results, 5-bromo-2'-deoxyuridine appears to be the best radiosensitizer among the studied bromonucleosides.

  1. Enabling Diverse Software Stacks on Supercomputers using High Performance Virtual Clusters.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Younge, Andrew J.; Pedretti, Kevin; Grant, Ryan

    While large-scale simulations have been the hallmark of the High Performance Computing (HPC) community for decades, Large Scale Data Analytics (LSDA) workloads are gaining attention within the scientific community not only as a processing component of large HPC simulations, but also as standalone scientific tools for knowledge discovery. On the path towards Exascale, new HPC runtime systems are also emerging in ways that differ from classical distributed computing models. However, system software for the latest extreme-scale DOE supercomputers needs to be enhanced to more appropriately support these types of emerging software ecosystems. In this paper, we propose the use of Virtual Clusters on advanced supercomputing resources to enable systems to support not only HPC workloads, but also emerging big data stacks. Specifically, we have deployed the KVM hypervisor within Cray's Compute Node Linux on an XC-series supercomputer testbed. We also use libvirt and QEMU to manage and provision VMs directly on compute nodes, leveraging Ethernet-over-Aries network emulation. To our knowledge, this is the first known use of KVM on a true MPP supercomputer. We investigate the overhead of our solution using HPC benchmarks, evaluating both single-node performance and weak scaling of a 32-node virtual cluster. Overall, we find that single-node performance of our solution using KVM on a Cray is very efficient, with near-native performance. However, overhead increases by up to 20% as virtual cluster size increases, due to limitations of the Ethernet-over-Aries bridged network. Furthermore, we deploy Apache Spark with large data analysis workloads in a Virtual Cluster, effectively demonstrating how diverse software ecosystems can be supported by High Performance Virtual Clusters.

  2. A transient FETI methodology for large-scale parallel implicit computations in structural mechanics

    NASA Technical Reports Server (NTRS)

    Farhat, Charbel; Crivelli, Luis; Roux, Francois-Xavier

    1992-01-01

    Explicit codes are often used to simulate the nonlinear dynamics of large-scale structural systems, even for low frequency response, because the storage and CPU requirements entailed by the repeated factorizations traditionally found in implicit codes rapidly overwhelm the available computing resources. With the advent of parallel processing, this trend is accelerating because explicit schemes are also easier to parallelize than implicit ones. However, the time step restriction imposed by the Courant stability condition on all explicit schemes cannot yet -- and perhaps will never -- be offset by the speed of parallel hardware. Therefore, it is essential to develop efficient and robust alternatives to direct methods that are also amenable to massively parallel processing because implicit codes using unconditionally stable time-integration algorithms are computationally more efficient when simulating low-frequency dynamics. Here we present a domain decomposition method for implicit schemes that requires significantly less storage than factorization algorithms, that is several times faster than other popular direct and iterative methods, that can be easily implemented on both shared and local memory parallel processors, and that is both computationally and communication-wise efficient. The proposed transient domain decomposition method is an extension of the method of Finite Element Tearing and Interconnecting (FETI) developed by Farhat and Roux for the solution of static problems. Serial and parallel performance results on the CRAY Y-MP/8 and the iPSC-860/128 systems are reported and analyzed for realistic structural dynamics problems. These results establish the superiority of the FETI method over both the serial/parallel conjugate gradient algorithm with diagonal scaling and the serial/parallel direct method, and contrast the computational power of the iPSC-860/128 parallel processor with that of the CRAY Y-MP/8 system.

  3. Color stability of nanocomposites polished with one-step systems.

    PubMed

    Ergücü, Zeynep; Türkün, L Sebnem; Aladag, Akin

    2008-01-01

    This study compared the color changes of five novel resin composites polished with two one-step polishing systems when exposed to coffee solution. The resin composites tested were Filtek Supreme XT, Grandio, CeramX, Premise and Tetric EvoCeram. A total of 150 discs (30/resin composite, 10 x 2 mm) were fabricated. Ten specimens/resin composite cured under Mylar strips served as the control. The other samples were polished with PoGo and OptraPol discs for 30 seconds using a slow speed handpiece and immersed in coffee (Nescafé) for seven days. Color measurements were made with Vita Easyshade at baseline and after one and seven days. Repeated Measures ANOVA and Bonferroni tests were used for statistical analyses (p≤0.05). The differences between the mean ΔE* values for the resin composites polished with the two different one-step systems were statistically significant (p<0.05). After one week, all materials exhibited significant color changes compared to baseline. All Mylar-finished specimens showed the most intense staining (p<0.05). There were no significant differences between the OptraPol and PoGo polished groups. Mylar-finished specimens of CeramX, Tetric EvoCeram, Premise and Filtek Supreme XT presented the greatest staining (p<0.05). For Grandio, there were no significant differences between the Mylar and PoGo groups, while the most stain resistant surfaces were attained with OptraPol. Removing the outermost resin layer by polishing procedures is essential to achieving a stain resistant, more esthetically stable surface. One-step polishing systems can be used successfully for polishing nanocomposites.
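
    The Vita Easyshade reports CIELAB coordinates, and a ΔE* of the kind tabulated here is conventionally the Euclidean distance between two L*a*b* readings (the CIE76 formula; that this study used CIE76 rather than a later formula is an assumption). A minimal sketch with made-up readings:

        import math

        def delta_e_cie76(lab1, lab2):
            # CIE76 color difference: Euclidean distance in L*a*b* space.
            return math.dist(lab1, lab2)

        baseline = (72.1, 1.8, 14.0)   # illustrative shade readings, not study data
        after_7d = (69.4, 2.6, 17.2)
        print(round(delta_e_cie76(baseline, after_7d), 2))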

  4. Time-Reversal Based Range Extension Technique for Ultra-Wideband (UWB) Sensors and Applications in Tactical Communications and Networking

    DTIC Science & Technology

    2008-10-16

    The signal from the signal generator is also used to synchronize the DSO to record the received signal. A tapped-delay-line model of the CIR is used, with a fixed delay between each filter tap. The output y(t) = h(t) * x(t) is then uniformly sampled with sampling period Ts, which follows the relation Ts/Th = q, where q... (Figure 5.5: An equivalent block diagram of channel estimation.) The success of recovery relies on the...
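
    The tapped-delay-line model mentioned above reduces, after sampling, to a discrete convolution of the probe signal with the tap gains. A minimal numpy sketch (the tap values and sampling period are hypothetical):

        import numpy as np

        Ts = 1e-9                                   # assumed 1 ns sampling period
        h = np.array([1.0, 0.0, 0.5, 0.0, 0.25])    # hypothetical CIR tap gains
        x = np.random.default_rng(0).standard_normal(64)   # probe signal samples
        y = np.convolve(x, h)                       # sampled y(t) = h(t) * x(t)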

  5. Advanced Computational Techniques in Regional Wave Studies

    DTIC Science & Technology

    1990-01-03

    We define the components of the time-dependent force moment tensor in a right-handed coordinate system whose unit vectors are e1, e2 and e3; the constants are the components of the second-order seismic moment tensor M, usually termed the moment tensor.

  6. Time Delay Estimation

    DTIC Science & Technology

    1976-04-09

    List-of-figures fragments: Symmetric Impulse Response for Two FIR Linear Phase Filters; Inputs x, y and Outputs; Linear System with Impulse Response h(τ); Model of Error Resulting from Linearly Filtering x(t); Model of Directional Signal Corrupted with Additive Noise and Processed; Source Driving Two...

  7. Adaptive Metropolis Sampling with Product Distributions

    NASA Technical Reports Server (NTRS)

    Wolpert, David H.; Lee, Chiu Fan

    2005-01-01

    The Metropolis-Hastings (MH) algorithm is a way to sample a provided target distribution π(x). It works by repeatedly sampling a separate proposal distribution T(x,x') to generate a random walk {x(t)}. We consider a modification of the MH algorithm in which T is dynamically updated during the walk. The update at time t uses the samples {x(t'), t' < t} to estimate the product distribution that has the least Kullback-Leibler distance to π. That estimate is the information-theoretically optimal mean-field approximation to π. We demonstrate through computer experiments that our algorithm produces samples that are superior to those of the conventional MH algorithm.
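
    A minimal sketch of the idea, assuming a Gaussian product (mean-field) proposal fitted to the walk's per-coordinate history and used as an independence sampler; this illustrates the scheme described, not the authors' implementation (caveats about adapting proposals and detailed balance aside):

        import numpy as np

        rng = np.random.default_rng(1)

        def target(x):
            # Unnormalized pi: a correlated 2-D Gaussian, for illustration.
            cov_inv = np.array([[2.0, -1.2], [-1.2, 2.0]])
            return np.exp(-0.5 * x @ cov_inv @ x)

        x = np.zeros(2)
        hist = [x]
        for t in range(5000):
            h = np.asarray(hist)
            mu, sd = h.mean(axis=0), h.std(axis=0) + 0.1   # product-of-marginals fit
            prop = rng.normal(mu, sd)                      # independence proposal
            q = lambda z: np.prod(np.exp(-0.5 * ((z - mu) / sd) ** 2) / sd)
            a = target(prop) * q(x) / (target(x) * q(prop) + 1e-300)
            if rng.random() < a:
                x = prop
            hist.append(x)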

  8. Electronic structures and population dynamics of excited states of xanthione and its derivatives

    NASA Astrophysics Data System (ADS)

    Fedunov, Roman G.; Rogozina, Marina V.; Khokhlova, Svetlana S.; Ivanov, Anatoly I.; Tikhomirov, Sergei A.; Bondarev, Stanislav L.; Raichenok, Tamara F.; Buganov, Oleg V.; Olkhovik, Vyacheslav K.; Vasilevskii, Dmitrii A.

    2017-09-01

    A new compound, 1,3-dimethoxy xanthione (DXT), has been synthesized, and its absorption (stationary and transient) and luminescence spectra have been measured in n-hexane and compared with xanthione (XT) spectra. A pronounced broadening of the xanthione vibronic absorption band related to the electronic transition to the second singlet excited state has been observed. Distinctions between the spectra of xanthione and its methoxy derivatives are discussed. Quantum chemical calculations of these compounds in the ground and excited electronic states have been performed to clarify the nature of the changes in the electronic spectra due to modification of xanthione by methoxy groups. The appearance of a new absorption band of DXT caused by symmetry changes is discussed. Calculations of the second-excited-state structure of xanthione and its methoxy derivatives confirm noticeable charge transfer (about 0.1 of the charge of an electron) from the methoxy group to the thiocarbonyl group. Fitting of the transient spectra of XT and DXT has been performed and the time constants of the internal conversion S2 → S1 and the intersystem crossing S1 → T1 have been determined. A considerable difference between the time constants of internal conversion S2 → S1 in XT and DXT is uncovered.

  9. The growth of the UniTree mass storage system at the NASA Center for Computational Sciences

    NASA Technical Reports Server (NTRS)

    Tarshish, Adina; Salmon, Ellen

    1993-01-01

    In October 1992, the NASA Center for Computational Sciences made its Convex-based UniTree system generally available to users. The ensuing months saw the growth of near-online data from nil to nearly three terabytes, a doubling of the number of CPU's on the facility's Cray YMP (the primary data source for UniTree), and the necessity for an aggressive regimen for repacking sparse tapes and hierarchical 'vaulting' of old files to freestanding tape. Connectivity was enhanced as well with the addition of UltraNet HiPPI. This paper describes the increasing demands placed on the storage system's performance and throughput that resulted from the significant augmentation of compute-server processor power and network speed.

  10. High-performance floating-point image computing workstation for medical applications

    NASA Astrophysics Data System (ADS)

    Mills, Karl S.; Wong, Gilman K.; Kim, Yongmin

    1990-07-01

    The medical imaging field relies increasingly on imaging and graphics techniques in diverse applications with needs similar to (or more stringent than) those of the military, industrial and scientific communities. However, most image processing and graphics systems available for use in medical imaging today are either expensive, specialized, or in most cases both. High performance imaging and graphics workstations which can provide real-time results for a number of applications, while maintaining affordability and flexibility, can facilitate the application of digital image computing techniques in many different areas. This paper describes the hardware and software architecture of a medium-cost floating-point image processing and display subsystem for the NeXT computer, and its applications as a medical imaging workstation. Medical imaging applications of the workstation include use in a Picture Archiving and Communications System (PACS), as a multimodal image processing and 3-D graphics workstation for a broad range of imaging modalities, and as an electronic alternator utilizing its multiple monitor display capability and large and fast frame buffer. The subsystem provides a 2048 x 2048 x 32-bit frame buffer (16 Mbytes of image storage) and supports both 8-bit gray scale and 32-bit true color images. When used to display 8-bit gray scale images, up to four different 256-color palettes may be used for each of four 2K x 2K x 8-bit image frames. Three of these image frames can be used simultaneously to provide pixel selectable region of interest display. A 1280 x 1024 pixel screen with 1:1 aspect ratio can be windowed into the frame buffer for display of any portion of the processed image or images. In addition, the system provides hardware support for integer zoom and an 82-color cursor. This subsystem is implemented on an add-in board occupying a single slot in the NeXT computer. Up to three boards may be added to the NeXT for multiple display capability (e.g., three 1280 x 1024 monitors, each with a 16-Mbyte frame buffer). Each add-in board provides an expansion connector to which an optional image computing coprocessor board may be added. Each coprocessor board supports up to four processors for a peak performance of 160 MFLOPS. The coprocessors can execute programs from external high-speed microcode memory as well as built-in internal microcode routines. The internal microcode routines provide support for 2-D and 3-D graphics operations, matrix and vector arithmetic, and image processing in integer, IEEE single-precision floating point, or IEEE double-precision floating point. In addition to providing a library of C functions which links the NeXT computer to the add-in board and supports its various operational modes, algorithms and medical imaging application programs are being developed and implemented for image display and enhancement. As an extension to the built-in algorithms of the coprocessors, 2-D Fast Fourier Transform (FFT), 2-D inverse FFT, convolution, warping and other algorithms (e.g., Discrete Cosine Transform) which exploit the parallel architecture of the coprocessor board are being implemented.

  11. Statistical Analysis in Dental Research Papers.

    DTIC Science & Technology

    1983-08-08


  12. Helicopter Maneuverability and Agility Design Sensitivity and Air Combat Maneuver Data Correlation Study

    DTIC Science & Technology

    1991-10-01


  13. A performance comparison of scalar, vector, and concurrent vector computers including supercomputers for modeling transport of reactive contaminants in groundwater

    NASA Astrophysics Data System (ADS)

    Tripathi, Vijay S.; Yeh, G. T.

    1993-06-01

    Sophisticated and highly computation-intensive models of transport of reactive contaminants in groundwater have been developed in recent years. Application of such models to real-world contaminant transport problems, e.g., simulation of groundwater transport of 10-15 chemically reactive elements (e.g., toxic metals) and relevant complexes and minerals in two and three dimensions over a distance of several hundred meters, requires high-performance computers including supercomputers. Although not widely recognized as such, the computational complexity and demand of these models compare with well-known computation-intensive applications including weather forecasting and quantum chemical calculations. A survey of the performance of a variety of available hardware, as measured by the run times for a reactive transport model HYDROGEOCHEM, showed that while supercomputers provide the fastest execution times for such problems, relatively low-cost reduced instruction set computer (RISC) based scalar computers provide the best performance-to-price ratio. Because supercomputers like the Cray X-MP are inherently multiuser resources, often the RISC computers also provide much better turnaround times. Furthermore, RISC-based workstations provide the best platforms for "visualization" of groundwater flow and contaminant plumes. The most notable result, however, is that current workstations costing less than $10,000 provide performance within a factor of 5 of a Cray X-MP.

  14. RAID/C90 Technology Integration

    NASA Technical Reports Server (NTRS)

    Ciotti, Bob; Cooper, D. M. (Technical Monitor)

    1994-01-01

    In March 1993, NAS was the first to connect a Maximum Strategy RAID disk to the C90 using standard Cray provided software. This paper discusses the problems encountered, lessons learned, and performance achieved.

  15. Baseline Demographics, Safety, and Patient Acceptance of an Insertable Cardiac Monitor for Atrial Fibrillation Screening: The REVEAL-AF Study.

    PubMed

    Conti, Sergio; Reiffel, James A; Gersh, Bernard J; Kowey, Peter R; Wachter, Rolf; Halperin, Jonathan L; Kaplon, Rachelle E; Pouliot, Erika; Verma, Atul

    2017-01-01

    Given the high prevalence and risk of stroke associated with atrial fibrillation (AF), detection strategies have important public health implications. The ongoing prospective, single-arm, open-label, multicenter REVEAL AF trial is evaluating the incidence of previously undetected AF using an insertable cardiac monitor (ICM) in patients without prior AF or device implantation, but who could be at risk for AF due to their demographic characteristics, with or without non-specific but compatible symptoms. Enrollment required an elevated AF risk profile defined as CHADS2 ≥ 3, or CHADS2 = 2 plus one or more of the following: coronary artery disease, renal impairment, sleep apnea or chronic obstructive pulmonary disease. Exclusions included stroke or transient ischemic attack occurring in the previous year. Of 450 subjects screened, 399 underwent a device insertion attempt, and 395 were included in the final analysis (Reveal XT: n=122; Reveal LINQ: n=273; excluded: n=4). Participants were primarily identified by demographic characteristics and the presence of nonspecific symptoms, but without prior documentation of "overt" AF. The most common symptoms were palpitations (51%), dizziness/lightheadedness/pre-syncope (36%), and shortness of breath (36%). Over 100 subjects were enrolled in each pre-defined CHADS2 subgroup (2, 3 and ≥4). AF risk factors not included in the CHADS2 score were well represented (prevalence ≥ 15%). The rate of procedure- and/or device-related serious adverse events was low, with the miniaturized Reveal LINQ ICM having a more favorable safety profile than the predicate Reveal XT (all: n=13 [3.3%]; LINQ: n=6 [2.2%]; XT: n=7 [5.7%]). These data demonstrate that REVEAL AF was successful in enrolling its target population, that high-risk patients were willing to undergo ICM monitoring for AF screening, and that ICM use in this group is becoming increasingly safe with advancements in technology. A clinically meaningful incidence of device-detected AF in this study will inform clinical decisions regarding ICM use for AF screening in patients at risk.

  16. HARE: Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mckie, Jim

    2012-01-09

    This report documents the results of work done over a 6 year period under the FAST-OS programs. The first effort, called Right-Weight Kernels (RWK), was concerned with improving measurements of OS noise so it could be treated quantitatively, and with evaluating the use of two operating systems, Linux and Plan 9, on HPC systems to determine how these operating systems needed to be extended or changed for HPC while still retaining their general-purpose nature. The second program, HARE, explored the creation of alternative runtime models, building on RWK. All of the HARE work was done on Plan 9. The HARE researchers were mindful of the very good Linux and LWK work being done at other labs and saw no need to recreate it. Even given this limited funding, the two efforts had outsized impact:
    - Helped Cray decide to use Linux, instead of a custom kernel, and provided the tools needed to make Linux perform well
    - Created a successor operating system to Plan 9, NIX, which has been taken in by Bell Labs for further development
    - Created a standard system measurement tool, Fixed Time Quantum or FTQ, which is widely used for measuring operating systems' impact on applications
    - Spurred the use of the 9p protocol in several organizations, including IBM
    - Built software in use at many companies, including IBM, Cray, and Google
    - Spurred the creation of alternative runtimes for use on HPC systems
    - Demonstrated that, with proper modifications, a general-purpose operating system can provide communications up to 3 times as effective as user-level libraries
    Open source was a key part of this work. The code developed for this project is in wide use and available in many places. The core Blue Gene code is available at https://bitbucket.org/ericvh/hare. We describe details of these impacts in the following sections. The rest of this report is organized as follows: first, we describe commercial impact; next, we describe the FTQ benchmark and its impact in more detail; operating systems and runtime research follows; we discuss infrastructure software; and we close with a description of the new NIX operating system, future work, and conclusions.

  17. Use of Continuous Integration Tools for Application Performance Monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vergara Larrea, Veronica G; Joubert, Wayne; Fuson, Christopher B

    High performance computing systems are becoming increasingly complex, both in node architecture and in the multiple layers of software stack required to compile and run applications. As a consequence, the likelihood is increasing for application performance regressions to occur as a result of routine upgrades of system software components which interact in complex ways. The purpose of this study is to evaluate the effectiveness of continuous integration tools for application performance monitoring on HPC systems. In addition, this paper also describes a prototype system for application performance monitoring based on Jenkins, a Java-based continuous integration tool. The monitoring system described leverages several features in Jenkins to track application performance results over time. Preliminary results and lessons learned from monitoring applications on Cray systems at the Oak Ridge Leadership Computing Facility are presented.

  18. Evaluation of the antibacterial activity of a conventional orthodontic composite containing silver/hydroxyapatite nanoparticles.

    PubMed

    Sodagar, Ahmad; Akhavan, Azam; Hashemi, Ehsan; Arab, Sepideh; Pourhajibagher, Maryam; Sodagar, Kosar; Kharrazifard, Mohammad Javad; Bahador, Abbas

    2016-12-01

    One of the most important complications of fixed orthodontic treatment is the formation of white spots which are initial carious lesions. Addition of antimicrobial agents into orthodontic adhesives might be a wise solution for prevention of white spot formation. The aim of this study was to evaluate the antibacterial properties of a conventional orthodontic adhesive containing three different concentrations of silver/hydroxyapatite nanoparticles. One hundred and sixty-two Transbond XT composite discs containing 0, 1, 5, and 10 % silver/hydroxyapatite nanoparticles were prepared and sterilized. Antibacterial properties of these composite groups against Streptococcus mutans, Lactobacillus acidophilus, and Streptococcus sanguinis were investigated using three different antimicrobial tests. Disk agar diffusion test was performed to assess the diffusion of antibacterial agent on brain heart infusion agar plate by measuring bacterial growth inhibition zones. Biofilm inhibition test showed the antibacterial capacity of composite discs against resistant bacterial biofilms. Antimicrobial activity of eluted components from composite discs was investigated by comparing the viable counts of bacteria after 3, 15, and 30 days. Composite discs containing 5 and 10 % silver/hydroxyapatite nanoparticles were capable of producing growth inhibition zones for all bacterial types. Results of biofilm inhibition test showed that all of the study groups reduced viable bacterial count in comparison to the control group. Antimicrobial activity of eluted components from composite discs was immensely diverse based on the bacterial type and the concentration of nanoparticles. Transbond XT composite discs containing 5 and 10 % silver/hydroxyapatite nanoparticles produce bacterial growth inhibition zones and show antibacterial properties against biofilms.

  19. Computer system for scanning tunneling microscope automation

    NASA Astrophysics Data System (ADS)

    Aguilar, M.; García, A.; Pascual, P. J.; Presa, J.; Santisteban, A.

    1987-03-01

    A computerized system for the automation of a scanning tunneling microscope is presented. It is based on an IBM personal computer (PC) either an XT or an AT, which performs the control, data acquisition and storage operations, displays the STM "images" in real time, and provides image processing tools for the restoration and analysis of data. It supports different data acquisition and control cards and image display cards. The software has been designed in a modular way to allow the replacement of these cards and other equipment improvements as well as the inclusion of user routines for data analysis.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    D'Azevedo, Eduardo; Abbott, Stephen; Koskela, Tuomas

    The XGC fusion gyrokinetic code combines state-of-the-art, portable computational and algorithmic technologies to enable complicated multiscale simulations of turbulence and transport dynamics in ITER edge plasma on the largest US open-science computer, the CRAY XK7 Titan, at its maximal heterogeneous capability. Such simulations had not been possible before due to a more than tenfold shortfall in time-to-solution within a 5-day wall-clock limit for one physics case. Frontier techniques employed include nested OpenMP parallelism, adaptive parallel I/O, staging I/O and data reduction using dynamic and asynchronous application interactions, and dynamic repartitioning.

  1. Solving large sparse eigenvalue problems on supercomputers

    NASA Technical Reports Server (NTRS)

    Philippe, Bernard; Saad, Youcef

    1988-01-01

    An important problem in scientific computing consists in finding a few eigenvalues and corresponding eigenvectors of a very large and sparse matrix. The most popular methods to solve these problems are based on projection techniques on appropriate subspaces. The main attraction of these methods is that they only require the use of the matrix in the form of matrix by vector multiplications. The implementations on supercomputers of two such methods for symmetric matrices, namely Lanczos' method and Davidson's method are compared. Since one of the most important operations in these two methods is the multiplication of vectors by the sparse matrix, methods of performing this operation efficiently are discussed. The advantages and the disadvantages of each method are compared and implementation aspects are discussed. Numerical experiments on a one processor CRAY 2 and CRAY X-MP are reported. Possible parallel implementations are also discussed.
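
    The point that these methods need the matrix only through matrix-by-vector products is easy to demonstrate with a modern sparse eigensolver (scipy's eigsh wraps ARPACK's Lanczos-type method); the operator below exposes nothing but a matvec. The test matrix is an arbitrary choice for illustration:

        import numpy as np
        from scipy.sparse import diags
        from scipy.sparse.linalg import LinearOperator, eigsh

        n = 100_000
        A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))  # sparse symmetric

        # The solver sees A only through matrix-vector multiplication.
        op = LinearOperator((n, n), matvec=lambda v: A @ v, dtype=float)
        largest = eigsh(op, k=4, which="LM", return_eigenvectors=False)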

  2. London penetration depth measurements in Ba(Fe1-xTx)2As2 (T=Co,Ni,Ru,Rh,Pd,Pt,Co+Cu) superconductors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gordon, Ryan T.

    2011-01-01

    The London penetration depth has been measured in various doping levels of single crystals of Ba(Fe1-xTx)2As2 (T = Co, Ni, Ru, Rh, Pd, Pt, Co+Cu) superconductors by utilizing a tunnel diode resonator (TDR) apparatus. All in-plane penetration depth measurements exhibit a power-law temperature dependence of the form Δλ_ab(T) = C·T^n, indicating the existence of low-temperature, normal-state quasiparticles all the way down to the lowest measured temperature, which was typically 500 mK. Several different doping concentrations from the Ba(Fe1-xTx)2As2 (T = Co, Ni) systems have been measured and the doping dependence of the power-law exponent, n, is compared to results from measurements of thermal conductivity and specific heat. In addition, a novel method has been developed to allow for the measurement of the zero-temperature value of the in-plane penetration depth, λ_ab(0), by using TDR frequency shifts. By using this technique, the doping dependence of λ_ab(0) has been measured in the Ba(Fe1-xCox)2As2 series, which has also allowed for the construction of the doping-dependent superfluid phase stiffness, ρ_s(T) = [λ(0)/λ(T)]^2. By studying the effects of disorder on these superconductors using heavy-ion irradiation, it has been determined that the observed power-law temperature dependence likely arises from pair-breaking impurity-scattering contributions, which is consistent with the proposed s±-wave symmetry of the superconducting gap in the dirty scattering limit. This hypothesis is supported by the measurement of an exponential temperature dependence of the penetration depth in the intrinsically clean LiFeAs, indicative of a nodeless superconducting gap.
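
    The reported power law is the kind of thing one extracts with a standard nonlinear fit of Δλ_ab(T) = C·T^n to low-temperature data; a minimal sketch on synthetic numbers (not the thesis data):

        import numpy as np
        from scipy.optimize import curve_fit

        def power_law(T, C, n):
            return C * T**n                      # model: dlambda_ab(T) = C*T^n

        T = np.linspace(0.5, 8.0, 40)            # temperature in K, synthetic
        dlam = 3.2 * T**2.3 + np.random.default_rng(2).normal(0.0, 2.0, T.size)
        (C, n), _ = curve_fit(power_law, T, dlam, p0=(1.0, 2.0))
        print(f"fitted exponent n = {n:.2f}")    # n near 2 suggests dirty s+/- pairing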

  3. Subcompartment localization of the side chain xyloglucan-synthesizing enzymes within Golgi stacks of tobacco suspension-cultured cells.

    PubMed

    Chevalier, Laurence; Bernard, Sophie; Ramdani, Yasmina; Lamour, Romain; Bardor, Muriel; Lerouge, Patrice; Follet-Gueye, Marie-Laure; Driouich, Azeddine

    2010-12-01

    Xyloglucan is the dominant hemicellulosic polysaccharide of the primary cell wall of dicotyledonous plants that plays a key role in plant development. It is well established that xyloglucan is assembled within Golgi stacks and transported in Golgi-derived vesicles to the cell wall. It is also known that the biosynthesis of xyloglucan requires the action of glycosyltransferases including α-1,6-xylosyltransferase, β-1,2-galactosyltransferase and α-1,2-fucosyltransferase activities responsible for the addition of xylose, galactose and fucose residues to the side chains. There is, however, a lack of knowledge on how these enzymes are distributed within subcompartments of Golgi stacks. We have undertaken a study aiming at mapping these glycosyltransferases within Golgi stacks using immunogold-electron microscopy. To this end, we generated transgenic lines of tobacco (Nicotiana tabacum) BY-2 suspension-cultured cells expressing either the α-1,6-xylosyltransferase, AtXT1, the β-1,2-galactosyltransferase, AtMUR3, or the α-1,2-fucosyltransferase AtFUT1 of Arabidopsis thaliana fused to green-fluorescent protein (GFP). Localization of the fusion proteins within the endomembrane system was assessed using confocal microscopy. Additionally, tobacco cells were high pressure-frozen/freeze-substituted and subjected to quantitative immunogold labelling using anti-GFP antibodies to determine the localization patterns of the enzymes within subtypes of Golgi cisternae. The data demonstrate that: (i) all fusion proteins, AtXT1-GFP, AtMUR3-GFP and AtFUT1-GFP are specifically targeted to the Golgi apparatus; and (ii) AtXT1-GFP is mainly located in the cis and medial cisternae, AtMUR3-GFP is predominantly associated with medial cisternae and AtFUT1-GFP mostly detected over trans cisternae suggesting that initiation of xyloglucan side chains occurs in early Golgi compartments in tobacco cells. The Plant Journal © 2010 Blackwell Publishing Ltd. No claim to original US government works.

  4. A microprocessor-based automation test system for the experiment of the multi-stage compressor

    NASA Astrophysics Data System (ADS)

    Zhang, Huisheng; Lin, Chongping

    1991-08-01

    An automated test system, controlled by a microprocessor and used in multistage compressor experiments, is described. Based on an analysis of the compressor experiment requirements, a complete hardware structure is set up, composed of an IBM PC/XT computer, a large-scale sampled-data system, a traversing mechanism with three directions of motion, scanners, digital instrumentation and several output devices. The structure of the real-time software system is described. Test results show that this system can measure many parameters at the blade rows and in the boundary layer under different operating states. The degree of automation and the accuracy of the experiment are increased, and the experimental cost is reduced.

  5. The growth of the UniTree mass storage system at the NASA Center for Computational Sciences: Some lessons learned

    NASA Technical Reports Server (NTRS)

    Tarshish, Adina; Salmon, Ellen

    1994-01-01

    In October 1992, the NASA Center for Computational Sciences made its Convex-based UniTree system generally available to users. The ensuing months saw growth in every area. Within 26 months, data under UniTree control grew from nil to over 12 terabytes, nearly all of it stored on robotically mounted tape. HiPPI/UltraNet was added to enhance connectivity, and later HiPPI/TCP was added as well. Disks and robotic tape silos were added to those already under UniTree's control, and 18-track tapes were upgraded to 36-track. The primary data source for UniTree, the facility's Cray Y-MP/4-128, first doubled its processing power and then was replaced altogether by a C98/6-256 with nearly two-and-a-half times the Y-MP's combined peak gigaflops. The Convex/UniTree software was upgraded from version 1.5 to 1.7.5, and then to 1.7.6. Finally, the server itself, a Convex C3240, was upgraded to a C3830 with a second I/O bay, doubling the C3240's memory and capacity for I/O. This paper describes insights gained and reinforced with the burgeoning demands on the UniTree storage system and the significant increases in performance gained from the many upgrades.

  6. Evaluation of a Constrained Facet Analysis Efficiency Model for Identifying the Efficiency of Medical Treatment Facilities in the Army Medical Department

    DTIC Science & Technology

    1990-07-31

    ...examples of their use are available with the PASS User Documentation Manual. The data structure of PASS requires a three-level organizational... files, and missing control variables. A specific problem noted involved the absence of an 8087 mathematical co-processor on the target IBM-XT machine... the system required an operational understanding of the advanced mathematical technique used in the model. Problems with the original release of the PASS...

  7. Forecasting Outcomes of Multilateral Negotiations: Computer Programs. Volume 2. Guide for Programmers.

    DTIC Science & Technology

    1976-10-01


  8. Geopotential Error Analysis from Satellite Gradiometer and Global Positioning System Observables on Parallel Architecture

    NASA Technical Reports Server (NTRS)

    Schutz, Bob E.; Baker, Gregory A.

    1997-01-01

    The recovery of a high resolution geopotential from satellite gradiometer observations motivates the examination of high performance computational techniques. The primary subject matter addresses specifically the use of satellite gradiometer and GPS observations to form and invert the normal matrix associated with a large degree and order geopotential solution. Memory resident and out-of-core parallel linear algebra techniques along with data parallel batch algorithms form the foundation of the least squares application structure. A secondary topic includes the adoption of object oriented programming techniques to enhance modularity and reusability of code. Applications implementing the parallel and object oriented methods successfully calculate the degree variance for a degree and order 110 geopotential solution on 32 processors of the Cray T3E. The memory resident gradiometer application exhibits an overall application performance of 5.4 Gflops, and the out-of-core linear solver exhibits an overall performance of 2.4 Gflops. The combination solution derived from a sun-synchronous gradiometer orbit produces average geoid height variances of 17 millimeters.
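
    The computational core described here, forming and inverting the normal matrix of a least-squares problem, looks like the following in miniature (dimensions are illustrative; the real application distributes the normal matrix across processors):

        import numpy as np

        rng = np.random.default_rng(3)
        A = rng.standard_normal((500, 60))    # design matrix: observations x coefficients
        y = rng.standard_normal(500)          # gradiometer/GPS observables (synthetic)

        N = A.T @ A                           # normal matrix
        coeffs = np.linalg.solve(N, A.T @ y)  # least-squares geopotential coefficients
        cov = np.linalg.inv(N)                # formal covariance for error analysis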

  9. Geopotential error analysis from satellite gradiometer and global positioning system observables on parallel architectures

    NASA Astrophysics Data System (ADS)

    Baker, Gregory Allen

    The recovery of a high resolution geopotential from satellite gradiometer observations motivates the examination of high performance computational techniques. The primary subject matter addresses specifically the use of satellite gradiometer and GPS observations to form and invert the normal matrix associated with a large degree and order geopotential solution. Memory resident and out-of-core parallel linear algebra techniques along with data parallel batch algorithms form the foundation of the least squares application structure. A secondary topic includes the adoption of object oriented programming techniques to enhance modularity and reusability of code. Applications implementing the parallel and object oriented methods successfully calculate the degree variance for a degree and order 110 geopotential solution on 32 processors of the Cray T3E. The memory resident gradiometer application exhibits an overall application performance of 5.4 Gflops, and the out-of-core linear solver exhibits an overall performance of 2.4 Gflops. The combination solution derived from a sun-synchronous gradiometer orbit produces average geoid height variances of 17 millimeters.

  10. Scheduling for Parallel Supercomputing: A Historical Perspective of Achievable Utilization

    NASA Technical Reports Server (NTRS)

    Jones, James Patton; Nitzberg, Bill

    1999-01-01

    The NAS facility has operated parallel supercomputers for the past 11 years, including the Intel iPSC/860, Intel Paragon, Thinking Machines CM-5, IBM SP-2, and Cray Origin 2000. Across this wide variety of machine architectures, across a span of 10 years, across a large number of different users, and through thousands of minor configuration and policy changes, the utilization of these machines shows three general trends: (1) scheduling using a naive FIFO first-fit policy results in 40-60% utilization, (2) switching to the more sophisticated dynamic backfilling scheduling algorithm improves utilization by about 15 percentage points (yielding about 70% utilization), and (3) reducing the maximum allowable job size further increases utilization. Most surprising is the consistency of these trends. Over the lifetime of the NAS parallel systems, we made hundreds, perhaps thousands, of small changes to hardware, software, and policy, yet, utilization was affected little. In particular these results show that the goal of achieving near 100% utilization while supporting a real parallel supercomputing workload is unrealistic.
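
    The backfilling gain described here comes from one admission rule: a waiting job may jump the queue only if it fits in the currently idle nodes and cannot delay the reserved start of the job at the head of the queue. A minimal sketch of that test (EASY-style backfill, as an illustration rather than the exact NAS policy):

        from collections import namedtuple

        Job = namedtuple("Job", "nodes walltime")

        def can_backfill(job, free_nodes, now, head_reserved_start):
            # Admit out of order only if the job fits in the idle nodes and
            # finishes before the head job's reserved start time.
            return (job.nodes <= free_nodes
                    and now + job.walltime <= head_reserved_start)

        print(can_backfill(Job(nodes=8, walltime=2.0), 16, 0.0, 3.0))  # True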

  11. 15. BUILDING 239. SECTIONS AND DETAILS OF DRYING ROOMS AND ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    15. BUILDING 239. SECTIONS AND DETAILS OF DRYING ROOMS AND MIXING ROOMS. March 6, 1941. - Frankford Arsenal, Building Nos. 239-239A, Southeast corner of Clay Street & Cray Road, Philadelphia, Philadelphia County, PA

  12. Late evolution of very low mass X-ray binaries sustained by radiation from their primaries

    NASA Technical Reports Server (NTRS)

    Ruderman, M.; Shaham, J.; Tavani, M.; Eichler, D.

    1989-01-01

    The accretion-powered radiation from the X-ray pulsar system Her X-1 (McCray et al. 1982) is studied. The changes in the soft X-ray and gamma-ray flux and in the accompanying electron-positron wind are discussed. These are believed to be associated with the inward movement of the inner edge of the accretion disk corresponding to the boundary with the neutron star's corotating magnetosphere (Alfven radius). LMXB evolution which is self-sustained by secondary winds intercepting the radiation emitted near an LMXB neutron star is investigated as well.

  13. A survey of outcome of adjustable suture as first operation in patients with strabismus

    PubMed Central

    Razmjoo, Hasan; Attarzadeh, Hosein; Karbasi, Najmeh; Najarzadegan, Mohammad Reza; Salam, Hasan; Jamshidi, Aliraza

    2014-01-01

    Background: Adjustable sutures have been used for years to improve the outcome of strabismus surgery. We surveyed the outcomes of our patients with strabismus who underwent adjustable suture surgery. Materials and Methods: This retrospective study was performed at the Ophthalmology Centre of Feiz Hospital in Isfahan on 95 participants who were candidates for adjustable suture strabismus surgery. Patients were divided into three age groups: under 10 years, 10-19 years, and 20 years and over. The outcome of adjustable suture surgery, judged by residual postoperative deviation, was divided into four groups: excellent, good, acceptable, and unacceptable. Results: Of the 95 patients studied, 51 (53.7%) were males and 44 (46.3%) were females. The mean deviation angles were 53.8 ± 17.9 PD (prism dioptres) in alt XT, 44.5 ± 12 PD in alt ET, 52 ± 13.5 PD in const ET, and 47.1 ± 13.1 PD in const XT. There was no significant difference between the groups (P = 0.051). Results of surgery were excellent in 38 patients (40%), good in 31 patients (32.6%), acceptable in 19 patients (20%), and unacceptable in 7 patients (7.4%). Seven patients (7.4%) required reoperation. Conclusions: In the present study, the frequency of reoperation was much lower than in other similar studies (7.4% vs. 30-50%). This suggests that the adjustable technique used in our study may be associated with less reoperation than the adjustable techniques used in other similar studies. PMID:25250293

  14. A survey of outcome of adjustable suture as first operation in patients with strabismus.

    PubMed

    Razmjoo, Hasan; Attarzadeh, Hosein; Karbasi, Najmeh; Najarzadegan, Mohammad Reza; Salam, Hasan; Jamshidi, Aliraza

    2014-01-01

    Adjustable sutures have been used for years to improve the outcome of strabismus surgery. We surveyed the outcomes of our patients with strabismus who underwent adjustable suture surgery. This retrospective study was performed at the Ophthalmology Centre of Feiz Hospital in Isfahan on 95 participants who were candidates for adjustable suture strabismus surgery. Patients were divided into three age groups: under 10 years, 10-19 years, and 20 years and over. The outcome of adjustable suture surgery, judged by residual postoperative deviation, was divided into four groups: excellent, good, acceptable, and unacceptable. Of the 95 patients studied, 51 (53.7%) were males and 44 (46.3%) were females. The mean deviation angles were 53.8 ± 17.9 PD (prism dioptres) in alt XT, 44.5 ± 12 PD in alt ET, 52 ± 13.5 PD in const ET, and 47.1 ± 13.1 PD in const XT. There was no significant difference between the groups (P = 0.051). Results of surgery were excellent in 38 patients (40%), good in 31 patients (32.6%), acceptable in 19 patients (20%), and unacceptable in 7 patients (7.4%). Seven patients (7.4%) required reoperation. In the present study, the frequency of reoperation was much lower than in other similar studies (7.4% vs. 30-50%). This suggests that the adjustable technique used in our study may be associated with less reoperation than the adjustable techniques used in other similar studies.

  15. Surface Roughness of Composite Resins after Simulated Toothbrushing with Different Dentifrices.

    PubMed

    Monteiro, Bruna; Spohr, Ana Maria

    2015-07-01

    The aim of the study was to evaluate, in vitro, the surface roughness of two composite resins submitted to simulated toothbrushing with three different dentifrices. In total, 36 samples of Z350 XT and 36 samples of Empress Direct were built and randomly divided into three groups (n = 12) according to the dentifrice used (Oral-B Pro-Health Whitening [OBW], Colgate Sensitive Pro-Relief [CS], Colgate Total Clean Mint 12 [CT12]). The samples were submitted to 5,000, 10,000 or 20,000 cycles of simulated toothbrushing. After each simulated period, the surface roughness of the samples was measured using a roughness tester. According to three-way analysis of variance, dentifrice (P = 0.044) and brushing time (P = 0.000) were significant. The composite resin was not significant (P = 0.381) and the interaction among the factors was not significant (P > 0.05). Mean surface roughness values (µm) followed by the same letter represent no statistical difference by Tukey's post-hoc test (P < 0.05). Dentifrice: CT12 = 0.269(a); CS = 0.300(ab); OBW = 0.390(b). Brushing time: baseline = 0.046(a); 5,000 cycles = 0.297(b); 10,000 cycles = 0.354(b); 20,000 cycles = 0.584(c). Z350 XT and Empress Direct presented similar surface roughness after all cycles of simulated toothbrushing. The longer the brushing time, the higher the surface roughness of the composite resins. The dentifrice OBW caused higher surface roughness in both composite resins.

  16. Surface roughness of novel resin composites polished with one-step systems.

    PubMed

    Ergücü, Z; Türkün, L S

    2007-01-01

    This study: 1) analyzed the surface roughness of five novel resin composites that contain nanoparticles after polishing with three different one-step systems and 2) evaluated the effectiveness of these polishers and their possible surface damage using scanning electron microscope (SEM) analysis. The resin composites evaluated in this study include CeramX, Filtek Supreme XT, Grandio, Premise and Tetric EvoCeram. A total of 100 discs (20/resin composites, 10 x 2 mm) were fabricated. Five specimens/resin composites cured under Mylar strips served as the control. The samples were polished for 30 seconds with PoGo, OptraPol and One Gloss discs at 15,000 rpm using a slow speed handpiece. The surfaces were tested for roughness (Ra) with a surface roughness tester and examined with SEM. One-way ANOVA was used for statistical analysis (p = 0.05). For all the composites tested, differences between the polishing systems were found to be significant (p < 0.05). For Filtek Supreme XT, Mylar and PoGo created equally smooth surfaces, while significantly rougher surfaces were obtained after OptraPol and One Gloss applications. For Grandio, Mylar and PoGo created equally smooth surfaces, while OptraPol and One Gloss produced equally rougher surfaces. Tetric EvoCeram exhibited the roughest surface with OptraPol, while no significant differences were found between Premise and CeramX. According to SEM images, OptraPol and One Gloss scratched and plucked the particles away from the surface, while PoGo created a uniform finish, although the roughness values were not the same for each composite. Effectiveness of the polishers seems to be material dependent.
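
    The Ra values these roughness studies report are the arithmetic-mean roughness: the mean absolute deviation of the measured profile from its centerline. A minimal sketch of that computation on made-up profile heights:

        import numpy as np

        def ra(profile_um):
            # Arithmetic-mean roughness Ra: mean |deviation| from the centerline.
            z = np.asarray(profile_um, dtype=float)
            return float(np.mean(np.abs(z - z.mean())))

        print(ra([0.10, -0.20, 0.30, -0.10, 0.05]))   # heights in micrometres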

  17. Benchmark tests on the digital equipment corporation Alpha AXP 21164-based AlphaServer 8400, including a comparison of optimized vector and superscalar processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wasserman, H.J.

    1996-02-01

    The second generation of the Digital Equipment Corp. (DEC) DECchip Alpha AXP microprocessor is referred to as the 21164. From the viewpoint of numerically-intensive computing, the primary difference between it and its predecessor, the 21064, is that the 21164 has twice the multiply/add throughput per clock period (CP): a maximum of two floating point operations (FLOPS) per CP vs. one for the 21064. The AlphaServer 8400 is a shared-memory multiprocessor server system that can accommodate up to 12 CPUs and up to 14 GB of memory. In this report we will compare single processor performance of the 8400 system with that of the International Business Machines Corp. (IBM) RISC System/6000 POWER-2 microprocessor running at 66 MHz, the Silicon Graphics, Inc. (SGI) MIPS R8000 microprocessor running at 75 MHz, and the Cray Research, Inc. CRAY J90. The performance comparison is based on a set of Fortran benchmark codes that represent a portion of the Los Alamos National Laboratory supercomputer workload. The advantage of using these codes is that they span a wide range of computational characteristics, such as vectorizability, problem size, and memory access pattern. The primary disadvantage of using them is that detailed, quantitative analysis of performance behavior of all codes on all machines is difficult. One important addition to the benchmark set appears for the first time in this report. Whereas the older version was written for a vector processor, the newer version is more optimized for microprocessor architectures. Therefore we have, for the first time, an opportunity to measure performance on a single application using implementations that expose the respective strengths of vector and superscalar architecture. All results in this report are from single processors. A subsequent article will explore shared-memory multiprocessing performance of the 8400 system.
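
    The stated difference, two FLOPS per clock period for the 21164 versus one for the 21064, fixes the peak rate as a one-line product; the 300 MHz clock below is an assumed figure for illustration, not one taken from the report.

        def peak_mflops(flops_per_cp: int, clock_mhz: float) -> float:
            # Peak rate = floating point operations per clock period x clock frequency.
            return flops_per_cp * clock_mhz

        print(peak_mflops(2, 300.0))   # 21164: 600 MFLOPS peak at an assumed 300 MHz
        print(peak_mflops(1, 300.0))   # 21064: 300 MFLOPS peak at the same clock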

  18. Upper Bounds on the Expected Value of a Convex Function Using Gradient and Conjugate Function Information.

    DTIC Science & Technology

    1987-08-01

    ...of the absolute difference between the random variable and its mean. Gassmann and Ziemba [1986] provide a weaker bound that does not require... Gassmann and Ziemba [1986] extend an idea... obtained as the solution of the following linear program (see Gassmann and Ziemba [1986], Theorem 1)...

  19. Forecasting and Related Problems in China

    DTIC Science & Technology

    1944-12-01

    ...troughs, their effect, or lack of effect, is not thereby decreased. The existence of such troughs has led to much discussion of... all frontal systems, surface or upper, in India. In view of the recent trend to reclassify the so-called western disturbances into...

  20. Computer Program for Calculation of Separated Turbulent Flows on Axisymmetric Afterbodies including Exhaust Plume Effects

    DTIC Science & Technology

    1979-03-01

    ...automatically extended to match the inviscid grid. Recoverable fragments of the input-variable list: XT, DXP, HLIM, CFCI, DELTA1, DELSTI, UEI, DUEDX, NR, XRP, RL; axial location of...; boundary-layer-edge velocity gradient at the initial boundary-layer station; integer number of values of XRP and RL to be input for body shape. If NSHPBL = 0, this... If LSHPBL = 0 and LPROG = 0, skip items 20 and 21.
