Sample records for chip multiprocessor reliability

  1. Design and implementation of a modulator-based free-space optical backplane for multiprocessor applications.

    PubMed

    Kirk, Andrew G; Plant, David V; Szymanski, Ted H; Vranesic, Zvonko G; Tooley, Frank A P; Rolston, David R; Ayliffe, Michael H; Lacroix, Frederic K; Robertson, Brian; Bernier, Eric; Brosseau, Daniel F

    2003-05-10

    Design and implementation of a free-space optical backplane for multiprocessor applications is presented. The system is designed to interconnect four multiprocessor nodes that communicate by using multiplexed 32-bit packets. Each multiprocessor node is electrically connected to an optoelectronic VLSI chip which implements the hyperplane interconnection architecture. The chips each contain 256 optical transmitters (implemented as dual-rail multiple quantum-well modulators) and 256 optical receivers. A rigid free-space microoptical interconnection system that interconnects the transceiver chips in a 512-channel unidirectional ring is implemented. Full design, implementation, and operational details are provided.

  2. Design and implementation of a modulator-based free-space optical backplane for multiprocessor applications

    NASA Astrophysics Data System (ADS)

    Kirk, Andrew G.; Plant, David V.; Szymanski, Ted H.; Vranesic, Zvonko G.; Tooley, Frank A. P.; Rolston, David R.; Ayliffe, Michael H.; Lacroix, Frederic K.; Robertson, Brian; Bernier, Eric; Brosseau, Daniel F.

    2003-05-01

    Design and implementation of a free-space optical backplane for multiprocessor applications is presented. The system is designed to interconnect four multiprocessor nodes that communicate by using multiplexed 32-bit packets. Each multiprocessor node is electrically connected to an optoelectronic VLSI chip which implements the hyperplane interconnection architecture. The chips each contain 256 optical transmitters (implemented as dual-rail multiple quantum-well modulators) and 256 optical receivers. A rigid free-space microoptical interconnection system that interconnects the transceiver chips in a 512-channel unidirectional ring is implemented. Full design, implementation, and operational details are provided.

  3. Optical RAM-enabled cache memory and optical routing for chip multiprocessors: technologies and architectures

    NASA Astrophysics Data System (ADS)

    Pleros, Nikos; Maniotis, Pavlos; Alexoudi, Theonitsa; Fitsios, Dimitris; Vagionas, Christos; Papaioannou, Sotiris; Vyrsokinos, K.; Kanellos, George T.

    2014-03-01

    The processor-memory performance gap, commonly referred to as "Memory Wall" problem, owes to the speed mismatch between processor and electronic RAM clock frequencies, forcing current Chip Multiprocessor (CMP) configurations to consume more than 50% of the chip real-estate for caching purposes. In this article, we present our recent work spanning from Si-based integrated optical RAM cell architectures up to complete optical cache memory architectures for Chip Multiprocessor configurations. Moreover, we discuss on e/o router subsystems with up to Tb/s routing capacity for cache interconnection purposes within CMP configurations, currently pursued within the FP7 PhoxTrot project.

  4. Analysis of Photonic Networks for a Chip Multiprocessor Using Scientific Applications

    DTIC Science & Technology

    2009-05-01

    Analysis of Photonic Networks for a Chip Multiprocessor Using Scientific Applications. Gilbert Hendry, Shoaib Kamil, Aleksandr Biberman, Johnnie... electronic networks-on-chip warrants investigating real application traces on functionally comparable photonic and electronic network designs. We... network can achieve 75x improvement in energy efficiency for synthetic benchmarks and up to 37x improvement for real scientific applications

  5. Realtime multiprocessor for mobile ad hoc networks

    NASA Astrophysics Data System (ADS)

    Jungeblut, T.; Grünewald, M.; Porrmann, M.; Rückert, U.

    2008-05-01

    This paper introduces a real-time Multiprocessor System-on-Chip (MPSoC) for low-power wireless applications. The multiprocessor is based on eight 32-bit RISC processors that are connected via a Network-on-Chip (NoC). The NoC follows a novel approach that guarantees bandwidth to the application and thereby meets hard real-time requirements. At a clock frequency of 100 MHz, the total power consumption of the MPSoC, which has been fabricated in 180 nm UMC standard cell technology, is 772 mW.

  6. An Adaptive Insertion and Promotion Policy for Partitioned Shared Caches

    NASA Astrophysics Data System (ADS)

    Mahrom, Norfadila; Liebelt, Michael; Raof, Rafikha Aliana A.; Daud, Shuhaizar; Hafizah Ghazali, Nur

    2018-03-01

    Cache replacement policies in chip multiprocessors (CMP) have been investigated extensively and proven able to enhance shared cache management. However, competition among multiple processors executing different threads that require simultaneous access to a shared memory may cause cache contention and memory coherence problems on the chip. These issues also arise from drawbacks of the commonly used Least Recently Used (LRU) policy employed in multiprocessor systems, which allows cache lines to reside in the cache longer than required. In image processing analysis, for example of extrapulmonary tuberculosis (TB), an accurate diagnosis of tissue specimens is required; a fast and reliable shared memory management system is therefore needed to execute algorithms that process vast amounts of specimen images. In this paper, the effects of the cache replacement policy in a partitioned shared cache are investigated. The goal is to quantify whether better performance can be achieved by using less complex replacement strategies. This paper proposes a Middle Insertion 2 Positions Promotion (MI2PP) policy to eliminate cache misses that could adversely affect the access patterns and the throughput of the processors in the system. The policy employs a static predefined insertion point, near-distance promotion, and the concept of ownership in the eviction policy to effectively reduce cache thrashing and to avoid resource stealing among the processors.
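    To make the insertion/promotion idea concrete, the sketch below shows a single cache set managed as a recency stack with a fixed mid-stack insertion point and a small promotion step on each hit. It is only an illustration of the general mechanism; the specific constants and the ownership-based eviction rule of the actual MI2PP policy are not reproduced here.

    ```c
    /* Illustrative sketch (not the authors' MI2PP implementation): one cache
     * set kept as a recency stack, with a fixed mid-stack insertion point and
     * promotion by a fixed step on each hit.  Position 0 is MRU, position
     * WAYS-1 is the eviction victim. */
    #include <stdio.h>
    #include <string.h>

    #define WAYS          8
    #define INSERT_POS    4   /* hypothetical "middle" insertion point   */
    #define PROMOTE_STEP  2   /* hypothetical promotion distance per hit */

    static long stack[WAYS];  /* stack[i] = tag at recency position i */
    static int  used = 0;

    static void move_to(int from, int to)   /* shift-based move within stack */
    {
        long tag = stack[from];
        if (from > to)
            memmove(&stack[to + 1], &stack[to], (from - to) * sizeof(long));
        else
            memmove(&stack[from], &stack[from + 1], (to - from) * sizeof(long));
        stack[to] = tag;
    }

    /* Access one tag; returns 1 on hit, 0 on miss. */
    static int access(long tag)
    {
        for (int i = 0; i < used; i++) {
            if (stack[i] == tag) {              /* hit: promote a few positions */
                int to = i - PROMOTE_STEP;
                move_to(i, to < 0 ? 0 : to);
                return 1;
            }
        }
        /* miss: the entry at the LRU end is overwritten when the set is full */
        if (used < WAYS) used++;
        int pos = INSERT_POS < used - 1 ? INSERT_POS : used - 1;
        memmove(&stack[pos + 1], &stack[pos], (used - 1 - pos) * sizeof(long));
        stack[pos] = tag;                       /* insert at the middle */
        return 0;
    }

    int main(void)
    {
        long trace[] = {1, 2, 3, 4, 5, 1, 6, 7, 8, 9, 1, 2};
        int hits = 0, n = (int)(sizeof trace / sizeof trace[0]);
        for (int i = 0; i < n; i++) hits += access(trace[i]);
        printf("%d hits out of %d accesses\n", hits, n);
        return 0;
    }
    ```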

  7. Testing and operating a multiprocessor chip with processor redundancy

    DOEpatents

    Bellofatto, Ralph E; Douskey, Steven M; Haring, Rudolf A; McManus, Moyra K; Ohmacht, Martin; Schmunkamp, Dietmar; Sugavanam, Krishnan; Weatherford, Bryan J

    2014-10-21

    A system and method for improving the yield rate of a multiprocessor semiconductor chip that includes primary processor cores and one or more redundant processor cores. A first tester conducts a first test on one or more processor cores, and encodes results of the first test in an on-chip non-volatile memory. A second tester conducts a second test on the processor cores, and encodes results of the second test in an external non-volatile storage device. An override bit of a multiplexer is set if a processor core fails the second test. In response to the override bit, the multiplexer selects a physical-to-logical mapping of processor IDs according to one of: the encoded results in the memory device or the encoded results in the external storage device. On-chip logic configures the processor cores according to the selected physical-to-logical mapping.
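    The remapping mechanism sketched below illustrates how encoded test results and an override bit could select a physical-to-logical core map that skips failed cores. All structures, core counts, and bitmaps are hypothetical stand-ins, not details taken from the patent.

    ```c
    /* Illustrative sketch of physical-to-logical core ID remapping around
     * failed cores.  The fuse encodings, core counts, and multiplexer of the
     * actual chip are not shown; everything here is assumed for illustration. */
    #include <stdio.h>

    #define N_CORES 18   /* e.g., 16 logical cores plus redundant spares (assumed) */

    /* map[i] = physical core assigned to logical ID i, or -1 if unused */
    typedef struct { int map[N_CORES]; } core_map_t;

    /* Build a mapping that skips cores marked bad in a test-result bitmap. */
    static core_map_t build_map(unsigned bad_mask, int n_logical)
    {
        core_map_t m;
        int phys = 0;
        for (int log = 0; log < n_logical; log++) {
            while (phys < N_CORES && ((bad_mask >> phys) & 1u))
                phys++;                        /* skip failed physical cores */
            m.map[log] = (phys < N_CORES) ? phys++ : -1;
        }
        for (int log = n_logical; log < N_CORES; log++)
            m.map[log] = -1;
        return m;
    }

    int main(void)
    {
        unsigned first_test_bad  = 0x00004;    /* on-chip result: core 2 bad      */
        unsigned second_test_bad = 0x00014;    /* external result: cores 2 and 4  */
        int override_bit = 1;                  /* set because the second test
                                                  found an additional failing core */

        unsigned chosen = override_bit ? second_test_bad : first_test_bad;
        core_map_t m = build_map(chosen, 16);

        for (int log = 0; log < 16; log++)
            printf("logical %2d -> physical %2d\n", log, m.map[log]);
        return 0;
    }
    ```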

  8. Power-Aware Compiler Controllable Chip Multiprocessor

    NASA Astrophysics Data System (ADS)

    Shikano, Hiroaki; Shirako, Jun; Wada, Yasutaka; Kimura, Keiji; Kasahara, Hironori

    A power-aware compiler controllable chip multiprocessor (CMP) is presented and its performance and power consumption are evaluated with the optimally scheduled advanced multiprocessor (OSCAR) parallelizing compiler. The CMP is equipped with power control registers that change clock frequency and power supply voltage to functional units including processor cores, memories, and an interconnection network. The OSCAR compiler carries out coarse-grain task parallelization of programs and reduces power consumption using architectural power control support and the compiler's power saving scheme. The performance evaluation shows that MPEG-2 encoding on the proposed CMP with four CPUs results in 82.6% power reduction in real-time execution mode with a deadline constraint on its sequential execution time. Furthermore, MP3 encoding on a heterogeneous CMP with four CPUs and four accelerators results in 53.9% power reduction at 21.1-fold speed-up in performance against its sequential execution in the fastest execution mode.
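    For context, the usual CMOS dynamic-power relation below shows why scaling frequency and supply voltage together yields large savings; it is a textbook illustration, not the power model used by the OSCAR compiler.

    ```latex
    % Dynamic power of CMOS logic: activity factor \alpha, switched capacitance C,
    % supply voltage V_{dd}, clock frequency f.
    P_{\mathrm{dyn}} = \alpha\, C\, V_{dd}^{2}\, f,
    \qquad
    \frac{P'}{P} = \left(\frac{V'_{dd}}{V_{dd}}\right)^{2}\frac{f'}{f}
    % e.g. halving both voltage and frequency leaves (1/2)^2 (1/2) = 1/8 of the
    % original dynamic power, at the cost of longer execution time.
    ```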

  9. Study of Thread Level Parallelism in a Video Encoding Application for Chip Multiprocessor Design

    NASA Astrophysics Data System (ADS)

    Debes, Eric; Kaine, Greg

    2002-11-01

    In media applications there is a high level of available thread level parallelism (TLP). In this paper we study the intra TLP in a video encoder. We show that a well-distributed, highly optimized encoder running on a symmetric multiprocessor (SMP) system can run 3.2 times faster on a 4-way SMP machine than on a single processor. The multithreaded encoder running on an SMP system is then used to understand the requirements of a chip multiprocessor (CMP) architecture, which is one possible architectural direction to better exploit TLP. In the framework of this study, we use a software approach to evaluate the dataflow between processors for the video encoder running on an SMP system. An estimation of the dataflow is done with L2 cache miss event counters using the Intel VTune performance analyzer. The experimental measurements are compared to theoretical results.
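    As a back-of-envelope reading of the reported numbers (not the authors' own analysis), the 3.2-fold speedup on a 4-way SMP corresponds to the following efficiency and, via Amdahl's law, an implied parallel fraction:

    ```latex
    % Parallel efficiency of the reported result:
    E = \frac{S}{N} = \frac{3.2}{4} = 0.80
    % Inverting Amdahl's law  S = 1 / ((1-p) + p/N)  for the parallel fraction p:
    \frac{1}{3.2} = (1 - p) + \frac{p}{4}
    \;\;\Rightarrow\;\;
    p = \frac{1 - 1/3.2}{1 - 1/4} \approx 0.92
    ```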

  10. Reproducibility in a multiprocessor system

    DOEpatents

    Bellofatto, Ralph A; Chen, Dong; Coteus, Paul W; Eisley, Noel A; Gara, Alan; Gooding, Thomas M; Haring, Rudolf A; Heidelberger, Philip; Kopcsay, Gerard V; Liebsch, Thomas A; Ohmacht, Martin; Reed, Don D; Senger, Robert M; Steinmacher-Burow, Burkhard; Sugawara, Yutaka

    2013-11-26

    Fixing a problem is usually greatly aided if the problem is reproducible. To ensure reproducibility of a multiprocessor system, the following aspects are proposed: a deterministic system start state, a single system clock, phase alignment of clocks in the system, system-wide synchronization events, reproducible execution of system components, deterministic chip interfaces, zero-impact communication with the system, precise stop of the system, and a scan of the system state.

  11. A Low-Cost and Energy-Efficient Multiprocessor System-on-Chip for UWB MAC Layer

    NASA Astrophysics Data System (ADS)

    Xiao, Hao; Isshiki, Tsuyoshi; Khan, Arif Ullah; Li, Dongju; Kunieda, Hiroaki; Nakase, Yuko; Kimura, Sadahiro

    Ultra-wideband (UWB) technology has attracted much attention recently due to its high data rate and low emission power. Its media access control (MAC) protocol, WiMedia MAC, promises many facilities for high-speed and high-quality wireless communication. However, these benefits in turn involve a large computational load, which challenges the traditional uniprocessor-based implementation method to provide the required performance. On the other hand, the constrained cost and power budget make commercial multiprocessor solutions unrealistic. In this paper, a low-cost and energy-efficient multiprocessor system-on-chip (MPSoC), which tackles at once the aspects of system design, software migration and hardware architecture, is presented for the implementation of the UWB MAC layer. Experimental results show that the proposed MPSoC, based on four simple RISC processors and a shared-memory infrastructure, achieves up to 45% performance improvement and 65% power saving, while taking 15% less area than the uniprocessor implementation.

  12. Clocking and Synchronization Circuits in Multiprocessor Systems

    DTIC Science & Technology

    1989-04-01

    3.4 Inter-chip Clocking Strategies... may occur when two or more of the switches make transitions at different times during the interval during which those inputs are being processed... increased without any fruitful computation. The sources of the inter-chip clock skew are the electromagnetic propagation delay, the buffer delay within

  13. Fault tree models for fault tolerant hypercube multiprocessors

    NASA Technical Reports Server (NTRS)

    Boyd, Mark A.; Tuazon, Jezus O.

    1991-01-01

    Three candidate fault tolerant hypercube architectures are modeled, their reliability analyses are compared, and the resulting implications of these methods of incorporating fault tolerance into hypercube multiprocessors are discussed. In the course of performing the reliability analyses, the use of HARP and fault trees in modeling sequence dependent system behaviors is demonstrated.

  14. Controlled replication: reduce the capacity occupied by redundant replicas in tiled chip multiprocessors

    NASA Astrophysics Data System (ADS)

    Li, Hao; Xie, Lunguo

    2013-03-01

    The design of the cache system for a Chip Multiprocessor (CMP) faces many challenges because future CMPs will have more cores and greater on-chip cache capacity. There are two baseline design schemes for the L2 cache: the private scheme, in which each L2 slice is treated as a private L2 cache, and the shared scheme, in which all L2 slices are treated as a large L2 cache shared by all cores. Private caches provide the lowest hit latency but reduce the total effective cache capacity. A shared L2 cache increases the effective cache capacity but has long hit latencies when data is on a remote tile. This paper presents a new Controlled Replication (CR) policy to reduce the capacity occupied by redundant shared replicas. The new CR policy provides greater effective capacity than the victim replication scheme and lower hit latency than the shared scheme. We evaluate the various schemes using full-system simulation of parallel applications. Results show that CR reduces the average memory access latency of the shared scheme by an average of 13%, providing better overall performance than the victim replication and shared schemes.
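    A minimal sketch of a replication-control decision is given below: a remotely homed line is replicated into the requester's local L2 slice only after it shows enough remote reuse. The counter, threshold, and structure are assumptions for illustration; the paper's actual Controlled Replication policy is not reproduced here.

    ```c
    /* Illustrative sketch of a replication-control decision for a tiled CMP L2:
     * replicate a remotely homed line into the local slice only after it has
     * shown enough remote reuse.  Threshold and counters are hypothetical. */
    #include <stdbool.h>
    #include <stdint.h>

    #define REUSE_THRESHOLD 2          /* assumed tuning parameter */

    typedef struct {
        uint64_t tag;
        uint8_t  remote_reuse;         /* saturating counter kept at the home tile */
        bool     replicated;           /* already copied into requester's slice?   */
    } l2_line_t;

    /* Called on an L2 hit served by a remote home tile.  Returns true when the
     * requesting tile should keep a local replica of the line. */
    bool should_replicate(l2_line_t *line)
    {
        if (line->replicated)
            return true;                       /* replica already exists      */
        if (line->remote_reuse < UINT8_MAX)
            line->remote_reuse++;              /* record another remote reuse */
        if (line->remote_reuse >= REUSE_THRESHOLD) {
            line->replicated = true;           /* enough reuse: allow replica */
            return true;
        }
        return false;                          /* keep single copy at home    */
    }
    ```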

  15. A Multiprocessor SoC Architecture with Efficient Communication Infrastructure and Advanced Compiler Support for Easy Application Development

    NASA Astrophysics Data System (ADS)

    Urfianto, Mohammad Zalfany; Isshiki, Tsuyoshi; Khan, Arif Ullah; Li, Dongju; Kunieda, Hiroaki

    This paper presents a Multiprocessor System-on-Chip (MPSoC) architecture used as an execution platform for the new C-language-based MPSoC design framework we are currently developing. The MPSoC architecture is based on an existing SoC platform with a commercial RISC core acting as the host CPU. We extend the existing SoC with a multiprocessor-array block that is used as the main engine to run parallel applications modeled in our design framework. Utilizing several optimizations provided by our compiler, efficient inter-communication between processing elements with minimum overhead is implemented. A host interface is designed to integrate the existing RISC core with the multiprocessor array. The experimental results show that an efficacious integration is achieved, proving that the designed communication module can be used to efficiently incorporate off-the-shelf processors as processing elements for MPSoC architectures designed using our framework.

  16. The Unification of Space Qualified Integrated Circuits by Example of International Space Project GAMMA-400

    NASA Astrophysics Data System (ADS)

    Bobkov, S. G.; Serdin, O. V.; Arkhangelskiy, A. I.; Arkhangelskaja, I. V.; Suchkov, S. I.; Topchiev, N. P.

    The problem of electronic component unification at different levels (circuits, interfaces, hardware and software) used in the space industry is considered. The development of computer systems for space applications is discussed using the example of the scientific data acquisition system for the space project GAMMA-400. The basic characteristics of the highly reliable and fault-tolerant chips developed by SRISA RAS for space-applicable computational systems are given. To reduce power consumption and enhance data reliability, the embedded system interconnect is made hierarchical: the upper level is Serial RapidIO 1x or 4x with a transfer rate of 1.25 Gbaud; the next level is SpaceWire with transfer rates up to 400 Mbaud; and the lower level is MIL-STD-1553B and RS232/RS485. Ethernet 10/100 serves as a technology interface and also provides connection to previously released modules. This system interconnection allows the creation of different redundancy schemes. Designers can develop heterogeneous systems that employ the peer-to-peer networking performance of Serial RapidIO using multiprocessor clusters interconnected by SpaceWire.
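    The interconnect hierarchy described above can be captured as a simple configuration table; the sketch below only restates the levels and rates from the abstract, with field names and layout of our own choosing.

    ```c
    /* Interconnect hierarchy from the abstract, captured as a small
     * configuration table (structure and field names are ours, for illustration). */
    #include <stdio.h>

    typedef struct {
        const char *level;
        const char *technology;
        const char *rate;
    } link_level_t;

    static const link_level_t gamma400_links[] = {
        { "upper",      "Serial RapidIO 1x/4x",        "1.25 Gbaud"      },
        { "next",       "SpaceWire",                   "up to 400 Mbaud" },
        { "lower",      "MIL-STD-1553B, RS232/RS485",  "-"               },
        { "technology", "Ethernet 10/100",             "10/100 Mbit/s"   },
    };

    int main(void)
    {
        for (unsigned i = 0; i < sizeof gamma400_links / sizeof *gamma400_links; i++)
            printf("%-12s %-28s %s\n", gamma400_links[i].level,
                   gamma400_links[i].technology, gamma400_links[i].rate);
        return 0;
    }
    ```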

  17. Validation of fault-free behavior of a reliable multiprocessor system - FTMP: A case study. [Fault-Tolerant Multi-Processor avionics

    NASA Technical Reports Server (NTRS)

    Clune, E.; Segall, Z.; Siewiorek, D.

    1984-01-01

    A program of experiments has been conducted at NASA-Langley to test the fault-free performance of a Fault-Tolerant Multiprocessor (FTMP) avionics system for next-generation aircraft. Baseline measurements of an operating FTMP system were obtained with respect to the following parameters: instruction execution time, frame size, and the variation of clock ticks. The mechanisms of frame stretching were also investigated. The experimental results are summarized in a table. Areas of interest for future tests are identified, with emphasis given to the implementation of a synthetic workload generation mechanism on FTMP.

  18. Low-power, transparent optical network interface for high bandwidth off-chip interconnects.

    PubMed

    Liboiron-Ladouceur, Odile; Wang, Howard; Garg, Ajay S; Bergman, Keren

    2009-04-13

    The recent emergence of multicore architectures and chip multiprocessors (CMPs) has accelerated the bandwidth requirements in high-performance processors for both on-chip and off-chip interconnects. For next generation computing clusters, the delivery of scalable power efficient off-chip communications to each compute node has emerged as a key bottleneck to realizing the full computational performance of these systems. The power dissipation is dominated by the off-chip interface and the necessity to drive high-speed signals over long distances. We present a scalable photonic network interface approach that fully exploits the bandwidth capacity offered by optical interconnects while offering significant power savings over traditional E/O and O/E approaches. The power-efficient interface optically aggregates electronic serial data streams into a multiple WDM channel packet structure at time-of-flight latencies. We demonstrate a scalable optical network interface with 70% improvement in power efficiency for a complete end-to-end PCI Express data transfer.

  19. C-MOS array design techniques: SUMC multiprocessor system study

    NASA Technical Reports Server (NTRS)

    Clapp, W. A.; Helbig, W. A.; Merriam, A. S.

    1972-01-01

    The current capabilities of LSI techniques for speed and reliability, plus the possibilities of assembling large configurations of LSI logic and storage elements, have demanded the study of multiprocessors and multiprocessing techniques, problems, and potentialities. Evaluated are three previous systems studies for a space ultrareliable modular computer multiprocessing system, and a new multiprocessing system is proposed that is flexibly configured with up to four central processors, four I/O processors, and 16 main memory units, plus auxiliary memory and peripheral devices. This multiprocessor system features a multilevel interrupt, qualified S/360 compatibility for ground-based generation of programs, virtual memory management of a storage hierarchy through I/O processors, and multiport access to multiple and shared memory units.

  20. [A novel biologic electricity signal measurement based on neuron chip].

    PubMed

    Lei, Yinsheng; Wang, Mingshi; Sun, Tongjing; Zhu, Qiang; Qin, Ran

    2006-06-01

    The Neuron chip is a multiprocessor with three pipelined CPUs; its communication protocol and control processor are integrated to carry out the functions of communication, control, scheduling, I/O, etc. A novel biologic electronic signal measurement network system is composed of intelligent measurement nodes with a Neuron chip at the core. In this study, electronic signals such as ECG, EEG, EMG and BOS are synthetically measured by these intelligent nodes, and some valuable diagnostic messages are found. The wavelet transform is employed in this system to analyze the various biologic electronic signals, owing to its strong time-frequency ability to decompose local signal characteristics, and a good effect is obtained. This paper introduces the hardware structure of the network and of the intelligent measurement node, the measurement theory, and the signals obtained from data acquisition and processing.

  1. Comparisons between Intel 386 and i486 microprocessors

    NASA Technical Reports Server (NTRS)

    Liu, Yuan-Kwei

    1989-01-01

    A quick and preliminary comparison is made between the Intel 386 and i486 microprocessors. The following topics are discussed: the i486 key elements, comparison of instruction set architecture, the i486 on-chip cache characteristics, the i486 multiprocessor support, comparison of performance, comparison of power consumption, comparison of radiation hardening potential, and recommendations for the Space Station Freedom (SSF) Data Management System (DMS).

  2. High-performance, scalable optical network-on-chip architectures

    NASA Astrophysics Data System (ADS)

    Tan, Xianfang

    The rapid advance of technology enables a large number of processing cores to be integrated into a single chip, which is called a Chip Multiprocessor (CMP) or a Multiprocessor System-on-Chip (MPSoC) design. The on-chip interconnection network, which is the communication infrastructure for these processing cores, plays a central role in a many-core system. With the continuously increasing complexity of many-core systems, traditional metallic wired electronic networks-on-chip (NoC) become a bottleneck because of the unbearable latency in data transmission and extremely high energy consumption on chip. Optical networks-on-chip (ONoC) have been proposed as a promising alternative paradigm to electronic NoC, with the benefits of optical signaling such as extremely high bandwidth, negligible latency, and low power consumption. This dissertation focuses on the design of high-performance and scalable ONoC architectures, and its contributions are highlighted as follows: 1. A micro-ring resonator (MRR)-based Generic Wavelength-routed Optical Router (GWOR) is proposed, together with a method for developing a GWOR of any size. GWOR is a scalable non-blocking ONoC architecture with a simple structure, low cost and high power efficiency compared to existing ONoC designs. 2. To expand the bandwidth and improve the fault tolerance of the GWOR, a redundant GWOR architecture is designed by cascading different types of GWORs into one network. 3. A redundant GWOR built with MRR-based comb switches is proposed. Comb switches can expand the bandwidth while keeping the topology of the GWOR unchanged by replacing the general MRRs with comb switches. 4. A butterfly fat tree (BFT)-based hybrid optoelectronic NoC (HONoC) architecture is developed, in which GWORs are used for global communication and electronic routers are used for local communication. The proposed HONoC uses fewer electronic routers and links than its electronic BFT-based NoC counterpart. It takes advantage of the GWOR for optical communication and of the BFT for non-uniform traffic and three-dimensional (3D) implementation. 5. A cycle-accurate NoC simulator is developed to evaluate the performance of the proposed HONoC architectures. It is a comprehensive platform that can simulate both electronic and optical NoCs. HONoC architectures of different sizes are evaluated in terms of throughput, latency and energy dissipation. Simulation results confirm that HONoC achieves good network performance with lower power consumption.

  3. FTMP - A highly reliable Fault-Tolerant Multiprocessor for aircraft

    NASA Technical Reports Server (NTRS)

    Hopkins, A. L., Jr.; Smith, T. B., III; Lala, J. H.

    1978-01-01

    The FTMP (Fault-Tolerant Multiprocessor) is a complex multiprocessor computer that employs a form of redundancy related to systems considered by Mathur (1971), in which each major module can substitute for any other module of the same type. Despite the conceptual simplicity of the redundancy form, the implementation has many intricacies owing partly to the low target failure rate, and partly to the difficulty of eliminating single-fault vulnerability. An extensive analysis of the computer through the use of such modeling techniques as Markov processes and combinatorial mathematics shows that for random hard faults the computer can meet its requirements. It is also shown that the maintenance scheduled at intervals of 200 hr or more can be adequate most of the time.
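    For context, the kind of combinatorial redundancy relation that such analyses build on is shown below; the FTMP study itself relies on Markov models and more detailed fault assumptions.

    ```latex
    % R(t): survival probability of one module with constant failure rate \lambda.
    % A triplicated module with majority (2-of-3) voting survives with probability
    R_{\mathrm{module}}(t) = e^{-\lambda t},
    \qquad
    R_{\mathrm{TMR}}(t) = 3R^{2}(t) - 2R^{3}(t).
    ```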

  4. A Heterogeneous Multiprocessor Graphics System Using Processor-Enhanced Memories

    DTIC Science & Technology

    1989-02-01

    frames per second, font generation directly from conic spline descriptions, and rapid calculation of radiosity form factors. The hardware consists of... generality for rendering curved surfaces, volume data, and objects described with Constructive Solid Geometry, for rendering scenes using the radiosity... faces, and for computing a spherical radiosity lighting model (see Section 7.6). (Figure residue: custom memory chips, 208 bits x 128 pixels, renderer board.)

  5. Scheduling for energy and reliability management on multiprocessor real-time systems

    NASA Astrophysics Data System (ADS)

    Qi, Xuan

    Scheduling algorithms for multiprocessor real-time systems have been studied for years, with many well-recognized algorithms proposed. However, it is still an evolving research area and many problems remain open due to their intrinsic complexities. With the emergence of multicore processors, it is necessary to re-investigate these scheduling problems and design/develop efficient algorithms for better system utilization, low scheduling overhead, high energy efficiency, and better system reliability. Focusing on cluster scheduling with optimal global schedulers, we study the utilization bound and scheduling overhead for a class of cluster-optimal schedulers. Then, taking energy/power consumption into consideration, we develop energy-efficient scheduling algorithms for real-time systems, especially for the proliferating embedded systems with limited energy budgets. As commonly deployed energy-saving techniques (e.g. dynamic voltage and frequency scaling (DVFS)) significantly affect system reliability, we study schedulers that have intelligent mechanisms to recuperate system reliability and satisfy quality assurance requirements. Extensive simulation is conducted to evaluate the performance of the proposed algorithms in terms of reduction of scheduling overhead, energy saving, and reliability improvement. The simulation results show that the proposed reliability-aware power management schemes can preserve system reliability while still achieving substantial energy savings.
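    One widely used way to formalize the DVFS-reliability tension mentioned above is the exponential fault-rate model below; it is shown for context and is not necessarily the exact model adopted in this dissertation.

    ```latex
    % f: normalized frequency (f_min <= f <= 1), \lambda_0: transient-fault rate
    % at full speed, d > 0: technology-dependent exponent, c: task execution cycles.
    \lambda(f) = \lambda_0 \, 10^{\,d\,(1-f)/(1-f_{\min})},
    \qquad
    R(f) = e^{-\lambda(f)\, c / f}
    % Lowering f saves energy but raises both the fault rate and the exposure
    % time c/f, which is the loss that reliability-aware power management
    % must recoup.
    ```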

  6. Modeling and measurement of fault-tolerant multiprocessors

    NASA Technical Reports Server (NTRS)

    Shin, K. G.; Woodbury, M. H.; Lee, Y. H.

    1985-01-01

    The workload effects on computer performance are addressed first for a highly reliable unibus multiprocessor used in real-time control. As an approach to studying these effects, a modified Stochastic Petri Net (SPN) is used to describe the synchronous operation of the multiprocessor system. From this model the vital components affecting performance can be determined. However, because of the complexity in solving the modified SPN, a simpler model, i.e., a closed priority queuing network, is constructed that represents the same critical aspects. The use of this model for a specific application requires the partitioning of the workload into job classes. It is shown that the steady state solution of the queuing model directly produces useful results. The use of this model in evaluating an existing system, the Fault Tolerant Multiprocessor (FTMP) at the NASA AIRLAB, is outlined with some experimental results. Also addressed is the technique of measuring fault latency, an important microscopic system parameter. Most related works have assumed no or a negligible fault latency and then performed approximate analyses. To eliminate this deficiency, a new methodology for indirectly measuring fault latency is presented.

  7. Energy-efficient fault tolerance in multiprocessor real-time systems

    NASA Astrophysics Data System (ADS)

    Guo, Yifeng

    The recent progress in multiprocessor/multicore systems has important implications for real-time system design and operation. From vehicle navigation to space applications as well as industrial control systems, the trend is to deploy multiple processors in real-time systems: systems with 4 -- 8 processors are common, and it is expected that many-core systems with dozens of processing cores will be available in the near future. For such systems, in addition to the general temporal requirements common to all real-time systems, two additional operational objectives are seen as critical: energy efficiency and fault tolerance. An intriguing dimension of the problem is that energy efficiency and fault tolerance are typically conflicting objectives, due to the fact that tolerating faults (e.g., permanent/transient) often requires extra resources with high energy consumption potential. In this dissertation, various techniques for energy-efficient fault tolerance in multiprocessor real-time systems have been investigated. First, the Reliability-Aware Power Management (RAPM) framework, which can preserve the system reliability with respect to transient faults when Dynamic Voltage Scaling (DVS) is applied for energy savings, is extended to support parallel real-time applications with precedence constraints. Next, the traditional Standby-Sparing (SS) technique for dual processor systems, which takes both transient and permanent faults into consideration while saving energy, is generalized to support multiprocessor systems with an arbitrary number of identical processors. Observing the inefficient usage of slack time in the SS technique, a Preference-Oriented Scheduling Framework is designed to address the problem where tasks are given preferences for being executed as soon as possible (ASAP) or as late as possible (ALAP). A preference-oriented earliest deadline (POED) scheduler is proposed and its application in multiprocessor systems for energy-efficient fault tolerance is investigated, where tasks' main copies are executed ASAP while backup copies are executed ALAP to reduce the overlapped execution of the main and backup copies of the same task and thus reduce energy consumption. All proposed techniques are evaluated through extensive simulations and compared with other state-of-the-art approaches. The simulation results confirm that the proposed schemes can preserve the system reliability while still achieving substantial energy savings. Finally, for both SS- and POED-based Energy-Efficient Fault-Tolerant (EEFT) schemes, a series of recovery strategies are designed for when more than one (transient or permanent) fault needs to be tolerated.

  8. Safe and Efficient Support for Embedded Multi-Processors in Ada

    NASA Astrophysics Data System (ADS)

    Ruiz, Jose F.

    2010-08-01

    New software demands increasing processing power, and multi-processor platforms are spreading as the answer to achieve the required performance. Embedded real-time systems are also subject to this trend, but in the case of real-time mission-critical systems, the properties of reliability, predictability and analyzability are also paramount. The Ada 2005 language defined a subset of its tasking model, the Ravenscar profile, that provides the basis for the implementation of deterministic and time analyzable applications on top of a streamlined run-time system. This Ravenscar tasking profile, originally designed for single processors, has proven remarkably useful for modelling verifiable real-time single-processor systems. This paper proposes a simple extension to the Ravenscar profile to support multi-processor systems using a fully partitioned approach. The implementation of this scheme is simple, and it can be used to develop applications amenable to schedulability analysis.

  9. Utilizing Dynamically Coupled Cores to Form a Resilient Chip Multiprocessor

    DTIC Science & Technology

    2007-06-01

    requires a significant deviation from previous work. For instance, we find that using the relaxed input replication model from Reunion incurs a... (Table 1, FO4 delay and transistor count: CRC-16, width 16, delay 6.65, count 754; CRC-SDLC-16, width 16, delay 6.10, count 888; CRC-32, width 16, delay 7.28, count 2260; CRC-32, width 32, delay 8.60, count 4240.) ...the operation of our proposed system is the same in all other respects. 4.4 Compatibility Across Memory Consistency Models. The memory consistency

  10. Evaluation of SuperLU on multicore architectures

    NASA Astrophysics Data System (ADS)

    Li, X. S.

    2008-07-01

    The Chip Multiprocessor (CMP) will be the basic building block for computer systems ranging from laptops to supercomputers. New software developments at all levels are needed to fully utilize these systems. In this work, we evaluate performance of different high-performance sparse LU factorization and triangular solution algorithms on several representative multicore machines. We included both Pthreads and MPI implementations in this study and found that the Pthreads implementation consistently delivers good performance and that a left-looking algorithm is usually superior.

  11. Design of a modular digital computer system, DRL 4. [for meeting future requirements of spaceborne computers

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The design is reported of an advanced modular computer system designated the Automatically Reconfigurable Modular Multiprocessor System, which anticipates requirements for higher computing capacity and reliability for future spaceborne computers. Subjects discussed include: an overview of the architecture, mission analysis, synchronous and nonsynchronous scheduling control, reliability, and data transmission.

  12. A pipelined architecture for real time correction of non-uniformity in infrared focal plane arrays imaging system using multiprocessors

    NASA Astrophysics Data System (ADS)

    Zou, Liang; Fu, Zhuang; Zhao, YanZheng; Yang, JunYan

    2010-07-01

    This paper proposes a pipelined electric circuit architecture implemented in an FPGA, a very large scale integrated circuit (VLSI), which efficiently handles the real-time non-uniformity correction (NUC) algorithm for infrared focal plane arrays (IRFPA). Dual Nios II soft-core processors and a DSP with a 64+ core together constitute this imaging system. Each processor undertakes its own systematic task, coordinating its work with the others. The system on programmable chip (SOPC) in the FPGA works steadily at a global clock frequency of 96 MHz. An adequate time allowance lets the FPGA perform the NUC image pre-processing algorithm with ease, which offers a favorable guarantee for the post-processing work in the DSP. At the same time, this paper presents a hardware (HW) and software (SW) co-design in the FPGA. This systematic architecture thus yields an image processing system with multiple processors and a smart solution that satisfies the performance requirements of the system.

  13. Synchronizing Data-Bus Messages

    NASA Technical Reports Server (NTRS)

    Harris, L. H.

    1985-01-01

    Adapter allows communications among as many as 30 data processors without a central bus controller. The adapter improves the reliability of a multiprocessor system by eliminating a point of failure that would cause the entire system to fail. The scheme prevents data collisions and eliminates nonessential polling, thereby reducing power consumption.

  14. Programmable architecture for pixel level processing tasks in lightweight strapdown IR seekers

    NASA Astrophysics Data System (ADS)

    Coates, James L.

    1993-06-01

    Typical processing tasks associated with missile IR seeker applications are described, and a straw man suite of algorithms is presented. A fully programmable multiprocessor architecture is realized on a multimedia video processor (MVP) developed by Texas Instruments. The MVP combines the elements of RISC, floating point, advanced DSPs, graphics processors, display and acquisition control, RAM, and external memory. Front end pixel level tasks typical of missile interceptor applications, operating on 256 x 256 sensor imagery, can be processed at frame rates exceeding 100 Hz in a single MVP chip.

  15. Optical interconnection networks for high-performance computing systems

    NASA Astrophysics Data System (ADS)

    Biberman, Aleksandr; Bergman, Keren

    2012-04-01

    Enabled by silicon photonic technology, optical interconnection networks have the potential to be a key disruptive technology in computing and communication industries. The enduring pursuit of performance gains in computing, combined with stringent power constraints, has fostered the ever-growing computational parallelism associated with chip multiprocessors, memory systems, high-performance computing systems and data centers. Sustaining these parallelism growths introduces unique challenges for on- and off-chip communications, shifting the focus toward novel and fundamentally different communication approaches. Chip-scale photonic interconnection networks, enabled by high-performance silicon photonic devices, offer unprecedented bandwidth scalability with reduced power consumption. We demonstrate that the silicon photonic platforms have already produced all the high-performance photonic devices required to realize these types of networks. Through extensive empirical characterization in much of our work, we demonstrate such feasibility of waveguides, modulators, switches and photodetectors. We also demonstrate systems that simultaneously combine many functionalities to achieve more complex building blocks. We propose novel silicon photonic devices, subsystems, network topologies and architectures to enable unprecedented performance of these photonic interconnection networks. Furthermore, the advantages of photonic interconnection networks extend far beyond the chip, offering advanced communication environments for memory systems, high-performance computing systems, and data centers.

  16. Development and evaluation of a fault-tolerant multiprocessor (FTMP) computer. Volume 4: FTMP executive summary

    NASA Technical Reports Server (NTRS)

    Smith, T. B., III; Lala, J. H.

    1984-01-01

    The FTMP architecture is a high reliability computer concept modeled after a homogeneous multiprocessor architecture. Elements of the FTMP are operated in tight synchronism with one another and hardware fault-detection and fault-masking is provided which is transparent to the software. Operating system design and user software design is thus greatly simplified. Performance of the FTMP is also comparable to that of a simplex equivalent due to the efficiency of fault handling hardware. The FTMP project constructed an engineering module of the FTMP, programmed the machine and extensively tested the architecture through fault injection and other stress testing. This testing confirmed the soundness of the FTMP concepts.

  17. A parallel row-based algorithm with error control for standard-cell replacement on a hypercube multiprocessor

    NASA Technical Reports Server (NTRS)

    Sargent, Jeff Scott

    1988-01-01

    A new row-based parallel algorithm for standard-cell placement targeted for execution on a hypercube multiprocessor is presented. Key features of this implementation include a dynamic simulated-annealing schedule, row-partitioning of the VLSI chip image, and two novel approaches to controlling error in parallel cell-placement algorithms: Heuristic Cell-Coloring and Adaptive (Parallel Move) Sequence Control. Heuristic Cell-Coloring identifies sets of noninteracting cells that can be moved repeatedly, and in parallel, with no buildup of error in the placement cost. Adaptive Sequence Control allows multiple parallel cell moves to take place between global cell-position updates. This feedback mechanism is based on an error bound derived analytically from the traditional annealing move-acceptance profile. Placement results are presented for real industry circuits, and the performance of an implementation on the Intel iPSC/2 Hypercube is summarized. The runtime of this algorithm is 5 to 16 times faster than a previous program developed for the Hypercube, while producing placement of equivalent quality. An integrated place and route program for the Intel iPSC/2 Hypercube is currently being developed.
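    The generic Metropolis acceptance test that underlies simulated-annealing placers is sketched below; it illustrates the move-acceptance profile referred to above, not the paper's parallel, cell-colored algorithm.

    ```c
    /* Illustrative Metropolis acceptance test used by simulated-annealing
     * placers (a generic sketch; the paper additionally colors cells and
     * bounds the error introduced by parallel moves). */
    #include <math.h>
    #include <stdbool.h>
    #include <stdlib.h>

    /* Accept a proposed cell move given the change in placement cost and the
     * current annealing temperature. */
    bool accept_move(double delta_cost, double temperature)
    {
        if (delta_cost <= 0.0)
            return true;                              /* always take improvements   */
        double u = (double)rand() / ((double)RAND_MAX + 1.0);
        return u < exp(-delta_cost / temperature);    /* uphill move, probabilistic */
    }

    /* Typical geometric cooling schedule: T <- alpha * T after each sweep. */
    double cool(double temperature, double alpha /* e.g. 0.95 */)
    {
        return alpha * temperature;
    }
    ```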

  18. Multiprocessor switch with selective pairing

    DOEpatents

    Gara, Alan; Gschwind, Michael K; Salapura, Valentina

    2014-03-11

    System, method and computer program product for a multiprocessing system to offer selective pairing of processor cores for increased processing reliability. A selective pairing facility is provided that selectively connects, i.e., pairs, multiple microprocessor or processor cores to provide one highly reliable thread (or thread group). Each pair of microprocessor or processor cores that provides one highly reliable thread connects with system components such as a memory "nest" (or memory hierarchy), an optional system controller, an optional interrupt controller, and optional I/O or peripheral devices. The memory nest is attached to the selective pairing facility via a switch or a bus.

  19. Scheduler for multiprocessor system switch with selective pairing

    DOEpatents

    Gara, Alan; Gschwind, Michael Karl; Salapura, Valentina

    2015-01-06

    System, method and computer program product for scheduling threads in a multiprocessing system with selective pairing of processor cores for increased processing reliability. A selective pairing facility is provided that selectively connects, i.e., pairs, multiple microprocessor or processor cores to provide one highly reliable thread (or thread group). The method configures the selective pairing facility to use checking to provide one highly reliable thread, and allocates threads that indicate a need for hardware checking to the corresponding paired processor cores. The method likewise configures the selective pairing facility to provide multiple independent cores, and allocates threads that indicate inherent resilience to the corresponding processor cores.

  20. Multi-scale reflection modulator-based optical interconnects

    NASA Astrophysics Data System (ADS)

    Nair, Rohit

    This dissertation describes the design, analysis, and experimental validation of micro- and macro-optical components for implementing optical interconnects at multiple scales for varied applications. Three distance scales are explored: millimeter, centimeter, and meter-scales. At the millimeter-scale, we propose the use of optical interconnects at the intra-chip level. With the rapid scaling down of CMOS critical dimensions in accordance with Moore's law, the bandwidth requirements of global interconnects in microprocessors have exceeded the capabilities of metal links. These are the wires that connect the most remote parts of the chip and are disproportionately problematic in terms of chip area and power consumption. Consequently, in the mid-2000s, we saw a shift in the chip architecture: a move towards multicore designs. However, this only delays the inevitable communication bottleneck between cores. To satisfy this bandwidth, we propose to replace the global metal interconnects with optical interconnects. We propose to use the hybrid integration of silicon with GaAs/AlAs-based multiple quantum well devices as optical modulators and photodetectors along with polymeric waveguides to transport the light. We use grayscale lithography to fabricate curved facets into the waveguides to couple light into the modulators and photodetectors. Next, at the chip-to-chip level in high-performance multiprocessor computing systems, communication distances vary from a few centimeters to tens of centimeters. An optical design for coupling light from off-chip lasers to on-chip surface-normal modulators is proposed in order to implement chip-to-chip free-space optical interconnects. The method uses a dual-prism module constructed from prisms made of two different glasses. The various alignment tolerances of the proposed system are investigated and found to be well within pick-and-place accuracies. For the off-chip lasers, vertical cavity surface emitting lasers (VCSELs) are proposed. The rationale behind using on-chip modulators rather than VCSELs is to avoid VCSEL thermal loads on chip, and because of the higher reliability of modulators compared to VCSELs. Particularly above 10 Gbps, an empirical model that was developed shows the rapid decrease of VCSEL median time to failure with data rate. Thus the proposed interconnect scheme, which utilizes continuous wave VCSELs that are externally modulated by on-chip multiple quantum well modulators, is applicable for chip-to-chip optical interconnects at 20 Gbps and higher line data rates. Finally, for applications such as remote telemetry, where the interrogation distances can vary from a few meters to tens or even hundreds of meters, we demonstrate a modulated retroreflector that utilizes InGaAs/InAlAs-based large-area multiple quantum well modulators on all three faces of a retroreflector. The large-area devices, fabricated by metalorganic chemical vapor deposition, are characterized in terms of the yield and leakage currents. A yield higher than that achieved previously using devices fabricated by molecular beam epitaxy is observed. The retroreflector module is constructed using standard FR4 printed circuit boards, thereby simplifying the wiring issue. A high optical contrast ratio of 8.23 dB is observed for a drive of 20 V. A free-standing PCB retroreflector is explored and found to have insufficient angular tolerances (+/-0.5 degrees).
We show that the angular errors in the corner-cube construction can be corrected for using off-the-shelf optical components as opposed to mounting the PCBs on a precision corner cube, as has been done previously.

  1. Models for evaluating the performability of degradable computing systems

    NASA Technical Reports Server (NTRS)

    Wu, L. T.

    1982-01-01

    Recent advances in multiprocessor technology established the need for unified methods to evaluate computing systems performance and reliability. In response to this modeling need, a general modeling framework that permits the modeling, analysis and evaluation of degradable computing systems is considered. Within this framework, several user oriented performance variables are identified and shown to be proper generalizations of the traditional notions of system performance and reliability. Furthermore, a time varying version of the model is developed to generalize the traditional fault tree reliability evaluation methods of phased missions.

  2. Hardware for Accelerating N-Modular Redundant Systems for High-Reliability Computing

    NASA Technical Reports Server (NTRS)

    Dobbs, Carl, Sr.

    2012-01-01

    A hardware unit has been designed that reduces the cost, in terms of performance and power consumption, for implementing N-modular redundancy (NMR) in a multiprocessor device. The innovation monitors transactions to memory and calculates a form of sumcheck on-the-fly, thereby relieving the processors of calculating the sumcheck in software.
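    A generic stand-in for this kind of redundancy check is sketched below: each replica folds its memory-write stream into a running checksum and a majority vote flags the outlier. The checksum and vote shown are illustrative; the hardware unit's actual sumcheck is not described in the abstract.

    ```c
    /* Illustrative sketch of N-modular-redundancy checking on memory-write
     * streams: each replica accumulates a simple running checksum of its
     * write transactions, and a majority vote over the checksums flags the
     * outlier.  Everything here is a generic stand-in. */
    #include <stdint.h>
    #include <stdio.h>

    #define N_REPLICAS 3

    /* Fold one write transaction (address, data) into a replica's checksum. */
    static uint32_t fold(uint32_t sum, uint32_t addr, uint32_t data)
    {
        sum ^= addr;
        sum = (sum << 1) | (sum >> 31);   /* rotate left by 1 to mix in ordering */
        sum ^= data;
        return sum;
    }

    /* Majority vote: returns the index of a disagreeing replica, or -1 if all
     * checksums match (or if there is no majority at all). */
    static int vote(const uint32_t sum[N_REPLICAS])
    {
        if (sum[0] == sum[1] && sum[1] == sum[2]) return -1;
        if (sum[0] == sum[1]) return 2;
        if (sum[0] == sum[2]) return 1;
        if (sum[1] == sum[2]) return 0;
        return -1;                        /* three-way disagreement: no majority */
    }

    int main(void)
    {
        uint32_t sum[N_REPLICAS] = {0, 0, 0};
        /* Replica 1 suffers a bit flip in one data word. */
        uint32_t data[N_REPLICAS] = {0xCAFE, 0xCAFF, 0xCAFE};
        for (int r = 0; r < N_REPLICAS; r++)
            sum[r] = fold(sum[r], 0x1000, data[r]);
        printf("disagreeing replica: %d\n", vote(sum));   /* prints 1 */
        return 0;
    }
    ```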

  3. Multiprocessor shared-memory information exchange

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Santoline, L.L.; Bowers, M.D.; Crew, A.W.

    1989-02-01

    In distributed microprocessor-based instrumentation and control systems, the inter- and intra-subsystem communication requirements ultimately form the basis for the overall system architecture. This paper describes a software protocol which addresses the intra-subsystem communications problem. Specifically, the protocol allows multiple processors to exchange information via a shared-memory interface. The authors' primary goal is to provide a reliable means for information to be exchanged between central application processor boards (masters) and dedicated function processor boards (slaves) in a single computer chassis. The resultant Multiprocessor Shared-Memory Information Exchange (MSMIE) protocol, a standard master-slave shared-memory interface suitable for use in nuclear safety systems, is designed to pass unidirectional buffers of information between the processors while providing a minimum, deterministic cycle time for this data exchange.
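    A minimal sketch of a unidirectional master-slave buffer handoff through shared memory is given below, assuming a simple double-buffer scheme; the real MSMIE protocol and its buffer-state machine are more elaborate and are not reproduced here.

    ```c
    /* Minimal sketch of a unidirectional shared-memory handoff between a
     * producing "slave" board and a consuming "master" board, for illustration
     * only.  This double-buffer version assumes the master finishes its copy
     * before the slave publishes two further updates; a hardened design would
     * use three buffers or a sequence lock to rule out torn reads. */
    #include <stdatomic.h>
    #include <stdint.h>
    #include <string.h>

    #define BUF_WORDS 64

    typedef struct {
        uint32_t    data[2][BUF_WORDS];   /* two buffers in the shared region   */
        _Atomic int latest;               /* index of newest complete buffer,
                                             or -1 before the first publication */
    } shm_exchange_t;

    void shm_init(shm_exchange_t *x)
    {
        atomic_init(&x->latest, -1);
    }

    /* Slave side: fill the buffer that is not currently published, then publish. */
    void slave_publish(shm_exchange_t *x, const uint32_t sample[BUF_WORDS])
    {
        int next = (atomic_load_explicit(&x->latest, memory_order_relaxed) + 1) & 1;
        memcpy(x->data[next], sample, sizeof x->data[next]);
        atomic_store_explicit(&x->latest, next, memory_order_release);
    }

    /* Master side: copy out the newest complete buffer; returns 0 if none yet. */
    int master_read(shm_exchange_t *x, uint32_t out[BUF_WORDS])
    {
        int idx = atomic_load_explicit(&x->latest, memory_order_acquire);
        if (idx < 0)
            return 0;
        memcpy(out, x->data[idx], sizeof x->data[idx]);
        return 1;
    }
    ```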

  4. An approach to solving large reliability models

    NASA Technical Reports Server (NTRS)

    Boyd, Mark A.; Veeraraghavan, Malathi; Dugan, Joanne Bechta; Trivedi, Kishor S.

    1988-01-01

    This paper describes a unified approach to the problem of solving large realistic reliability models. The methodology integrates behavioral decomposition, state truncation, and efficient sparse matrix-based numerical methods. The use of fault trees, together with ancillary information regarding dependencies, to automatically generate the underlying Markov model state space is proposed. The effectiveness of this approach is illustrated by modeling a state-of-the-art flight control system and a multiprocessor system. Nonexponential distributions for times to failure of components are assumed in the latter example. The modeling tool used for most of this analysis is HARP (the Hybrid Automated Reliability Predictor).

  5. Reliability study of high-brightness multiple single emitter diode lasers

    NASA Astrophysics Data System (ADS)

    Zhu, Jing; Yang, Thomas; Zhang, Cuipeng; Lang, Chao; Jiang, Xiaochen; Liu, Rui; Gao, Yanyan; Guo, Weirong; Jiang, Yuhua; Liu, Yang; Zhang, Luyan; Chen, Louisa

    2015-03-01

    In this study, the chip bonding processes for various chips from various chip suppliers around the world have been optimized to achieve reliable chip on sub-mount assemblies for high performance. These chip on sub-mounts include, for example, three types of bonding: 8xx nm 1.2 W/10.0 W indium-bonded lasers, 9xx nm 10 W-20 W AuSn-bonded lasers, and 1470 nm 6 W indium-bonded lasers, which are reported below. The MTTF at 25 °C of the 9xx nm chip on sub-mount (COS) is calculated to be more than 203,896 hours. These chips from various chip suppliers are packaged into many multiple single emitter laser modules, using similar packaging techniques, from 2 emitters per module up to 7 emitters per module. A reliability study including aging tests is performed on these multiple single emitter laser modules. With the research team's 12 years of experience in packaging design and techniques, precise optical and fiber alignment processes, and superior chip bonding capability, we have achieved a total MTTF exceeding 177,710 hours of lifetime with a 60% confidence level for these multiple single emitter laser modules. Furthermore, a separate reliability study on wavelength-stabilized laser modules has shown that this wavelength-stabilized module packaging process is reliable as well.
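    For context, the standard chi-squared lower bound used for MTTF claims from life tests is shown below; the abstract does not state which estimator the authors used, so this is illustrative only.

    ```latex
    % T: accumulated device-hours, r: observed failures, C: confidence level,
    % \chi^2_{C,\,2r+2}: the C-quantile of the chi-squared distribution.
    \mathrm{MTTF}_{\mathrm{lower}} = \frac{2T}{\chi^{2}_{C,\,2r+2}}
    % With zero failures (r = 0) this reduces to
    \mathrm{MTTF}_{\mathrm{lower}} = \frac{T}{\ln\frac{1}{1-C}},
    \qquad
    C = 0.60 \;\Rightarrow\; \mathrm{MTTF}_{\mathrm{lower}} \approx 1.09\,T .
    ```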

  6. Survey of critical failure events in on-chip interconnect by fault tree analysis

    NASA Astrophysics Data System (ADS)

    Yokogawa, Shinji; Kunii, Kyousuke

    2018-07-01

    In this paper, a framework based on reliability physics is proposed for applying fault tree analysis (FTA) to the on-chip interconnect system of a semiconductor. By integrating expert knowledge and experience regarding the possibilities of failure of basic events, critical issues of on-chip interconnect reliability are evaluated by FTA. In particular, FTA is used to identify the minimal cut sets with high risk priority. Critical events affecting on-chip interconnect reliability are identified and discussed from the viewpoint of long-term reliability assessment. The moisture impact is evaluated as an external event.
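    The general relation used when ranking minimal cut sets is recalled below for context; the paper's specific fault tree and basic-event probabilities are not reproduced.

    ```latex
    % C_1, ..., C_m: minimal cut sets; P(e): occurrence probability of basic event e.
    % Rare-event approximation of the top-event probability:
    P(\mathrm{top}) \;\approx\; \sum_{i=1}^{m} \;\prod_{e \in C_i} P(e)
    % Cut sets contributing the largest terms are the high-risk-priority candidates.
    ```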

  7. Using parallel banded linear system solvers in generalized eigenvalue problems

    NASA Technical Reports Server (NTRS)

    Zhang, Hong; Moss, William F.

    1993-01-01

    Subspace iteration is a reliable and cost effective method for solving positive definite banded symmetric generalized eigenproblems, especially in the case of large scale problems. This paper discusses an algorithm that makes use of two parallel banded solvers in subspace iteration. A shift is introduced to decompose the banded linear systems into relatively independent subsystems and to accelerate the iterations. With this shift, an eigenproblem is mapped efficiently into the memories of a multiprocessor and a high speed-up is obtained for parallel implementations. An optimal shift is a shift that balances total computation and communication costs. Under certain conditions, we show how to estimate an optimal shift analytically using the decay rate for the inverse of a banded matrix, and how to improve this estimate. Computational results on iPSC/2 and iPSC/860 multiprocessors are presented.
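    The role of the shift can be stated generically as follows (a textbook relation, not the paper's parallel solver or optimal-shift analysis):

    ```latex
    % Generalized eigenproblem K x = \lambda M x with a shift \sigma:
    (K - \sigma M)\,x = (\lambda - \sigma)\,M x
    % Subspace iteration then applies (K - \sigma M)^{-1} M repeatedly; the
    % eigenvalues nearest the shift are mapped to the largest values
    % 1/(\lambda - \sigma) and converge fastest, which is the acceleration
    % effect the abstract attributes to the shift.
    ```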

  8. Flip-chip assembly and reliability using gold/tin solder bumps

    NASA Astrophysics Data System (ADS)

    Oppermann, Hermann; Hutter, Matthias; Klein, Matthias; Reichl, Herbert

    2004-09-01

    Au/Sn solder bumps are commonly used for flip-chip assembly of optoelectronic and RF devices. They allow a fluxless assembly, which is required to avoid contamination at optical interfaces. Flip-chip assembly experiments were carried out using as-plated Au/Sn bumps without prior bump reflow. The RF and reliability test vehicles comprise a GaAs chip which was flip-chip soldered onto a silicon substrate. Temperature cycling tests with and without underfill were performed and the results are presented. The different failure modes for underfilled and non-underfilled samples are discussed and compared. Additional reliability tests were performed with flip-chip bonding by gold thermocompression for comparison. The test results and the failure modes are discussed in detail.

  9. Fault recovery characteristics of the fault tolerant multi-processor

    NASA Technical Reports Server (NTRS)

    Padilla, Peter A.

    1990-01-01

    The fault handling performance of the fault tolerant multiprocessor (FTMP) was investigated. Fault handling errors detected during fault injection experiments were characterized. In these fault injection experiments, the FTMP disabled a working unit instead of the faulted unit once every 500 faults, on the average. System design weaknesses allow active faults to exercise a part of the fault management software that handles byzantine or lying faults. It is pointed out that these weak areas in the FTMP's design increase the probability that, for any hardware fault, a good LRU (line replaceable unit) is mistakenly disabled by the fault management software. It is concluded that fault injection can help detect and analyze the behavior of a system in the ultra-reliable regime. Although fault injection testing cannot be exhaustive, it has been demonstrated that it provides a unique capability to unmask problems and to characterize the behavior of a fault-tolerant system.

  10. Embedded Multiprocessor Technology for VHSIC Insertion

    NASA Technical Reports Server (NTRS)

    Hayes, Paul J.

    1990-01-01

    Viewgraphs on embedded multiprocessor technology for VHSIC insertion are presented. The objective was to develop multiprocessor system technology providing user-selectable fault tolerance, increased throughput, and ease of application representation for concurrent operation. The approach was to develop graph management mapping theory for proper performance, model multiprocessor performance, and demonstrate performance in selected hardware systems.

  11. Validation of multiprocessor systems

    NASA Technical Reports Server (NTRS)

    Siewiorek, D. P.; Segall, Z.; Kong, T.

    1982-01-01

    Experiments that can be used to validate fault free performance of multiprocessor systems in aerospace systems integrating flight controls and avionics are discussed. Engineering prototypes for two fault tolerant multiprocessors are tested.

  12. A reliability evaluation methodology for memory chips for space applications when sample size is small

    NASA Technical Reports Server (NTRS)

    Chen, Y.; Nguyen, D.; Guertin, S.; Bernstein, J.; White, M.; Menke, R.; Kayali, S.

    2003-01-01

    This paper presents a reliability evaluation methodology for obtaining statistical reliability information on memory chips for space applications when the test sample size must be kept small because of the high cost of radiation-hardened memories.

  13. Flexible organic TFT bio-signal amplifier using reliable chip component assembly process with conductive adhesive.

    PubMed

    Yoshimoto, Shusuke; Uemura, Takafumi; Akiyama, Mihoko; Ihara, Yoshihiro; Otake, Satoshi; Fujii, Tomoharu; Araki, Teppei; Sekitani, Tsuyoshi

    2017-07-01

    This paper presents a flexible organic thin-film transistor (OTFT) amplifier for bio-signal monitoring and describes the chip-component assembly process. Using a conductive adhesive and a chip mounter, the chip components are mounted on a flexible film substrate that carries the OTFT circuits. This study first investigates the reliability of the assembly technique for chip components on the flexible substrate. It then specifically examines heart pulse wave monitoring conducted using the proposed flexible amplifier circuit and a flexible piezoelectric film. We connected the amplifier to a Bluetooth device for a wearable-device demonstration.

  14. Programmable partitioning for high-performance coherence domains in a multiprocessor system

    DOEpatents

    Blumrich, Matthias A [Ridgefield, CT; Salapura, Valentina [Chappaqua, NY

    2011-01-25

    A multiprocessor computing system and a method of logically partitioning a multiprocessor computing system are disclosed. The multiprocessor computing system comprises a multitude of processing units, and a multitude of snoop units. Each of the processing units includes a local cache, and the snoop units are provided for supporting cache coherency in the multiprocessor system. Each of the snoop units is connected to a respective one of the processing units and to all of the other snoop units. The multiprocessor computing system further includes a partitioning system for using the snoop units to partition the multitude of processing units into a plurality of independent, memory-consistent, adjustable-size processing groups. Preferably, when the processor units are partitioned into these processing groups, the partitioning system also configures the snoop units to maintain cache coherency within each of said groups.
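
    As a rough illustration of the partitioning idea, the toy model below tags each snoop unit with a coherence-group identifier and broadcasts snoop requests only within the originating unit's group, so each group forms an independent coherence domain. The group assignment and the broadcast rule are illustrative assumptions, not the patented hardware design.

```python
# Toy model: snoop requests stay inside the originating core's coherence group.
from collections import defaultdict

class SnoopUnit:
    def __init__(self, cpu_id, group_id):
        self.cpu_id, self.group_id = cpu_id, group_id
        self.received = []               # coherence requests seen by this unit

class PartitionedSystem:
    def __init__(self, group_of):        # group_of: cpu_id -> group_id
        self.units = [SnoopUnit(c, g) for c, g in group_of.items()]
        self.by_group = defaultdict(list)
        for u in self.units:
            self.by_group[u.group_id].append(u)

    def broadcast(self, src_cpu, request):
        src = next(u for u in self.units if u.cpu_id == src_cpu)
        for u in self.by_group[src.group_id]:
            if u.cpu_id != src_cpu:
                u.received.append(request)   # coherence stays inside the group

# Partition 8 cores into two independent 4-core coherence domains.
system = PartitionedSystem({c: c // 4 for c in range(8)})
system.broadcast(1, ("invalidate", 0x80))
print([u.cpu_id for u in system.units if u.received])   # -> [0, 2, 3]
```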

  15. Queueing analysis of a canonical model of real-time multiprocessors

    NASA Technical Reports Server (NTRS)

    Krishna, C. M.; Shin, K. G.

    1983-01-01

    A logical classification of multiprocessor structures from the point of view of control applications is presented. A computation of the response time distribution for a canonical model of a real-time multiprocessor is presented. The multiprocessor is approximated by a blocking model. Two separate models are derived: one from the system's point of view, and the other from the point of view of an incoming task.
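
    A blocking model of the kind mentioned above can be illustrated, under generic assumptions, by the Erlang-B formula for an M/M/m/m loss system, in which an arriving task that finds all m processors busy is blocked; this is a textbook example rather than the paper's canonical model.

```python
# Erlang-B blocking probability for m servers and offered load a = lambda/mu,
# computed with the standard numerically stable recurrence.
def erlang_b(m, offered_load):
    b = 1.0
    for k in range(1, m + 1):
        b = offered_load * b / (k + offered_load * b)
    return b

for m in (2, 4, 8):
    print(m, round(erlang_b(m, offered_load=3.0), 4))
```

    For a fixed offered load of 3.0, the printed blocking probability drops from about 0.53 with two processors to below 0.01 with eight.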

  16. A simple modern correctness condition for a space-based high-performance multiprocessor

    NASA Technical Reports Server (NTRS)

    Probst, David K.; Li, Hon F.

    1992-01-01

    A number of U.S. national programs, including space-based detection of ballistic missile launches, envisage putting significant computing power into space. Given sufficient progress in low-power VLSI, multichip-module packaging and liquid-cooling technologies, we will see the design of high-performance multiprocessors for individual satellites. In very high speed implementations, performance depends critically on tolerating large latencies in interprocessor communication; without latency tolerance, performance is limited by the vastly differing time scales in processor and data-memory modules, including interconnect times. The modern approach to tolerating remote-communication cost in scalable, shared-memory multiprocessors is to use a multithreaded architecture, and alter the semantics of shared memory slightly, at the price of forcing the programmer either to reason about program correctness in a relaxed consistency model or to agree to program in a constrained style. The literature on multiprocessor correctness conditions has become increasingly complex, and sometimes confusing, which may hinder its practical application. We propose a simple modern correctness condition for a high-performance, shared-memory multiprocessor; the correctness condition is based on a simple interface between the multiprocessor architecture and the parallel programming system.

  17. Product assurance technology for procuring reliable, radiation-hard, custom LSI/VLSI electronics

    NASA Technical Reports Server (NTRS)

    Buehler, M. G.; Allen, R. A.; Blaes, B. R.; Hicks, K. A.; Jennings, G. A.; Lin, Y.-S.; Pina, C. A.; Sayah, H. R.; Zamani, N.

    1989-01-01

    Advanced measurement methods using microelectronic test chips are described. These chips are intended to be used in acquiring the data needed to qualify Application Specific Integrated Circuits (ASIC's) for space use. Efforts were focused on developing the technology for obtaining custom IC's from CMOS/bulk silicon foundries. A series of test chips were developed: a parametric test strip, a fault chip, a set of reliability chips, and the CRRES (Combined Release and Radiation Effects Satellite) chip, a test circuit for monitoring space radiation effects. The technical accomplishments of the effort include: (1) development of a fault chip that contains a set of test structures used to evaluate the density of various process-induced defects; (2) development of new test structures and testing techniques for measuring gate-oxide capacitance, gate-overlap capacitance, and propagation delay; (3) development of a set of reliability chips that are used to evaluate failure mechanisms in CMOS/bulk: interconnect and contact electromigration and time-dependent dielectric breakdown; (4) development of MOSFET parameter extraction procedures for evaluating subthreshold characteristics; (5) evaluation of test chips and test strips on the second CRRES wafer run; (6) two dedicated fabrication runs for the CRRES chip flight parts; and (7) publication of two papers: one on the split-cross bridge resistor and another on asymmetrical SRAM (static random access memory) cells for single-event upset analysis.

  18. Alpha: A real-time decentralized operating system for mission-oriented system integration and operation

    NASA Technical Reports Server (NTRS)

    Jensen, E. Douglas

    1988-01-01

    Alpha is a new kind of operating system that is unique in two highly significant ways. First, it is decentralized, transparently providing reliable resource management across physically dispersed nodes, so that distributed applications programming can be done largely as though it were centralized. Second, it provides comprehensive, high-technology support for real-time system integration and operation, an application area which consists predominantly of aperiodic activities having critical time constraints such as deadlines. Alpha is extremely adaptable so that it can be easily optimized for a wide range of problem-specific functionality, performance, and cost. Alpha is the first systems effort of the Archons Project, and the prototype was created at Carnegie-Mellon University directly on modified Sun multiprocessor workstation hardware. It has been demonstrated with a real-time C2 application. Continuing research is leading to a series of enhanced follow-ons to Alpha; these are portable but initially hosted on Concurrent's MASSCOMP line of multiprocessor products.

  19. Fault-free behavior of reliable multiprocessor systems: FTMP experiments in AIRLAB

    NASA Technical Reports Server (NTRS)

    Clune, E.; Segall, Z.; Siewiorek, D.

    1985-01-01

    This report describes a set of experiments which were implemented on the Fault Tolerant Multi-Processor (FTMP) at NASA/Langley's AIRLAB facility. These experiments are part of an effort to formulate and evaluate validation methodologies for fault-tolerant computers. This report deals with the measurement of single parameters (baselines) of a fault-free system. The initial set of baseline experiments led to the following conclusions: (1) the system clock is constant and independent of workload in the tested cases; (2) the instruction execution times are constant; (3) the R4 frame size is 40 ms, with some variation; (4) the frame stretching mechanism has some flaws in its implementation that allow the possibility of an infinite stretching of frame duration. Future experiments are planned. Some will broaden the results of these initial experiments; others will measure the system more dynamically. The implementation of a synthetic workload generation mechanism for FTMP is planned to enhance the experimental environment of the system.

  20. Iterative algorithms for tridiagonal matrices on a WSI-multiprocessor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gajski, D.D.; Sameh, A.H.; Wisniewski, J.A.

    1982-01-01

    With the rapid advances in semiconductor technology, the construction of Wafer Scale Integration (WSI)-multiprocessors consisting of a large number of processors is now feasible. We illustrate the implementation of some basic linear algebra algorithms on such multiprocessors.
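
    One of the basic iterative algorithms alluded to above can be sketched as follows: Jacobi iteration for a tridiagonal system, whose row updates depend only on the previous iterate and therefore map naturally onto a large array of simple processors. The serial NumPy code below only illustrates the arithmetic; the WSI processor mapping itself is not modeled, and the test matrix is an assumption.

```python
# Jacobi iteration for a tridiagonal system A x = b, written so that every
# row update within an iteration is independent of the others -- the property
# that makes the method attractive for large processor arrays.
import numpy as np

def jacobi_tridiag(lower, diag, upper, b, iters=500):
    n, x = len(diag), np.zeros(len(diag))
    for _ in range(iters):
        x_new = np.empty(n)
        x_new[0] = (b[0] - upper[0] * x[1]) / diag[0]
        x_new[1:-1] = (b[1:-1] - lower[:-1] * x[:-2]
                       - upper[1:] * x[2:]) / diag[1:-1]
        x_new[-1] = (b[-1] - lower[-1] * x[-2]) / diag[-1]
        x = x_new                      # all n updates could run in parallel
    return x

n = 20
x = jacobi_tridiag(-np.ones(n - 1), 2.0 * np.ones(n), -np.ones(n - 1), np.ones(n))
print(np.round(x[:5], 3))
```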

  1. Shared performance monitor in a multiprocessor system

    DOEpatents

    Chiu, George; Gara, Alan G; Salapura, Valentina

    2014-12-02

    A performance monitoring unit (PMU) and method for monitoring performance of events occurring in a multiprocessor system. The multiprocessor system comprises a plurality of processor units, each processor unit generating signals that represent occurrences of events in that unit, and a single shared counter resource for performance monitoring. The performance monitor unit is shared by all processor cores in the multiprocessor system. The PMU is further programmed to monitor event signals issued from non-processor devices.

  2. A fault-tolerant multiprocessor architecture for aircraft, volume 1. [autopilot configuration

    NASA Technical Reports Server (NTRS)

    Smith, T. B.; Hopkins, A. L.; Taylor, W.; Ausrotas, R. A.; Lala, J. H.; Hanley, L. D.; Martin, J. H.

    1978-01-01

    A fault-tolerant multiprocessor architecture is reported. This architecture, together with a comprehensive information system architecture, has important potential for future aircraft applications. A preliminary definition and assessment of a suitable multiprocessor architecture for such applications is developed.

  3. Soft error evaluation and vulnerability analysis in Xilinx Zynq-7010 system-on-chip

    NASA Astrophysics Data System (ADS)

    Du, Xuecheng; He, Chaohui; Liu, Shuhuan; Zhang, Yao; Li, Yonghong; Xiong, Ceng; Tan, Pengkang

    2016-09-01

    Radiation-induced soft errors are an increasingly important threat to the reliability of modern electronic systems. In order to evaluate the system-on-chip's reliability and soft errors, the fault tree analysis method was used in this work. The system fault tree was constructed based on the Xilinx Zynq-7010 All Programmable SoC. Moreover, the soft error rates of different components in the Zynq-7010 SoC were tested using an americium-241 alpha radiation source. Furthermore, parameters used to evaluate the system's reliability and safety, such as failure rate, unavailability and mean time to failure (MTTF), were calculated using Isograph Reliability Workbench 11.0. Based on the fault tree analysis of the system-on-chip, the critical blocks and system reliability were evaluated through qualitative and quantitative analysis.
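
    The reliability figures named above (failure rate, unavailability, MTTF) follow from simple closed-form expressions once constant failure and repair rates are assumed. The short sketch below combines two invented basic events through AND/OR gates purely for illustration; it does not reproduce the paper's Zynq-7010 fault tree or its measured alpha-particle data.

```python
# Fault-tree style reliability arithmetic with constant (exponential) rates.
def mttf(lmbda):
    return 1.0 / lmbda                      # mean time to failure, hours

def unavailability(lmbda, mu):
    return lmbda / (lmbda + mu)             # steady-state, repairable block

# Basic events (rates are invented): two redundant memory blocks under an
# AND gate, combined with a processor block under a top-level OR gate.
q_mem = unavailability(1e-6, 1e-2) ** 2     # AND gate: both copies down at once
q_cpu = unavailability(5e-7, 1e-2)
q_top = 1.0 - (1.0 - q_mem) * (1.0 - q_cpu) # OR gate: either branch fails
print(f"top-event unavailability ~ {q_top:.3e}, CPU MTTF ~ {mttf(5e-7):.2e} h")
```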

  4. Effect of thermal cycling ramp rate on CSP assembly reliability

    NASA Technical Reports Server (NTRS)

    Ghaffarian, R.

    2001-01-01

    A JPL-led chip scale package consortium of enterprises recently joined together to pool in-kind resources for developing the quality and reliability of chip scale packages for a variety of projects. The consortium's experience in building more than 150 test vehicle assemblies on single- and double-sided multilayer PWBs, together with the environmental test results, has now been published as a chip scale package guidelines document.

  5. A Chip and Pixel Qualification Methodology on Imaging Sensors

    NASA Technical Reports Server (NTRS)

    Chen, Yuan; Guertin, Steven M.; Petkov, Mihail; Nguyen, Duc N.; Novak, Frank

    2004-01-01

    This paper presents a qualification methodology for imaging sensors. In addition to overall chip reliability characterization based on the sensor's overall figures of merit, such as Dark Rate, Linearity, Dark Current Non-Uniformity, Fixed Pattern Noise and Photon Response Non-Uniformity, a simulation technique is proposed and used to project pixel reliability. The projected pixel reliability is directly related to imaging quality and provides additional sensor reliability information and performance control.

  6. Advanced Flip Chips in Extreme Temperature Environments

    NASA Technical Reports Server (NTRS)

    Ramesham, Rajeshuni

    2010-01-01

    The use of underfill materials is necessary with flip-chip interconnect technology to redistribute stresses due to mismatched coefficients of thermal expansion (CTEs) between dissimilar materials in the overall assembly. Underfills are formulated using organic polymers and possibly inorganic filler materials. There are a few ways to apply underfills in flip-chip technology. Traditional capillary-flow underfill materials now possess high flow speed and reduced time to cure, but they still require additional processing steps beyond the typical surface-mount technology (SMT) assembly process. Studies were conducted using underfills in a temperature range of -190 to 85 C, which resulted in an increase of reliability by one to two orders of magnitude. Thermal shock of the flip-chip test articles was designed to induce failures at the interconnect sites (-40 to 100 C). The study of flip-chip reliability using underfills in the extreme temperature region is of significant value for space applications, and this technology is considered an enabling technology for future space missions. Flip-chip interconnect technology is an advanced electrical interconnection approach in which the silicon die or chip is electrically connected, face down, to the substrate by reflowing solder bumps on area-array metallized terminals on the die to matching footprints of solder-wettable pads on the chosen substrate. This advanced flip-chip interconnect technology offers significantly improved performance for high-speed systems, productivity enhancement over manual wire bonding, self-alignment during die joining, low lead inductance, and a reduced need for attachment of precious metals. The use of commercially developed no-flow fluxing underfills provides a means of reducing the processing steps employed in the traditional capillary-flow methods to enhance SMT compatibility. The reliability of flip chips may be significantly increased by matching or tailoring the CTEs of the substrate material, the silicon die or chip, and the underfill materials. Advanced packaging interconnect technology, such as the flip-chip interconnect test boards described here, has been subjected to temperature ranges that cover military specifications and extreme Mars and asteroid environments. The eventual goal of each process step, and of the entire process, is to produce components with 100 percent interconnect yield and to satisfy the reliability requirements. Underfill materials, in general, may meet demanding end-use requirements such as low warpage, low stress, fine pitch, high reliability, and high adhesion.

  7. Two Fundamental Issues in Multiprocessing.

    DTIC Science & Technology

    1987-10-01

    [Abstract not available in this record; only fragments of the report's front matter survive: figure titles "Structural Model of a Multiprocessor", "Operational Model of a Multiprocessor" (Figure 5), and "The von Neumann Processor" (Figure 6, from Gajski and Peir), plus a reference to Gajski, D. D. and Peir, J-K., "Essential Issues in Multiprocessor Systems," Computer 18(6), June 1985, 9-27.]

  8. Insertion of coherence requests for debugging a multiprocessor

    DOEpatents

    Blumrich, Matthias A.; Salapura, Valentina

    2010-02-23

    A method and system are disclosed to insert coherence events in a multiprocessor computer system, and to present those coherence events to the processors of the multiprocessor computer system for analysis and debugging purposes. The coherence events are inserted in the computer system by adding one or more special insert registers. By writing into the insert registers, coherence events are inserted in the multiprocessor system as if they were generated by the normal coherence protocol. Once these coherence events are processed, the processing of coherence events can continue in the normal operation mode.
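
    A behavioral toy model of the insert-register idea is sketched below: a write to a dedicated register pushes a software-crafted coherence event into the same queue that the normal protocol feeds, so downstream logic processes it exactly like real traffic. The register interface and event encoding are illustrative assumptions, not the patented design.

```python
# Injected events and protocol-generated events share one processing path.
from collections import deque

class SnoopQueue:
    def __init__(self):
        self.events = deque()

    def protocol_event(self, ev):          # normal coherence traffic
        self.events.append(ev)

    def write_insert_register(self, ev):   # software-injected event for debug
        self.events.append(ev)             # enters the same processing path

    def process(self):
        while self.events:
            print("processing", self.events.popleft())

q = SnoopQueue()
q.protocol_event(("invalidate", 0x1000))
q.write_insert_register(("invalidate", 0x2000))   # injected for debugging
q.process()
```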

  9. Method and apparatus for single-stepping coherence events in a multiprocessor system under software control

    DOEpatents

    Blumrich, Matthias A.; Salapura, Valentina

    2010-11-02

    An apparatus and method are disclosed for single-stepping coherence events in a multiprocessor system under software control in order to monitor the behavior of a memory coherence mechanism. Single-stepping coherence events in a multiprocessor system is made possible by adding one or more step registers. By accessing these step registers, one or more coherence requests are processed by the multiprocessor system. The step registers determine whether the snoop unit proceeds in normal execution mode or operates in single-step mode.

  10. Shared versus distributed memory multiprocessors

    NASA Technical Reports Server (NTRS)

    Jordan, Harry F.

    1991-01-01

    The question of whether multiprocessors should have shared or distributed memory has attracted a great deal of attention. Some researchers argue strongly for building distributed memory machines, while others argue just as strongly for programming shared memory multiprocessors. A great deal of research is underway on both types of parallel systems. Special emphasis is placed here on systems with a very large number of processors for computation-intensive tasks, and research and implementation trends are considered. It appears that the two types of systems will likely converge to a common form for large scale multiprocessors.

  11. Multiprocessor computer overset grid method and apparatus

    DOEpatents

    Barnette, Daniel W.; Ober, Curtis C.

    2003-01-01

    A multiprocessor computer overset grid method and apparatus comprises associating points in each overset grid with processors and using mapped interpolation transformations to communicate intermediate values between processors assigned base and target points of the interpolation transformations. The method allows a multiprocessor computer to operate with effective load balance on overset grid applications.

  12. A large-grain mapping approach for multiprocessor systems through data flow model. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Kim, Hwa-Soo

    1991-01-01

    A large-grain mapping method is presented for numerically oriented applications onto multiprocessor systems. The method is based on the large-grain data flow representation of the input application, and it assumes a general interconnection topology for the multiprocessor system. The large-grain data flow model was used because such a representation best exhibits the inherent parallelism in many important applications; e.g., CFD models based on partial differential equations can be represented very effectively in large-grain data flow format. A generalized interconnection topology of the multiprocessor architecture is considered, including such architectural issues as interprocessor communication cost, with the aim of identifying the 'best match' between the application and the multiprocessor structure. The objective is to minimize the total execution time of the input algorithm running on the target system. The mapping strategy consists of the following: (1) large-grain data flow graph generation from the input application using compilation techniques; (2) data flow graph partitioning into basic computation blocks; and (3) physical mapping onto the target multiprocessor using a priority allocation scheme for the computation blocks.
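
    Step (3) of the mapping strategy can be illustrated with a small greedy scheduler: computation blocks are visited in priority order and each is placed on the processor that minimizes its estimated finish time, charging a fixed communication penalty for predecessors mapped to a different processor. The cost model, the priorities, and the example graph are illustrative assumptions, not the dissertation's algorithm.

```python
# Greedy priority allocation of computation blocks to processors.
# Priorities are assumed to respect dependencies (every block has lower
# priority than its predecessors), so predecessors are always placed first.
def map_blocks(blocks, deps, cost, n_procs, comm_cost=1.0):
    """blocks: id -> priority; deps: id -> predecessor ids; cost: id -> time."""
    ready_at = [0.0] * n_procs              # when each processor becomes free
    finish, placement = {}, {}
    for b in sorted(blocks, key=blocks.get, reverse=True):   # high priority first
        best_p, best_t = None, float("inf")
        for p in range(n_procs):
            start = ready_at[p]
            for d in deps.get(b, []):
                penalty = comm_cost if placement[d] != p else 0.0
                start = max(start, finish[d] + penalty)       # wait for inputs
            if start + cost[b] < best_t:
                best_p, best_t = p, start + cost[b]
        placement[b], finish[b] = best_p, best_t
        ready_at[best_p] = best_t
    return placement

blocks = {"A": 3, "B": 2, "C": 2, "D": 1}
deps = {"C": ["A"], "D": ["A", "B"]}
cost = {"A": 4.0, "B": 3.0, "C": 2.0, "D": 1.0}
print(map_blocks(blocks, deps, cost, n_procs=2))   # -> {'A': 0, 'B': 1, 'C': 0, 'D': 1}
```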

  13. High Performance, Dependable Multiprocessor

    NASA Technical Reports Server (NTRS)

    Ramos, Jeremy; Samson, John R.; Troxel, Ian; Subramaniyan, Rajagopal; Jacobs, Adam; Greco, James; Cieslewski, Grzegorz; Curreri, John; Fischer, Michael; Grobelny, Eric; hide

    2006-01-01

    With the ever-increasing demand for higher bandwidth and processing capacity in today's space exploration, space science, and defense missions, the ability to efficiently apply commercial-off-the-shelf (COTS) processors for on-board computing is now a critical need. In response to this need, NASA's New Millennium Program office has commissioned the development of Dependable Multiprocessor (DM) technology for use in payload and robotic missions. The Dependable Multiprocessor technology is a COTS-based, power-efficient, high-performance, highly dependable, fault-tolerant cluster computer. To date, Honeywell has successfully demonstrated a TRL4 prototype of the Dependable Multiprocessor [1], and is now working on the development of a TRL5 prototype. For the present effort Honeywell has teamed up with the University of Florida's High-performance Computing and Simulation (HCS) Lab, and together the team has demonstrated major elements of the Dependable Multiprocessor TRL5 system.

  14. VME rollback hardware for time warp multiprocessor systems

    NASA Technical Reports Server (NTRS)

    Robb, Michael J.; Buzzell, Calvin A.

    1992-01-01

    The purpose of the research effort is to develop and demonstrate innovative hardware to implement specific rollback and timing functions required for efficient queue management and precision timekeeping in multiprocessor discrete event simulations. The previously completed phase 1 effort demonstrated the technical feasibility of building hardware modules which eliminate the state saving overhead of the Time Warp paradigm used in distributed simulations on multiprocessor systems. The current phase 2 effort will build multiple pre-production rollback hardware modules integrated with a network of Sun workstations, and the integrated system will be tested by executing a Time Warp simulation. The rollback hardware will be designed to interface with the greatest number of multiprocessor systems possible. The authors believe that the rollback hardware will provide for significant speedup of large scale discrete event simulation problems and allow multiprocessors using Time Warp to dramatically increase performance.
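
    The overhead targeted by the rollback hardware is the software state saving that Time Warp requires. The sketch below shows a logical process that snapshots its state before every event so that a straggler (an event with a timestamp in its past) can trigger a rollback. The state, event format, and re-execution policy are illustrative; a full Time Warp implementation would also re-execute the rolled-back events and send anti-messages.

```python
# Minimal Time Warp logical process with software state saving and rollback.
import copy

class LogicalProcess:
    def __init__(self):
        self.state = {"count": 0}
        self.lvt = 0.0                   # local virtual time
        self.saved = []                  # (event timestamp, state, lvt) snapshots

    def execute(self, timestamp, increment):
        if timestamp < self.lvt:         # straggler: undo events at or after it
            while self.saved and self.saved[-1][0] >= timestamp:
                _, self.state, self.lvt = self.saved.pop()
        # snapshot before every event -- the overhead the hardware is meant to remove
        self.saved.append((timestamp, copy.deepcopy(self.state), self.lvt))
        self.lvt = timestamp
        self.state["count"] += increment

lp = LogicalProcess()
for t, inc in [(1.0, 5), (3.0, 7), (2.0, 1)]:    # the last event is a straggler
    lp.execute(t, inc)
print(lp.lvt, lp.state)                           # -> 2.0 {'count': 6}
```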

  15. Object classification for obstacle avoidance

    NASA Astrophysics Data System (ADS)

    Regensburger, Uwe; Graefe, Volker

    1991-03-01

    Object recognition is necessary for any mobile robot operating autonomously in the real world. This paper discusses an object classifier based on a 2-D object model. Obstacle candidates are tracked and analyzed; false alarms generated by the object detector are recognized and rejected. The methods have been implemented on a multi-processor system and tested in real-world experiments. They work reliably under favorable conditions, but problems sometimes occur, e.g., when objects contain many features (edges) or move in front of a structured background.

  16. Assembly reliability of CSPs with various chip sizes by accelerated thermal and mechanical cycling test

    NASA Technical Reports Server (NTRS)

    Ghaffarian, R.

    2000-01-01

    A JPL-led chip scale package (CSP) Consortium, composed of team members representing government agencies and private companies, recently joined together to pool in-kind resources for developing the quality and reliability of chip scale packages (CSPs) for a variety of projects.

  17. Operating system for a real-time multiprocessor propulsion system simulator. User's manual

    NASA Technical Reports Server (NTRS)

    Cole, G. L.

    1985-01-01

    The NASA Lewis Research Center is developing and evaluating experimental hardware and software systems to help meet future needs for real-time, high-fidelity simulations of air-breathing propulsion systems. Specifically, the real-time multiprocessor simulator project focuses on the use of multiple microprocessors to achieve the required computing speed and accuracy at relatively low cost. Operating systems for such hardware configurations are generally not available. A real time multiprocessor operating system (RTMPOS) that supports a variety of multiprocessor configurations was developed at Lewis. With some modification, RTMPOS can also support various microprocessors. RTMPOS, by means of menus and prompts, provides the user with a versatile, user-friendly environment for interactively loading, running, and obtaining results from a multiprocessor-based simulator. The menu functions are described and an example simulation session is included to demonstrate the steps required to go from the simulation loading phase to the execution phase.

  18. VxWorks 6.9 for LEON

    NASA Astrophysics Data System (ADS)

    Cederman, Daniel; Hellstrom, Daniel

    2016-08-01

    The VxWorks operating system together with the Cobham Gaisler LEON architectural port provides an efficient platform for the development of software for space applications. It supports both uni- and multiprocessor modes (SMP or AMP) and comes with an integrated development environment with several debugging and analysis tools. The LEON architectural port from Cobham Gaisler supports LEON2/3/4 systems and includes drivers for all standard on-chip peripherals, as well as support for RASTA boards. In this paper we highlight some of the many features of VxWorks and the LEON architectural port. The latest version of the architectural port now supports VxWorks 6.9 (the previous version was for VxWorks 6.7) and adds support for the GR740, the commercially available quad-core LEON system designed as the European Space Agency's Next Generation Microprocessor (NGMP).

  19. Resource Management for Distributed Parallel Systems

    NASA Technical Reports Server (NTRS)

    Neuman, B. Clifford; Rao, Santosh

    1993-01-01

    Multiprocessor systems should exist in the larger context of distributed systems, allowing multiprocessor resources to be shared by those that need them. Unfortunately, typical multiprocessor resource management techniques do not scale to large networks. The Prospero Resource Manager (PRM) is a scalable resource allocation system that supports the allocation of processing resources in large networks and multiprocessor systems. To manage resources in such distributed parallel systems, PRM employs three types of managers: system managers, job managers, and node managers. There exist multiple independent instances of each type of manager, reducing bottlenecks. The complexity of each manager is further reduced because each is designed to utilize information at an appropriate level of abstraction.

  20. File-System Workload on a Scientific Multiprocessor

    NASA Technical Reports Server (NTRS)

    Kotz, David; Nieuwejaar, Nils

    1995-01-01

    Many scientific applications have intense computational and I/O requirements. Although multiprocessors have permitted astounding increases in computational performance, the formidable I/O needs of these applications cannot be met by current multiprocessors and their I/O subsystems. To prevent I/O subsystems from forever bottlenecking multiprocessors and limiting the range of feasible applications, new I/O subsystems must be designed. The successful design of computer systems (both hardware and software) depends on a thorough understanding of their intended use. A system designer optimizes the policies and mechanisms for the cases expected to be most common in the user's workload. In the case of multiprocessor file systems, however, designers have been forced to build file systems based only on speculation about how they would be used, extrapolating from file-system characterizations of general-purpose workloads on uniprocessor and distributed systems or scientific workloads on vector supercomputers (see sidebar on related work). To help these system designers, in June 1993 we began the Charisma Project, so named because the project sought to characterize I/O in scientific multiprocessor applications from a variety of production parallel computing platforms and sites. The Charisma project is unique in recording individual read and write requests in live, multiprogramming, parallel workloads (rather than from selected or nonparallel applications). In this article, we present the first results from the project: a characterization of the file-system workload on an iPSC/860 multiprocessor running production, parallel scientific applications at NASA's Ames Research Center.

  1. Multichannel microfluidic chip for rapid and reliable trapping and imaging plant-parasitic nematodes

    NASA Astrophysics Data System (ADS)

    Amrit, Ratthasart; Sripumkhai, Witsaroot; Porntheeraphat, Supanit; Jeamsaksiri, Wutthinan; Tangchitsomkid, Nuchanart; Sutapun, Boonsong

    2013-05-01

    A fast and reliable testing technique to count and identify nematode species residing in plant roots is essential for export control and certification. This work proposes utilizing a multichannel microfluidic chip with an integrated flow-through microfilter to retain the nematodes in a trapping chamber. Once trapped, the nematodes can be conveniently imaged and their species later identified by a trained technician. Multiple samples can be tested in parallel using the proposed microfluidic chip, thereby increasing the number of samples tested per day.

  2. On-chip high frequency reliability and failure test structures

    DOEpatents

    Snyder, Eric S.; Campbell, David V.

    1997-01-01

    Self-stressing test structures for realistic high frequency reliability characterizations. An on-chip high frequency oscillator, controlled by DC signals from off-chip, provides a range of high frequency pulses to test structures. The test structures provide information with regard to a variety of reliability failure mechanisms, including hot-carriers, electromigration, and oxide breakdown. The system is normally integrated at the wafer level to predict the failure mechanisms of the production integrated circuits on the same wafer.

  3. Optimization of a PCRAM Chip for high-speed read and highly reliable reset operations

    NASA Astrophysics Data System (ADS)

    Li, Xiaoyun; Chen, Houpeng; Li, Xi; Wang, Qian; Fan, Xi; Hu, Jiajun; Lei, Yu; Zhang, Qi; Tian, Zhen; Song, Zhitang

    2016-10-01

    The widely used traditional Flash memory suffers from performance limits such as serious crosstalk problems and the increasing complexity of floating-gate scaling. Phase change random access memory (PCRAM) has become one of the most promising nonvolatile memories among the new memory technologies. In this paper, a 1M-bit PCRAM chip is designed based on the SMIC 40nm CMOS technology. Focusing on read and write performance, two new circuits with high-speed read operation and highly reliable reset operation are proposed. The high-speed read circuit effectively reduces the read time from 74 ns to 40 ns. The double-mode reset circuit improves the chip yield. This 1M-bit PCRAM chip has been simulated in Cadence. After the layout design is completed, the chip will be taped out for post-silicon testing.

  4. Physics-based process modeling, reliability prediction, and design guidelines for flip-chip devices

    NASA Astrophysics Data System (ADS)

    Michaelides, Stylianos

    Flip Chip on Board (FCOB) and Chip-Scale Packages (CSPs) are relatively new technologies that are being increasingly used in the electronic packaging industry. Compared to the more widely used face-up wirebonding and TAB technologies, flip-chips and most CSPs provide the shortest possible leads, lower inductance, higher frequency, better noise control, higher density, greater input/output (I/O), smaller device footprint and lower profile. However, due to the short history and due to the introduction of several new electronic materials, designs, and processing conditions, very limited work has been done to understand the role of material, geometry, and processing parameters on the reliability of flip-chip devices. Also, with the ever-increasing complexity of semiconductor packages and with the continued reduction in time to market, it is too costly to wait until the later stages of design and testing to discover that the reliability is not satisfactory. The objective of the research is to develop integrated process-reliability models that will take into consideration the mechanics of assembly processes to be able to determine the reliability of face-down devices under thermal cycling and long-term temperature dwelling. The models incorporate the time and temperature-dependent constitutive behavior of various materials in the assembly to be able to predict failure modes such as die cracking and solder cracking. In addition, the models account for process-induced defects and macro-micro features of the assembly. Creep-fatigue and continuum-damage mechanics models for the solder interconnects and fracture-mechanics models for the die have been used to determine the reliability of the devices. The results predicted by the models have been successfully validated against experimental data. The validated models have been used to develop qualification and test procedures for implantable medical devices. In addition, the research has helped develop innovative face-down devices without the underfill, based on the thorough understanding of the failure modes. Also, practical design guidelines for material, geometry and process parameters for reliable flip-chip devices have been developed.

  5. Shared performance monitor in a multiprocessor system

    DOEpatents

    Chiu, George; Gara, Alan G.; Salapura, Valentina

    2012-07-24

    A performance monitoring unit (PMU) and method for monitoring performance of events occurring in a multiprocessor system. The multiprocessor system comprises a plurality of processor units, each processor unit generating signals that represent occurrences of events in that unit, and a single shared counter resource for performance monitoring. The performance monitor unit is shared by all processor cores in the multiprocessor system. The PMU comprises: a plurality of performance counters, each for counting signals representing occurrences of events from one or more of the plurality of processor units in the multiprocessor system; and a plurality of input devices for receiving the event signals from one or more processor devices of the plurality of processor units, the plurality of input devices being programmable to select event signals for receipt by one or more of the plurality of performance counters for counting, wherein the PMU is shared between multiple processing units, or within a group of processors in the multiprocessing system. The PMU is further programmed to monitor event signals issued from non-processor devices.
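
    The programmable routing of event signals to a shared counter pool can be modeled in a few lines; the counter count, the source names, and the programming interface below are illustrative assumptions rather than the patented design.

```python
# Toy shared PMU: programmable selectors route (source, event) pairs,
# including events from non-processor devices, to a small pool of counters.
class SharedPMU:
    def __init__(self, n_counters=4):
        self.counters = [0] * n_counters
        self.routing = {}                     # (source, event) -> counter index

    def program(self, source, event, counter):
        self.routing[(source, event)] = counter

    def signal(self, source, event):
        idx = self.routing.get((source, event))
        if idx is not None:
            self.counters[idx] += 1

pmu = SharedPMU()
pmu.program("cpu0", "l2_miss", 0)
pmu.program("dma", "burst_done", 1)           # non-processor device
for _ in range(3):
    pmu.signal("cpu0", "l2_miss")
pmu.signal("dma", "burst_done")
print(pmu.counters)                            # -> [3, 1, 0, 0]
```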

  6. Development and psychometric properties of the Carer - Head Injury Neurobehavioral Assessment Scale (C-HINAS) and the Carer - Head Injury Participation Scale (C-HIPS): patient and family determined outcome scales.

    PubMed

    Deb, Shoumitro; Bryant, Eleanor; Morris, Paul G; Prior, Lindsay; Lewis, Glyn; Haque, Sayeed

    2007-06-01

    Develop and assess the psychometric properties of the Carer - Head Injury Participation Scale (C-HIPS) and its biggest factor the Carer - Head Injury Neurobehavioral Assessment Scale (C-HINAS). Furthermore, the aim was to examine the inter-informant reliability by comparing the self reports of individuals with traumatic brain injury (TBI) with the carer reports on the C-HIPS and the C-HINAS. Thirty-two TBI individuals and 27 carers took part in in-depth qualitative interviews exploring the consequences of the TBI. Interview transcripts were analysed and key themes and concepts were used to construct a 49-item and 58-item patient (Patient - Head Injury Participation Scale [P-HIPS]) and carer outcome measure (C-HIPS) respectively, of which 49 were parallel items and nine additional items were used to assess carer burden. Postal versions of the P-HIPS, C-HIPS, Mayo Portland Adaptability Inventory-3 (MPAI-3), and the Glasgow Outcome Scale-Extended (GOSE) were completed by a cohort of 113 TBI individuals and 80 carers. Data from a sub-group of 66 patient/carer pairs were used to compare inter-informant reliability between the P-HIPS and the C-HIPS, and the P-HINAS and the C-HINAS respectively. All individual 49 items of the C-HIPS and their total score showed good test-retest reliability (0.95) and internal consistency (0.95). Comparisons with the MPAI-3 and GOSE found a good correlation with the MPAI-3 (0.7) and a moderate negative correlation with the GOSE (-0.6). Factor analysis of these items extracted a 4-factor structure which represented the domains 'Emotion/Behavior' (C-HINAS), 'Independence/Community Living', 'Cognition', and 'Physical'. The C-HINAS showed good internal consistency (0.92), test-retest reliability (0.93), and concurrent validity with one MPAI subscale (0.7). Assessment of inter-informant reliability revealed good correspondence between the reports of the patients and the carers for both the C-HIPS (0.83) and the C-HINAS (0.82). Both the C-HINAS and the C-HIPS show strong psychometric properties. The qualitative methodology employed in the construction stage of the questionnaires provided good evidence of face and content validity. Comparisons between the P-HIPS and the C-HIPS, and the P-HINAS and the C-HINAS indicated high levels of agreement suggesting that in situations where the patient is unable to provide self-reports, information provided by the carer could be used.

  7. A cache-aided multiprocessor rollback recovery scheme

    NASA Technical Reports Server (NTRS)

    Wu, Kun-Lung; Fuchs, W. Kent

    1989-01-01

    This paper demonstrates how previous uniprocessor cache-aided recovery schemes can be applied to multiprocessor architectures, for recovering from transient processor failures, utilizing private caches and a global shared memory. As with cache-aided uniprocessor recovery, the multiprocessor cache-aided recovery scheme of this paper can be easily integrated into standard bus-based snoopy cache coherence protocols. A consistent shared memory state is maintained without the necessity of global check-pointing.

  8. Real-time PCR machine system modeling and a systematic approach for the robust design of a real-time PCR-on-a-chip system.

    PubMed

    Lee, Da-Sheng

    2010-01-01

    Chip-based DNA quantification systems are widespread, and used in many point-of-care applications. However, instruments for such applications may not be maintained or calibrated regularly. Since machine reliability is a key issue for normal operation, this study presents a system model of the real-time Polymerase Chain Reaction (PCR) machine to analyze the instrument design through numerical experiments. Based on model analysis, a systematic approach was developed to lower the variation of DNA quantification and achieve a robust design for a real-time PCR-on-a-chip system. Accelerated life testing was adopted to evaluate the reliability of the chip prototype. According to the life test plan, this proposed real-time PCR-on-a-chip system was simulated to work continuously for over three years with similar reproducibility in DNA quantification. This not only shows the robustness of the lab-on-a-chip system, but also verifies the effectiveness of our systematic method for achieving a robust design.
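
    Accelerated life test results are commonly extrapolated to use conditions with an Arrhenius acceleration factor; the sketch below shows that generic calculation with an assumed activation energy, stress temperature, and test duration, none of which are taken from the paper's life test plan.

```python
# Generic Arrhenius extrapolation from an accelerated life test to field use.
import math

K_BOLTZMANN_EV = 8.617e-5          # Boltzmann constant, eV/K

def arrhenius_af(ea_ev, t_use_c, t_stress_c):
    t_use, t_stress = t_use_c + 273.15, t_stress_c + 273.15
    return math.exp((ea_ev / K_BOLTZMANN_EV) * (1.0 / t_use - 1.0 / t_stress))

af = arrhenius_af(ea_ev=0.7, t_use_c=25.0, t_stress_c=85.0)   # assumed values
test_hours = 1000.0                                           # assumed test length
print(f"acceleration factor {af:.1f}, "
      f"equivalent field life {af * test_hours / 8760:.1f} years")
```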

  9. On-chip high frequency reliability and failure test structures

    DOEpatents

    Snyder, E.S.; Campbell, D.V.

    1997-04-29

    Self-stressing test structures for realistic high frequency reliability characterizations. An on-chip high frequency oscillator, controlled by DC signals from off-chip, provides a range of high frequency pulses to test structures. The test structures provide information with regard to a variety of reliability failure mechanisms, including hot-carriers, electromigration, and oxide breakdown. The system is normally integrated at the wafer level to predict the failure mechanisms of the production integrated circuits on the same wafer. 22 figs.

  10. Combating the Reliability Challenge of GPU Register File at Low Supply Voltage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tan, Jingweijia; Song, Shuaiwen; Yan, Kaige

    Supply voltage reduction is an effective approach to significantly reduce GPU energy consumption. As the largest on-chip storage structure, the GPU register file becomes the reliability hotspot that prevents further supply voltage reduction below the safe limit (Vmin) due to process variation effects. This work addresses the reliability challenge of the GPU register file at low supply voltages, which is an essential first step for aggressive supply voltage reduction of the entire GPU chip. We propose GR-Guard, an architectural solution that leverages long register dead time to enable reliable operation from an unreliable register file at low voltages.

  11. State recovery and lockstep execution restart in a system with multiprocessor pairing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gara, Alan; Gschwind, Michael K; Salapura, Valentina

    System, method and computer program product for a multiprocessing system to offer selective pairing of processor cores for increased processing reliability. A selective pairing facility is provided that selectively connects, i.e., pairs, multiple microprocessor or processor cores to provide one highly reliable thread (or thread group). Each pair of processor cores that provides one highly reliable thread connects with system components such as a memory "nest" (or memory hierarchy), an optional system controller, an optional interrupt controller, and optional I/O or peripheral devices. The memory nest is attached to the selective pairing facility via a switch or a bus. Each selectively paired processor core includes a transactional execution facility, wherein the system is configured to enable processor rollback to a previous state and to reinitialize lockstep execution in order to recover from an incorrect execution when one has been detected by the selective pairing facility.

  12. From LPF to eLISA: new approach in payload software

    NASA Astrophysics Data System (ADS)

    Gesa, Ll.; Martin, V.; Conchillo, A.; Ortega, J. A.; Mateos, I.; Torrents, A.; Lopez-Zaragoza, J. P.; Rivas, F.; Lloro, I.; Nofrarias, M.; Sopuerta, CF.

    2017-05-01

    eLISA will be the first observatory in space to explore the Gravitational Universe. It will gather revolutionary information about the dark universe, which requires robust and reliable embedded control software and hardware working together. With the lessons learned from the LISA Pathfinder payload software as a baseline, this short article introduces the key concepts and new approaches our group is working on in terms of software: multiprocessing, self-modifying-code strategies, 100% hardware and software monitoring, embedded scripting, and time and space partitioning, among others.

  13. CMOS Active Pixel Sensor Technology and Reliability Characterization Methodology

    NASA Technical Reports Server (NTRS)

    Chen, Yuan; Guertin, Steven M.; Pain, Bedabrata; Kayali, Sammy

    2006-01-01

    This paper describes the technology, design features and reliability characterization methodology of a CMOS Active Pixel Sensor. Both overall chip reliability and pixel reliability are projected for the imagers.

  14. Thermal Hotspots in CPU Die and Its Future Architecture

    NASA Astrophysics Data System (ADS)

    Wang, Jian; Hu, Fu-Yuan

    Owing to increasing core frequency and chip integration and the limited die dimensions, the power density in CPU chips has been increasing rapidly. The resulting high on-chip temperatures threaten the processor's performance and the chip's reliability. This paper analyzes the thermal hotspots in the die and their properties. A new arrangement of functional units in the die, a distributed hot-unit architecture, is suggested to cope with the problem of high power density in future processor chips.

  15. Smart-Pixel Array Processors Based on Optimal Cellular Neural Networks for Space Sensor Applications

    NASA Technical Reports Server (NTRS)

    Fang, Wai-Chi; Sheu, Bing J.; Venus, Holger; Sandau, Rainer

    1997-01-01

    A smart-pixel cellular neural network (CNN) with hardware annealing capability, digitally programmable synaptic weights, and multisensor parallel interface has been under development for advanced space sensor applications. The smart-pixel CNN architecture is a programmable multi-dimensional array of optoelectronic neurons which are locally connected with their local neurons and associated active-pixel sensors. Integration of the neuroprocessor in each processor node of a scalable multiprocessor system offers orders-of-magnitude computing performance enhancements for on-board real-time intelligent multisensor processing and control tasks of advanced small satellites. The smart-pixel CNN operation theory, architecture, design and implementation, and system applications are investigated in detail. The VLSI (Very Large Scale Integration) implementation feasibility was illustrated by a prototype smart-pixel 5x5 neuroprocessor array chip of active dimensions 1380 micron x 746 micron in a 2-micron CMOS technology.

  16. Nuclear Science Symposium, 31st and Symposium on Nuclear Power Systems, 16th, Orlando, FL, October 31-November 2, 1984, Proceedings

    NASA Technical Reports Server (NTRS)

    Biggerstaff, J. A. (Editor)

    1985-01-01

    Topics related to physics instrumentation are discussed, taking into account cryostat and electronic development associated with multidetector spectrometer systems, the influence of materials and counting-rate effects on He-3 neutron spectrometry, a data acquisition system for time-resolved muscle experiments, and a sensitive null detector for precise measurements of integral linearity. Other subjects explored are concerned with space instrumentation, computer applications, detectors, instrumentation for high energy physics, instrumentation for nuclear medicine, environmental monitoring and health physics instrumentation, nuclear safeguards and reactor instrumentation, and a 1984 symposium on nuclear power systems. Attention is given to the application of multiprocessors to scientific problems, a large-scale computer facility for computational aerodynamics, a single-board 32-bit computer for the Fastbus, the integration of detector arrays and readout electronics on a single chip, and three-dimensional Monte Carlo simulation of the electron avalanche in a proportional counter.

  17. Delamination study of chip-to-chip bonding for a LIGA-based safety and arming system

    NASA Astrophysics Data System (ADS)

    Subramanian, Gowrishankar; Deeds, Michael; Cochran, Kevin R.; Raghavan, Raghu; Sandborn, Peter A.

    1999-08-01

    The development of a miniature underwater weapon safety and arming system requires reliable chip-to-chip bonding of die that contain microelectromechanical actuators and sensors fabricated using a LIGA MEMS fabrication process. Chip-to-chip bonding is assessed for several different bond materials (indium solder, thermoplastic paste, thermoplastic film and epoxy film) and bonding configurations (with an alloy 42 spacer, silicon to ceramic, and silicon to silicon). Metrology using acoustic micro imaging has been developed to determine the fraction of delamination of the samples.

  18. NEPP Evaluation of Automotive Grade Tantalum Chip Capacitors

    NASA Technical Reports Server (NTRS)

    Sampson, Mike; Brusse, Jay

    2018-01-01

    Automotive grade tantalum (Ta) chip capacitors are available at lower cost with smaller physical size and higher volumetric efficiency compared to military/space grade capacitors. Designers of high reliability aerospace and military systems would like to take advantage of these attributes while maintaining the high standards for long-term reliable operation they are accustomed to when selecting military-qualified established reliability tantalum chip capacitors (e.g., MIL-PRF-55365). The objective of this evaluation was to assess the long-term performance of off-the-shelf automotive grade Ta chip capacitors (i.e., manufacturer self-qualified per AEC Q-200). Two lots of case size D manganese dioxide (MnO2) cathode Ta chip capacitors from one manufacturer were evaluated. The evaluation consisted of construction analysis, basic electrical parameter characterization, extended long-term (2000 hours) life testing and some accelerated stress testing. Tests and acceptance criteria were based upon manufacturer datasheets and the Automotive Electronics Council's AEC Q-200 qualification specification for passive electronic components. As received, a few capacitors were marginally above the specified tolerances for capacitance and ESR. X-ray inspection found that the anodes of some devices may not be properly aligned within the molded encapsulation, leaving less than 1 mil of encapsulation thickness. This evaluation found that the long-term life performance of automotive grade Ta chip capacitors is generally within specification limits, suggesting these capacitors may be suitable for some space applications.

  19. Optimal use of tandem biotin and V5 tags in ChIP assays

    PubMed Central

    Kolodziej, Katarzyna E; Pourfarzad, Farzin; de Boer, Ernie; Krpic, Sanja; Grosveld, Frank; Strouboulis, John

    2009-01-01

    Background Chromatin immunoprecipitation (ChIP) assays coupled to genome arrays (ChIP-on-chip) or massively parallel sequencing (ChIP-seq) lead to the genome-wide identification of binding sites of chromatin associated proteins. However, the highly variable quality of antibodies and the availability of epitopes in crosslinked chromatin can compromise genomic ChIP outcomes. Epitope tags have often been used as more reliable alternatives. In addition, we have employed protein in vivo biotinylation tagging as a very high affinity alternative to antibodies. In this paper we describe the optimization of biotinylation tagging for ChIP and its coupling to a known epitope tag in providing a reliable and efficient alternative to antibodies. Results Using the biotin-tagged erythroid transcription factor GATA-1 as an example, we describe several optimization steps for the application of the high affinity biotin-streptavidin system in ChIP. We find that the omission of SDS during sonication, the use of fish skin gelatin as a blocking agent and the choice of streptavidin beads can lead to significantly improved ChIP enrichments and lower background compared to antibodies. We also show that the V5 epitope tag performs equally well under the conditions worked out for streptavidin ChIP and that it may suffer less from the effects of formaldehyde crosslinking. Conclusion The combined use of the very high affinity biotin tag with the less crosslinking-sensitive V5 tag provides a flexible ChIP platform with potential implications for ChIP sequencing outcomes. PMID:19196479

  20. Parallel and fault-tolerant algorithms for hypercube multiprocessors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aykanat, C.

    1988-01-01

    Several techniques for increasing the performance of parallel algorithms on distributed-memory message-passing multi-processor systems are investigated. These techniques are effectively implemented for the parallelization of the Scaled Conjugate Gradient (SCG) algorithm on a hypercube connected message-passing multi-processor. Significant performance improvement is achieved by using these techniques. The SCG algorithm is used for the solution phase of an FE modeling system. Almost linear speed-up is achieved, and it is shown that hypercube topology is scalable for an FE class of problem. The SCG algorithm is also shown to be suitable for vectorization, and near supercomputer performance is achieved on a vector hypercube multiprocessor by exploiting both parallelization and vectorization. Fault-tolerance issues for the parallel SCG algorithm and for the hypercube topology are also addressed.

  1. Development and psychometric properties of the Carer – Head Injury Neurobehavioral Assessment Scale (C-HINAS) and the Carer – Head Injury Participation Scale (C-HIPS): patient and family determined outcome scales

    PubMed Central

    Deb, Shoumitro; Bryant, Eleanor; Morris, Paul G; Prior, Lindsay; Lewis, Glyn; Haque, Sayeed

    2007-01-01

    Objective Develop and assess the psychometric properties of the Carer – Head Injury Participation Scale (C-HIPS) and its biggest factor the Carer – Head Injury Neurobehavioral Assessment Scale (C-HINAS). Furthermore, the aim was to examine the inter-informant reliability by comparing the self reports of individuals with traumatic brain injury (TBI) with the carer reports on the C-HIPS and the C-HINAS. Method Thirty-two TBI individuals and 27 carers took part in in-depth qualitative interviews exploring the consequences of the TBI. Interview transcripts were analysed and key themes and concepts were used to construct a 49-item and 58-item patient (Patient – Head Injury Participation Scale [P-HIPS]) and carer outcome measure (C-HIPS) respectively, of which 49 were parallel items and nine additional items were used to assess carer burden. Postal versions of the P-HIPS, C-HIPS, Mayo Portland Adaptability Inventory-3 (MPAI-3), and the Glasgow Outcome Scale-Extended (GOSE) were completed by a cohort of 113 TBI individuals and 80 carers. Data from a sub-group of 66 patient/carer pairs were used to compare inter-informant reliability between the P-HIPS and the C-HIPS, and the P-HINAS and the C-HINAS respectively. Results All individual 49 items of the C-HIPS and their total score showed good test-retest reliability (0.95) and internal consistency (0.95). Comparisons with the MPAI-3 and GOSE found a good correlation with the MPAI-3 (0.7) and a moderate negative correlation with the GOSE (−0.6). Factor analysis of these items extracted a 4-factor structure which represented the domains ‘Emotion/Behavior’ (C-HINAS), ‘Independence/Community Living’, ‘Cognition’, and ‘Physical’. The C-HINAS showed good internal consistency (0.92), test-retest reliability (0.93), and concurrent validity with one MPAI subscale (0.7). Assessment of inter-informant reliability revealed good correspondence between the reports of the patients and the carers for both the C-HIPS (0.83) and the C-HINAS (0.82). Conclusion Both the C-HINAS and the C-HIPS show strong psychometric properties. The qualitative methodology employed in the construction stage of the questionnaires provided good evidence of face and content validity. Comparisons between the P-HIPS and the C-HIPS, and the P-HINAS and the C-HINAS indicated high levels of agreement suggesting that in situations where the patient is unable to provide self-reports, information provided by the carer could be used. PMID:19300569

  2. [Test of thermal deformation for electronic devices of high thermal reliability].

    PubMed

    Li, Hai-yuan; Li, Bao-ming

    2002-06-01

    Thermal deformation can be caused by high local heat flux and can greatly reduce the thermal reliability of electronic devices. In this paper, an attempt is made to measure the thermal deformation of high-power electronic devices under working conditions using double-exposure laser holographic interferometry. Laser holographic interferometry is a non-contact measurement technique with precision down to the micron scale. The electronic device chosen for measurement is a type of solid state relay used for the ignition of rockets. The output circuit of the solid state relay is built around a MOSFET chip, and the power density of the chip can reach high values. In particular situations, thermal deformation and stress may significantly influence the working performance of the solid state relay. The bulk deformation of the chip and its mount is estimated from the number of interference fringes on the chip surface, while thermal stress and deformation can be estimated from the curvature of the fringes. Experimental results indicate that under high power there are more fringes on the chip surface and the fringes are more strongly curved. These results reflect larger out-of-plane displacement and deformation of the chip as the load current increases.

  3. Development and evaluation of a fault-tolerant multiprocessor (FTMP) computer. Volume 1: FTMP principles of operation

    NASA Technical Reports Server (NTRS)

    Smith, T. B., Jr.; Lala, J. H.

    1983-01-01

    The basic organization of the fault-tolerant multiprocessor (FTMP) is that of a general-purpose homogeneous multiprocessor. Three processors operate on a shared system (memory and I/O) bus. Replication and tight synchronization of all elements, together with hardware voting, are employed to detect and correct any single fault, and reconfiguration is then employed to repair the fault. Multiple faults may be tolerated as a sequence of single faults with repair between fault occurrences.
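
    As a rough illustration of the voting idea (not FTMP's actual bus-level voting hardware), the sketch below majority-votes the outputs of three replicated processors word by word and reports the disagreeing channel so that reconfiguration could retire it.

        #include <stdio.h>
        #include <stdint.h>

        /* Bit-wise 2-of-3 majority vote over three replicated outputs.
         * A single faulty word is out-voted; the dissenting channel is reported. */
        static uint32_t vote(uint32_t a, uint32_t b, uint32_t c, int *faulty)
        {
            uint32_t majority = (a & b) | (a & c) | (b & c);
            *faulty = -1;                       /* -1 means all three agree */
            if (a != majority) *faulty = 0;
            else if (b != majority) *faulty = 1;
            else if (c != majority) *faulty = 2;
            return majority;
        }

        int main(void)
        {
            int faulty;
            /* Channel 1 delivers a corrupted word (illustrative values). */
            uint32_t result = vote(0x1234u, 0x1235u, 0x1234u, &faulty);
            printf("voted result = 0x%04X, faulty channel = %d\n",
                   (unsigned)result, faulty);
            return 0;
        }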

  4. Compiler-directed cache management in multiprocessors

    NASA Technical Reports Server (NTRS)

    Cheong, Hoichi; Veidenbaum, Alexander V.

    1990-01-01

    The necessity of finding alternatives to hardware-based cache coherence strategies for large-scale multiprocessor systems is discussed. Three different software-based strategies sharing the same goals and general approach are presented. They consist of a simple invalidation approach, a fast selective invalidation scheme, and a version control scheme. The strategies are suitable for shared-memory multiprocessor systems with interconnection networks and a large number of processors. Results of trace-driven simulations conducted on numerical benchmark routines to compare the performance of the three schemes are presented.
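
    The schemes above are compiler-directed rather than hardware-based. As a loose, hypothetical illustration of the version-control idea (not the exact algorithm in the report), the sketch below keeps a current version number per shared variable; a cached copy whose recorded version lags behind is treated as stale and re-fetched from shared memory.

        #include <stdio.h>
        #include <stdbool.h>

        /* Hypothetical sketch: a cached copy records the version of the shared
         * variable it was filled from.  The compiler bumps the current version at
         * points where another processor may have written the variable, so a stale
         * copy forces a re-fetch instead of an (incoherent) cache hit. */
        typedef struct {
            int value;    /* cached data                 */
            int version;  /* version at the time of fill */
        } CachedCopy;

        static int shared_x = 42;        /* shared variable in global memory */
        static int current_version = 1;  /* bumped at compiler-chosen points */

        static int read_x(CachedCopy *c)
        {
            bool stale = (c->version != current_version);
            if (stale) {                 /* compiler-directed invalidation */
                c->value = shared_x;
                c->version = current_version;
            }
            printf("%s, x = %d\n", stale ? "re-fetched" : "cache hit", c->value);
            return c->value;
        }

        int main(void)
        {
            CachedCopy copy = { 0, 0 };
            read_x(&copy);               /* first access: fetch              */
            read_x(&copy);               /* same epoch: cache hit            */
            shared_x = 99;               /* another processor writes x ...   */
            current_version++;           /* ... marked by the compiler       */
            read_x(&copy);               /* stale copy is re-fetched         */
            return 0;
        }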

  5. Real-time PCR Machine System Modeling and a Systematic Approach for the Robust Design of a Real-time PCR-on-a-Chip System

    PubMed Central

    Lee, Da-Sheng

    2010-01-01

    Chip-based DNA quantification systems are widespread and used in many point-of-care applications. However, instruments for such applications may not be maintained or calibrated regularly. Since machine reliability is a key issue for normal operation, this study presents a system model of the real-time Polymerase Chain Reaction (PCR) machine to analyze the instrument design through numerical experiments. Based on model analysis, a systematic approach was developed to lower the variation of DNA quantification and achieve a robust design for a real-time PCR-on-a-chip system. Accelerated life testing was adopted to evaluate the reliability of the chip prototype. According to the life test plan, the proposed real-time PCR-on-a-chip system was simulated to work continuously for over three years with similar reproducibility in DNA quantification. This not only shows the robustness of the lab-on-a-chip system, but also verifies the effectiveness of our systematic method for achieving a robust design. PMID:22315563

  6. Design of Water Temperature Control System Based on Single Chip Microcomputer

    NASA Astrophysics Data System (ADS)

    Tan, Hanhong; Yan, Qiyan

    2017-12-01

    In this paper, we introduce a multi-function water temperature controller designed around a 51-series single-chip microcomputer. The controller provides automatic and manual water supply, water-temperature setting, real-time display, and an alarm function, while offering a simple structure, high reliability, and low cost. Water temperature controllers currently on the market mostly use bimetal temperature control, which gives low control accuracy, poor reliability, and only a single function. With the development of microelectronics, single-chip microcomputers have become increasingly capable and inexpensive and are now widely used in many fields. Applying a single-chip microcomputer to the water temperature controller therefore yields a simple design, high reliability, and easily extended functionality. Against this background, this paper focuses on the intelligent control of the temperature controller.
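
    As a rough sketch of the kind of loop such a controller runs (assumed behavior, not the paper's actual firmware), the code below compares the measured water temperature with the user setpoint, switches the heater with a small hysteresis band, and raises an alarm when the temperature drifts too far from the setpoint.

        #include <stdio.h>
        #include <stdbool.h>

        /* Simplified on/off (hysteresis) water-temperature control step, as an
         * 8051-class controller might execute it once per sampling period. */
        #define HYSTERESIS_C 1.0
        #define ALARM_BAND_C 8.0

        static bool heater_on = false;

        static void control_step(double setpoint_c, double measured_c)
        {
            if (measured_c < setpoint_c - HYSTERESIS_C)
                heater_on = true;             /* too cold: switch heater on */
            else if (measured_c > setpoint_c + HYSTERESIS_C)
                heater_on = false;            /* too hot: switch heater off */

            bool alarm = (measured_c < setpoint_c - ALARM_BAND_C) ||
                         (measured_c > setpoint_c + ALARM_BAND_C);

            printf("set=%.1f C  meas=%.1f C  heater=%s  alarm=%s\n",
                   setpoint_c, measured_c, heater_on ? "ON" : "off",
                   alarm ? "YES" : "no");
        }

        int main(void)
        {
            double samples[] = { 30.0, 38.5, 41.5, 48.5, 40.0 };  /* illustrative readings */
            for (int i = 0; i < 5; i++)
                control_step(40.0, samples[i]);
            return 0;
        }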

  7. ATAMM enhancement and multiprocessor performance evaluation

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.; Mielke, Roland R.; Som, Sukhamoy; Obando, Rodrigo; Malekpour, Mahyar R.; Jones, Robert L., III; Mandala, Brij Mohan V.

    1991-01-01

    ATAMM (Algorithm To Architecture Mapping Model) enhancement and multiprocessor performance evaluation is discussed. The following topics are included: the ATAMM model; ATAMM enhancement; ADM (Advanced Development Model) implementation of ATAMM; and ATAMM support tools.

  8. Multiprocessor architectural study

    NASA Technical Reports Server (NTRS)

    Kosmala, A. L.; Stanten, S. F.; Vandever, W. H.

    1972-01-01

    An architectural design study was made of a multiprocessor computing system intended to meet functional and performance specifications appropriate to a manned space station application. Intermetrics' previous experience and accumulated knowledge of the multiprocessor field are used to generate a baseline philosophy for the design of a future SUMC* multiprocessor. Interrupts are defined, and the crucial questions of interrupt structure, such as processor selection and response time, are discussed. Memory hierarchy and performance are discussed extensively, with particular attention to the design approach that associates a cache memory with each processor. The ability of an individual processor to approach its theoretical maximum performance is then analyzed in terms of the cache hit ratio. Memory management is envisioned as a virtual memory system implemented through either segmentation or paging. Addressing is discussed in terms of the various register designs adopted by current computers and those of advanced design.
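
    The hit-ratio analysis mentioned above usually reduces to the effective memory access time seen by a processor; a minimal sketch of that relation is given below, with purely illustrative cycle counts rather than figures from the study.

        #include <stdio.h>

        /* Effective access time versus cache hit ratio:
         *   t_eff = h * t_cache + (1 - h) * t_mem      (illustrative timings) */
        int main(void)
        {
            const double t_cache = 1.0;   /* cycles for a cache hit (assumed)    */
            const double t_mem   = 10.0;  /* cycles for a main-memory access     */

            for (double h = 0.80; h <= 1.0001; h += 0.05) {
                double t_eff = h * t_cache + (1.0 - h) * t_mem;
                printf("hit ratio %.2f -> %.2f cycles/access (%.0f%% of peak)\n",
                       h, t_eff, 100.0 * t_cache / t_eff);
            }
            return 0;
        }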

  9. Multiprocessor smalltalk: Implementation, performance, and analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pallas, J.I.

    1990-01-01

    Multiprocessor Smalltalk demonstrates the value of object-oriented programming on a multiprocessor. Its implementation and analysis shed light on three areas: concurrent programming in an object-oriented language without special extensions, implementation techniques for adapting to multiprocessors, and performance factors in the resulting system. Adding parallelism to Smalltalk code is easy, because programs already use control abstractions like iterators. Smalltalk's basic control and concurrency primitives (lambda expressions, processes and semaphores) can be used to build parallel control abstractions, including parallel iterators, parallel objects, atomic objects, and futures. Language extensions for concurrency are not required. This implementation demonstrates that it is possible to build an efficient parallel object-oriented programming system and illustrates techniques for doing so. Three modification tools (serialization, replication, and reorganization) adapted the Berkeley Smalltalk interpreter to the Firefly multiprocessor. Multiprocessor Smalltalk's performance shows that the combination of multiprocessing and object-oriented programming can be effective: speedups (relative to the original serial version) exceed 2.0 for five processors on all the benchmarks; the median efficiency is 48%. Analysis shows both where performance is lost and how to improve and generalize the experimental results. Changes in the interpreter to support concurrency add at most 12% overhead; better access to per-process variables could eliminate much of that. Changes in the user code to express concurrency add as much as 70% overhead; this overhead could be reduced to 54% if blocks (lambda expressions) were reentrant. Performance is also lost when the program cannot keep all five processors busy.

  10. A fault-tolerant information processing concept for space vehicles.

    NASA Technical Reports Server (NTRS)

    Hopkins, A. L., Jr.

    1971-01-01

    A distributed fault-tolerant information processing system is proposed, comprising a central multiprocessor, dedicated local processors, and multiplexed input-output buses connecting them together. The processors in the multiprocessor are duplicated for error detection, which is felt to be less expensive than using coded redundancy of comparable effectiveness. Error recovery is made possible by a triplicated scratchpad memory in each processor. The main multiprocessor memory uses replicated memory for error detection and correction. Local processors use any of three conventional redundancy techniques: voting, duplex pairs with backup, and duplex pairs in independent subsystems.

  11. Cache-based error recovery for shared memory multiprocessor systems

    NASA Technical Reports Server (NTRS)

    Wu, Kun-Lung; Fuchs, W. Kent; Patel, Janak H.

    1989-01-01

    A multiprocessor cache-based checkpointing and recovery scheme for recovering from transient processor errors in a shared-memory multiprocessor with private caches is presented. New implementation techniques that use checkpoint identifiers and recovery stacks to reduce performance degradation in processor utilization during normal execution are examined. This cache-based checkpointing technique prevents rollback propagation, provides for rapid recovery, and can be integrated into standard cache coherence protocols. An analytical model is used to estimate the relative performance of the scheme during normal execution. Extensions that take error latency into account are presented.
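
    As a loose illustration of the checkpoint/rollback idea (deliberately ignoring the cache-protocol mechanics the paper actually studies), the sketch below logs the old value of each word written after a checkpoint on a recovery stack, so a detected transient error can be undone by restoring the logged values.

        #include <stdio.h>

        /* Toy checkpoint/rollback with a recovery stack: every write after a
         * checkpoint saves the previous value, and rollback restores the saved
         * values in reverse order to return to the checkpointed state. */
        #define MEM_WORDS 8
        #define STACK_MAX 32

        static int memory[MEM_WORDS];
        static struct { int addr; int old; } recovery_stack[STACK_MAX];
        static int sp = 0;

        static void checkpoint(void) { sp = 0; }   /* commit the current state */

        static void write_word(int a, int v)
        {
            recovery_stack[sp].addr = a;           /* log the old value */
            recovery_stack[sp].old  = memory[a];
            sp++;
            memory[a] = v;
        }

        static void rollback(void)                 /* undo to the last checkpoint */
        {
            while (sp > 0) {
                sp--;
                memory[recovery_stack[sp].addr] = recovery_stack[sp].old;
            }
        }

        int main(void)
        {
            memory[0] = 10; memory[1] = 20;
            checkpoint();                /* state {10, 20} is the recovery point */
            write_word(0, 111);
            write_word(1, 222);
            rollback();                  /* a transient error was detected       */
            printf("after rollback: mem[0]=%d mem[1]=%d\n", memory[0], memory[1]);
            return 0;
        }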

  12. The Galley Parallel File System

    NASA Technical Reports Server (NTRS)

    Nieuwejaar, Nils; Kotz, David

    1996-01-01

    Most current multiprocessor file systems are designed to use multiple disks in parallel, using the high aggregate bandwidth to meet the growing I/O requirements of parallel scientific applications. Many multiprocessor file systems provide applications with a conventional Unix-like interface, allowing the application to access multiple disks transparently. This interface conceals the parallelism within the file system, increasing the ease of programmability, but making it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. In addition to providing an insufficient interface, most current multiprocessor file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic scientific multiprocessor workloads. We discuss Galley's file structure and application interface, as well as the performance advantages offered by that interface.

  13. Mapping of H.264 decoding on a multiprocessor architecture

    NASA Astrophysics Data System (ADS)

    van der Tol, Erik B.; Jaspers, Egbert G.; Gelderblom, Rob H.

    2003-05-01

    Due to the increasing significance of development costs in the competitive domain of high-volume consumer electronics, generic solutions are required to enable reuse of the design effort and to increase the potential market volume. As a result of this, Systems-on-Chip (SoCs) contain a growing amount of fully programmable media processing devices as opposed to application-specific systems, which offered the most attractive solutions due to a high performance density. The following motivates this trend. First, SoCs are increasingly dominated by their communication infrastructure and embedded memory, thereby making the cost of the functional units less significant. Moreover, the continuously growing design costs require generic solutions that can be applied over a broad product range. Hence, powerful programmable SoCs are becoming increasingly attractive. However, to enable power-efficient designs that are also scalable over the advancing VLSI technology, parallelism should be fully exploited. Both task-level and instruction-level parallelism can be provided by means of e.g. a VLIW multiprocessor architecture. To provide the above-mentioned scalability, we propose to partition the data over the processors, instead of traditional functional partitioning. An advantage of this approach is the inherent locality of data, which is extremely important for communication-efficient software implementations. Consequently, a software implementation is discussed, enabling e.g. SD resolution H.264 decoding with a two-processor architecture, whereas High-Definition (HD) decoding can be achieved with an eight-processor system, executing the same software. Experimental results show that data communication is reduced by up to 65%, directly improving the overall performance. Apart from considerable improvement in memory bandwidth, this novel concept of partitioning offers a natural approach for optimally balancing the load of all processors, thereby further improving the overall speedup.
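
    As a hypothetical illustration of data partitioning (in contrast to functional partitioning), the sketch below assigns contiguous bands of macroblock rows of a frame to the available processors; an actual H.264 implementation must additionally handle the dependencies (intra prediction, deblocking) that cross band boundaries.

        #include <stdio.h>

        /* Split the macroblock rows of a frame across P processors in contiguous
         * bands (illustrative partitioning only). */
        int main(void)
        {
            const int mb_rows = 68;   /* e.g. 1088/16 macroblock rows for HD (illustrative) */
            const int procs   = 8;

            for (int p = 0; p < procs; p++) {
                int begin = p * mb_rows / procs;
                int end   = (p + 1) * mb_rows / procs;   /* exclusive */
                printf("processor %d: macroblock rows %2d .. %2d (%d rows)\n",
                       p, begin, end - 1, end - begin);
            }
            return 0;
        }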

  14. Two-dimensional systolic-array architecture for pixel-level vision tasks

    NASA Astrophysics Data System (ADS)

    Vijverberg, Julien A.; de With, Peter H. N.

    2010-05-01

    This paper presents ongoing work on the design of a two-dimensional (2D) systolic array for image processing. This component is designed to operate on a multi-processor system-on-chip. In contrast with other 2D systolic-array architectures and many other hardware accelerators, we investigate the applicability of executing multiple tasks in a time-interleaved fashion on the Systolic Array (SA). This leads to a lower external memory bandwidth and better load balancing of the tasks on the different processing tiles. To enable the interleaving of tasks, we add a shadow-state register for fast task switching. To reduce the number of accesses to the external memory, we propose to share the communication assist between consecutive tasks. A preliminary, non-functional version of the SA has been synthesized for an XV4S25 FPGA device and yields a maximum clock frequency of 150 MHz requiring 1,447 slices and 5 memory blocks. Mapping tasks from video content-analysis applications from literature on the SA yields reductions in the execution time of 1-2 orders of magnitude compared to the software implementation. We conclude that the choice for an SA architecture is useful, but a scaled version of the SA featuring less logic with fewer processing and pipeline stages yielding a lower clock frequency, would be sufficient for a video analysis system-on-chip.

  15. Chip-scale thermal management of high-brightness LED packages

    NASA Astrophysics Data System (ADS)

    Arik, Mehmet; Weaver, Stanton

    2004-10-01

    The efficiency and reliability of solid-state lighting devices strongly depend on successful thermal management. Light-emitting diodes (LEDs) are a strong candidate for next-generation general illumination applications. LEDs are making great strides in terms of lumen performance and reliability; however, the barrier to widespread use in general illumination remains the cost, or $/lumen. LED packaging designers are pushing LED performance to its limits. This is resulting in increased drive currents, and thus the need for lower thermal resistance packaging designs. As the power density continues to rise, the integrity of the package electrical and thermal interconnect becomes extremely important. Experimental results with high-brightness LED packages show that chip attachment defects can cause significant thermal gradients across the LED chips, leading to premature failures. A numerical study was also carried out with parametric models to understand how the chip active-layer temperature profile varies with bump defects. Finite element techniques were utilized to evaluate the effects of localized hot spots at the chip active layer. The importance of "zero defects" in one of the more popular interconnect schemes, the "epi-down" soldered flip-chip configuration, is investigated and demonstrated.
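
    A first-order way to see why the attach and interconnect thermal path matters is the usual junction-temperature estimate, Tj = Ta + P * Rth; the sketch below uses purely illustrative numbers (not values from the paper) to contrast a sound die attach with one degraded by voids or bump defects.

        #include <stdio.h>

        /* First-order junction-temperature estimate: Tj = Ta + P * Rth(junction-ambient).
         * The two thermal resistances are illustrative, contrasting a sound attach
         * with a defective (voided) one. */
        int main(void)
        {
            const double ambient_c  = 25.0;  /* ambient temperature          */
            const double power_w    = 3.0;   /* dissipated power (assumed)   */
            const double rth_good   = 15.0;  /* K/W, sound attach (assumed)  */
            const double rth_defect = 40.0;  /* K/W, voided attach (assumed) */

            printf("sound attach:     Tj = %.1f C\n", ambient_c + power_w * rth_good);
            printf("defective attach: Tj = %.1f C\n", ambient_c + power_w * rth_defect);
            return 0;
        }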

  16. Design and qualification of the SEU/TD Radiation Monitor chip

    NASA Technical Reports Server (NTRS)

    Buehler, Martin G.; Blaes, Brent R.; Soli, George A.; Zamani, Nasser; Hicks, Kenneth A.

    1992-01-01

    This report describes the design, fabrication, and testing of the Single-Event Upset/Total Dose (SEU/TD) Radiation Monitor chip. The Radiation Monitor is scheduled to fly on the Mid-Course Space Experiment Satellite (MSX). The Radiation Monitor chip consists of a custom-designed 4-bit SRAM for heavy ion detection and three MOSFET's for monitoring total dose. In addition the Radiation Monitor chip was tested along with three diagnostic chips: the processor monitor and the reliability and fault chips. These chips revealed the quality of the CMOS fabrication process. The SEU/TD Radiation Monitor chip had an initial functional yield of 94.6 percent. Forty-three (43) SEU SRAM's and 14 Total Dose MOSFET's passed the hermeticity and final electrical tests and were delivered to LL.

  17. A miniature on-chip multi-functional ECG signal processor with 30 µW ultra-low power consumption.

    PubMed

    Liu, Xin; Zheng, Yuan Jin; Phyu, Myint Wai; Zhao, Bin; Je, Minkyu; Yuan, Xiao Jun

    2010-01-01

    In this paper, a miniature low-power Electrocardiogram (ECG) signal processing application-specific integrated circuit (ASIC) chip is proposed. This chip provides multiple critical functions for ECG analysis using a systematic wavelet transform algorithm and a novel SRAM-based ASIC architecture, while achieving low cost and high performance. Using 0.18 µm CMOS technology and a 1 V power supply, this ASIC chip consumes only 29 µW and occupies an area of 3 mm². This on-chip ECG processor is highly suitable for reliable real-time cardiac status monitoring applications.

  18. Development of Equivalent Material Properties of Microbump for Simulating Chip Stacking Packaging

    PubMed Central

    Lee, Chang-Chun; Tzeng, Tzai-Liang; Huang, Pei-Chen

    2015-01-01

    A three-dimensional integrated circuit (3D-IC) structure with a significant scale mismatch causes difficulty in analytic model construction. This paper proposes a simulation technique that introduces an equivalent material composed of microbumps and their surrounding wafer-level underfill (WLUF). The mechanical properties of this equivalent material, including Young's modulus (E), Poisson's ratio, shear modulus, and coefficient of thermal expansion (CTE), are obtained directly by applying either a tensile load or a constant displacement, and by increasing the temperature during simulations, respectively. Analytic results indicate that at least eight microbumps at the outermost region of the chip stacking structure need to be considered to obtain an accurate stress/strain contour in the region of concern. In addition, a factorial experimental design with analysis of variance is proposed to optimize chip stacking structure reliability with four factors: chip thickness, substrate thickness, CTE, and E-value. Analytic results show that the most significant factor is the CTE of the WLUF. This factor affects microbump reliability and structural warpage under temperature cycling loads and the high-temperature bonding process. A WLUF with low CTE and high E-value is recommended to enhance the assembly reliability of the 3D-IC architecture. PMID:28793495
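
    As a small, hypothetical sketch of the factorial design mentioned above, the code below enumerates a two-level full factorial over the four factors (chip thickness, substrate thickness, CTE, and E-value); the actual levels and the response for each run would come from the finite-element simulations, which are not reproduced here.

        #include <stdio.h>

        /* Enumerate a 2^4 full factorial design over the four studied factors.
         * The -/+ levels are placeholders; each run's response would be a
         * simulated reliability metric (e.g. warpage or strain) from the FE model. */
        int main(void)
        {
            const char *factor[4] = { "chip thickness", "substrate thickness",
                                      "CTE", "E-value" };
            for (int run = 0; run < 16; run++) {
                printf("run %2d: ", run + 1);
                for (int f = 0; f < 4; f++)
                    printf("%s=%c%s", factor[f], ((run >> f) & 1) ? '+' : '-',
                           f < 3 ? ", " : "\n");
            }
            return 0;
        }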

  19. Preliminary basic performance analysis of the Cedar multiprocessor memory system

    NASA Technical Reports Server (NTRS)

    Gallivan, K.; Jalby, W.; Turner, S.; Veidenbaum, A.; Wijshoff, H.

    1991-01-01

    Some preliminary basic results on the performance of the Cedar multiprocessor memory system are presented. Empirical results are presented and used to calibrate a memory system simulator which is then used to discuss the scalability of the system.

  20. Real-Time Multiprocessor Programming Language (RTMPL) user's manual

    NASA Technical Reports Server (NTRS)

    Arpasi, D. J.

    1985-01-01

    A real-time multiprocessor programming language (RTMPL) has been developed to provide for high-order programming of real-time simulations on systems of distributed computers. RTMPL is a structured, engineering-oriented language. The RTMPL utility supports a variety of multiprocessor configurations and types by generating assembly language programs according to user-specified targeting information. Many programming functions are assumed by the utility (e.g., data transfer and scaling) to reduce the programming chore. This manual describes RTMPL from a user's viewpoint. Source generation, applications, utility operation, and utility output are detailed. An example simulation is generated to illustrate many RTMPL features.

  1. Techniques for video compression

    NASA Technical Reports Server (NTRS)

    Wu, Chwan-Hwa

    1995-01-01

    In this report, we present our study on multiprocessor implementation of a MPEG2 encoding algorithm. First, we compare two approaches to implementing video standards, VLSI technology and multiprocessor processing, in terms of design complexity, applications, and cost. Then we evaluate the functional modules of MPEG2 encoding process in terms of their computation time. Two crucial modules are identified based on this evaluation. Then we present our experimental study on the multiprocessor implementation of the two crucial modules. Data partitioning is used for job assignment. Experimental results show that high speedup ratio and good scalability can be achieved by using this kind of job assignment strategy.

  2. Spaceborne VHSIC multiprocessor system for AI applications

    NASA Technical Reports Server (NTRS)

    Lum, Henry, Jr.; Shrobe, Howard E.; Aspinall, John G.

    1988-01-01

    A multiprocessor system, under design for space-station applications, makes use of the latest generation symbolic processor and packaging technology. The result will be a compact, space-qualified system two to three orders of magnitude more powerful than present-day symbolic processing systems.

  3. Conceptual design of a 10 to the 8th power bit magnetic bubble domain mass storage unit and fabrication, test and delivery of a feasibility model

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The conceptual design of a highly reliable 10 to the 8th power-bit bubble domain memory for the space program is described. The memory has random access to blocks of closed-loop shift registers, and utilizes self-contained bubble domain chips with on-chip decoding. Trade-off studies show that the highest reliability and lowest power dissipation are obtained when the memory is organized on a bit-per-chip basis. The final design has 800 bits/register, 128 registers/chip, 16 chips/plane, and 112 planes, of which only seven are activated at a time. A word has 64 data bits + 32 checkbits, used in a 16-adjacent code to provide correction of any combination of errors in one plane. A 100 kHz maximum rotational frequency keeps power low (25 watts or less) and also allows asynchronous operation. Data rate is 6.4 megabits/sec, access time is 200 msec to an 800-word block and an additional 4 msec (average) to a word. The fabrication and operation are also described for a 64-bit bubble domain memory chip designed to test the concept of on-chip magnetic decoding. Access to one of the chip's four shift registers for the read, write, and clear functions is by means of bubble domain decoders utilizing the interaction between a conductor line and a bubble.
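
    The quoted organization can be cross-checked with simple arithmetic, sketched below: the register, chip, and plane counts give the raw bit capacity, the 64-of-96 data fraction brings the usable capacity to roughly 10 to the 8th power bits, and 64 data bits per 100-kHz rotation step reproduces the 6.4 megabit/s data rate.

        #include <stdio.h>

        /* Cross-check the memory organization quoted in the abstract. */
        int main(void)
        {
            long long bits_per_chip  = 800LL * 128;          /* bits/register * registers/chip */
            long long bits_per_plane = bits_per_chip * 16;   /* 16 chips per plane             */
            long long raw_bits       = bits_per_plane * 112; /* 112 planes                     */
            long long data_bits      = raw_bits * 64 / 96;   /* 64 data bits of a 96-bit word  */

            double data_rate_bps = 100e3 * 64;               /* 100 kHz rotation, 64 data bits */

            printf("raw capacity : %lld bits\n", raw_bits);
            printf("data capacity: %lld bits (about 1.2e8)\n", data_bits);
            printf("data rate    : %.1f Mbit/s\n", data_rate_bps / 1e6);
            return 0;
        }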

  4. Tagging of Test Tubes with Electronic p-Chips for Use in Biorepositories.

    PubMed

    Mandecki, Wlodek; Kopacka, Wesley M; Qian, Ziye; Ertwine, Von; Gedzberg, Katie; Gruda, Maryann; Reinhardt, David; Rodriguez, Efrain

    2017-08-01

    A system has been developed to electronically tag and track test tubes used in biorepositories. The system is based on a light-activated microtransponder, also known as a "p-Chip." One of the pressing problems with storing and retrieving biological samples at low temperatures is the difficulty of reliably reading the identification (ID) number that links each storage tube with the database containing sample details. Commonly used barcodes are not always reliable at low temperatures because of poor adhesion of the label to the test tube and problems with reading under conditions of frost and ice accumulation. Traditional radio frequency identification (RFID) tags are not cost effective and are too large for this application. The system described herein consists of the p-Chip, p-Chip-tagged test tubes, two ID readers (for single tubes or for racks of tubes), and software. We also describe a robot that is configured for retrofitting legacy test tubes in biorepositories with p-Chips while maintaining the temperature of the sample below -50°C at all times. The main benefits of the p-Chip over other RFID devices are its small size (600 × 600 × 100 μm) that allows even very small tubes or vials to be tagged, low cost due to the chip's unitary construction, durability, and the ability to read the ID through frost and ice.

  5. Modelling parallel programs and multiprocessor architectures with AXE

    NASA Technical Reports Server (NTRS)

    Yan, Jerry C.; Fineman, Charles E.

    1991-01-01

    AXE, An Experimental Environment for Parallel Systems, was designed to model and simulate parallel systems at the process level. It provides an integrated environment for specifying computation models, multiprocessor architectures, data collection, and performance visualization. AXE is being used at NASA-Ames for developing resource management strategies, parallel problem formulation, multiprocessor architectures, and operating system issues related to the High Performance Computing and Communications Program. AXE's simple, structured user-interface enables the user to model parallel programs and machines precisely and efficiently. Its quick turn-around time keeps the user interested and productive. AXE models multicomputers. The user may easily modify various architectural parameters including the number of sites, connection topologies, and overhead for operating system activities. Parallel computations in AXE are represented as collections of autonomous computing objects known as players. Their use and behavior are described. Performance data of the multiprocessor model can be observed on a color screen. These include CPU and message routing bottlenecks, and the dynamic status of the software.

  6. Multipurpose silicon photonics signal processor core.

    PubMed

    Pérez, Daniel; Gasulla, Ivana; Crudgington, Lee; Thomson, David J; Khokhar, Ali Z; Li, Ke; Cao, Wei; Mashanovich, Goran Z; Capmany, José

    2017-09-21

    Integrated photonics changes the scaling laws of information and communication systems offering architectural choices that combine photonics with electronics to optimize performance, power, footprint, and cost. Application-specific photonic integrated circuits, where particular circuits/chips are designed to optimally perform particular functionalities, require a considerable number of design and fabrication iterations leading to long development times. A different approach inspired by electronic Field Programmable Gate Arrays is the programmable photonic processor, where a common hardware implemented by a two-dimensional photonic waveguide mesh realizes different functionalities through programming. Here, we report the demonstration of such reconfigurable waveguide mesh in silicon. We demonstrate over 20 different functionalities with a simple seven hexagonal cell structure, which can be applied to different fields including communications, chemical and biomedical sensing, signal processing, multiprocessor networks, and quantum information systems. Our work is an important step toward this paradigm. Integrated optical circuits today are typically designed for a few special functionalities and require complex design and development procedures. Here, the authors demonstrate a reconfigurable but simple silicon waveguide mesh with different functionalities.

  7. Thermal cycling test results of CSP and RF assemblies

    NASA Technical Reports Server (NTRS)

    Ghaffarian, R.; Nelson, G.; Cooper, M.; Lam, D.; Strudler, S.; Umdekar, A.; Selk, K.; Bjorndahl, B.; Duprey, R.

    2000-01-01

    A JPL-led chip scale package (CSP) consortium of enterprises, composed of participating agencies and private companies, recently joined together to pool in-kind resources for developing the quality and reliability of chip scale packages (CSPs) for a variety of projects.

  8. A general model for memory interference in a multiprocessor system with memory hierarchy

    NASA Technical Reports Server (NTRS)

    Taha, Badie A.; Standley, Hilda M.

    1989-01-01

    The problem of memory interference in a multiprocessor system with a hierarchy of shared buses and memories is addressed. The behavior of the processors is represented by a sequence of memory requests with each followed by a determined amount of processing time. A statistical queuing network model for determining the extent of memory interference in multiprocessor systems with clusters of memory hierarchies is presented. The performance of the system is measured by the expected number of busy memory clusters. The results of the analytic model are compared with simulation results, and the correlation between them is found to be very high.
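
    For the simplest flat case, where each of p processors independently issues a request to one of m identical memory modules chosen uniformly at random and there is no hierarchy, the expected number of busy modules has the well-known closed form m * (1 - (1 - 1/m)^p). The sketch below evaluates it purely as a point of reference for the hierarchical, cluster-based model analyzed in the paper.

        #include <stdio.h>
        #include <math.h>

        /* Expected number of busy memory modules under uniform, independent
         * requests (flat model, no hierarchy):
         *   E[busy] = m * (1 - (1 - 1/m)^p)                                  */
        int main(void)
        {
            const int m = 8;                       /* memory modules (illustrative) */
            for (int p = 1; p <= 16; p *= 2) {
                double busy = m * (1.0 - pow(1.0 - 1.0 / m, p));
                printf("p = %2d processors -> E[busy modules] = %.2f of %d\n",
                       p, busy, m);
            }
            return 0;
        }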

  9. Experimental evaluation of multiprocessor cache-based error recovery

    NASA Technical Reports Server (NTRS)

    Janssens, Bob; Fuchs, W. K.

    1991-01-01

    Several variations of cache-based checkpointing for rollback error recovery in shared-memory multiprocessors have recently been developed. By modifying the cache replacement policy, these techniques use the inherent redundancy in the memory hierarchy to periodically checkpoint the computation state. Three schemes, different in the manner in which they avoid rollback propagation, are evaluated. By simulation with address traces from parallel applications running on an Encore Multimax shared-memory multiprocessor, the performance effect of integrating the recovery schemes in the cache coherence protocol is evaluated. The results indicate that the cache-based schemes can provide checkpointing capability with low performance overhead but with uncontrollably high variability in the checkpoint interval.

  10. Universal nondestructive mm-wave integrated circuit test fixture

    NASA Technical Reports Server (NTRS)

    Romanofsky, Robert R. (Inventor); Shalkhauser, Kurt A. (Inventor)

    1990-01-01

    The monolithic microwave integrated circuit (MMIC) test fixture includes a bias module having spring-loaded contacts which electrically engage pads on a chip carrier disposed in a recess of a base member. RF energy is applied to and passed from the chip carrier by chamfered edges of ridges in the waveguide passages of housings which are removably attached to the base member. Thru, Delay, and Short calibration standards having dimensions identical to those of the chip carrier assure the accuracy and reliability of the test. The MMIC chip fits in an opening in the chip carrier with the boundaries of the MMIC lying on movable reference planes, thereby establishing accuracy and flexibility.

  11. Optimization of multiplexed PCR on an integrated microfluidic forensic platform for rapid DNA analysis.

    PubMed

    Estes, Matthew D; Yang, Jianing; Duane, Brett; Smith, Stan; Brooks, Carla; Nordquist, Alan; Zenhausern, Frederic

    2012-12-07

    This study reports the design, prototyping, and assay development of multiplexed polymerase chain reaction (PCR) on a plastic microfluidic device. Amplification of 17 DNA loci is carried out directly on-chip as part of a system for continuous workflow processing from sample preparation (SP) to capillary electrophoresis (CE). For enhanced performance of on-chip PCR amplification, improved control systems have been developed making use of customized Peltier assemblies, valve actuators, software, and amplification chemistry protocols. Multiple enhancements to the microfluidic chip design have been enacted to improve the reliability of sample delivery through the various on-chip modules. This work has been enabled by the encapsulation of PCR reagents into a solid phase material through an optimized Solid Phase Encapsulating Assay Mix (SPEAM) bead-based hydrogel fabrication process. SPEAM bead technology is reliably coupled with precise microfluidic metering and dispensing for efficient amplification and subsequent DNA short tandem repeat (STR) fragment analysis. This provides a means of on-chip reagent storage suitable for microfluidic automation, with the long shelf-life necessary for point-of-care (POC) or field deployable applications. This paper reports the first high quality 17-plex forensic STR amplification from a reference sample in a microfluidic chip with preloaded solid phase reagents, that is designed for integration with up and downstream processing.

  12. Architecture for VLSI design of Reed-Solomon encoders

    NASA Technical Reports Server (NTRS)

    Liu, K. Y.

    1981-01-01

    The logic structure of a universal VLSI chip called the symbol-slice Reed-Solomon (RS) encoder chip is discussed. An RS encoder can be constructed by cascading and properly interconnecting a group of such VLSI chips. As a design example, it is shown that a (255,223) RS encoder requiring around 40 discrete CMOS ICs may be replaced by an RS encoder consisting of four identical interconnected VLSI RS encoder chips. Besides the size advantage, the VLSI RS encoder also has the potential advantages of requiring less power and having higher reliability.

  13. Optimum spaceborne computer system design by simulation

    NASA Technical Reports Server (NTRS)

    Williams, T.; Weatherbee, J. E.; Taylor, D. S.

    1972-01-01

    A deterministic digital simulation model is described which models the Automatically Reconfigurable Modular Multiprocessor System (ARMMS), a candidate computer system for future manned and unmanned space missions. Use of the model as a tool in configuring a minimum computer system for a typical mission is demonstrated. The configuration which is developed as a result of studies with the simulator is optimal with respect to the efficient use of computer system resources, i.e., the configuration derived is a minimal one. Other considerations such as increased reliability through the use of standby spares would be taken into account in the definition of a practical system for a given mission.

  14. Closed-form solutions of performability. [in computer systems

    NASA Technical Reports Server (NTRS)

    Meyer, J. F.

    1982-01-01

    It is noted that if computing system performance is degradable, then system evaluation must deal simultaneously with aspects of both performance and reliability. One approach is the evaluation of a system's performability which, relative to a specified performance variable Y, generally requires solution of the probability distribution function of Y. The feasibility of closed-form solutions of performability when Y is continuous is examined. In particular, the modeling of a degradable buffer/multiprocessor system is considered whose performance Y is the (normalized) average throughput rate realized during a bounded interval of time. Employing an approximate decomposition of the model, it is shown that a closed-form solution can indeed be obtained.
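
    In this framework the performance variable is a normalized accumulated reward and performability is essentially its probability distribution; a schematic statement of that relationship (standard notation, not necessarily the paper's exact formulation) is

        Y = \frac{1}{T} \int_0^T r(X_t)\, dt, \qquad F_Y(y) = \Pr\{\, Y \le y \,\},

    where X_t is the (possibly degraded) system state at time t, r(.) is the throughput rate earned in that state, and [0, T] is the bounded utilization interval; the closed-form solutions sought are for F_Y when Y is continuous.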

  15. Investigation of an advanced fault tolerant integrated avionics system

    NASA Technical Reports Server (NTRS)

    Dunn, W. R.; Cottrell, D.; Flanders, J.; Javornik, A.; Rusovick, M.

    1986-01-01

    Presented is an advanced, fault-tolerant multiprocessor avionics architecture such as could be employed in an advanced rotorcraft such as LHX. The processor structure is designed to interface with existing digital avionics systems and concepts including the Army Digital Avionics System (ADAS) cockpit/display system, navaid and communications suites, integrated sensing suite, and the Advanced Digital Optical Control System (ADOCS). The report defines mission, maintenance and safety-of-flight reliability goals as might be expected for an operational LHX aircraft. Based on the use of a modular, compact (16-bit) microprocessor card family, results of a preliminary study examining simplex, dual and standby-sparing architectures are presented. Given the stated constraints, it is shown that the dual architecture is best suited to meet reliability goals with minimum hardware and software overhead. The report presents hardware and software design considerations for realizing the architecture, including redundancy management requirements and techniques as well as verification and validation needs and methods.

  16. Measurement-based reliability/performability models

    NASA Technical Reports Server (NTRS)

    Hsueh, Mei-Chen

    1987-01-01

    Measurement-based models based on real error-data collected on a multiprocessor system are described. Model development from the raw error-data to the estimation of cumulative reward is also described. A workload/reliability model is developed based on low-level error and resource usage data collected on an IBM 3081 system during its normal operation in order to evaluate the resource usage/error/recovery process in a large mainframe system. Thus, both normal and erroneous behavior of the system are modeled. The results provide an understanding of the different types of errors and recovery processes. The measured data show that the holding times in key operational and error states are not simple exponentials and that a semi-Markov process is necessary to model the system behavior. A sensitivity analysis is performed to investigate the significance of using a semi-Markov process, as opposed to a Markov process, to model the measured system.

  17. Polymorphous Computing Architectures

    DTIC Science & Technology

    2007-12-12

    provide a multiprocessor implementation. In this work, we introduce the Atomos transactional programming language, which is the first to include...implicit transactions, strong atomicity, and a scalable multiprocessor implementation [47]. Atomos is derived from Java, but replaces its synchronization...and conditional waiting constructs with transactional alternatives. The Atomos conditional waiting proposal is tailored to allow efficient

  18. Reliability Assessment of Advanced Flip-chip Interconnect Electronic Package Assemblies under Extreme Cold Temperatures (-190 and -120 C)

    NASA Technical Reports Server (NTRS)

    Ramesham, Rajeshuni; Ghaffarian, Reza; Shapiro, Andrew; Napala, Phil A.; Martin, Patrick A.

    2005-01-01

    Flip-chip interconnect electronic package boards have been assembled, underfilled, non-destructively evaluated and subsequently subjected to extreme temperature thermal cycling to assess the reliability of this advanced packaging interconnect technology for future deep space, long-term, extreme temperature missions. In this very preliminary study, the employed temperature range covers military specifications (-55 C to 100 C), extreme cold Martian (-120 C to 115 C) and asteroid Nereus (-180 C to 25 C) environments. The resistance of daisy-chained, flip-chip interconnects were measured at room temperature and at various intervals as a function of extreme temperature thermal cycling. Electrical resistance measurements are reported and the tests to date have not shown significant change in resistance as a function of extreme temperature thermal cycling. However, the change in interconnect resistance becomes more noticeable with increasing number of thermal cycles. Further research work has been carried out to understand the reliability of flip-chip interconnect packages under extreme temperature applications (-190 C to 85 C) via continuously monitoring the daisy chain resistance. Adaptation of suitable diagnostic techniques to identify the failure mechanisms is in progress. This presentation will describe the experimental test results of flip-chip testing under extreme temperatures.

  19. Thermal cycling reliability of Cu/SnAg double-bump flip chip assemblies for 100 μm pitch applications

    NASA Astrophysics Data System (ADS)

    Son, Ho-Young; Kim, Ilho; Lee, Soon-Bok; Jung, Gi-Jo; Park, Byung-Jin; Paik, Kyung-Wook

    2009-01-01

    A thick Cu column based double-bump flip chip structure is one of the promising alternatives for fine-pitch flip chip applications. In this study, the thermal cycling (T/C) reliability of Cu/SnAg double-bump flip chip assemblies was investigated, and the failure mechanism was analyzed by correlating T/C test results with finite element analysis (FEA). After 1000 thermal cycles, T/C failures occurred at some Cu/SnAg bumps located at the edges and corners of the chips. Scanning acoustic microscope analysis and scanning electron microscope observations indicated that the failure site was the Cu column/Si chip interface, which FEA identified as the location of maximum stress concentration during T/C. During T/C, the Al pad between the Si chip and a Cu column bump was displaced by thermomechanical stress. Based on the low-cycle fatigue model, the accumulation of equivalent plastic strain resulted in thermal fatigue deformation of the Cu column bumps and ultimately reduced the thermal cycling lifetime. The maximum equivalent plastic strains of some bumps at the chip edge increased with the number of thermal cycles, whereas the equivalent plastic strains of the inner bumps did not increase regardless of the number of cycles. In addition, the z-directional normal plastic strain ɛ22 was determined to be compressive and was the dominant component causing plastic deformation of the Cu/SnAg double bumps. As the number of thermal cycles increased, normal plastic strains perpendicular to the Si chip and shear strains accumulated on the Cu column bumps at the chip edge in the low-temperature region. Thus, the Al pad at the Si chip/Cu column interface underwent thermal fatigue deformation under compressive normal strain and shear strain during T/C, and contact loss caused by displacement failure of the Al pad at this interface was found to be the main T/C failure mode of the Cu/SnAg flip chip assembly.
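
    The low-cycle fatigue model referred to above is commonly written in Coffin-Manson form, relating the plastic strain range per cycle to the number of cycles to failure; a generic statement (the coefficient and exponent are material constants, not values reported in this study) is

        \frac{\Delta \varepsilon_p}{2} = \varepsilon_f' \,(2 N_f)^{c},

    where \Delta\varepsilon_p is the equivalent plastic strain range accumulated per thermal cycle, N_f is the number of cycles to failure, \varepsilon_f' is the fatigue ductility coefficient, and c is the (negative) fatigue ductility exponent, so larger accumulated plastic strain at the chip-edge bumps implies a shorter thermal cycling lifetime.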

  20. Silicon photonics for high-performance interconnection networks

    NASA Astrophysics Data System (ADS)

    Biberman, Aleksandr

    2011-12-01

    We assert in the course of this work that silicon photonics has the potential to be a key disruptive technology in computing and communication industries. The enduring pursuit of performance gains in computing, combined with stringent power constraints, has fostered the ever-growing computational parallelism associated with chip multiprocessors, memory systems, high-performance computing systems, and data centers. Sustaining these parallelism growths introduces unique challenges for on- and off-chip communications, shifting the focus toward novel and fundamentally different communication approaches. This work showcases that chip-scale photonic interconnection networks, enabled by high-performance silicon photonic devices, enable unprecedented bandwidth scalability with reduced power consumption. We demonstrate that the silicon photonic platforms have already produced all the high-performance photonic devices required to realize these types of networks. Through extensive empirical characterization in much of this work, we demonstrate such feasibility of waveguides, modulators, switches, and photodetectors. We also demonstrate systems that simultaneously combine many functionalities to achieve more complex building blocks. Furthermore, we leverage the unique properties of available silicon photonic materials to create novel silicon photonic devices, subsystems, network topologies, and architectures to enable unprecedented performance of these photonic interconnection networks and computing systems. We show that the advantages of photonic interconnection networks extend far beyond the chip, offering advanced communication environments for memory systems, high-performance computing systems, and data centers. Furthermore, we explore the immense potential of all-optical functionalities implemented using parametric processing in the silicon platform, demonstrating unique methods that have the ability to revolutionize computation and communication. Silicon photonics enables new sets of opportunities that we can leverage for performance gains, as well as new sets of challenges that we must solve. Leveraging its inherent compatibility with standard fabrication techniques of the semiconductor industry, combined with its capability of dense integration with advanced microelectronics, silicon photonics also offers a clear path toward commercialization through low-cost mass-volume production. Combining empirical validations of feasibility, demonstrations of massive performance gains in large-scale systems, and the potential for commercial penetration of silicon photonics, the impact of this work will become evident in the many decades that follow.

  1. Static Scheduler for Hard Real-Time Tasks on Multiprocessor Systems

    DTIC Science & Technology

    1992-09-01


  2. Single-Chip Microcomputer Control Of The PWM Inverter

    NASA Astrophysics Data System (ADS)

    Morimoto, Masayuki; Sato, Shinji; Sumito, Kiyotaka; Oshitani, Katsumi

    1987-10-01

    A single-chip microcomputer-based controller for a pulsewidth-modulated 1.7 kVA inverter of an air conditioner is presented. The PWM pattern generation and the system control of the air conditioner are achieved in software on the 8-bit single-chip microcomputer. The single-chip microcomputer has the disadvantages of low processing speed and small memory capacity, which are overcome by the magnetic flux control method. The PWM pattern is generated every 90 μsec, and the memory capacity of the PWM look-up table is less than 2 kbytes. Simple and reliable control is realized by the software-based implementation.
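
    A common way such a look-up-table scheme works (sketched here generically, not as the paper's exact flux-control method) is to precompute one fundamental period of duty-cycle values and step through the table once per carrier interrupt; the sketch below builds a small sine table and prints the resulting duty sequence.

        #include <stdio.h>
        #include <math.h>

        /* Generic sine-table PWM sketch: precompute duty values for one fundamental
         * period and step through them at the carrier rate.  Table size and
         * modulation depth are illustrative, not taken from the paper. */
        #define TABLE_SIZE 64

        int main(void)
        {
            const double PI = 3.14159265358979323846;
            const double modulation = 0.9;            /* modulation index (assumed)  */
            unsigned char duty[TABLE_SIZE];           /* 0..255 timer compare values */

            for (int i = 0; i < TABLE_SIZE; i++) {
                double s = sin(2.0 * PI * i / TABLE_SIZE);
                duty[i] = (unsigned char)(127.5 + 127.5 * modulation * s);
            }

            /* In firmware this loop would run in the timer interrupt, one entry per
             * carrier period, updating the PWM compare register. */
            for (int i = 0; i < TABLE_SIZE; i++)
                printf("%3u%c", (unsigned)duty[i], (i % 16 == 15) ? '\n' : ' ');
            return 0;
        }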

  3. Flip Chip on Organic Substrates: A Feasibility Study for Space Applications

    DTIC Science & Technology

    2017-03-01

    scheme, a 1752 I/O land grid array (LGA) package with decoupling capacitors, heat sink and optional column attach [1] as shown in Figure 1...investigated the effect of moisture and current loading on the Class Y flip chip on ceramic reliability [ 2 ]. The UT1752FC Class Y technology has...chip assembly to ceramic test substrates, the FA10 die are assembled to build-up organic test substrates as shown in Figure 2 . These assemblies

  4. Vehicle security encryption based on unlicensed encryption

    NASA Astrophysics Data System (ADS)

    Huang, Haomin; Song, Jing; Xu, Zhijia; Ding, Xiaoke; Deng, Wei

    2018-03-01

    The current vehicle key is easily damaged or destroyed, so the use of an elliptic-curve encryption algorithm is proposed to improve the reliability of the vehicle security system. Based on the encryption rules of elliptic curves, the chip's framework and hardware structure are designed, and the chip's calculation process is then simulated and analyzed in software. The simulation achieved the expected targets. Finally, some issues concerning the chip's storage control and other modules in the data calculation are pointed out.

  5. Pre-clinical evaluation of OxyChip for long-term EPR oximetry.

    PubMed

    Hou, Huagang; Khan, Nadeem; Gohain, Sangeeta; Kuppusamy, M Lakshmi; Kuppusamy, Periannan

    2018-03-16

    Tissue oxygenation is a critical parameter in various pathophysiological situations including cardiovascular disease and cancer. Hypoxia can significantly influence the prognosis of solid malignancies and the efficacy of their treatment by radiation or chemotherapy. Electron paramagnetic resonance (EPR) oximetry is a reliable method for repeatedly assessing and monitoring oxygen levels in tissues. Lithium octa-n-butoxynaphthalocyanine (LiNc-BuO) has been developed as a probe for biological EPR oximetry, especially for clinical use. However, the clinical applicability of LiNc-BuO crystals is hampered by potential limitations associated with biocompatibility, biodegradation, or migration of individual bare crystals in tissue. To overcome these limitations, we have embedded LiNc-BuO crystals in polydimethylsiloxane (PDMS), an oxygen-permeable biocompatible polymer, and developed an implantable/retrievable chip, called the OxyChip. The chip was optimized for maximum spin density (40% w/w of LiNc-BuO in PDMS) and fabricated in a form suitable for implantation using an 18-G syringe needle. In vitro evaluation of the OxyChip showed that it is robust and highly oxygen sensitive. The dependence of its EPR linewidth on oxygen was linear and highly reproducible. In vivo efficacy of the OxyChip was evaluated by implanting it in rat femoris muscle and following its response to tissue oxygenation for up to 12 months. The results revealed preservation of the integrity (size and shape) and calibration (oxygen sensitivity) of the OxyChip throughout the implantation period. Further, no inflammatory or adverse reaction around the implantation area was observed, thereby establishing its biocompatibility and safety. Overall, the results demonstrated that the newly fabricated, highly sensitive OxyChip is capable of providing long-term measurements of oxygen concentration in a reliable and repeated manner under clinical conditions.
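
    The reported linear dependence of EPR linewidth on oxygen means pO2 can be read off a simple two-parameter calibration line; the sketch below inverts such a line using made-up constants (the real slope and anoxic intercept come from the probe's calibration, which the abstract does not give).

        #include <stdio.h>

        /* Invert a linear linewidth-versus-oxygen calibration:
         *   linewidth = w0 + k * pO2   =>   pO2 = (linewidth - w0) / k
         * The constants below are placeholders, not OxyChip calibration data. */
        int main(void)
        {
            const double w0_mG = 200.0;          /* anoxic linewidth, milligauss (assumed) */
            const double k_mG_per_mmHg = 8.0;    /* oxygen sensitivity (assumed)           */

            double measured_mG[] = { 248.0, 360.0, 520.0 };   /* illustrative readings */
            for (int i = 0; i < 3; i++) {
                double po2 = (measured_mG[i] - w0_mG) / k_mG_per_mmHg;
                printf("linewidth %.0f mG -> pO2 = %.1f mmHg\n", measured_mG[i], po2);
            }
            return 0;
        }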

  6. Neural networks and MIMD-multiprocessors

    NASA Technical Reports Server (NTRS)

    Vanhala, Jukka; Kaski, Kimmo

    1990-01-01

    Two artificial neural network models are compared. They are the Hopfield Neural Network Model and the Sparse Distributed Memory model. Distributed algorithms for both of them are designed and implemented. The run time characteristics of the algorithms are analyzed theoretically and tested in practice. The storage capacities of the networks are compared. Implementations are done using a distributed multiprocessor system.

  7. Multiprocessor programming environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, M.B.; Fornaro, R.

    Programming tools and techniques have been well developed for traditional uniprocessor computer systems. The focus of this research project is on the development of a programming environment for a high speed real time heterogeneous multiprocessor system, with special emphasis on languages and compilers. The new tools and techniques will allow a smooth transition for programmers with experience only on single processor systems.

  8. Shared Versus Distributed Memory Multiprocessors

    DTIC Science & Technology

    1991-01-01

    multiprocessors should have shared or distributed memory ... researchers argue strongly for building distributed ... Applications, MIT Press (1985). [6] D. Gajski et al., "Cedar," Proc. Compcon, pp. 306-309 (Spring 19S9). [7] S. Ahuja, N. Carriero and D. Gelernter, "Linda

  9. The Minerva Multi-Microprocessor.

    DTIC Science & Technology

    A multiprocessor system is described which is an experiment in low cost, extensible, multiprocessor architectures. Global issues such as inclusion of a central bus, design of the bus arbiter, and methods of interrupt handling are considered. The system initially includes two processor types, based on microprocessors, and these are discussed. Methods for reducing processor demand for the central bus are described.

  10. Reliable, Low-Cost, Low-Weight, Non-Hermetic Coating for MCM Applications

    NASA Technical Reports Server (NTRS)

    Jones, Eric W.; Licari, James J.

    2000-01-01

    Through an Air Force Research Laboratory sponsored STM program, reliable, low-cost, low-weight, non-hermetic coatings for multi-chip-module (MCM) applications were developed. Using a combination of Sandia Laboratory ATC-01 test chips, AvanTeco's moisture sensor chips (MSCs), and silicon slices, we have shown that organic and organic/inorganic overcoatings are reliable and practical non-hermetic moisture and oxidation barriers. The use of the MSC and unpassivated ATC-01 test chips provided rapid test results and comparison of the moisture barrier quality of the overcoatings. The organic coatings studied were Parylene and Cyclotene; the inorganic coatings were Al2O3 and SiO2. The choice of coating(s) depends on the environment to which the device(s) will be exposed. We have defined four (4) classes of environments: Class I (moderate temperature/moderate humidity), Class II (high temperature/moderate humidity), Class III (moderate temperature/high humidity), and Class IV (high temperature/high humidity). By subjecting the components to adhesion, FTIR, temperature-humidity (TH), pressure cooker (PCT), and electrical tests, we have determined that it is possible to reduce failures by 50-70% for organic/inorganic coated components compared to organic-coated components. All materials and equipment used are readily available commercially or are standard in most semiconductor fabrication lines. It is estimated that the production cost for the developed technology would range from $1-10/module, compared to $20-200 for hermetically sealed packages.

  11. TRAMP; The next generation data acquisition for RTP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    van Haren, P.C.; Wijnoltz, F.

    1992-04-01

    The Rijnhuizen Tokamak Project RTP is a medium-sized tokamak experiment which, due to its pulsed nature, requires a very reliable data-acquisition system. Analyzing the limitations of an existing CAMAC-based data-acquisition system showed that a substantial increase in performance and flexibility could best be obtained by constructing an entirely new system. This paper discusses that system, called TRAMP (Transient Recorder and Amoeba Multi Processor), based on tailor-made transient recorders with a multiprocessor computer system in VME running Amoeba. The performance of TRAMP exceeds that of the CAMAC system by a factor of four. Plans to increase the flexibility and to further increase the performance are presented.

  12. High power diode lasers emitting from 639 nm to 690 nm

    NASA Astrophysics Data System (ADS)

    Bao, L.; Grimshaw, M.; DeVito, M.; Kanskar, M.; Dong, W.; Guan, X.; Zhang, S.; Patterson, J.; Dickerson, P.; Kennedy, K.; Li, S.; Haden, J.; Martinsen, R.

    2014-03-01

    There is increasing market demand for high-power, reliable red lasers for display and cinema applications. Due to the fundamental material system limit in this wavelength range, red diode lasers have lower efficiency and are more temperature sensitive compared to 790-980 nm diode lasers. In terms of reliability, red lasers are also more sensitive to catastrophic optical mirror damage (COMD) due to the higher photon energy. Thus, developing higher-power, reliable red lasers is very challenging. This paper presents nLIGHT's released red products from 639 nm to 690 nm, with established high performance and long-term reliability. These single-emitter diode lasers can work as stand-alone single-emitter units or efficiently integrate into our compact, passively cooled Pearl™ fiber-coupled module architectures for higher output power and improved reliability. To further improve power and reliability, new chip optimizations have focused on improving epitaxial design/growth, chip configuration/processing, and optical facet passivation. Initial optimization has demonstrated promising results, with 639 nm diode lasers reliably rated at 1.5 W and 690 nm diode lasers reliably rated at 4.0 W. Accelerated life testing has started, and further design optimization is underway.

  13. Properties of different density genotypes used in dairy cattle evaluation

    USDA-ARS?s Scientific Manuscript database

    Dairy cattle breeders have used a 50K chip since April 2008 and a less expensive, lower density (3K) chip since September 2010 in genomic selection. Evaluations from 3K are less reliable because genotype calls are less accurate and missing markers are imputed. After excluding genotypes with < 90% ca...

  14. Advanced chip designs and novel cooling techniques for brightness scaling of industrial, high power diode laser bars

    NASA Astrophysics Data System (ADS)

    Heinemann, S.; McDougall, S. D.; Ryu, G.; Zhao, L.; Liu, X.; Holy, C.; Jiang, C.-L.; Modak, P.; Xiong, Y.; Vethake, T.; Strohmaier, S. G.; Schmidt, B.; Zimer, H.

    2018-02-01

    The advance of high power semiconductor diode laser technology is driven by the rapidly growing industrial laser market, with such high power solid state laser systems requiring ever more reliable diode sources with higher brightness and efficiency at lower cost. In this paper we report simulation and experimental data demonstrating most recent progress in high brightness semiconductor laser bars for industrial applications. The advancements are in three principle areas: vertical laser chip epitaxy design, lateral laser chip current injection control, and chip cooling technology. With such improvements, we demonstrate disk laser pump laser bars with output power over 250W with 60% efficiency at the operating current. Ion implantation was investigated for improved current confinement. Initial lifetime tests show excellent reliability. For direct diode applications <1 um smile and >96% polarization are additional requirements. Double sided cooling deploying hard solder and optimized laser design enable single emitter performance also for high fill factor bars and allow further power scaling to more than 350W with 65% peak efficiency with less than 8 degrees slow axis divergence and high polarization.

  15. A 16-Channel Nonparametric Spike Detection ASIC Based on EC-PC Decomposition.

    PubMed

    Wu, Tong; Xu, Jian; Lian, Yong; Khalili, Azam; Rastegarnia, Amir; Guan, Cuntai; Yang, Zhi

    2016-02-01

    In extracellular neural recording experiments, detecting neural spikes is an important step for reliable information decoding. A successful implementation in integrated circuits can achieve substantial data-volume reduction, potentially enabling wireless operation and closed-loop systems. In this paper, we report a 16-channel neural spike detection chip based on a customized spike detection method called the exponential component-polynomial component (EC-PC) algorithm. This algorithm features a reliable prediction of spikes by applying a probability threshold. The chip takes raw data as input and outputs three data streams simultaneously: field potentials, band-pass filtered neural data, and spiking probability maps. The algorithm parameters are configured automatically on-chip based on input data, which avoids manual parameter tuning. The chip has been tested with both in vivo experiments for functional verification and bench-top experiments for quantitative performance assessment. The system has a total power consumption of 1.36 mW and occupies an area of 6.71 mm² for 16 channels. When tested on synthesized datasets with spikes and noise segments extracted from in vivo preparations and scaled according to required precisions, the chip outperforms other detectors. A credit-card-sized prototype board is developed to provide power and data management through a USB port.

  16. Numerical and experimental evaluation of microfluidic sorting devices.

    PubMed

    Taylor, Jay K; Ren, Carolyn L; Stubley, G D

    2008-01-01

    The development of lab-on-a-chip devices calls for the isolation or separation of specific bioparticles or cells. The design of a miniaturized cell-sorting device for handheld operation must follow the strict parameters associated with lab-on-a-chip technology. The constraints include applied voltage, cell-separation efficiency, reliability, size, flow control, and cost, among others. Currently used designs have achieved successful levels of cell isolation; however, further improvements in microfluidic chip design are important for incorporation into larger systems. This study evaluates specific design modifications that contribute to reducing the required applied potential (with the aim of developing portable devices), to improving operational reliability by minimizing the induced pressure disturbance when electrokinetic pumping is employed, and to improving flow control by incorporating directing streams that achieve dynamic sorting and counting. The chip designs, fabricated in glass and polymeric materials, include asymmetric channel widths for sample focusing, nonuniform channel depth for minimizing induced pressure disturbance, directing streams to assist particle flow control, and online filters for reducing channel blockage. Fluorescence-based visualization experiments of electrokinetic focusing, flow-field phenomena, and dynamic sorting demonstrate the advantages of the chip design. Numerical simulations in COMSOL are validated by the experimental data and used to investigate the effects of channel geometry and fluid properties on the flow field.

  17. Benchmarking GNU Radio Kernels and Multi-Processor Scheduling

    DTIC Science & Technology

    2013-01-14

    AMD E350 APU , comparable to Atom • ARM Cortex A8 running on a Gumstix Overo on an Ettus USRP E110 The general testing procedure consists of • Build...Intel Atom, and the AMD E350 APU . 3.2 Multi-Processor Scheduling Figure 1: GFLOPs per second through an FFT array on an Intel i7. Example output from

  18. A Real-Time Linux for Multicore Platforms

    DTIC Science & Technology

    2013-12-20

    under ARO support) to obtain a fully-functional OS for supporting real-time workloads on multicore platforms. This system, called LITMUS -RT...to be specified as plugin components. LITMUS -RT is open-source software (available at The views, opinions and/or findings contained in this report... LITMUS -RT (LInux Testbed for MUltiprocessor Scheduling in Real-Time systems), allows different multiprocessor real-time scheduling and

  19. Considerations for Multiprocessor Topologies

    NASA Technical Reports Server (NTRS)

    Byrd, Gregory T.; Delagi, Bruce A.

    1987-01-01

    Choosing a multiprocessor interconnection topology may depend on high-level considerations, such as the intended application domain and the expected number of processors. It certainly depends on low-level implementation details, such as packaging and communications protocols. The authors first use rough measures of cost and performance to characterize several topologies. They then examine how implementation details can affect the realizable performance of a topology.

  20. A multiprocessor computer simulation model employing a feedback scheduler/allocator for memory space and bandwidth matching and TMR processing

    NASA Technical Reports Server (NTRS)

    Bradley, D. B.; Irwin, J. D.

    1974-01-01

    A computer simulation model for a multiprocessor computer is developed that is useful for studying the problem of matching a multiprocessor's memory space, memory bandwidth, and numbers and speeds of processors with aggregate job-set characteristics. The model assumes an input workload of a set of recurrent jobs. The model includes a feedback scheduler/allocator which attempts to improve system performance through higher memory bandwidth utilization by matching individual job requirements for space and bandwidth with space availability and estimates of bandwidth availability at the times of memory allocation. The simulation model includes provisions for specifying precedence relations among the jobs in a job set, and for specifying the execution of TMR (Triple Modular Redundant) and SIMPLEX (non-redundant) jobs.

  1. Dataflow computing approach in high-speed digital simulation

    NASA Technical Reports Server (NTRS)

    Ercegovac, M. D.; Karplus, W. J.

    1984-01-01

    New computational tools and methodologies for the digital simulation of continuous systems were explored. Programmability and cost-effective performance in multiprocessor organizations for real-time simulation were investigated. The approach is based on functional-style languages and dataflow computing principles, which allow for the natural representation of parallelism in algorithms and provide a suitable basis for the design of cost-effective, high-performance distributed systems. The objectives of this research are to: (1) perform a comparative evaluation of several existing dataflow languages and develop an experimental dataflow language suitable for real-time simulation using multiprocessor systems; (2) investigate the main issues that arise in the architecture and organization of dataflow multiprocessors for real-time simulation; and (3) develop and apply performance evaluation models in typical applications.
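
    To make the dataflow principle mentioned above concrete, the following is a minimal, illustrative sketch (not from the report) of token-driven execution: a node fires as soon as all of its operands are present, so independent nodes can run in parallel. All names and the tiny example graph are invented for the illustration.

    ```python
    # A minimal sketch of the dataflow firing rule: a node executes as soon as
    # all of its input tokens are available, so parallelism falls out of the
    # graph structure rather than from explicit sequencing.
    from collections import deque

    class Node:
        def __init__(self, name, func, arity):
            self.name, self.func, self.arity = name, func, arity
            self.inputs = {}          # port index -> token value
            self.successors = []      # list of (node, port) pairs

        def ready(self):
            return len(self.inputs) == self.arity

    def run(sources):
        """Fire nodes whose operands are all present (token-driven execution)."""
        work = deque(sources)
        while work:
            node = work.popleft()
            if not node.ready():
                continue
            result = node.func(*[node.inputs[i] for i in range(node.arity)])
            print(f"fired {node.name} -> {result}")
            for succ, port in node.successors:
                succ.inputs[port] = result
                if succ.ready():
                    work.append(succ)

    # Example: (a + b) * (a - b); the add and subtract nodes could fire in parallel.
    add = Node("add", lambda x, y: x + y, 2)
    sub = Node("sub", lambda x, y: x - y, 2)
    mul = Node("mul", lambda x, y: x * y, 2)
    add.successors = [(mul, 0)]
    sub.successors = [(mul, 1)]
    for n, a, b in ((add, 3.0, 2.0), (sub, 3.0, 2.0)):
        n.inputs[0], n.inputs[1] = a, b
    run([add, sub])
    ```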

  2. Design and evaluation of a fault-tolerant multiprocessor using hardware recovery blocks

    NASA Technical Reports Server (NTRS)

    Lee, Y. H.; Shin, K. G.

    1982-01-01

    A fault-tolerant multiprocessor with a rollback recovery mechanism is discussed. The rollback mechanism is based on the hardware recovery block which is a hardware equivalent to the software recovery block. The hardware recovery block is constructed by consecutive state-save operations and several state-save units in every processor and memory module. When a fault is detected, the multiprocessor reconfigures itself to replace the faulty component and then the process originally assigned to the faulty component retreats to one of the previously saved states in order to resume fault-free execution. A mathematical model is proposed to calculate both the coverage of multi-step rollback recovery and the risk of restart. A performance evaluation in terms of task execution time is also presented.
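
    As a rough software analogue of the recovery-block idea (a sketch under assumptions, not the paper's hardware mechanism), the following shows periodic state saves, rollback to a previously saved state on a detected fault, and restart as a last resort when no recovery point remains deep enough.

    ```python
    # A minimal software sketch of multi-step rollback recovery: a task
    # periodically saves its state, and on a detected fault it retreats to a
    # previously saved state and resumes. Depth models the number of
    # state-save units available.
    import copy, random

    class RollbackTask:
        def __init__(self, depth=3):
            self.saved = []            # bounded stack of saved states
            self.depth = depth         # recovery points kept
            self.state = {"step": 0, "acc": 0}

        def save_state(self):
            self.saved.append(copy.deepcopy(self.state))
            if len(self.saved) > self.depth:
                self.saved.pop(0)      # oldest recovery point is discarded

        def rollback(self, levels=1):
            if len(self.saved) < levels:
                raise RuntimeError("restart required: no recovery point deep enough")
            self.state = copy.deepcopy(self.saved[-levels])

        def step(self, fault_prob=0.2):
            self.save_state()
            # simulated computation for this recovery block
            self.state["step"] += 1
            self.state["acc"] += self.state["step"]
            if random.random() < fault_prob:     # fault detected: undo this block
                self.rollback(levels=1)
                return "rolled back"
            return "committed"

    task = RollbackTask()
    for _ in range(10):
        print(task.step(), task.state)
    ```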

  3. Coping with Health Problems: Developing a Reliable and Valid Multidimensional Measure.

    ERIC Educational Resources Information Center

    Endler, Norman S.; Parker, James D. A.; Summerfeldt, Laura J.

    1998-01-01

    A self-report measure, the Coping with Health Injuries and Problems Scale (CHIP), was developed to identify basic coping dimensions for responding to health problems. The CHIP factor structure, established with samples of 532 adults and 598 adults in Canada, is cross-validated with 390 general medical patients and 286 chronic back pain patients.…

  4. Decapsulation Method for Flip Chips with Ceramics in Microelectronic Packaging

    NASA Astrophysics Data System (ADS)

    Shih, T. I.; Duh, J. G.

    2008-06-01

    The decapsulation of flip chips bonded to ceramic substrates is a challenging task in the packaging industry owing to the vulnerability of the chip surface during the process. In conventional methods, such as manual grinding and polishing, the solder bumps are easily damaged during the removal of underfill, and the thin chip may even be crushed due to mechanical stress. An efficient and reliable decapsulation method consisting of thermal and chemical processes was developed in this study. The surface quality of chips after solder removal is satisfactory for the existing solder rework procedure as well as for die-level failure analysis. The innovative processes included heat-sink and ceramic substrate removal, solder bump separation, and solder residue cleaning from the chip surface. In the last stage, particular temperatures were selected for the removal of eutectic Pb-Sn, high-lead, and lead-free solders considering their respective melting points.

  5. The detection of hepatitis C virus core antigen using AFM chips with immobilized aptamers.

    PubMed

    Pleshakova, T O; Kaysheva, A L; Bayzyanova, J M; Anashkina, A S; Uchaikin, V F; Ziborov, V S; Konev, V A; Archakov, A I; Ivanov, Y D

    2018-01-01

    In the present study, the possibility of hepatitis C virus core antigen (HCVcoreAg) detection in buffer solution, using an atomic force microscopy chip (AFM-chip) with immobilized aptamers, has been demonstrated. The target protein was detected in 1 mL of solution at concentrations from 10⁻¹⁰ M to 10⁻¹³ M. The registration of aptamer/antigen complexes on the chip surface was carried out by atomic force microscopy (AFM). Further mass-spectrometric (MS) identification of the AFM-registered objects on the chip surface allowed reliable identification of the HCVcoreAg target protein in the complexes. Aptamers, which were designed for therapeutic purposes, have been shown to be effective as probe molecules for HCVcoreAg detection. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. Aeroflex Technology as Class-Y Demonstrator

    NASA Technical Reports Server (NTRS)

    Suh, Jong-ook; Agarwal, Shri; Popelar, Scott

    2014-01-01

    Modern space field programmable gate array (FPGA) devices with increased functional density and operational frequency, such as Xilinx Virtex 4 (V4) and Virtex 5 (V5), are packaged in non-hermetic ceramic flip chip forms. These next-generation space parts were not qualified to the MIL-PRF-38535 Qualified Manufacturer Listing (QML) class-V when they were released, because class-V was only intended for hermetic parts. In order to bring Xilinx V5-type packages into the QML system, it was suggested that class-Y be set up as a new category. From 2010 through 2014, a JEDEC G12 task group developed screening and qualification requirements for class-Y products. The Document Standardization Division of the Defense Logistics Agency (DLA) has completed an engineering practice study. In parallel with the class-Y efforts, the NASA Electronic Parts and Packaging (NEPP) program has funded JPL to study potential reliability issues of class-Y products. The major hurdle of this task was the absence of adequate research samples. Figure 1-1 shows schematic diagrams of typical structures of class-Y type products. Typically, class-Y products are either in ceramic flip chip column grid array (CGA) or land grid array (LGA) form. In class-Y packages, underfill and heat spreader adhesive materials are directly exposed to the spacecraft environment due to their non-hermeticity. One of the concerns originally raised was that the underfill material could degrade in the spacecraft environment and negatively impact the reliability of the package. In order to study such issues, it was necessary to use ceramic daisy-chain flip chip package samples so that continuity of flip chip solder bumps could be monitored during the reliability tests. However, none of the commercially available class-Y daisy-chain parts had electrical connections through flip chip solder bumps; only the solder columns were daisy-chained, which made it impossible to test continuity of flip chip solder bumps without using extremely costly functional parts. Among space parts manufacturers interested in producing class-Y products, Aeroflex Microelectronic Solutions-HiRel had been developing assembly processes using their internal R&D class-Y type samples. In early 2012, JPL and Aeroflex initiated a collaboration to study the reliability of the Aeroflex technology as a class-Y demonstrator.

  7. Bibliography On Multiprocessors And Distributed Processing

    NASA Technical Reports Server (NTRS)

    Miya, Eugene N.

    1988-01-01

    The Multiprocessor and Distributed Processing Bibliography package consists of a large machine-readable bibliographic database which, in addition to supporting the usual keyword searches, is used for producing citations, indexes, and cross-references. The database contains UNIX(R) "refer"-formatted ASCII data and can be implemented on any computer running the UNIX(R) operating system; it is easily convertible to other operating systems. It requires approximately one megabyte of secondary storage. The bibliography was compiled in 1985.

  8. Method for wiring allocation and switch configuration in a multiprocessor environment

    DOEpatents

    Aridor, Yariv [Zichron Ya'akov, IL; Domany, Tamar [Kiryat Tivon, IL; Frachtenberg, Eitan [Jerusalem, IL; Gal, Yoav [Haifa, IL; Shmueli, Edi [Haifa, IL; Stockmeyer, legal representative, Robert E.; Stockmeyer, Larry Joseph [San Jose, CA

    2008-07-15

    A method for wiring allocation and switch configuration in a multiprocessor computer, the method including employing depth-first tree traversal to determine a plurality of paths among a plurality of processing elements allocated to a job along a plurality of switches and wires in a plurality of D-lines, and selecting one of the paths in accordance with at least one selection criterion.
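
    For illustration only, the sketch below shows the two ingredients named in the abstract (depth-first traversal to enumerate candidate paths, then selection by a criterion) on a toy topology; the hop-count criterion and the graph are assumptions, not the patented procedure.

    ```python
    # Hedged illustration: enumerate candidate wiring paths between allocated
    # processing elements by depth-first traversal, then pick one path by a
    # selection criterion (here, fewest hops -- an assumed criterion).
    def dfs_paths(graph, src, dst, path=None):
        """Yield all simple paths from src to dst using depth-first traversal."""
        path = (path or []) + [src]
        if src == dst:
            yield path
            return
        for nxt in graph.get(src, []):
            if nxt not in path:                     # keep paths simple (no cycles)
                yield from dfs_paths(graph, nxt, dst, path)

    def allocate_wiring(graph, src, dst, criterion=len):
        """Pick the candidate path that minimizes the given selection criterion."""
        candidates = list(dfs_paths(graph, src, dst))
        return min(candidates, key=criterion) if candidates else None

    # Toy topology: processing elements PE0/PE1 joined through switches S1..S3.
    topology = {
        "PE0": ["S1", "S2"],
        "S1": ["S3"],
        "S2": ["S3", "S1"],
        "S3": ["PE1"],
    }
    print(allocate_wiring(topology, "PE0", "PE1"))   # -> ['PE0', 'S1', 'S3', 'PE1']
    ```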

  9. Generic Software for Emulating Multiprocessor Architectures.

    DTIC Science & Technology

    1985-05-01

    AD-A157 662: Generic Software for Emulating Multiprocessor Architectures. MIT Laboratory for Computer Science, 545 Technology Square, Cambridge, MA 02139. Keywords: computer architecture, emulation, simulation, dataflow.

  10. Adaptive Backoff Synchronization Techniques

    DTIC Science & Technology

    1989-07-01

    The Simple Code. Technical Report, Lawrence Livermore Laboratory, February 1978. [6] F. Darems-Rogers, D. A. George, V. A. Norton, and G. F. Pfister...Heights, November 1986. [7] Daniel Gajski, David Kuck, Duncan Lawrie, and Ahmed Saleh. Cedar - A Large Scale Multiprocessor. In International...[17] Janak H. Patel. Analysis of Multiprocessors with Private Cache Memories. IEEE Transactions on Computers, C-31(4):296-304, April 1982. [18] G

  11. The adolescent child health and illness profile. A population-based measure of health.

    PubMed

    Starfield, B; Riley, A W; Green, B F; Ensminger, M E; Ryan, S A; Kelleher, K; Kim-Harris, S; Johnston, D; Vogel, K

    1995-05-01

    This study was designed to test the reliability and validity of an instrument to assess adolescent health status. Reliability and validity were examined by administration to adolescents (ages 11-17 years) in eight schools in two urban areas, one area in Appalachia, and one area in the rural South. Integrity of the domains and subdomains and construct validity were tested in all areas. Test/retest stability, criterion validity, and convergent and discriminant validity were tested in the two urban areas. Iterative testing has resulted in the final form of the CHIP-AE (Child Health and Illness Profile-Adolescent Edition) having 6 domains with 20 subdomains. The domains are Discomfort, Disorders, Satisfaction with Health, Achievement (of age-appropriate social roles), Risks, and Resilience. Tested aspects of reliability and validity have achieved acceptable levels for all retained subdomains. The CHIP-AE in its current form is suitable for assessing the health status of populations and subpopulations of adolescents. Evidence from test-retest stability analyses suggests that the CHIP-AE also can be used to assess changes occurring over time or in response to health services interventions targeted at groups of adolescents.

  12. An automatic chip structure optical inspection system for electronic components

    NASA Astrophysics Data System (ADS)

    Song, Zhichao; Xue, Bindang; Liang, Jiyuan; Wang, Ke; Chen, Junzhang; Liu, Yunhe

    2018-01-01

    An automatic chip structure inspection system based on machine vision is presented to ensure the reliability of electronic components. It consists of four major modules: a metallographic microscope, a Gigabit Ethernet high-resolution camera, a control system, and a high-performance computer. An auto-focusing technique is presented to solve the problem that the chip surface does not lie on a single focal plane under the high magnification of the microscope. A panoramic high-resolution image-stitching algorithm is adopted to deal with the contradiction between resolution and field of view caused by the different sizes of electronic components. In addition, we establish a database to store and recall appropriate parameters to ensure the consistency of chip images of electronic components of the same model. We use image change detection technology to realize the inspection of chip images of electronic components. The system can achieve high-resolution imaging for chips of electronic components of various sizes, clear imaging of the chip surface, standardized imaging for components of the same model, and recognition of chip defects.

  13. Precise delay measurement through combinatorial logic

    NASA Technical Reports Server (NTRS)

    Burke, Gary R. (Inventor); Chen, Yuan (Inventor); Sheldon, Douglas J. (Inventor)

    2010-01-01

    A high-resolution circuit and method for facilitating precise measurement of on-chip delays in FPGAs for reliability studies. The circuit embeds a pulse generator on an FPGA chip having one or more groups of LUTs (the "LUT delay chain"), also on-chip. The circuit also embeds a pulse-width measurement circuit on-chip and measures the duration of the generated pulse through the delay chain. The pulse width of the output pulse represents the delay through the delay chain without any I/O delay. The pulse-width measurement circuit uses an additional asynchronous clock autonomous from the main clock, and the FPGA propagation delay can be displayed on a hex display continuously for testing purposes.

  14. Framework for analysis of guaranteed QOS systems

    NASA Astrophysics Data System (ADS)

    Chaudhry, Shailender; Choudhary, Alok

    1997-01-01

    Multimedia data is isochronous in nature and entails managing and delivering high volumes of data. Multiprocessors, with their large processing power, vast memory, and fast interconnects, are an ideal candidate for the implementation of multimedia applications. Initially, multiprocessors were designed to execute scientific programs, and thus their architecture was optimized to provide low message latency and to efficiently support regular communication patterns. Hence, they have a regular network topology and most use wormhole routing. The design offers the benefits of a simple router, small buffer size, and network latency that is almost independent of path length. Among the various multimedia applications, a video-on-demand (VOD) server is well suited for implementation using parallel multiprocessors. Logical models for VOD servers are then mapped onto multiprocessors. Our paper provides a framework for calculating bounds on the utilization of system resources with which QoS parameters for each isochronous stream can be guaranteed. The effects of multiprocessor architecture, and the efficiency of various logical models and mappings on particular architectures, can be investigated within our framework. Our framework is based on rigorous proofs and provides tight bounds. The results obtained may be used as the basis for admission-control tests. To illustrate the versatility of our framework, we provide bounds on utilization for various logical models applied to mesh-connected architectures for a video-on-demand server. Our results show that wormhole routing can lead to packets waiting for the transmission of other packets that apparently share no common resources. This situation is analogous to head-of-the-line blocking. We find that the provision of multiple VCs per link and multiple flit buffers improves utilization (even under guaranteed QoS parameters). This is analogous to parallel iterative matching.
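
    As an illustration of the kind of admission-control test such bounds feed (the concrete form is assumed here; the paper's actual bounds depend on topology, routing, and the logical model), a new isochronous stream is admitted only if every link on its route stays within a utilization bound:

    ```python
    # Schematic admission-control check (assumed form, not the paper's bound):
    # a new stream is admitted only if every link it crosses stays below
    # utilization_bound * capacity after adding the stream's rate.
    def admit(stream_rate, route, link_load, link_capacity, utilization_bound=0.8):
        for link in route:
            if link_load.get(link, 0.0) + stream_rate > utilization_bound * link_capacity[link]:
                return False
        for link in route:                       # commit the reservation
            link_load[link] = link_load.get(link, 0.0) + stream_rate
        return True

    capacity = {"A-B": 100.0, "B-C": 100.0}      # Mb/s, illustrative mesh links
    load = {}
    print(admit(40.0, ["A-B", "B-C"], load, capacity))   # True  (40 <= 80)
    print(admit(45.0, ["A-B"], load, capacity))          # False (40 + 45 > 80)
    ```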

  15. Silicon Nanophotonics for Many-Core On-Chip Networks

    NASA Astrophysics Data System (ADS)

    Mohamed, Moustafa

    The number of cores in many-core architectures is scaling to unprecedented levels, requiring ever-increasing communication capacity. Traditionally, architects follow the path of higher throughput at the expense of latency. This trend has become problematic for performance in many-core architectures. Moreover, power consumption is increasing with system scaling, mandating nontraditional solutions. Nanophotonics can address these problems, offering benefits on the three frontiers of many-core processor design: latency, bandwidth, and power. Nanophotonics leverages circuit-switching flow control, allowing low latency; in addition, the power consumption of optical links is significantly lower compared to their electrical counterparts for intermediate and long links. Finally, through wavelength division multiplexing, we can maintain the high-bandwidth trends without sacrificing throughput. This thesis focuses on realizing nanophotonics for communication in many-core architectures at different design levels, considering reliability challenges that our fabrication and measurements reveal. First, we study how to design on-chip networks for low latency, low power, and high bandwidth by exploiting the full potential of nanophotonics. The design process considers device-level limitations and capabilities on one hand, and system-level demands in terms of power and performance on the other. The design involves the choice of devices, the design of the optical link, the topology, the arbitration technique, and the routing mechanism. Next, we address the problem of reliability in on-chip networks. Reliability issues not only degrade performance but can block communication. Hence, we propose a reliability-aware design flow and present a reliability management technique based on this flow to address reliability in the system. In the proposed flow, reliability is modeled and analyzed at the device, architecture, and system levels. Our reliability management technique is superior to existing solutions in terms of power and performance. In fact, our solution can scale to a thousand cores with low overhead.

  16. Parallelising a molecular dynamics algorithm on a multi-processor workstation

    NASA Astrophysics Data System (ADS)

    Müller-Plathe, Florian

    1990-12-01

    The Verlet neighbour-list algorithm is parallelised for a multi-processor Hewlett-Packard/Apollo DN10000 workstation. The implementation makes use of memory shared between the processors. It is a genuine master-slave approach by which most of the computational tasks are kept in the master process and the slaves are only called to do part of the nonbonded forces calculation. The implementation features elements of both fine-grain and coarse-grain parallelism. Apart from three calls to library routines, two of which are standard UNIX calls, and two machine-specific language extensions, the whole code is written in standard Fortran 77. Hence, it may be expected that this parallelisation concept can be transferred in parts or as a whole to other multi-processor shared-memory computers. The parallel code is routinely used in production work.
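
    A rough sketch of the master-slave partitioning described above is given below (illustrative only, in Python rather than the original Fortran 77): the master splits the neighbour-list pair loop among slaves and accumulates their partial forces. The pair potential, chunking scheme, and toy coordinates are all assumptions made for the example.

    ```python
    # Minimal master-slave sketch of parallelising the pairwise non-bonded
    # force loop over a Verlet neighbour list: the master partitions the pair
    # list, slaves compute partial forces, and the master accumulates them.
    from concurrent.futures import ProcessPoolExecutor
    from itertools import combinations

    def pair_forces(chunk, positions):
        """Slave task: Lennard-Jones-like pair forces for one slice of the list."""
        forces = {}
        for i, j in chunk:
            dx = positions[i] - positions[j]
            f = 24.0 * (2.0 / dx**13 - 1.0 / dx**7)      # 1-D LJ-style force, eps=sigma=1
            forces[i] = forces.get(i, 0.0) + f
            forces[j] = forces.get(j, 0.0) - f
        return forces

    def compute_forces(positions, neighbour_list, nslaves=4):
        chunks = [neighbour_list[k::nslaves] for k in range(nslaves)]
        total = {i: 0.0 for i in range(len(positions))}
        with ProcessPoolExecutor(max_workers=nslaves) as pool:
            for partial in pool.map(pair_forces, chunks, [positions] * nslaves):
                for i, f in partial.items():             # master accumulates
                    total[i] += f
        return total

    if __name__ == "__main__":
        pos = [0.0, 1.1, 2.3, 3.6]                        # toy 1-D coordinates
        nlist = [(i, j) for i, j in combinations(range(len(pos)), 2)
                 if abs(pos[i] - pos[j]) < 2.5]           # neighbour list with cutoff
        print(compute_forces(pos, nlist))
    ```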

  17. Satisfiability Test with Synchronous Simulated Annealing on the Fujitsu AP1000 Massively-Parallel Multiprocessor

    NASA Technical Reports Server (NTRS)

    Sohn, Andrew; Biswas, Rupak

    1996-01-01

    Solving the hard Satisfiability Problem is time consuming even for modest-sized problem instances. Solving the Random L-SAT Problem is especially difficult due to the ratio of clauses to variables. This report presents a parallel synchronous simulated annealing method for solving the Random L-SAT Problem on a large-scale distributed-memory multiprocessor. In particular, we use a parallel synchronous simulated annealing procedure, called Generalized Speculative Computation, which guarantees the same decision sequence as sequential simulated annealing. To demonstrate the performance of the parallel method, we have selected problem instances varying in size from 100-variables/425-clauses to 5000-variables/21,250-clauses. Experimental results on the AP1000 multiprocessor indicate that our approach can satisfy 99.9 percent of the clauses while giving almost a 70-fold speedup on 500 processors.
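
    For orientation, a compact sequential sketch of annealing on a random 3-SAT instance follows; the clause generator, cooling schedule, and parameters are assumptions, and the paper's contribution is the speculative parallel version that preserves this decision sequence.

    ```python
    # Sequential simulated annealing on a random 3-SAT instance (illustrative).
    import math, random

    def num_unsat(clauses, assign):
        """Count clauses with no satisfied literal under the current assignment."""
        return sum(not any(assign[abs(l)] == (l > 0) for l in c) for c in clauses)

    def anneal(clauses, n_vars, t0=2.0, cooling=0.95, sweeps=200):
        assign = {v: random.random() < 0.5 for v in range(1, n_vars + 1)}
        temp, cost = t0, num_unsat(clauses, assign)
        while temp > 0.01 and cost > 0:
            for _ in range(sweeps):
                v = random.randint(1, n_vars)            # propose flipping one variable
                assign[v] = not assign[v]
                new_cost = num_unsat(clauses, assign)
                delta = new_cost - cost
                if delta <= 0 or random.random() < math.exp(-delta / temp):
                    cost = new_cost                       # accept (Metropolis rule)
                else:
                    assign[v] = not assign[v]             # reject: undo the flip
            temp *= cooling
        return assign, cost

    # Random L-SAT style instance: clause/variable ratio ~4.25, clause length 3.
    random.seed(1)
    n_vars = 100
    clauses = [[random.choice([-1, 1]) * random.randint(1, n_vars) for _ in range(3)]
               for _ in range(425)]
    assign, unsat = anneal(clauses, n_vars)
    print(f"{len(clauses) - unsat}/{len(clauses)} clauses satisfied")
    ```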

  18. Macromolecular Crystal Growth by Means of Microfluidics

    NASA Technical Reports Server (NTRS)

    vanderWoerd, Mark; Ferree, Darren; Spearing, Scott; Monaco, Lisa; Molho, Josh; Spaid, Michael; Brasseur, Mike; Curreri, Peter A. (Technical Monitor)

    2002-01-01

    We have performed a feasibility study in which we show that chip-based, microfluidic (LabChip(TM)) technology is suitable for protein crystal growth. This technology allows for accurate and reliable dispensing and mixing of very small volumes while minimizing bubble formation in the crystallization mixture. The amount of (protein) solution remaining after completion of an experiment is minimal, which makes this technique efficient and attractive for use with proteins, which are difficult or expensive to obtain. The nature of LabChip(TM) technology renders it highly amenable to automation. Protein crystals obtained in our initial feasibility studies were of excellent quality as determined by X-ray diffraction. Subsequent to the feasibility study, we designed and produced the first LabChip(TM) device specifically for protein crystallization in batch mode. It can reliably dispense and mix from a range of solution constituents into two independent growth wells. We are currently testing this design to prove its efficacy for protein crystallization optimization experiments. In the near future we will expand our design to incorporate up to 10 growth wells per LabChip(TM) device. Upon completion, additional crystallization techniques such as vapor diffusion and liquid-liquid diffusion will be accommodated. Macromolecular crystallization using microfluidic technology is envisioned as a fully automated system, which will use the 'tele-science' concept of remote operation and will be developed into a research facility for the International Space Station as well as on the ground.

  19. Scheduling for Locality in Shared-Memory Multiprocessors

    DTIC Science & Technology

    1993-05-01

    Submitted in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy. ...architecture on parallel program performance, explain the implications of this trend on popular parallel programming models, and propose system software to ...decomposition and scheduling algorithms. Subject terms: shared-memory multiprocessors; architecture trends; loop scheduling. Number of pages: 110.

  20. The MIT Alewife Machine: A Large-Scale Distributed-Memory Multiprocessor

    DTIC Science & Technology

    1991-06-01

    Symposium on Compiler Construction, June 1986. [14] Daniel Gajski, David Kuck, Duncan Lawrie, and Ahmed Saleh. Cedar - A Large Scale Multiprocessor. In...Directory Methods. In Proceedings 17th Annual International Symposium on Computer Architecture, June 1990. [31] G. M. Papadopoulos and D. E. Culler...Monsoon: An Explicit Token-Store Architecture. In Proceedings 17th Annual International Symposium on Computer Architecture, June 1990. [32] G. F

  1. Adaptive Backoff Synchronization Techniques

    DTIC Science & Technology

    1989-06-01

    The Simple Code. Technical Report, Lawrence Livermore Laboratory, February 1978. [6] F. Darems-Rogers, D. A. George, V. A. Norton, and G. F. Pfister...Heights, November 1986. [7] Daniel Gajski, David Kuck, Duncan Lawrie, and Ahmed Saleh. Cedar - A Large Scale Multiprocessor. In International Conference...[17] Janak H. Patel. Analysis of Multiprocessors with Private Cache Memories. IEEE Transactions on Computers, C-31(4):296-304, April 1982. [18] G

  2. Operating system for a real-time multiprocessor propulsion system simulator

    NASA Technical Reports Server (NTRS)

    Cole, G. L.

    1984-01-01

    The Real Time Multiprocessor Operating System (RTMPOS) was evaluated for its success in the development and evaluation of experimental hardware and software systems for real-time interactive simulation of air-breathing propulsion systems. The RTMPOS provides the user with a versatile, interactive means for loading, running, debugging and obtaining results from a multiprocessor-based simulator. A front-end processor (FEP) serves as the simulator controller and interface between the user and the simulator. These functions are facilitated by the RTMPOS, which resides on the FEP. The RTMPOS acts in conjunction with the FEP's manufacturer-supplied disk operating system, which provides typical utilities such as an assembler, linkage editor, text editor, file-handling services, etc. Once a simulation is formulated, the RTMPOS provides for engineering-level, run-time operations such as loading, modifying and specifying the computation flow of programs, simulator mode control, data handling, and run-time monitoring. Run-time monitoring is a powerful feature of RTMPOS that allows the user to record all actions taken during a simulation session and to receive advisories from the simulator via the FEP. The RTMPOS is programmed mainly in PASCAL along with some assembly-language routines. The RTMPOS software is easily modified to be applicable to hardware from different manufacturers.

  3. Insertion of GaAs MMICs into EW systems

    NASA Astrophysics Data System (ADS)

    Schineller, E. R.; Pospishil, A.; Grzyb, J.

    1989-09-01

    Development activities on a microwave/mm-wave monolithic IC (MIMIC) program are described, as well as the methodology for inserting these GaAs IC chips into several EW systems. The generic EW chip set developed on the MIMIC program consists of 23 broadband chip types, including amplifiers, oscillators, mixers, switches, variable attenuators, power dividers, and power combiners. These chips are being designed for fabrication using the multifunction self-aligned gate process. The benefits from GaAs IC insertion are quantified by a comparison of hardware units fabricated with existing MIC and digital ECL technology and the same units manufactured with monolithic technology. It is found that major improvements in cost, reliability, size, weight, and performance can be realized. Examples illustrating the methodology for technology insertion are presented.

  4. The FORCE - A highly portable parallel programming language

    NASA Technical Reports Server (NTRS)

    Jordan, Harry F.; Benten, Muhammad S.; Alaghband, Gita; Jakob, Ruediger

    1989-01-01

    This paper explains why the FORCE parallel programming language is easily portable among six different shared-memory multiprocessors, and how a two-level macro preprocessor makes it possible to hide low-level machine dependencies and to build machine-independent high-level constructs on top of them. These FORCE constructs make it possible to write portable parallel programs largely independent of the number of processes and the specific shared-memory multiprocessor executing them.

  5. Reducing Response Time Bounds for DAG-Based Task Systems on Heterogeneous Multicore Platforms

    DTIC Science & Technology

    2016-01-01

    synchronous parallel tasks on multicore platforms. In 25th ECRTS, 2013. [10] U. Devi. Soft Real-Time Scheduling on Multiprocessors. PhD thesis...report, Washington University in St Louis, 2014. [18] C. Liu and J. Anderson. Supporting soft real-time DAG-based systems on multiprocessors with...analysis for DAG-based real-time task systems implemented on heterogeneous multicore platforms. The specific analysis problem that is considered was

  6. Graphics applications utilizing parallel processing

    NASA Technical Reports Server (NTRS)

    Rice, John R.

    1990-01-01

    The results are presented of research conducted to develop a parallel graphics application algorithm to depict the numerical solution of the 1-D wave equation for a vibrating string. The research was conducted on a Flexible Flex/32 multiprocessor and a Sequent Balance 21000 multiprocessor. The wave equation is solved using the finite-difference method. The synchronization issues that arose from the parallel implementation and the strategies used to alleviate the effects of the synchronization overhead are discussed.
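
    The underlying computation is easy to sketch; the following illustrative finite-difference update for the vibrating string (not the Flex/32 or Balance code, and with invented grid parameters) is the loop whose parallelization raises the synchronization issues discussed:

    ```python
    # Explicit finite-difference scheme for the 1-D wave equation
    # u_tt = c^2 u_xx on a string with fixed ends.
    import math

    def simulate_string(n=50, steps=200, c=1.0, dx=1.0 / 50, dt=0.01):
        r2 = (c * dt / dx) ** 2                      # Courant number squared (must be <= 1)
        u_prev = [math.sin(math.pi * i * dx) for i in range(n + 1)]   # initial pluck
        u = list(u_prev)                             # zero initial velocity
        for _ in range(steps):
            u_next = [0.0] * (n + 1)                 # fixed ends stay at 0
            for i in range(1, n):                    # interior points: explicit update
                u_next[i] = 2 * u[i] - u_prev[i] + r2 * (u[i + 1] - 2 * u[i] + u[i - 1])
            u_prev, u = u, u_next
        return u

    print(max(abs(x) for x in simulate_string()))    # string amplitude after 200 steps
    ```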

  7. Dynamic file-access characteristics of a production parallel scientific workload

    NASA Technical Reports Server (NTRS)

    Kotz, David; Nieuwejaar, Nils

    1994-01-01

    Multiprocessors have permitted astounding increases in computational performance, but many cannot meet the intense I/O requirements of some scientific applications. An important component of any solution to this I/O bottleneck is a parallel file system that can provide high-bandwidth access to tremendous amounts of data in parallel to hundreds or thousands of processors. Most successful systems are based on a solid understanding of the expected workload, but thus far there have been no comprehensive workload characterizations of multiprocessor file systems. This paper presents the results of a three week tracing study in which all file-related activity on a massively parallel computer was recorded. Our instrumentation differs from previous efforts in that it collects information about every I/O request and about the mix of jobs running in a production environment. We also present the results of a trace-driven caching simulation and recommendations for designers of multiprocessor file systems.

  8. A Multiprocessor Operating System Simulator

    NASA Technical Reports Server (NTRS)

    Johnston, Gary M.; Campbell, Roy H.

    1988-01-01

    This paper describes a multiprocessor operating system simulator that was developed by the authors in the Fall semester of 1987. The simulator was built in response to the need to provide students with an environment in which to build and test operating system concepts as part of the coursework of a third-year undergraduate operating systems course. Written in C++, the simulator uses the co-routine style task package that is distributed with the AT&T C++ Translator to provide a hierarchy of classes that represents a broad range of operating system software and hardware components. The class hierarchy closely follows that of the 'Choices' family of operating systems for loosely- and tightly-coupled multiprocessors. During an operating system course, these classes are refined and specialized by students in homework assignments to facilitate experimentation with different aspects of operating system design and policy decisions. The current implementation runs on the IBM RT PC under 4.3bsd UNIX.

  9. A short review on thermosonic flip chip bonding

    NASA Astrophysics Data System (ADS)

    Suppiah, Sarveshvaran; Ong, Nestor Rubio; Sauli, Zaliman; Sarukunaselan, Karunavani; Alcain, Jesselyn Barro; Shahimin, Mukhzeer Mohamad; Retnasamy, Vithyacharan

    2017-09-01

    This review studies the evolution, key findings, critical technical challenges, solutions, and bonding equipment of thermosonic flip chip bonding. Based on the review, it was found that ultrasonic power, bonding time, and force are the three main critical parameters that need to be optimized in order to achieve a sound and reliable bond between the die and the substrate. Close monitoring of the ultrasonic power helps to prevent over-bonding phenomena on flexible substrates. Gold stud bumping is more commonly used in thermosonic bonding than solder due to the better reliability obtained in LED and optoelectronic packages. The review also includes brief details on the thermosonic bonding equipment available in the semiconductor industry.

  10. Spike-In Normalization of ChIP Data Using DNA-DIG-Antibody Complex.

    PubMed

    Eberle, Andrea B

    2018-01-01

    Chromatin immunoprecipitation (ChIP) is a widely used method to determine the occupancy of specific proteins within the genome, helping to unravel the function and activity of specific genomic regions. In ChIP experiments, normalization of the obtained data by a suitable internal reference is crucial. However, particularly when comparing differently treated samples, such a reference is difficult to identify. Here, a simple method to improve the accuracy and reliability of ChIP experiments with the help of an external reference is described. An artificial molecule, composed of a well-defined digoxigenin (DIG)-labeled DNA fragment in complex with an anti-DIG antibody, is synthesized and added to each chromatin sample before immunoprecipitation. During the ChIP procedure, the DNA-DIG-antibody complex undergoes the same treatments as the chromatin and is therefore purified and quantified together with the chromatin of interest. This external reference compensates for variability during the ChIP routine and improves the similarity between replicates, thereby emphasizing the biological differences between samples.
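
    The normalization arithmetic this enables can be sketched as follows (an assumed form with invented numbers, shown only to illustrate how the spike-in recovery rescales enrichment across samples):

    ```python
    # Hedged sketch: the recovery of the DNA-DIG-antibody spike in each sample
    # is used as a scaling factor so that target enrichment is comparable
    # across differently treated samples. All values are illustrative.
    def spike_normalized_enrichment(target_ip, target_input, spike_ip, spike_added):
        """Percent-input enrichment of the target locus, rescaled by how much of
        the external spike-in complex was recovered in this particular ChIP."""
        spike_recovery = spike_ip / spike_added          # fraction of spike recovered
        percent_input = target_ip / target_input
        return percent_input / spike_recovery

    # Two samples with identical biology but different IP efficiency:
    print(spike_normalized_enrichment(target_ip=200, target_input=10_000,
                                      spike_ip=50, spike_added=1_000))   # 0.4
    print(spike_normalized_enrichment(target_ip=100, target_input=10_000,
                                      spike_ip=25, spike_added=1_000))   # 0.4 as well
    ```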

  11. Stress analysis of ultra-thin silicon chip-on-foil electronic assembly under bending

    NASA Astrophysics Data System (ADS)

    Wacker, Nicoleta; Richter, Harald; Hoang, Tu; Gazdzicki, Pawel; Schulze, Mathias; Angelopoulos, Evangelos A.; Hassan, Mahadi-Ul; Burghartz, Joachim N.

    2014-09-01

    In this paper we investigate the bending-induced uniaxial stress at the top of ultra-thin (thickness ≤ 20 μm) single-crystal silicon (Si) chips adhesively attached with the aid of an epoxy glue to a soft polymeric substrate, through combined theoretical and experimental methods. Stress is first determined analytically and numerically using dedicated models. The theoretical results are validated experimentally through piezoresistive measurements performed on complementary metal-oxide-semiconductor (CMOS) transistors built on specially designed chips, and through micro-Raman spectroscopy investigation. Stress analysis of strained ultra-thin chips with CMOS circuitry is crucial, not only for the accurate evaluation of the piezoresistive behavior of the built-in devices and circuits, but also for reliability and deformability analysis. The results reveal an uneven bending-induced stress distribution at the top of the Si chip that decreases from the central area towards the chip's edges along the bending direction, and increases towards the other edges. Near these edges, stress can reach very high values, facilitating the emergence of cracks causing ultimate chip failure.

  12. Property-driven functional verification technique for high-speed vision system-on-chip processor

    NASA Astrophysics Data System (ADS)

    Nshunguyimfura, Victor; Yang, Jie; Liu, Liyuan; Wu, Nanjian

    2017-04-01

    The implementation of functional verification in a fast, reliable, and effective manner is a challenging task in the vision chip verification process. The main reason for this challenge is the stepwise nature of existing functional verification techniques. The complexity of vision chip verification is also related to the fact that in most vision chip design cycles, extensive efforts are focused on how to optimize chip metrics such as performance, power, and area. Design functional verification is not explicitly considered at an earlier stage, at which the most sound decisions are made. In this paper, we propose a semi-automatic property-driven verification technique. The implementation of all verification components is based on design properties. We introduce a low-dimension property space between the specification space and the implementation space. The aim of this technique is to speed up the verification process for high-performance parallel-processing vision chips. Our experimental results show that the proposed technique can improve the verification effort by up to 20% for a complex vision chip design while reducing the simulation and debugging overheads.

  13. A fast and reliable readout method for quantitative analysis of surface-enhanced Raman scattering nanoprobes on chip surface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, Hyejin; Jeong, Sinyoung; Ko, Eunbyeol

    2015-05-15

    Surface-enhanced Raman scattering techniques have been widely used for bioanalysis due to their high sensitivity and multiplex capacity. However, the point-scanning method using a micro-Raman system, which is the most common method in the literature, has the disadvantage of extremely long measurement time for on-chip immunoassays adopting a large chip area of approximately 1-mm scale and a confocal beam point of ca. 1-μm size. Alternative methods, such as a sampled spot scan with high confocality and a large-area scan with enlarged field of view and low confocality, have been utilized in order to minimize the measurement time in practice. In this study, we analyzed the two methods with respect to signal-to-noise ratio and sampling-led signal fluctuations to obtain insights into a fast and reliable readout strategy. On this basis, we proposed a methodology for fast and reliable quantitative measurement of the whole chip area. The proposed method adopted a raster scan covering a full area of 100 μm × 100 μm as a proof-of-concept experiment while accumulating signals in the CCD detector for a single spectrum per frame. A single 10 s scan over the 100 μm × 100 μm area yielded much higher sensitivity than sampled spot scanning measurements, with no signal fluctuations attributable to the sampled spot scan. This readout method is able to serve as one of the key technologies that will bring quantitative multiplexed detection and analysis into practice.

  14. ChIP-chip.

    PubMed

    Kim, Tae Hoon; Dekker, Job

    2018-05-01

    ChIP-chip can be used to analyze protein-DNA interactions in a region-wide and genome-wide manner. DNA microarrays contain PCR products or oligonucleotide probes that are designed to represent genomic sequences. Identification of genomic sites that interact with a specific protein is based on competitive hybridization of the ChIP-enriched DNA and the input DNA to DNA microarrays. The ChIP-chip protocol can be divided into two main sections: Amplification of ChIP DNA and hybridization of ChIP DNA to arrays. A large amount of DNA is required to hybridize to DNA arrays, and hybridization to a set of multiple commercial arrays that represent the entire human genome requires two rounds of PCR amplifications. The relative hybridization intensity of ChIP DNA and that of the input DNA is used to determine whether the probe sequence is a potential site of protein-DNA interaction. Resolution of actual genomic sites bound by the protein is dependent on the size of the chromatin and on the genomic distance between the probes on the array. As with expression profiling using gene chips, ChIP-chip experiments require multiple replicates for reliable statistical measure of protein-DNA interactions. © 2018 Cold Spring Harbor Laboratory Press.
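
    As a sketch of the readout step (illustrative; the probe intensities and log2 cutoff are invented), candidate protein-DNA interaction sites are probes whose ChIP/input log-ratio exceeds a threshold:

    ```python
    # The relative hybridization intensity of ChIP DNA versus input DNA at each
    # probe is expressed as a log2 ratio; probes above an assumed threshold are
    # flagged as candidate protein-DNA interaction sites.
    import math

    def candidate_sites(chip_signal, input_signal, log2_cutoff=1.0):
        """Return (probe index, log2 ratio) pairs exceeding the cutoff."""
        hits = []
        for i, (c, n) in enumerate(zip(chip_signal, input_signal)):
            ratio = math.log2((c + 1.0) / (n + 1.0))     # pseudocount avoids log(0)
            if ratio >= log2_cutoff:
                hits.append((i, round(ratio, 2)))
        return hits

    chip = [120, 80, 900, 110, 2300]     # illustrative probe intensities
    inpt = [100, 95, 210, 120, 240]
    print(candidate_sites(chip, inpt))    # [(2, 2.09), (4, 3.26)]
    ```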

  15. Characterization of System on a Chip (SoC) Single Event Upset (SEU) Responses Using SEU Data, Classical Reliability Models, and Space Environment Data

    NASA Technical Reports Server (NTRS)

    Berg, Melanie; Label, Kenneth; Campola, Michael; Xapsos, Michael

    2017-01-01

    We propose a method for the application of single event upset (SEU) data towards the analysis of complex systems using transformed reliability models (from the time domain to the particle fluence domain) and space environment data.

  16. Cache as point of coherence in multiprocessor system

    DOEpatents

    Blumrich, Matthias A.; Ceze, Luis H.; Chen, Dong; Gara, Alan; Heidelberger, Phlip; Ohmacht, Martin; Steinmacher-Burow, Burkhard; Zhuang, Xiaotong

    2016-11-29

    In a multiprocessor system, a conflict checking mechanism is implemented in the L2 cache memory. Different versions of speculative writes are maintained in different ways of the cache. A record of speculative writes is maintained in the cache directory. Conflict checking occurs as part of directory lookup. Speculative versions that do not conflict are aggregated into an aggregated version in a different way of the cache. Speculative memory access requests do not go to main memory.
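
    A highly simplified software model of such conflict checking is sketched below (an assumption made for illustration; the patent describes hardware bookkeeping in the L2 directory, and real conflict detection would also track speculative reads):

    ```python
    # Toy model: record each speculative write per cache line in a directory,
    # and flag an access from a different speculation ID to a speculatively
    # written line as a conflict detected at directory lookup.
    class SpeculativeDirectory:
        def __init__(self):
            self.writers = {}                  # address -> set of speculation ids

        def access(self, spec_id, address, is_write):
            """Record the access and report whether it conflicts with another
            speculation's write to the same line."""
            others = self.writers.get(address, set()) - {spec_id}
            conflict = bool(others)            # someone else wrote this line speculatively
            if is_write:
                self.writers.setdefault(address, set()).add(spec_id)
            return conflict

    d = SpeculativeDirectory()
    print(d.access(spec_id=1, address=0x40, is_write=True))    # False: first writer
    print(d.access(spec_id=2, address=0x40, is_write=False))   # True: conflicts with spec 1
    print(d.access(spec_id=1, address=0x40, is_write=True))    # False: same speculation
    ```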

  17. Multiprocessor Real-Time Locking Protocols for Replicated Resources

    DTIC Science & Technology

    2016-07-01

    circular buffer of slots, each representing a discrete segment of time. For example, if the maintenance of a timing wheel occurs after an interrupt ...Experimental Evaluation To evaluate Algs. 2, 3, and 4, we conducted a series of experiments in which we measured relevant overheads and blocking times. We...Multiprocessor Real-Time Locking Protocols for Replicated Resources. Catherine E. Jarrett, Kecheng Yang, Ming Yang, Pontus Ekberg, and James H

  18. Cedar-a large scale multiprocessor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gajski, D.; Kuck, D.; Lawrie, D.

    1983-01-01

    This paper presents an overview of Cedar, a large scale multiprocessor being designed at the University of Illinois. This machine is designed to accommodate several thousand high performance processors which are capable of working together on a single job, or they can be partitioned into groups of processors where each group of one or more processors can work on separate jobs. Various aspects of the machine are described including the control methodology, communication network, optimizing compiler and plans for construction. 13 references.

  19. The force on the flex: Global parallelism and portability

    NASA Technical Reports Server (NTRS)

    Jordan, H. F.

    1986-01-01

    A parallel programming methodology, called the force, supports the construction of programs to be executed in parallel by an unspecified, but potentially large, number of processes. The methodology was originally developed on a pipelined, shared-memory multiprocessor, the Denelcor HEP, and embodies the primitive operations of the force in a set of macros which expand into multiprocessor Fortran code. A small set of primitives is sufficient to write large parallel programs, and the system has been used to produce 10,000-line programs in computational fluid dynamics. The level of complexity of the force primitives is intermediate: it is high enough to mask detailed architectural differences between multiprocessors but low enough to give the user control over performance. The system is being ported to a medium-scale multiprocessor, the Flex/32, which is a 20-processor system with a mixture of shared and local memory. Memory organization and the type of processor synchronization supported by the hardware on the two machines lead to some differences in efficient implementations of the force primitives, but the user interface remains the same. An initial implementation was done by retargeting the macros to Flexible Computer Corporation's ConCurrent C language. Subsequently, the macros were modified to produce directly the system calls which form the basis for ConCurrent C. The implementation of the Fortran-based system is in step with Flexible Computer Corporation's implementation of a Fortran system in the parallel environment.

  20. 3D packaging of a microfluidic system with sensory applications

    NASA Astrophysics Data System (ADS)

    Morrissey, Anthony; Kelly, Gerard; Alderman, John C.

    1997-09-01

    Among the main benefits of microsystem technology are its contributions to cost reduction, reliability, and improved performance. However, the packaging of microsystems, and particularly microsensors, has proven to be one of the biggest limitations to their commercialization, and the packaging of silicon sensor devices can be the most costly part of their fabrication. This paper describes the integration and 3D packaging of a microsystem. Central to the operation of the 3D demonstrator is a micromachined silicon membrane pump to supply fluids to a sensing chamber constructed about the active area of a sensor chip. This chip carries ISFET-based chemical sensors, pressure sensors, and thermal sensors. The electronics required for controlling and regulating the activity of the various sensors are also available on this chip and as other chips in the 3D assembly. The demonstrator also contains a power supply module with optical fiber interconnections. All of these modules are integrated into a single plastic-encapsulated 3D vertical multichip module. The reliability of such a structure, initially proposed by Val, was demonstrated by Barrett et al. An additional module available for inclusion in some of our assemblies is a test chip capable of measuring the packaging-induced stress experienced during and after assembly. The packaging process described produces a module with very high density and utilizes standard off-the-shelf components to minimize costs. As the sensor chip and micropump include micromachined silicon membranes and microvalves, the packaging of such structures has to allow for the minimization of packaging-induced stresses. With this in mind, low-stress techniques, including the use of soft glob-top materials, were employed.

  1. Combinatorics and Probability: Six- to Ten-Year-Olds Reliably Predict Whether a Relation Will Occur

    ERIC Educational Resources Information Center

    Gonzalez, Michel; Girotto, Vittorio

    2011-01-01

    Young children are able to judge which of two possibilities is more likely to occur when these possibilities are characterized by a simple property, like color ("Is it more likely to draw a red chip or a blue chip?"). Here we ask whether they can do so when the possibilities concern a relation between simple properties ("Is it more likely to draw…

  2. Investigation of electromigration behavior in lead-free flip chip solder bumps

    NASA Astrophysics Data System (ADS)

    Kalkundri, Kaustubh Jayant

    Packaging technology has also evolved over time in an effort to keep pace with the demanding requirements. Wirebond and flip chip packaging technologies have become extremely versatile and ubiquitous in catering to myriad applications due to their inherent potential. This research is restricted strictly to flip chip technology. This technology incorporates a process in which the bare chip is turned upside down, i.e., active face down, and is bonded through the I/O to the substrate, hence the name flip chip. A solder interconnect that provides electrical connection between the chip and substrate is bumped on a processed silicon wafer prior to dicing for die-attach. The assembly is then reflow-soldered, followed by the underfill process to provide the required encapsulation. The demand for smaller and lighter products has increased the number of I/Os without increasing the package sizes, thereby drastically reducing the size of the flip chip solder bumps and their pitch. Reliability assessment and verification of these devices have gained tremendous importance due to their shrinking size. To add to the complexity, changing material sets that are the result of recently enacted lead-free solder legislation have raised some compatibility issues that are already being researched. In addition to materials- and process-related flip chip challenges such as solder-flux compatibility, Coefficient of Thermal Expansion (CTE) mismatch, underfill-flux compatibility and thermal management, flip chip packages are vulnerable to a comparatively newer challenge, namely electromigration observed in solder bumps. It is interesting to note that electromigration has come to the forefront of challenges only recently. It has been exacerbated by the reduction in bump cross-section due to the seemingly continuous shrinking of package size over time. The focus of this research was to understand the overall electromigration behavior in lead-free (SnAg) flip chip solder bumps. The objectives of the research were to comprehend the physics of the failure mechanism in electromigration for lead-free solder bumps assembled in a flip chip ceramic package having thick copper under bump metallization, and to estimate the unknown critical material parameters from Black's equation that describe failure due to electromigration. In addition, the intent was to verify the 'use condition reliability' by extrapolation from experimental conditions. The methodology adopted for this research comprised accelerated electromigration tests on SnAg flip chip solder bumps assembled on a ceramic substrate with a thick copper under bump metallization. The experimental approach comprised elaborate measurement of the temperature of each sample by a separate metallization resistance exhibiting positive resistance characteristics, to overcome the variation in Joule heating. After conducting the constant-current experiments and analyzing the failed samples, it was found that the primary electromigration failure mode observed was the dissolution of the thick copper under bump metallization in the solder, leading to a change in resistance. The lifetime data obtained from the different experiments were analyzed simultaneously using a multiple regression approach to yield the unknown Black's equation parameters of current density exponent and activation energy.
In addition to the implementation of a systematic failure analysis and data analysis procedure, it was also deduced that thermomigration due to the temperature gradient across the chip does impact the overall electromigration behavior. This research and the obtained results were significant in bridging the gap for an overall understanding of this critical failure mode observed in flip chip solder bumps. The measurement of each individual sample temperature instead of an average temperature enabled an accurate analysis for predicting the 'use condition reliability' of a comparable product. The obtained results and the conclusions can be used as potential inputs in future designs and newer generations of flip chip devices that might undergo aggressive scaling. This will enable these devices to retain their functionality during their intended useful life with minimal threat of failure due to the potent issue of electromigration. (Abstract shortened by UMI.)
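
    The Black's-equation step lends itself to a compact illustration. The sketch below (with invented lifetime data, not the thesis measurements) fits the current-density exponent n and activation energy Ea by least squares on the linearized form ln(MTTF) = ln A - n·ln J + Ea/(kT):

    ```python
    # Hedged illustration of extracting Black's-equation parameters from
    # accelerated electromigration lifetimes: MTTF = A * J**(-n) * exp(Ea/(k*T)).
    # Taking logs gives a linear model in ln(J) and 1/T, so n and Ea follow
    # from least squares. The lifetime data below are synthetic.
    import math

    K_BOLTZMANN = 8.617e-5          # eV/K

    def fit_blacks_equation(data):
        """data: list of (current_density, temperature_K, mttf_hours)."""
        # Design matrix rows: [1, -ln(J), 1/(k*T)]; target: ln(MTTF)
        rows = [(1.0, -math.log(j), 1.0 / (K_BOLTZMANN * t)) for j, t, _ in data]
        y = [math.log(m) for _, _, m in data]
        # Solve the 3x3 normal equations by Gaussian elimination (no numpy needed).
        ata = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
        atb = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(3)]
        for col in range(3):                      # forward elimination with pivoting
            piv = max(range(col, 3), key=lambda r: abs(ata[r][col]))
            ata[col], ata[piv] = ata[piv], ata[col]
            atb[col], atb[piv] = atb[piv], atb[col]
            for r in range(col + 1, 3):
                f = ata[r][col] / ata[col][col]
                ata[r] = [a - f * b for a, b in zip(ata[r], ata[col])]
                atb[r] -= f * atb[col]
        x = [0.0, 0.0, 0.0]
        for r in (2, 1, 0):                       # back substitution
            x[r] = (atb[r] - sum(ata[r][c] * x[c] for c in range(r + 1, 3))) / ata[r][r]
        ln_a, n, ea = x
        return math.exp(ln_a), n, ea

    # Synthetic lifetimes generated with n = 2, Ea = 0.9 eV (so the fit recovers them).
    A, n_true, ea_true = 1.0e3, 2.0, 0.9
    data = [(j, t, A * j ** (-n_true) * math.exp(ea_true / (K_BOLTZMANN * t)))
            for j in (5e3, 1e4, 2e4) for t in (398.0, 423.0, 448.0)]
    a_fit, n_fit, ea_fit = fit_blacks_equation(data)
    print(f"n ~= {n_fit:.2f}, Ea ~= {ea_fit:.2f} eV")
    ```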

  3. ChIP-re-ChIP: Co-occupancy Analysis by Sequential Chromatin Immunoprecipitation.

    PubMed

    Beischlag, Timothy V; Prefontaine, Gratien G; Hankinson, Oliver

    2018-01-01

    Chromatin immunoprecipitation (ChIP) exploits the specific interactions between DNA and DNA-associated proteins. It can be used to examine a wide range of experimental parameters. A number of proteins bound at the same genomic location can identify a multi-protein chromatin complex where several proteins work together to regulate gene transcription or chromatin configuration. In many instances, this can be achieved using sequential ChIP; or simply, ChIP-re-ChIP. Whether it is for the examination of specific transcriptional or epigenetic regulators, or for the identification of cistromes, the ability to perform a sequential ChIP adds a higher level of power and definition to these analyses. In this chapter, we describe a simple and reliable method for the sequential ChIP assay.

  4. Analysis and design of algorithm-based fault-tolerant systems

    NASA Technical Reports Server (NTRS)

    Nair, V. S. Sukumaran

    1990-01-01

    An important consideration in the design of high performance multiprocessor systems is to ensure the correctness of the results computed in the presence of transient and intermittent failures. Concurrent error detection and correction have been applied to such systems in order to achieve reliability. Algorithm Based Fault Tolerance (ABFT) was suggested as a cost-effective concurrent error detection scheme. The research was motivated by the complexity involved in the analysis and design of ABFT systems. To that end, a matrix-based model was developed and, based on that, algorithms for both the design and analysis of ABFT systems are formulated. These algorithms are less complex than the existing ones. In order to reduce the complexity further, a hierarchical approach is developed for the analysis of large systems.
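
    As an illustration of what algorithm-based fault tolerance means in practice, the sketch below shows the classic row/column-checksum ABFT scheme for matrix multiplication; this is a standard textbook instance, not the matrix-based analysis model developed in the thesis.

```python
import numpy as np

def encode(A, B):
    """Full-checksum encoding for ABFT matrix multiplication."""
    Ac = np.vstack([A, A.sum(axis=0)])                 # column-checksum A
    Br = np.hstack([B, B.sum(axis=1, keepdims=True)])  # row-checksum B
    return Ac @ Br                                     # full-checksum product

def check(C, tol=1e-8):
    """Locate a single corrupted element of the full-checksum product, if any."""
    data = C[:-1, :-1]
    col_bad = np.abs(data.sum(axis=0) - C[-1, :-1]) > tol  # column checksums
    row_bad = np.abs(data.sum(axis=1) - C[:-1, -1]) > tol  # row checksums
    if col_bad.any() and row_bad.any():
        return int(np.argmax(row_bad)), int(np.argmax(col_bad))
    return None

A, B = np.random.rand(4, 3), np.random.rand(3, 5)
C = encode(A, B)
assert check(C) is None
C[1, 2] += 0.5            # simulate a transient fault in one result element
assert check(C) == (1, 2)  # detected and located via the intersecting checksums
```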

  5. Measurement and analysis of workload effects on fault latency in real-time systems

    NASA Technical Reports Server (NTRS)

    Woodbury, Michael H.; Shin, Kang G.

    1990-01-01

    The authors demonstrate the need to address fault latency in highly reliable real-time control computer systems. It is noted that the effectiveness of all known recovery mechanisms is greatly reduced in the presence of multiple latent faults. The presence of multiple latent faults increases the possibility of multiple errors, which could result in coverage failure. The authors present experimental evidence indicating that the duration of fault latency is dependent on workload. A synthetic workload generator is used to vary the workload, and a hardware fault injector is applied to inject transient faults of varying durations. This method makes it possible to derive the distribution of fault latency duration. Experimental results obtained from the fault-tolerant multiprocessor at the NASA Airlab are presented and discussed.

  6. Software/hardware distributed processing network supporting the Ada environment

    NASA Astrophysics Data System (ADS)

    Wood, Richard J.; Pryk, Zen

    1993-09-01

    A high-performance, fault-tolerant, distributed network has been developed, tested, and demonstrated. The network is based on the MIPS Computer Systems, Inc. R3000 RISC for processing, VHSIC ASICs for high-speed, reliable inter-node communications, and compatible commercial memory and I/O boards. The network is an evolution of the Advanced Onboard Signal Processor (AOSP) architecture. It supports Ada application software with an Ada-implemented operating system. A six-node implementation (capable of expansion up to 256 nodes) of the RISC multiprocessor architecture provides 120 MIPS of scalar throughput, 96 Mbytes of RAM and 24 Mbytes of non-volatile memory. The network supports all ground processing applications, has merit as a space-qualified RISC-based network, and interfaces to advanced Computer Aided Software Engineering (CASE) tools for application software development.

  7. Backend Control Processor for a Multi-Processor Relational Database Computer System.

    DTIC Science & Technology

    1984-12-01

    AFIT/GCS/ENG/84D-22, December 1984. Thesis presented to the Faculty of the School of Engineering of the Air Force Institute of Technology, Air University, in partial fulfillment of the ... development of a Backend Multi-Processor Relational Database Computer System. This thesis addresses a single component of this system, the Backend Control ...

  8. Visual monitoring of autonomous life sciences experimentation

    NASA Technical Reports Server (NTRS)

    Blank, G. E.; Martin, W. N.

    1987-01-01

    The design and implementation of a computerized visual monitoring system to aid in the monitoring and control of life sciences experiments on board a space station was investigated. A likely multiprocessor design was chosen, a plausible life science experiment to work with was defined, the theoretical issues involved in programming a visual monitoring system for the experiment on the multiprocessor were considered, a system for monitoring the experiment was designed, and simulations of such a system were implemented on a network of Apollo workstations.

  9. A CAMAC-VME-Macintosh data acquisition system for nuclear experiments

    NASA Astrophysics Data System (ADS)

    Anzalone, A.; Giustolisi, F.

    1989-10-01

    A multiprocessor system for data acquisition and analysis in low-energy nuclear physics has been realized. The system is built around CAMAC, the VMEbus, and the Macintosh PC. Multiprocessor software has been developed, using RTF, MACsys, and CERN cross-software. The execution of several programs that run on several VME CPUs and on an external PC is coordinated by a mailbox protocol. No operating system is used on the VME CPUs. The hardware, software, and system performance are described.

  10. Performance Evaluation of Parallel Algorithms and Architectures in Concurrent Multiprocessor Systems

    DTIC Science & Technology

    1988-09-01

    HEP and Other Parallel Processors, Report No. ANL-83-97, Argonne National Laboratory, Argonne, Ill., 1983. [19] Davidson, G. S., "A Practical Paradigm for ...," IEEE Comp. Soc., 1986. [24] Peir, Jih-kwon, and D. Gajski, "CAMP: A Programming Aide for Multiprocessors," Proc. 1986 ICPP, IEEE Comp. Soc., pp. 475-482. [25] Pfister, G. F., and V. A. Norton, "Hot Spot Contention and Combining in Multistage Interconnection Networks," IEEE Trans. Comp., C-34, Oct. ...

  11. Mechanisms and FEM Simulation of Chip Formation in Orthogonal Cutting In-Situ TiB₂/7050Al MMC.

    PubMed

    Xiong, Yifeng; Wang, Wenhu; Jiang, Ruisong; Lin, Kunyang; Shao, Mingwei

    2018-04-15

    The in-situ TiB₂/7050Al composite is a new kind of Al-based metal matrix composite (MMC) with superior properties, such as low density, improved strength, and wear resistance. To gain deeper insight into its cutting performance, this paper studies the chip formation process and presents a finite element simulation of orthogonal cutting of in-situ TiB₂/7050Al MMC. From chips, material properties, cutting forces, and tool geometry parameters, the Johnson-Cook (J-C) constitutive equation of the in-situ TiB₂/7050Al composite was established. Then, the cutting simulation model was built using the Abaqus-Explicit method, and the serrated chip, shear plane, strain rate, and temperature were analyzed. The experimental and simulation results showed that the obtained constitutive equation was highly reliable, and that saw-tooth chips occurred commonly under both low and high cutting speeds and small and large feed rates. From the result analysis, it was found that the mechanisms of chip formation included plastic deformation, adiabatic shear, shearing slip, and crack extension. In addition, it was found that the existence of small, hard particles reduced the ductility of the MMC and resulted in segmental chips.
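
    For reference, the standard Johnson-Cook flow-stress form mentioned above can be written as a small function; the coefficient values in the sketch are placeholders for illustration, not the parameters identified for the TiB₂/7050Al composite.

```python
import math

def johnson_cook_stress(strain, strain_rate, temperature,
                        A, B, n, C, m,
                        ref_strain_rate=1.0, T_room=293.0, T_melt=900.0):
    """Johnson-Cook flow stress:
       sigma = (A + B*eps^n) * (1 + C*ln(epsdot/epsdot0)) * (1 - T*^m),
       with homologous temperature T* = (T - T_room)/(T_melt - T_room)."""
    strain_hardening = A + B * strain**n
    rate_sensitivity = 1.0 + C * math.log(strain_rate / ref_strain_rate)
    T_star = (temperature - T_room) / (T_melt - T_room)
    thermal_softening = 1.0 - T_star**m
    return strain_hardening * rate_sensitivity * thermal_softening

# Placeholder coefficients (MPa) for illustration only; not the fitted values from the paper.
sigma = johnson_cook_stress(strain=0.2, strain_rate=1e4, temperature=500.0,
                            A=350.0, B=250.0, n=0.3, C=0.015, m=1.1)
print(f"flow stress ~ {sigma:.1f} MPa")
```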

  12. Scheduling Independent Partitions in Integrated Modular Avionics Systems

    PubMed Central

    Du, Chenglie; Han, Pengcheng

    2016-01-01

    Recently the integrated modular avionics (IMA) architecture has been widely adopted by the avionics industry due to its strong partition mechanism. Although the IMA architecture can achieve effective cost reduction and reliability enhancement in the development of avionics systems, it results in a complex allocation and scheduling problem. All partitions in an IMA system should be integrated together according to a proper schedule such that their deadlines will be met even under the worst case situations. In order to help provide a proper scheduling table for all partitions in IMA systems, we study the schedulability of independent partitions on a multiprocessor platform in this paper. We firstly present an exact formulation to calculate the maximum scaling factor and determine whether all partitions are schedulable on a limited number of processors. Then with a Game Theory analogy, we design an approximation algorithm to solve the scheduling problem of partitions, by allowing each partition to optimize its own schedule according to the allocations of the others. Finally, simulation experiments are conducted to show the efficiency and reliability of the approach proposed in terms of time consumption and acceptance ratio. PMID:27942013
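
    The core question, finding the largest common scaling factor under which all partitions still fit on a limited number of processors, can be sketched with a simple first-fit utilization test and bisection. This is only a rough stand-in under an assumed monotonicity of the packing heuristic; it is not the exact formulation or the game-theoretic approximation algorithm of the paper.

```python
def fits(partitions, m, alpha):
    """First-fit-decreasing utilization test: can all partitions, with budgets
    scaled by alpha, be packed onto m processors so that each processor's
    utilization stays <= 1?"""
    loads = [0.0] * m
    for period, budget in sorted(partitions, key=lambda p: p[1] / p[0], reverse=True):
        u = alpha * budget / period
        for i in range(m):
            if loads[i] + u <= 1.0 + 1e-12:
                loads[i] += u
                break
        else:
            return False
    return True

def max_scaling_factor(partitions, m, iters=50):
    """Largest alpha (found by bisection) for which the first-fit packing succeeds."""
    lo, hi = 0.0, 10.0
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        lo, hi = (mid, hi) if fits(partitions, m, mid) else (lo, mid)
    return lo

# (period, execution budget) pairs for hypothetical partitions
parts = [(25, 5), (50, 10), (100, 20), (20, 4), (40, 12)]
print(f"maximum scaling factor on 2 processors ~ {max_scaling_factor(parts, 2):.3f}")
```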

  13. Microcircuit Device Reliability Digital Detailed Data

    DTIC Science & Technology

    1976-01-01

    [Tabular detailed data digest; columns include device type, function, class, pin count, test temperature, number failed, chip/gate counts, protection, date, test environment, and test hours. Legible entries include flip-flop and gate devices in 14-pin CDIP packages tested at 150 C.]

  14. Abnormal fault-recovery characteristics of the fault-tolerant multiprocessor uncovered using a new fault-injection methodology

    NASA Technical Reports Server (NTRS)

    Padilla, Peter A.

    1991-01-01

    An investigation was made in AIRLAB of the fault handling performance of the Fault Tolerant MultiProcessor (FTMP). Fault handling errors detected during fault injection experiments were characterized. In these fault injection experiments, the FTMP disabled a working unit instead of the faulted unit once in every 500 faults, on the average. System design weaknesses allow active faults to exercise a part of the fault management software that handles Byzantine or lying faults. Byzantine faults behave such that the faulted unit points to a working unit as the source of errors. The design's problems involve: (1) the design and interface between the simplex error detection hardware and the error processing software, (2) the functional capabilities of the FTMP system bus, and (3) the communication requirements of a multiprocessor architecture. These weak areas in the FTMP's design increase the probability that, for any hardware fault, a good line replacement unit (LRU) is mistakenly disabled by the fault management software.

  15. A multiprocessor operating system simulator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnston, G.M.; Campbell, R.H.

    1988-01-01

    This paper describes a multiprocessor operating system simulator that was developed by the authors in the Fall of 1987. The simulator was built in response to the need to provide students with an environment in which to build and test operating system concepts as part of the coursework of a third-year undergraduate operating systems course. Written in C++, the simulator uses the co-routine style task package that is distributed with the AT&T C++ Translator to provide a hierarchy of classes that represents a broad range of operating system software and hardware components. The class hierarchy closely follows that of the Choices family of operating systems for loosely and tightly coupled multiprocessors. During an operating system course, these classes are refined and specialized by students in homework assignments to facilitate experimentation with different aspects of operating system design and policy decisions. The current implementation runs on the IBM RT PC under 4.3bsd UNIX.

  16. Solution of large nonlinear quasistatic structural mechanics problems on distributed-memory multiprocessor computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blanford, M.

    1997-12-31

    Most commercially-available quasistatic finite element programs assemble element stiffnesses into a global stiffness matrix, then use a direct linear equation solver to obtain nodal displacements. However, for large problems (greater than a few hundred thousand degrees of freedom), the memory size and computation time required for this approach becomes prohibitive. Moreover, direct solution does not lend itself to the parallel processing needed for today's multiprocessor systems. This talk gives an overview of the iterative solution strategy of JAS3D, the nonlinear large-deformation quasistatic finite element program. Because its architecture is derived from an explicit transient-dynamics code, it does not ever assemble a global stiffness matrix. The author describes the approach he used to implement the solver on multiprocessor computers, and shows examples of problems run on hundreds of processors and more than a million degrees of freedom. Finally, he describes some of the work he is presently doing to address the challenges of iterative convergence for ill-conditioned problems.
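
    A generic illustration of an iterative solver that never assembles a global stiffness matrix is a matrix-free conjugate-gradient loop, where the stiffness action is computed element by element on the fly. The sketch below uses a toy 1D spring chain and is not JAS3D's actual nonlinear algorithm.

```python
import numpy as np

def conjugate_gradient(apply_K, b, tol=1e-8, max_iter=1000):
    """Solve K u = b where K is only available as a matrix-vector product."""
    u = np.zeros_like(b)
    r = b - apply_K(u)
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Kp = apply_K(p)
        alpha = rs / (p @ Kp)
        u += alpha * p
        r -= alpha * Kp
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return u

n_nodes, k = 50, 1.0          # node 0 is clamped; unknowns are nodes 1..n_nodes-1

def apply_K(u):
    """Stiffness action assembled element by element; the global matrix is never formed."""
    full = np.concatenate([[0.0], u])       # prepend the fixed degree of freedom
    f = np.zeros_like(full)
    ke = k * np.array([[1.0, -1.0], [-1.0, 1.0]])
    for e in range(n_nodes - 1):            # loop over elements
        f[e:e + 2] += ke @ full[e:e + 2]
    return f[1:]                            # drop the reaction at the clamped node

b = np.zeros(n_nodes - 1); b[-1] = 1.0      # unit load at the free end
u = conjugate_gradient(apply_K, b)
print(f"tip displacement ~ {u[-1]:.3f}")    # expect (n_nodes - 1)/k = 49
```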

  17. Analysis of a Multiprocessor Guidance Computer. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Maltach, E. G.

    1969-01-01

    The design of the next generation of spaceborne digital computers is described, and a possible multiprocessor computer configuration is analyzed. For the analysis, a set of representative space computing tasks was abstracted from the Apollo Lunar Module Guidance Computer programs as executed during the lunar landing. This computer performs about 24 concurrent functions, with iteration rates from 10 times per second to once every two seconds. These jobs were tabulated in a machine-independent form, and statistics of the overall job set were obtained. Based on a comparison of simulation and Markov results, it was concluded that the Markov process analysis is accurate in predicting overall trends and in configuration comparisons, but does not provide useful detailed information in specific situations. Using both types of analysis, it was determined that the job scheduling function is critical to the efficiency of the multiprocessor. Research into automatic job scheduling is recommended.

  18. OSCAR API for Real-Time Low-Power Multicores and Its Performance on Multicores and SMP Servers

    NASA Astrophysics Data System (ADS)

    Kimura, Keiji; Mase, Masayoshi; Mikami, Hiroki; Miyamoto, Takamichi; Shirako, Jun; Kasahara, Hironori

    OSCAR (Optimally Scheduled Advanced Multiprocessor) API has been designed for real-time embedded low-power multicores to generate parallel programs for various multicores from different vendors by using the OSCAR parallelizing compiler. The OSCAR API has been developed by Waseda University in collaboration with Fujitsu Laboratory, Hitachi, NEC, Panasonic, Renesas Technology, and Toshiba in a METI/NEDO project entitled "Multicore Technology for Realtime Consumer Electronics." By using the OSCAR API as an interface between the OSCAR compiler and backend compilers, the OSCAR compiler enables hierarchical multigrain parallel processing with memory optimization under capacity restriction for cache memory, local memory, distributed shared memory, and on-chip/off-chip shared memory; data transfer using a DMA controller; and power reduction control using DVFS (Dynamic Voltage and Frequency Scaling), clock gating, and power gating for various embedded multicores. In addition, a parallelized program automatically generated by the OSCAR compiler with the OSCAR API can be compiled by ordinary OpenMP compilers, since the OSCAR API is designed as a subset of OpenMP. This paper describes the OSCAR API and its compatibility with the OSCAR compiler by showing code examples. Performance evaluations of the OSCAR compiler and the OSCAR API are carried out using an IBM Power5+ workstation, an IBM Power6 high-end SMP server, and RP2, a newly developed consumer electronics multicore chip by Renesas, Hitachi, and Waseda. The scalability evaluation shows that, on average, the OSCAR compiler with the OSCAR API achieves a 5.8-times speedup over sequential execution on the Power5+ workstation with eight cores and a 2.9-times speedup on RP2 with four cores. In addition, the OSCAR compiler can accelerate an IBM XL Fortran compiler up to 3.3 times on the Power6 SMP server. Due to low-power optimization on RP2, the OSCAR compiler with the OSCAR API achieves a maximum power reduction of 84% in the real-time execution mode.

  19. A macrochip interconnection network enabled by silicon nanophotonic devices.

    PubMed

    Zheng, Xuezhe; Cunningham, John E; Koka, Pranay; Schwetman, Herb; Lexau, Jon; Ho, Ron; Shubin, Ivan; Krishnamoorthy, Ashok V; Yao, Jin; Mekis, Attila; Pinguet, Thierry

    2010-03-01

    We present an advanced wavelength-division multiplexing point-to-point network enabled by silicon nanophotonic devices. This network offers strictly non-blocking all-to-all connectivity while maximizing bisection bandwidth, making it ideal for multi-core and multi-processor interconnections. We introduce one of the key components, the nanophotonic grating coupler, and discuss, for the first time, how this device can be useful for practical implementations of the wavelength-division multiplexing network using optical proximity communication. Finite-difference time-domain simulation of the nanophotonic grating coupler indicates that it can be made compact (20 μm x 50 μm), low loss (3.8 dB), and broadband (100 nm). These couplers require subwavelength material modulation at the nanoscale to achieve the desired functionality. We show that optical proximity communication provides unmatched optical I/O bandwidth density to electrical chips, which enables the application of the wavelength-division multiplexing point-to-point network in the macrochip with unprecedented bandwidth density. The envisioned physical implementation is discussed. The benefits of such an interconnect network include a 5-6x improvement in latency when compared to a purely electronic implementation. Performance analysis shows that the wavelength-division multiplexing point-to-point network offers better overall performance than other optical network architectures.

  20. Implementation of MPEG-2 encoder to multiprocessor system using multiple MVPs (TMS320C80)

    NASA Astrophysics Data System (ADS)

    Kim, HyungSun; Boo, Kenny; Chung, SeokWoo; Choi, Geon Y.; Lee, YongJin; Jeon, JaeHo; Park, Hyun Wook

    1997-05-01

    This paper presents the efficient algorithm mapping for real-time MPEG-2 encoding on the KAIST image computing system (KICS), which has a parallel architecture using five multimedia video processors (MVPs). The MVP is a general-purpose digital signal processor (DSP) from Texas Instruments. It combines one floating-point processor and four fixed-point DSPs on a single chip. The KICS uses the MVP as a primary processing element (PE). Two PEs form a cluster, and there are two processing clusters in the KICS. The real-time MPEG-2 encoder is implemented through spatial and functional partitioning strategies. The encoding process for a spatially partitioned half of the video input frame is assigned to one processing cluster. Two PEs perform the functionally partitioned MPEG-2 encoding tasks in pipelined operation mode. One PE of a cluster carries out the transform coding part and the other performs the predictive coding part of the MPEG-2 encoding algorithm. One MVP among the five is used for system control and interface with the host computer. This paper introduces an implementation of the MPEG-2 algorithm with a parallel processing architecture.

  1. SABRE: a bio-inspired fault-tolerant electronic architecture.

    PubMed

    Bremner, P; Liu, Y; Samie, M; Dragffy, G; Pipe, A G; Tempesti, G; Timmis, J; Tyrrell, A M

    2013-03-01

    As electronic devices become increasingly complex, ensuring their reliable, fault-free operation is becoming correspondingly more challenging. It can be observed that, in spite of their complexity, biological systems are highly reliable and fault tolerant. Hence, we are motivated to take inspiration from biological systems in the design of electronic ones. In SABRE (self-healing cellular architectures for biologically inspired highly reliable electronic systems), we have designed a bio-inspired fault-tolerant hierarchical architecture for this purpose. As in biology, the foundation for the whole system is cellular in nature, with each cell able to detect faults in its operation and trigger intra-cellular or extra-cellular repair as required. At the next level in the hierarchy, arrays of cells are configured and controlled as function units in a transport triggered architecture (TTA), which is able to perform partial-dynamic reconfiguration to rectify problems that cannot be solved at the cellular level. Each TTA is, in turn, part of a larger multi-processor system which employs coarser grain reconfiguration to tolerate faults that cause a processor to fail. In this paper, we describe the details of operation of each layer of the SABRE hierarchy, and how these layers interact to provide a high systemic level of fault tolerance.

  2. Low temperature co-fired ceramic packaging of CMOS capacitive sensor chip towards cell viability monitoring.

    PubMed

    Halonen, Niina; Kilpijärvi, Joni; Sobocinski, Maciej; Datta-Chaudhuri, Timir; Hassinen, Antti; Prakash, Someshekar B; Möller, Peter; Abshire, Pamela; Kellokumpu, Sakari; Lloyd Spetz, Anita

    2016-01-01

    Cell viability monitoring is an important part of biosafety evaluation for the detection of toxic effects on cells caused by nanomaterials, preferably by label-free, noninvasive, fast, and cost effective methods. These requirements can be met by monitoring cell viability with a capacitance-sensing integrated circuit (IC) microchip. The capacitance provides a measurement of the surface attachment of adherent cells as an indication of their health status. However, the moist, warm, and corrosive biological environment requires reliable packaging of the sensor chip. In this work, a second generation of low temperature co-fired ceramic (LTCC) technology was combined with flip-chip bonding to provide a durable package compatible with cell culture. The LTCC-packaged sensor chip was integrated with a printed circuit board, data acquisition device, and measurement-controlling software. The packaged sensor chip functioned well in the presence of cell medium and cells, with output voltages depending on the medium above the capacitors. Moreover, the manufacturing of microfluidic channels in the LTCC package was demonstrated.

  3. GridPix detectors: Production and beam test results

    NASA Astrophysics Data System (ADS)

    Koppert, W. J. C.; van Bakel, N.; Bilevych, Y.; Colas, P.; Desch, K.; Fransen, M.; van der Graaf, H.; Hartjes, F.; Hessey, N. P.; Kaminski, J.; Schmitz, J.; Schön, R.; Zappon, F.

    2013-12-01

    The innovative GridPix detector is a Time Projection Chamber (TPC) that is read out with a Timepix-1 pixel chip. By using wafer post-processing techniques, an aluminium grid is placed on top of the chip. When operated, the electric field between the grid and the chip is sufficient to create electron-induced avalanches which are detected by the pixels. The time-to-digital converter (TDC) records the drift time, enabling the reconstruction of high-precision 3D track segments. Recently, GridPixes were produced on full wafer scale to meet the demand for more reliable and cheaper devices in large quantities. In a recent beam test, the contribution of both diffusion and time walk to the spatial and angular resolutions of a GridPix detector with a 1.2 mm drift gap is studied in detail. In addition, long-term tests show that in a significant fraction of the chips the protection layer successfully quenches discharges, preventing harm to the chip.

  4. Fabrication and Qualification of Coated Chip-on-Board Technology for Miniaturized Space Systems

    NASA Technical Reports Server (NTRS)

    Maurer, R. H.; Le, B. Q.; Nhan, E.; Lew, A. L.; Darrin, M. Ann Garrison

    1997-01-01

    The results of a study carried out to manufacture and verify the quality of chip-on-board (COB) packaging technology are presented. The COB, designed for space applications, was tested under environmental stresses, temperature cycling, and temperature-humidity-bias. Both robustness in space applications and environmental protection on the ground, i.e., complete reliability without hermeticity, were sought. The epoxy-parylene combinations proved to be superior to the other materials tested.

  5. A language comparison for scientific computing on MIMD architectures

    NASA Technical Reports Server (NTRS)

    Jones, Mark T.; Patrick, Merrell L.; Voigt, Robert G.

    1989-01-01

    Choleski's method for solving banded symmetric, positive definite systems is implemented on a multiprocessor computer using three FORTRAN-based parallel programming languages: the Force, PISCES, and Concurrent FORTRAN. The capabilities of the languages for expressing parallelism and their user-friendliness are discussed, including readability of the code, debugging assistance offered, and expressiveness of the languages. The performance of the different implementations is compared. It is argued that PISCES, using the Force for medium-grained parallelism, is the appropriate choice for programming Choleski's method on the multiprocessor computer, Flex/32.
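
    For context, a plain serial sketch of the banded Cholesky (Choleski) factorization being parallelized is shown below; the Force, PISCES, and Concurrent FORTRAN formulations themselves are not reproduced.

```python
import numpy as np

def banded_cholesky(A, bandwidth):
    """Lower-triangular Cholesky factor of a symmetric positive definite
    banded matrix; fill stays inside the band, so inner loops are band-limited."""
    n = A.shape[0]
    L = np.zeros_like(A, dtype=float)
    for j in range(n):
        lo = max(0, j - bandwidth)
        L[j, j] = np.sqrt(A[j, j] - L[j, lo:j] @ L[j, lo:j])
        # Only rows within the band below j receive nonzero entries.
        for i in range(j + 1, min(n, j + bandwidth + 1)):
            k0 = max(lo, i - bandwidth)
            L[i, j] = (A[i, j] - L[i, k0:j] @ L[j, k0:j]) / L[j, j]
    return L

# Symmetric positive definite tridiagonal test matrix (bandwidth 1).
n = 6
A = 4.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
L = banded_cholesky(A, bandwidth=1)
assert np.allclose(L @ L.T, A)
```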

  6. The MSFC UNIVAC 1108 EXEC 8 simulation model

    NASA Technical Reports Server (NTRS)

    Williams, T. G.; Richards, F. M.; Weatherbee, J. E.; Paul, L. K.

    1972-01-01

    A model is presented which simulates the MSFC Univac 1108 multiprocessor system. The hardware/operating system is described to enable a good statistical measurement of the system behavior. The performance of the 1108 is evaluated by performing twenty-four different experiments designed to locate system bottlenecks and also to test the sensitivity of system throughput with respect to perturbation of the various Exec 8 scheduling algorithms. The model is implemented in the general purpose system simulation language and the techniques described can be used to assist in the design, development, and evaluation of multiprocessor systems.

  7. Generation-based memory synchronization in a multiprocessor system with weakly consistent memory accesses

    DOEpatents

    Ohmacht, Martin

    2017-08-15

    In a multiprocessor system, a central memory synchronization module coordinates memory synchronization requests responsive to memory access requests in flight, a generation counter, and a reclaim pointer. The central module communicates via point-to-point communication. The module includes a global OR reduce tree for each memory access requesting device, for detecting memory access requests in flight. An interface unit is implemented associated with each processor requesting synchronization. The interface unit includes multiple generation completion detectors. The generation count and reclaim pointer do not pass one another.
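
    As a software analogy of the mechanism described (requests tagged with a generation, a reclaim pointer that trails the generation counter, and synchronization completing once all older generations have drained), the following toy sketch is offered; it is an interpretation for illustration, not the patented hardware design.

```python
from collections import defaultdict

class GenerationTracker:
    """Toy analogy of generation-based memory synchronization: memory requests
    are tagged with the current generation, and a sync issued at generation g
    completes only when every request tagged <= g has drained."""
    def __init__(self):
        self.generation = 0                  # current generation counter
        self.in_flight = defaultdict(int)    # generation -> outstanding requests
        self.reclaim = 0                     # oldest generation not yet drained

    def issue_request(self):
        self.in_flight[self.generation] += 1
        return self.generation               # tag carried by the request

    def complete_request(self, tag):
        self.in_flight[tag] -= 1
        # Advance the reclaim pointer past fully drained generations,
        # never passing the generation counter.
        while self.reclaim < self.generation and self.in_flight[self.reclaim] == 0:
            self.reclaim += 1

    def begin_sync(self):
        g = self.generation
        self.generation += 1                 # later requests get a newer tag
        return g

    def sync_done(self, g):
        return self.reclaim > g              # all generations <= g have drained

t = GenerationTracker()
a, b = t.issue_request(), t.issue_request()
g = t.begin_sync()
t.complete_request(a)
assert not t.sync_done(g)
t.complete_request(b)
assert t.sync_done(g)
```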

  8. Generation-based memory synchronization in a multiprocessor system with weakly consistent memory accesses

    DOEpatents

    Ohmacht, Martin

    2014-09-09

    In a multiprocessor system, a central memory synchronization module coordinates memory synchronization requests responsive to memory access requests in flight, a generation counter, and a reclaim pointer. The central module communicates via point-to-point communication. The module includes a global OR reduce tree for each memory access requesting device, for detecting memory access requests in flight. An interface unit is implemented associated with each processor requesting synchronization. The interface unit includes multiple generation completion detectors. The generation count and reclaim pointer do not pass one another.

  9. Multiprocessor system with multiple concurrent modes of execution

    DOEpatents

    Ahn, Daniel; Ceze, Luis H; Chen, Dong; Gara, Alan; Heidelberger, Philip; Ohmacht, Martin

    2013-12-31

    A multiprocessor system supports multiple concurrent modes of speculative execution. Speculation identification numbers (IDs) are allocated to speculative threads from a pool of available numbers. The pool is divided into domains, with each domain being assigned to a mode of speculation. Modes of speculation include TM, TLS, and rollback. Allocation of the IDs is carried out with respect to a central state table and using hardware pointers. The IDs are used for writing different versions of speculative results in different ways of a set in a cache memory.
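
    A toy sketch of the ID-management idea, a pool of speculation IDs partitioned into per-mode domains, is given below; it illustrates the allocation scheme only and is not the patented hardware implementation with its central state table and pointers.

```python
class SpeculationIdAllocator:
    """Toy model of speculation-ID management: the ID space is split into
    domains, each domain dedicated to one mode of speculation."""
    def __init__(self, total_ids=128, modes=("TM", "TLS", "rollback")):
        per_domain = total_ids // len(modes)
        self.free = {mode: list(range(i * per_domain, (i + 1) * per_domain))
                     for i, mode in enumerate(modes)}

    def allocate(self, mode):
        """Give a speculative thread an ID from its mode's domain (None if exhausted)."""
        pool = self.free[mode]
        return pool.pop() if pool else None

    def release(self, mode, spec_id):
        """Return an ID to its domain once the speculation commits or is squashed."""
        self.free[mode].append(spec_id)

alloc = SpeculationIdAllocator()
tm_id = alloc.allocate("TM")
tls_id = alloc.allocate("TLS")
assert tm_id is not None and tls_id is not None and tm_id != tls_id
alloc.release("TM", tm_id)
```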

  10. Multiprocessor system with multiple concurrent modes of execution

    DOEpatents

    Ahn, Daniel; Ceze, Luis H.; Chen, Dong Chen; Gara, Alan; Heidelberger, Philip; Ohmacht, Martin

    2016-11-22

    A multiprocessor system supports multiple concurrent modes of speculative execution. Speculation identification numbers (IDs) are allocated to speculative threads from a pool of available numbers. The pool is divided into domains, with each domain being assigned to a mode of speculation. Modes of speculation include TM, TLS, and rollback. Allocation of the IDs is carried out with respect to a central state table and using hardware pointers. The IDs are used for writing different versions of speculative results in different ways of a set in a cache memory.

  11. Hyperswitch communication network

    NASA Technical Reports Server (NTRS)

    Peterson, J.; Pniel, M.; Upchurch, E.

    1991-01-01

    The Hyperswitch Communication Network (HCN) is a large scale parallel computer prototype being developed at JPL. Commercial versions of the HCN computer are planned. The HCN computer being designed is a message passing multiple instruction multiple data (MIMD) computer, and offers many advantages in price-performance ratio, reliability and availability, and manufacturing over traditional uniprocessors and bus based multiprocessors. The design of the HCN operating system is a uniquely flexible environment that combines both parallel processing and distributed processing. This programming paradigm can achieve a balance among the following competing factors: performance in processing and communications, user friendliness, and fault tolerance. The prototype is being designed to accommodate a maximum of 64 state of the art microprocessors. The HCN is classified as a distributed supercomputer. The HCN system is described, and the performance/cost analysis and other competing factors within the system design are reviewed.

  12. Generalized hypercube structures and hyperswitch communication network

    NASA Technical Reports Server (NTRS)

    Young, Steven D.

    1992-01-01

    This paper discusses an ongoing study that uses a recent development in communication control technology to implement hybrid hypercube structures. These architectures are similar to binary hypercubes, but they also provide added connectivity between the processors. This added connectivity increases communication reliability while decreasing the latency of interprocessor message passing. Because these factors directly determine the speed that can be obtained by multiprocessor systems, these architectures are attractive for applications such as remote exploration and experimentation, where high performance and ultrareliability are required. This paper describes and enumerates these architectures and discusses how they can be implemented with a modified version of the hyperswitch communication network (HCN). The HCN is analyzed because it has three attractive features that enable these architectures to be effective: speed, fault tolerance, and the ability to pass multiple messages simultaneously through the same hyperswitch controller.
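
    The baseline connectivity these hybrid structures extend is the binary hypercube, in which neighbors differ in exactly one address bit. The helper below generates those neighbors, plus one assumed example of added connectivity (linking nodes that differ in two bits); the paper's actual link patterns may differ.

```python
from itertools import combinations

def hypercube_neighbors(node, dim):
    """Nodes adjacent to `node` in a binary d-cube: flip one address bit."""
    return [node ^ (1 << k) for k in range(dim)]

def added_links(node, dim):
    """One possible 'added connectivity' pattern (illustrative assumption):
    also connect nodes whose addresses differ in exactly two bits."""
    return [node ^ (1 << a) ^ (1 << b) for a, b in combinations(range(dim), 2)]

dim = 4  # 16-node cube
print(hypercube_neighbors(0b0000, dim))  # [1, 2, 4, 8]
print(added_links(0b0000, dim))          # six extra two-bit neighbors
```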

  13. Mechanisms and FEM Simulation of Chip Formation in Orthogonal Cutting In-Situ TiB2/7050Al MMC

    PubMed Central

    Wang, Wenhu; Jiang, Ruisong; Lin, Kunyang; Shao, Mingwei

    2018-01-01

    The in-situ TiB2/7050Al composite is a new kind of Al-based metal matrix composite (MMC) with superior properties, such as low density, improved strength, and wear resistance. To gain deeper insight into its cutting performance, this paper studies the chip formation process and presents a finite element simulation of orthogonal cutting of in-situ TiB2/7050Al MMC. From chips, material properties, cutting forces, and tool geometry parameters, the Johnson–Cook (J–C) constitutive equation of the in-situ TiB2/7050Al composite was established. Then, the cutting simulation model was built using the Abaqus–Explicit method, and the serrated chip, shear plane, strain rate, and temperature were analyzed. The experimental and simulation results showed that the obtained constitutive equation was highly reliable, and that saw-tooth chips occurred commonly under both low and high cutting speeds and small and large feed rates. From the result analysis, it was found that the mechanisms of chip formation included plastic deformation, adiabatic shear, shearing slip, and crack extension. In addition, it was found that the existence of small, hard particles reduced the ductility of the MMC and resulted in segmental chips. PMID:29662047

  14. Thin-film chip-to-substrate interconnect and methods for making same

    DOEpatents

    Tuckerman, D.B.

    1988-06-06

    Integrated circuit chips are electrically connected to a silicon wafer interconnection substrate. Thin film wiring is fabricated down bevelled edges of the chips. A subtractive wire fabrication method uses a series of masks and etching steps to form wires in a metal layer. An additive method direct laser writes or deposits very thin lines which can then be plated up to form wires. A quasi-additive or subtractive/additive method forms a pattern of trenches to expose a metal surface which can nucleate subsequent electrolytic deposition of wires. Low inductance interconnections on a 25 micron pitch (1600 wires on a 1 cm square chip) can be produced. The thin film hybrid interconnect eliminates solder joints or welds, and minimizes the levels of metallization. Advantages include good electrical properties, very high wiring density, excellent backside contact, compactness, and high thermal and mechanical reliability. 6 figs.

  15. Thin-film chip-to-substrate interconnect and methods for making same

    DOEpatents

    Tuckerman, David B.

    1991-01-01

    Integrated circuit chips are electrically connected to a silica wafer interconnection substrate. Thin film wiring is fabricated down bevelled edges of the chips. A subtractive wire fabrication method uses a series of masks and etching steps to form wires in a metal layer. An additive method direct laser writes or deposits very thin metal lines which can then be plated up to form wires. A quasi-additive or subtractive/additive method forms a pattern of trenches to expose a metal surface which can nucleate subsequent electrolytic deposition of wires. Low inductance interconnections on a 25 micron pitch (1600 wires on a 1 cm square chip) can be produced. The thin film hybrid interconnect eliminates solder joints or welds, and minimizes the levels of metallization. Advantages include good electrical properties, very high wiring density, excellent backside contact, compactness, and high thermal and mechanical reliability.

  16. COTS Ceramic Chip Capacitors: An Evaluation of the Parts and Assurance Methodologies

    NASA Technical Reports Server (NTRS)

    Brusse, Jay A.; Sampson, Michael J.

    2004-01-01

    Commercial-Off-The-Shelf (COTS) multilayer ceramic chip capacitors (MLCCs) are continually evolving to reduce physical size and increase volumetric efficiency. Designers of high reliability aerospace and military systems are attracted to these attributes of COTS MLCCs and would like to take advantage of them while maintaining the high standards for long-term reliable operation they are accustomed to when selecting military qualified established reliability (MIL-ER) MLCCs. However, MIL-ER MLCCs are not available in the full range of small chip sizes with high capacitance as found in today's COTS MLCCs. The objectives for this evaluation were to assess the long-term performance of small case size COTS MLCCs and to identify effective, lower-cost product assurance methodologies. Fifteen (15) lots of COTS X7R dielectric MLCCs from four (4) different manufacturers and two (2) lots of MIL-ER BX dielectric MLCCs from two (2) of the same manufacturers were evaluated. Both 0805 and 0402 chip sizes were included. Several voltage ratings were tested, ranging from a high of 50 volts to a low of 6.3 volts. The evaluation consisted of a comprehensive screening and qualification test program based upon MIL-PRF-55681 (i.e., voltage conditioning, thermal shock, moisture resistance, 2000-hour life test, etc.). In addition, several lot characterization tests were performed, including Destructive Physical Analysis (DPA), Highly Accelerated Life Test (HALT), and Dielectric Voltage Breakdown Strength. The data analysis included a comparison of the 2000-hour life test results (used as a metric for long-term performance) relative to the screening and characterization test results. Results of this analysis indicate that the long-term life performance of COTS MLCCs is variable -- some lots perform well, some lots perform poorly. DPA and HALT were found to be promising lot characterization tests to identify substandard COTS MLCC lots prior to conducting more expensive screening and qualification tests. The results indicate that lot-specific screening and qualification are still recommended for high reliability applications. One significant and concerning observation is that MIL-type voltage conditioning (100 hours at twice rated voltage, 125 C) was not an effective screen in removing infant mortality parts for the particular lots of COTS MLCCs evaluated.

  17. Architecture for VLSI design of Reed-Solomon encoders

    NASA Technical Reports Server (NTRS)

    Liu, K. Y.

    1982-01-01

    A description is given of the logic structure of the universal VLSI symbol-slice Reed-Solomon (RS) encoder chip, from a group of which an RS encoder may be constructed through cascading and proper interconnection. As a design example, it is shown that an RS encoder presently requiring approximately 40 discrete CMOS ICs may be replaced by an RS encoder consisting of four identical, interconnected VLSI RS encoder chips, offering in addition to greater compactness both a lower power requirement and greater reliability.

  18. Architecture for VLSI design of Reed-Solomon encoders

    NASA Astrophysics Data System (ADS)

    Liu, K. Y.

    1982-02-01

    A description is given of the logic structure of the universal VLSI symbol-slice Reed-Solomon (RS) encoder chip, from a group of which an RS encoder may be constructed through cascading and proper interconnection. As a design example, it is shown that an RS encoder presently requiring approximately 40 discrete CMOS ICs may be replaced by an RS encoder consisting of four identical, interconnected VLSI RS encoder chips, offering in addition to greater compactness both a lower power requirement and greater reliability.

  19. 3D Printing of Organs-On-Chips

    PubMed Central

    Yi, Hee-Gyeong; Lee, Hyungseok; Cho, Dong-Woo

    2017-01-01

    Organ-on-a-chip engineering aims to create artificial living organs that mimic the complex and physiological responses of real organs, in order to test drugs by precisely manipulating the cells and their microenvironments. To achieve this, the artificial organs should be microfabricated with an extracellular matrix (ECM) and various types of cells, and should recapitulate morphogenesis, cell differentiation, and functions according to the native organ. A promising strategy is 3D printing, which precisely controls the spatial distribution and layer-by-layer assembly of cells, ECMs, and other biomaterials. Owing to this unique advantage, integration of 3D printing into organ-on-a-chip engineering can facilitate the creation of micro-organs with heterogeneity, a desired 3D cellular arrangement, tissue-specific functions, or even cyclic movement within a microfluidic device. Moreover, fully 3D-printed organs-on-chips can more easily incorporate other mechanical and electrical components with the chips, and can be commercialized via automated mass production. Herein, we discuss the recent advances and the potential of 3D cell-printing technology in engineering organs-on-chips, and provide future perspectives on this technology for establishing highly reliable and useful drug-screening platforms. PMID:28952489

  20. 3D Printing of Organs-On-Chips.

    PubMed

    Yi, Hee-Gyeong; Lee, Hyungseok; Cho, Dong-Woo

    2017-01-25

    Organ-on-a-chip engineering aims to create artificial living organs that mimic the complex and physiological responses of real organs, in order to test drugs by precisely manipulating the cells and their microenvironments. To achieve this, the artificial organs should be microfabricated with an extracellular matrix (ECM) and various types of cells, and should recapitulate morphogenesis, cell differentiation, and functions according to the native organ. A promising strategy is 3D printing, which precisely controls the spatial distribution and layer-by-layer assembly of cells, ECMs, and other biomaterials. Owing to this unique advantage, integration of 3D printing into organ-on-a-chip engineering can facilitate the creation of micro-organs with heterogeneity, a desired 3D cellular arrangement, tissue-specific functions, or even cyclic movement within a microfluidic device. Moreover, fully 3D-printed organs-on-chips can more easily incorporate other mechanical and electrical components with the chips, and can be commercialized via automated mass production. Herein, we discuss the recent advances and the potential of 3D cell-printing technology in engineering organs-on-chips, and provide future perspectives on this technology for establishing highly reliable and useful drug-screening platforms.

  1. Development and evaluation of a Fault-Tolerant Multiprocessor (FTMP) computer. Volume 3: FTMP test and evaluation

    NASA Technical Reports Server (NTRS)

    Lala, J. H.; Smith, T. B., III

    1983-01-01

    The experimental test and evaluation of the Fault-Tolerant Multiprocessor (FTMP) is described. Major objectives of this exercise include expanding the validation envelope, building confidence in the system, revealing any weaknesses in the architectural concepts and in their execution in hardware and software, and, in general, stressing the hardware and software. To this end, pin-level faults were injected into one LRU of the FTMP and the FTMP response was measured in terms of fault detection, isolation, and recovery times. A total of 21,055 stuck-at-0, stuck-at-1, and invert-signal faults were injected in the CPU, memory, bus interface circuits, Bus Guardian Units, and voters and error latches. Of these, 17,418 were detected. At least 80 percent of the undetected faults are estimated to be on unused pins. The multiprocessor identified all detected faults correctly and recovered successfully in each case. Total recovery time for all faults averaged a little over one second. This can be reduced to half a second by including appropriate self-tests.
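
    From the reported counts, the raw detection coverage and a rough figure excluding unused-pin faults can be computed directly (the 80 percent unused-pin estimate is taken from the abstract; the rest is simple arithmetic):

```python
injected, detected = 21_055, 17_418
undetected = injected - detected
print(f"raw detection coverage: {detected / injected:.1%}")   # ~82.7%

# If at least 80% of the undetected faults were on unused pins (and hence could
# not manifest), the coverage of faults on used pins would be at least:
effective = detected / (injected - 0.8 * undetected)
print(f"coverage excluding unused-pin faults (lower bound): {effective:.1%}")  # ~96%
```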

  2. Conference on Real-Time Computer Applications in Nuclear, Particle and Plasma Physics, 6th, Williamsburg, VA, May 15-19, 1989, Proceedings

    NASA Technical Reports Server (NTRS)

    Pordes, Ruth (Editor)

    1989-01-01

    Papers on real-time computer applications in nuclear, particle, and plasma physics are presented, covering topics such as expert systems tactics in testing FASTBUS segment interconnect modules, trigger control in a high energy physics experiment, the FASTBUS read-out system for the Aleph time projection chamber, multiprocessor data acquisition systems, DAQ software architecture for Aleph, a VME multiprocessor system for plasma control at the JT-60 upgrade, and a multitasking, multisinked, multiprocessor data acquisition front end. Other topics include real-time data reduction using a microVAX processor, a transputer-based coprocessor for VEDAS, simulation of a macropipelined multi-CPU event processor for use in FASTBUS, a distributed VME control system for the LISA superconducting Linac, and a distributed system for laboratory process automation. Additional topics include a structure macro assembler for the event handler, a data acquisition and control system for Thomson scattering on ATF, remote procedure execution software for distributed systems, and a PC-based graphic display of real-time particle beam uniformity.

  3. Performances of multiprocessor multidisk architectures for continuous media storage

    NASA Astrophysics Data System (ADS)

    Gennart, Benoit A.; Messerli, Vincent; Hersch, Roger D.

    1996-03-01

    Multimedia interfaces increase the need for large image databases capable of storing and reading streams of data with strict synchronicity and isochronicity requirements. In order to fulfill these requirements, we consider a parallel image server architecture which relies on arrays of intelligent disk nodes, each disk node being composed of one processor and one or more disks. This contribution analyzes, through bottleneck performance evaluation and simulation, the behavior of two multi-processor multi-disk architectures: a point-to-point architecture and a shared-bus architecture similar to current multiprocessor workstation architectures. We compare the two architectures on the basis of two multimedia algorithms: the compute-bound frame resizing by resampling and the data-bound disk-to-client stream transfer. The results suggest that the shared bus is a potential bottleneck despite its very high hardware throughput (400 Mbytes/s) and that an architecture with addressable local memories located close to their respective processors could partially remove this bottleneck. The point-to-point architecture is scalable and able to sustain high throughputs for simultaneous compute-bound and data-bound operations.

  4. Communication Studies of DMP and SMP Machines

    NASA Technical Reports Server (NTRS)

    Sohn, Andrew; Biswas, Rupak; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    Understanding the interplay between machines and problems is key to obtaining high performance on parallel machines. This paper investigates the interplay between programming paradigms and the communication capabilities of parallel machines. In particular, we explicate the communication capabilities of the IBM SP-2 distributed-memory multiprocessor and the SGI PowerCHALLENGEarray symmetric multiprocessor. Two benchmark problems, bitonic sorting and the Fast Fourier Transform, are selected for experiments. Communication-efficient algorithms are developed to exploit the overlapping capabilities of the machines. Programs are written in the Message-Passing Interface for portability, and identical codes are used for both machines. Various data sizes and message sizes are used to test the machines' communication capabilities. Experimental results indicate that the communication performance of the multiprocessors is consistent with the size of messages. The SP-2 is sensitive to message size but yields much higher communication overlap because of its communication co-processor. The PowerCHALLENGEarray is not highly sensitive to message size and yields low communication overlap. Bitonic sorting yields lower performance than FFT due to a smaller computation-to-communication ratio.
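
    For reference, a serial version of the bitonic sorting benchmark is sketched below; in the parallel MPI implementation described above, the compare-exchange steps become message exchanges between processors. This is a generic reference implementation, not the paper's communication-efficient decomposition.

```python
def bitonic_sort(a, ascending=True):
    """Recursive bitonic sort (input length must be a power of two)."""
    if len(a) <= 1:
        return list(a)
    half = len(a) // 2
    first = bitonic_sort(a[:half], True)      # ascending half
    second = bitonic_sort(a[half:], False)    # descending half -> bitonic sequence
    return _bitonic_merge(first + second, ascending)

def _bitonic_merge(a, ascending):
    """Compare-exchange network that sorts a bitonic sequence."""
    if len(a) == 1:
        return a
    half = len(a) // 2
    for i in range(half):
        if (a[i] > a[i + half]) == ascending:
            a[i], a[i + half] = a[i + half], a[i]
    return _bitonic_merge(a[:half], ascending) + _bitonic_merge(a[half:], ascending)

data = [7, 3, 15, 1, 9, 12, 0, 5]
assert bitonic_sort(data) == sorted(data)
```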

  5. A multiprocessor airborne lidar data system

    NASA Technical Reports Server (NTRS)

    Wright, C. W.; Bailey, S. A.; Heath, G. E.; Piazza, C. R.

    1988-01-01

    A new multiprocessor data acquisition system was developed for the existing Airborne Oceanographic Lidar (AOL). This implementation simultaneously utilizes five single-board 68010 microcomputers, the UNIX System V operating system, and the real-time executive VRTX. The original data acquisition system was implemented on a Hewlett Packard HP 21-MX 16-bit minicomputer using a multi-tasking real-time operating system and a mixture of assembly and FORTRAN languages. The present collection of data sources produces data at widely varied rates and requires varied amounts of burdensome real-time processing and formatting. It was decided to replace the aging HP 21-MX minicomputer with a multiprocessor system. A new and flexible recording format was devised and implemented to accommodate the constantly changing sensor configuration. A central feature of this data system is the minimization of non-remote-sensing bus traffic. Therefore, it is highly desirable that each micro be capable of functioning as much as possible on-card or via private peripherals. The bus is used primarily for the transfer of remote sensing data to or from the buffer queue.

  6. Experience with a Genetic Algorithm Implemented on a Multiprocessor Computer

    NASA Technical Reports Server (NTRS)

    Plassman, Gerald E.; Sobieszczanski-Sobieski, Jaroslaw

    2000-01-01

    Numerical experiments were conducted to find out the extent to which a Genetic Algorithm (GA) may benefit from a multiprocessor implementation, considering, on one hand, that analyses of individual designs in a population are independent of each other so that they may be executed concurrently on separate processors, and, on the other hand, that there are some operations in a GA that cannot be so distributed. The algorithm experimented with was based on a Gaussian distribution rather than bit exchange in the GA reproductive mechanism, and the test case was a hub frame structure of up to 1080 design variables. The experimentation, engaging up to 128 processors, confirmed expectations of radical elapsed-time reductions compared to a conventional single-processor implementation. It also demonstrated that the time spent in the non-distributable parts of the algorithm and the attendant cross-processor communication may have a very detrimental effect on the efficient utilization of the multiprocessor machine and on the number of processors that can be used effectively in a concurrent manner. Three techniques were devised and tested to mitigate that effect, resulting in efficiency increasing to exceed 99 percent.
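
    The concurrency structure described, independent design evaluations distributed across processors with a serial Gaussian-based reproduction step, can be sketched as follows; the structural hub-frame objective is replaced by a toy function, so this is an illustration of the pattern rather than the study's actual code.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def fitness(x):
    """Toy stand-in for the structural analysis of one design (sphere function)."""
    return float(np.sum(np.asarray(x) ** 2))

def gaussian_ga(n_vars=20, pop_size=64, generations=50, sigma=0.1, workers=4, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-5.0, 5.0, size=(pop_size, n_vars))
    with ProcessPoolExecutor(max_workers=workers) as pool:
        for _ in range(generations):
            # Distributable part: independent evaluations of each design.
            scores = np.array(list(pool.map(fitness, pop.tolist())))
            # Non-distributable part: selection and Gaussian reproduction.
            order = np.argsort(scores)
            parents = pop[order[: pop_size // 2]]
            children = parents + rng.normal(0.0, sigma, size=parents.shape)
            pop = np.vstack([parents, children])
        scores = np.array(list(pool.map(fitness, pop.tolist())))
    best = pop[int(np.argmin(scores))]
    return best, float(scores.min())

if __name__ == "__main__":  # guard required for process pools on spawn-based platforms
    best, score = gaussian_ga()
    print(f"best objective after 50 generations: {score:.4f}")
```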

  7. Discrete component bonding and thick film materials study. [of capacitor chips bonded with solders and conductive epoxies

    NASA Technical Reports Server (NTRS)

    Kinser, D. L.

    1976-01-01

    The bonding reliability of discrete capacitor chips bonded with solders and conductive epoxies was examined along with the thick film resistor materials consisting of iron oxide phosphate and vanadium oxide phosphates. It was concluded from the bonding reliability studies that none of the wide range of types of solders examined is capable of resisting failure during thermal cycling while the conductive epoxy gives substantially lower failure rates. The thick film resistor studies proved the feasibility of iron oxide phosphate resistor systems although some environmental sensitivity problems remain. One of these resistor compositions has inadvertently proven to be a candidate for thermistor applications because of the excellent control achieved upon the temperature coefficient of resistance. One new and potentially damaging phenomenon observed was the degradation of thick film conductors during the course of thermal cycling.

  8. Using DNA chips for identification of tephritid pest species.

    PubMed

    Chen, Yen-Hou; Liu, Lu-Yan; Tsai, Wei-Huang; Haymer, David S; Lu, Kuang-Hui

    2014-08-01

    The ability correctly to identify species in a rapid and reliable manner is critical in many situations. For insects in particular, the primary tools for such identification rely on adult-stage morphological characters. For a number of reasons, however, there is a clear need for alternatives. This paper reports on the development of a new method employing DNA biochip technology for the identification of pest species within the family Tephritidae. The DNA biochip developed and tested here quickly and efficiently identifies and discriminates between several tephritid species, except for some that are members of a complex of closely related taxa and that may in fact not represent distinct biological species. The use of these chips offers a number of potential advantages over current methods. Results can be obtained in less than 5 h using material from any stage of the life cycle and with greater sensitivity than other methods currently available. This technology provides a novel tool for the rapid and reliable identification of several major pest species that may be intercepted in imported fruits or other commodities. The existing chips can also easily be expanded to incorporate additional markers and species as needed. © 2013 Society of Chemical Industry.

  9. Multitasking kernel for the C and Fortran programming languages

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brooks, E.D. III

    1984-09-01

    A multitasking kernel for the C and Fortran programming languages which runs on the Unix operating system is presented. The kernel provides a multitasking environment which serves two purposes. The first is to provide an efficient portable environment for the coding, debugging and execution of production multiprocessor programs. The second is to provide a means of evaluating the performance of a multitasking program on model multiprocessors. The performance evaluation features require no changes in the source code of the application and are implemented as a set of compile and run time options in the kernel.

  10. Distributed parallel messaging for multiprocessor systems

    DOEpatents

    Chen, Dong; Heidelberger, Philip; Salapura, Valentina; Senger, Robert M; Steinmacher-Burrow, Burhard; Sugawara, Yutaka

    2013-06-04

    A method and apparatus for distributed parallel messaging in a parallel computing system. The apparatus includes, at each node of a multiprocessor network, multiple injection messaging engine units and reception messaging engine units, each implementing a DMA engine and each supporting both multiple packet injection into and multiple reception from a network, in parallel. The reception side of the messaging unit (MU) includes a switch interface enabling writing of data of a packet received from the network to the memory system. The transmission side of the messaging unit includes a switch interface for reading from the memory system when injecting packets into the network.

  11. Programming parallel architectures: The BLAZE family of languages

    NASA Technical Reports Server (NTRS)

    Mehrotra, Piyush

    1988-01-01

    Programming multiprocessor architectures is a critical research issue. An overview is given of the various approaches to programming these architectures that are currently being explored. It is argued that two of these approaches, interactive programming environments and functional parallel languages, are particularly attractive since they remove much of the burden of exploiting parallel architectures from the user. Also described is recent work by the author in the design of parallel languages. Research on languages for both shared and nonshared memory multiprocessors is described, as well as the relations of this work to other current language research projects.

  12. Automatic Data Partitioning on Distributed Memory Multiprocessors

    DTIC Science & Technology

    1990-10-01

    Gupta, Manish, and Prithviraj Banerjee; Coordinated Science Laboratory, College of ... November 14, 1990. The techniques developed on the partitioning of arrays can as well be applied to other programming languages, such as C. The rest of this paper is organized as follows ...

  13. Gold patterned biochips for on-chip immuno-MALDI-TOF MS: SPR imaging coupled multi-protein MS analysis.

    PubMed

    Kim, Young Eun; Yi, So Yeon; Lee, Chang-Soo; Jung, Yongwon; Chung, Bong Hyun

    2012-01-21

    Matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF-MS) analysis of immuno-captured target protein efficiently complements conventional immunoassays by offering rich molecular information such as protein isoforms or modifications. Direct immobilization of antibodies on a MALDI solid support enables both target enrichment and MS analysis on the same plate, allowing simplified and potentially multiplexed protein MS analysis. Reliable on-chip immuno-MALDI-TOF MS for multiple biomarkers requires successful adaptation of antibody array biochips, which also must accommodate consistent reaction conditions on antibody arrays during immuno-capture and MS analysis. Here we developed a facile fabrication process of versatile antibody array biochips for reliable on-chip MALDI-TOF-MS analysis of multiple immuno-captured proteins. Hydrophilic gold arrays surrounded by super-hydrophobic surfaces were formed on a gold-patterned biochip via spontaneous chemical or protein layer deposition. From antibody immobilization to MALDI matrix treatment, this hydrophilic/hydrophobic pattern allowed highly consistent surface reactions on each gold spot. Various antibodies were immobilized on these gold spots both by covalent coupling and by protein G binding. Four different protein markers were successfully analyzed on the present immuno-MALDI biochip from complex protein mixtures including serum samples. Tryptic digests of captured PSA protein were also effectively detected by on-chip MALDI-TOF-MS. Moreover, the present MALDI biochip can be directly applied to the SPR imaging system, by which antibody and subsequent antigen immobilization were successfully monitored.

  14. High reliability level on single-mode 980nm-1060 nm diode lasers for telecommunication and industrial applications

    NASA Astrophysics Data System (ADS)

    Van de Casteele, J.; Bettiati, M.; Laruelle, F.; Cargemel, V.; Pagnod-Rossiaux, P.; Garabedian, P.; Raymond, L.; Laffitte, D.; Fromy, S.; Chambonnet, D.; Hirtz, J. P.

    2008-02-01

    We demonstrate a very high reliability level for 980-1060 nm high-power single-mode lasers through multi-cell tests. First, we show how our chip design and technology enable high reliability levels. Then, for the reliability demonstration, we aged 758 devices for 9500 hours across 6 cells at high current (0.8 A-1.2 A) and high submount temperature (65°C-105°C). Sudden catastrophic failure is the main degradation mechanism observed. A statistical failure rate model gives an Arrhenius thermal activation energy of 0.51 eV and a power-law forward-current acceleration factor of 5.9. For high-power submarine applications (360 mW pump module output optical power), this model exhibits a failure rate as low as 9 FIT at 13°C, while ultra-high-power terrestrial modules (600 mW) lie below 220 FIT at 25°C. Wear-out phenomena are observed only at very high current levels, without any reliability impact below 1.1 A. For the 1060 nm chip, step-stress tests were performed and a set of devices was aged for more than 2000 hours under different stress conditions. First results are in accordance with the 980 nm product, with an estimated MTTF of more than 100 khours. These reliability and performance features of 980-1060 nm laser diodes will make high-power single-mode emitters the best choice for a number of telecommunication and industrial applications in the next few years.

  15. Characterizing Rat PNS Electrophysiological Response to Electrical Stimulation Using in vitro Chip-Based Human Investigational Platform (iCHIP)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khani, Joshua; Prescod, Lindsay; Enright, Heather

    Ex vivo systems and organ-on-a-chip technology offer an unprecedented approach to modeling the inner workings of the human body. The ultimate goal of LLNL's in vitro Chip-based Human Investigational Platform (iCHIP) is to integrate multiple organ tissue cultures using microfluidic channels, multi-electrode arrays (MEA), and other biosensors in order to effectively simulate and study the responses and interactions of the major organs to chemical and physical stimulation. In this study, we focused on the peripheral nervous system (PNS) component of the iCHIP system. Specifically, we sought to expound on prior research investigating the electrophysiological response of rat dorsal root ganglion cells (rDRGs) to chemical exposures, such as capsaicin. Our aim was to establish a protocol for electrical stimulation using the iCHIP device that would reliably elicit a characteristic response in rDRGs. By varying the parameters for both the stimulation properties (amplitude, phase width, phase shape, and stimulation/return configuration) and the culture conditions (day in vitro and neural cell types), we were able to make several key observations and uncover a potential convention with a minimal number of devices tested. Future work will seek to establish a standard protocol for human DRGs in the iCHIP, which will afford a portable, rapid method for determining the effects of toxins and novel therapeutics on the PNS.

  16. CE chips fabricated by injection molding and polyethylene/thermoplastic elastomer film packaging methods.

    PubMed

    Huang, Fu-Chun; Chen, Yih-Far; Lee, Gwo-Bin

    2007-04-01

    This study presents a new packaging method using a polyethylene/thermoplastic elastomer (PE/TPE) film to seal an injection-molded CE chip made of either poly(methyl methacrylate) (PMMA) or polycarbonate (PC) materials. The packaging is performed at atmospheric pressure and at room temperature, which is a fast, easy, and reliable bonding method to form a sealed CE chip for chemical analysis and biomedical applications. The fabrication of PMMA and PC microfluidic channels is accomplished by using an injection-molding process, which could be mass-produced for commercial applications. In addition to microfluidic CE channels, 3-D reservoirs for storing biosamples, and CE buffers are also formed during this injection-molding process. With this approach, a commercial CE chip can be of low cost and disposable. Finally, the functionality of the mass-produced CE chip is demonstrated through its successful separation of phiX174 DNA/HaeIII markers. Experimental data show that the S/N for the CE chips using the PE/TPE film has a value of 5.34, when utilizing DNA markers with a concentration of 2 ng/microL and a CE buffer of 2% hydroxypropyl-methylcellulose (HPMC) in Tris-borate-EDTA (TBE) with 1% YO-PRO-1 fluorescent dye. Thus, the detection limit of the developed chips is improved. Lastly, the developed CE chips are used for the separation and detection of PCR products. A mixture of an amplified antibiotic gene for Streptococcus pneumoniae and phiX174 DNA/HaeIII markers was successfully separated and detected by using the proposed CE chips. Experimental data show that these DNA samples were separated within 2 min. The study proposed a promising method for the development of mass-produced CE chips.

  17. Accelerated Thermal Cycling and Failure Mechanisms

    NASA Technical Reports Server (NTRS)

    Ghaffarian, R.

    1999-01-01

    This paper reviews the accelerated thermal cycling test methods that are currently used by industry to characterize the interconnect reliability of commercial-off-the-shelf (COTS) ball grid array (BGA) and chip scale package (CSP) assemblies.

  18. Thriving on Chaos: The Development of a Surgical Information System

    PubMed Central

    Olund, Steven R.

    1988-01-01

    Hospitals present unique challenges to the computer industry, generating a greater quantity and variety of data than nearly any other enterprise. This is complicated by the fact that a hospital is not one homogeneous organization, but a bundle of semi-independent groups with unique data requirements. Therefore hospital information systems must be fast, flexible, reliable, easy to use and maintain, and cost-effective. The Surgical Information System at Rush-Presbyterian-St. Luke's Medical Center, Chicago is such a system. It uses a Sequent Balance 21000 multiprocessor superminicomputer, running industry standard tools such as the Unix operating system, a 4th-generation programming language (4GL), and Structured Query Language (SQL) relational database management software. This treatise illustrates a comprehensive yet generic approach which can be applied to almost any clinical situation where access to patient data is required by a variety of medical professionals.

  19. Real-Time Considerations for Rugged Embedded Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tumeo, Antonino; Ceriani, Marco; Palermo, Gianluca

    This chapter introduces the characterizing aspects of embedded systems and discusses the specific features that a designer should address to make an embedded system "rugged", i.e., able to operate reliably in harsh environments. The chapter addresses both the hardware and the less obvious software aspects. After presenting a current list of certifications for ruggedization, the chapter presents a case study that focuses on the interaction of the hardware and software layers in a reactive real-time system. In particular, it shows how the use of fast FPGA prototyping can provide insights on unexpected factors that influence the performance, and thus the responsiveness to events, of a scheduling algorithm for multiprocessor systems that manages both periodic, hard real-time tasks and aperiodic tasks. The main lesson is that to make the system "rugged", a designer should consider these issues by, for example, overprovisioning resources and/or computation capabilities.

  20. A real time, FEM based optimal control algorithm and its implementation using parallel processing hardware (transputers) in a microprocessor environment

    NASA Technical Reports Server (NTRS)

    Patten, William Neff

    1989-01-01

    There is an evident need to discover a means of establishing reliable, implementable controls for systems that are plagued by nonlinear and/or uncertain model dynamics. The development of a generic controller design tool for tough-to-control systems is reported. The method utilizes a moving grid, time infinite element based solution of the necessary conditions that describe an optimal controller for a system. The technique produces a discrete feedback controller. Real-time laboratory experiments are now being conducted to demonstrate the viability of the method. The algorithm that results is being implemented in a microprocessor environment. Critical computational tasks are accomplished using low-cost, on-board multiprocessing (INMOS T800 Transputers) and parallel processing. Progress to date validates the methodology presented. Applications of the technique to the control of highly flexible robotic appendages are suggested.

  1. Optofluidic two-dimensional grating volume refractive index sensor.

    PubMed

    Sarkar, Anirban; Shivakiran Bhaktha, B N; Khastgir, Sugata Pratik

    2016-09-10

    We present an optofluidic reservoir with a two-dimensional grating for a lab-on-a-chip volume refractive index sensor. The observed diffraction pattern from the device resembles the analytically obtained fringe pattern. The change in the diffraction pattern has been monitored in the far field for fluids with different refractive indices. Reliable measurements of refractive index variations, with an accuracy of 6×10⁻³ refractive index units, for different fluids establish the optofluidic device as a potential on-chip tool for monitoring dynamic refractive index changes.

  2. Reliability of hybrid microcircuit discrete components

    NASA Technical Reports Server (NTRS)

    Allen, R. V.

    1972-01-01

    Data accumulated during 4 years of research and evaluation of ceramic chip capacitors, ceramic carrier mounted active devices, beam-lead transistors, and chip resistors are presented. Life and temperature coefficient test data, and optical and scanning electron microscope photographs of device failures are presented and the failure modes are described. Particular interest is given to discrete component qualification, power burn-in, and procedures for testing and screening discrete components. Burn-in requirements and test data will be given in support of 100 percent burn-in policy on all NASA flight programs.

  3. Burst Mode ASIC-Based Modem

    NASA Technical Reports Server (NTRS)

    1997-01-01

    The NASA Lewis Research Center is sponsoring the Advanced Communication Technology Insertion (ACTION) for Commercial Space Applications program. The goal of the program is to expedite the development of new technology with a clear path towards productization and enhancing the competitiveness of U.S. manufacturers. The industry has made significant investment in developing ASIC-based modem technology for continuous-mode applications and has made investigations into fast, reliable acquisition of burst-mode digital communication signals. With rapid advances in analog and digital communications ICs, it is expected that more functions will be integrated onto these parts in the near future. In addition, custom ASICs can be developed to address the areas not covered by the other ICs. Using the commercial chips and custom ASICs, lower-cost, compact, reliable, and high-performance modems can be built for demanding satellite communication applications. This report outlines a frequency-hop burst modem design based on commercially available chips.

  4. Smart substrates: Making multi-chip modules smarter

    NASA Astrophysics Data System (ADS)

    Wunsch, T. F.; Treece, R. K.

    1995-05-01

    A novel multi-chip module (MCM) design and manufacturing methodology, which utilizes active CMOS circuits in what is normally a passive substrate, realizes the 'smart substrate' for use in highly testable, high-reliability MCMs. The active devices are used to test the bare substrate, diagnose assembly errors or integrated circuit (IC) failures that require rework, and improve the testability of the final MCM assembly. A static random access memory (SRAM) MCM has been designed and fabricated in Sandia's Microelectronics Development Laboratory in order to demonstrate the technical feasibility of this concept and to examine design and manufacturing issues which will ultimately determine the economic viability of this approach. The smart substrate memory MCM represents a first in MCM packaging. At the time the first modules were fabricated, no other company or MCM vendor had incorporated active devices in the substrate to improve manufacturability and testability, and thereby improve MCM reliability and reduce cost.

  5. Development of chip passivated monolithic complementary MISFET circuits with beam leads

    NASA Technical Reports Server (NTRS)

    Ragonese, L. J.; Kim, M. J.; Corrie, B. L.; Brouillette, J. W.; Warr, R. E.

    1972-01-01

    The results are presented of a program to demonstrate the processes for fabricating complementary MISFET beam-leaded circuits, which, potentially, are comparable in quality to available bipolar beam-lead chips that use silicon nitride passivation in conjunction with a platinum-titanium-gold metal system. Materials and techniques, different from the bipolar case, were used in order to be more compatible with the special requirements of fully passivated complementary MISFET devices. Two types of circuits were designed and fabricated, a D-flip-flop and a three-input NOR/NAND gate. Fifty beam-leaded chips of each type were constructed. A quality and reliability assurance program was performed to identify failure mechanisms. Sample tests and inspections (including destructive) were developed to measure the physical characteristics of the circuits.

  6. Deciding the liveness for a subclass of weighted Petri nets based on structurally circular wait

    NASA Astrophysics Data System (ADS)

    Liu, GuanJun; Chen, LiJing

    2016-05-01

    Weighted Petri nets, as a kind of formal language, are widely used to model and verify discrete event systems related to resource allocation, like flexible manufacturing systems. The System of Simple Sequential Processes with Multi-Resources (S3PMR), a subclass of weighted Petri nets and an important extension of the well-known System of Simple Sequential Processes with Resources, can model many discrete event systems in which (1) multiple processes may run in parallel and (2) each execution step of each process may use multiple units from multiple resource types. This paper gives a necessary and sufficient condition for the liveness of S3PMR. A new structural concept called Structurally Circular Wait (SCW) is proposed for S3PMR. The Blocking Marking (BM) associated with an SCW is defined. It is proven that a marked S3PMR is live if and only if each SCW has no BM. We use an example of a multi-processor system-on-chip to show that SCW and BM can precisely characterise the (partial) deadlocks of S3PMR. Additionally, two examples are used to show the advantages of SCW in preventing deadlocks of S3PMR. These results are significant for further research on the deadlock problem.
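
    As a reading aid, the liveness criterion stated in this record can be written schematically as follows. The notation (reachability set R(N, M0), set of structurally circular waits C(N), and blocking markings BM(c)) is assumed here for illustration and is not taken from the paper itself.

        (N, M_0)\ \text{is live}
        \;\iff\;
        \forall\, c \in \mathcal{C}(N):\;
        R(N, M_0) \cap BM(c) = \varnothing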

  7. Low energy CMOS for space applications

    NASA Technical Reports Server (NTRS)

    Panwar, Ramesh; Alkalaj, Leon

    1992-01-01

    The current focus of NASA's space flight programs reflects a new thrust towards smaller, less costly, and more frequent space missions, when compared to missions such as Galileo, Magellan, or Cassini. Recently, the concept of a microspacecraft was proposed. In this concept, a small, compact spacecraft that weighs tens of kilograms performs focused scientific objectives such as imaging. Similarly, a Mars Lander micro-rover project is under study that will allow miniature robots weighing less than seven kilograms to explore the Martian surface. To bring the microspacecraft and micro-rover ideas to fruition, one will have to leverage compact 3D multi-chip module (MCM)-based multiprocessor technologies. Low energy CMOS will become increasingly important because of the thermodynamic considerations in cooling compact 3D MCM implementations and also from considerations of the power budget for space applications. In this paper, we show how the operating voltage is related to the threshold voltage of the CMOS transistors for accomplishing a task in VLSI with minimal energy. We also derive expressions for the noise margins at the optimal operating point. We then look at a low voltage CMOS (LVCMOS) technology developed at Stanford University, which improves the power consumption over conventional CMOS by a couple of orders of magnitude, and consider the suitability of the technology for space applications by characterizing its SEU immunity.
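
    The record does not reproduce its derivation. As general background only (not the paper's result), the trade-off it exploits is usually stated with the standard switching-energy and alpha-power delay relations below, which is why the energy-optimal supply voltage ends up tied to the threshold voltage; the symbols (activity factor alpha, load capacitance C_L, velocity-saturation exponent gamma) are assumed.

        E_{\mathrm{dyn}} \approx \alpha\, C_L\, V_{dd}^{2},
        \qquad
        t_d \propto \frac{C_L\, V_{dd}}{\left(V_{dd} - V_t\right)^{\gamma}},
        \quad 1 \le \gamma \le 2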

  8. On-chip free beam optics on a polymer-based photonic integration platform.

    PubMed

    Happach, M; de Felipe, D; Conradi, H; Friedhoff, V N; Schwartz, E; Kleinert, M; Brinker, W; Zawadzki, C; Keil, N; Hofmann, W; Schell, M

    2017-10-30

    This paper presents on-chip free beam optics on polymer-based photonic components. Because waveguide-based optics allows no direct beam access, we use gradient-index (GRIN) lenses assembled into the chip to collimate the beam from the waveguides. This enables low-loss power transmission over a length of 1432 µm. Even though the beam propagates through air, it is possible to create a resonator with a wavelength shift of 0.002 nm/°C; hence the allowed deviations from the ITU-T grid (100 GHz) are met for ± 20 °C. In order to guarantee reliable laser stability, it is necessary to implement optical isolators at the output of the laser. This requires the insertion of bulk material into the chip and is realized by a 1050 µm thick coated glass. Due to the large gap of the free-space section, it is possible to combine different resonators together. This demonstrates the feasibility of an integrated wavelength-meter.

  9. Fabrication of a microfluidic Ag/AgCl reference electrode and its application for portable and disposable electrochemical microchips.

    PubMed

    Zhou, Jianhua; Ren, Kangning; Zheng, Yizhe; Su, Jing; Zhao, Yihua; Ryan, Declan; Wu, Hongkai

    2010-09-01

    This report describes a convenient method for the fabrication of a miniaturized, reliable Ag/AgCl reference electrode with nanofluidic channels acting as a salt bridge that can be easily integrated into microfluidic chips. The Ag/AgCl reference electrode shows high stability with millivolt variations. We demonstrated the application of this reference electrode in a portable microfluidic chip that is connected to a USB-port microelectrochemical station and to a computer for data collection and analysis. The low fabrication cost of the chip with the potential for mass production makes it disposable and an excellent candidate for real-world analysis and measurement. We used the chip to quantitatively analyze the concentrations of heavy metal ions (Cd(2+) and Pb(2+)) in sea water. We believe that the Ag/AgCl reference microelectrode and the portable electrochemical system will be of interest to people in microfluidics, environmental science, clinical diagnostics, and food research.

  10. A paramagnetic implant containing lithium naphthalocyanine microcrystals for high-resolution biological oximetry

    PubMed Central

    Meenakshisundaram, Guruguhan; Pandian, Ramasamy P.; Eteshola, Edward; Lee, Stephen C.; Kuppusamy, Periannan

    2009-01-01

    Lithium naphthalocyanine (LiNc) is a microcrystalline EPR oximetry probe with high sensitivity to oxygen (Pandian et al. J. Mater. Chem., 19, 4138, 2009). However, direct implantation of the crystals in the tissue for in vivo oxygen measurements may be hindered by concerns associated with their direct contact with the tissue/cells and loss of EPR signal due to particle migration in the tissue. In order to address these concerns, we have developed encapsulations (chips) of LiNc microcrystals in polydimethyl siloxane (PDMS), an oxygen-permeable, bioinert polymer. Oximetry evaluation of the fabricated chips revealed that the oxygen sensitivity of the crystals was unaffected by encapsulation in PDMS. Chips were stable against sterilization procedures or treatment with common biological oxidoreductants. In vivo oxygen measurements established the ability of the chips to provide reliable and repeated measurements of tissue oxygenation. This study establishes PDMS-encapsulated LiNc as a potential probe for long-term and repeated measurements of tissue oxygenation. PMID:20006529

  11. Multiprocessor and memory architecture of the neurocomputer SYNAPSE-1.

    PubMed

    Ramacher, U; Raab, W; Anlauf, J; Hachmann, U; Beichter, J; Brüls, N; Wesseling, M; Sicheneder, E; Männer, R; Glass, J

    1993-12-01

    A general purpose neurocomputer, SYNAPSE-1, which exhibits a multiprocessor and memory architecture, is presented. It offers wide flexibility with respect to neural algorithms and a speed-up factor of several orders of magnitude, including learning. The computational power is provided by a 2-dimensional systolic array of neural signal processors (NSPs). Since the weights are stored outside these NSPs, memory size and processing power can be adapted individually to the application needs. A neural algorithms programming language, embedded in C++, has been defined for the user to cope with the neurocomputer. In a benchmark test, the prototype of SYNAPSE-1 was 8000 times as fast as a standard workstation.

  12. An efficient 3-dim FFT for plane wave electronic structure calculations on massively parallel machines composed of multiprocessor nodes

    NASA Astrophysics Data System (ADS)

    Goedecker, Stefan; Boulet, Mireille; Deutsch, Thierry

    2003-08-01

    Three-dimensional Fast Fourier Transforms (FFTs) are the main computational task in plane wave electronic structure calculations. Obtaining high performance on large numbers of processors is non-trivial on the latest generation of parallel computers, which consist of nodes made up of shared memory multiprocessors. A non-dogmatic method for obtaining high performance for such 3-dim FFTs in a combined MPI/OpenMP programming paradigm is presented. Exploiting the peculiarities of plane wave electronic structure calculations, speedups of up to 160 and speeds of up to 130 Gflops were obtained on 256 processors.
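
    The record describes the combined MPI/OpenMP approach only at a high level. Below is a minimal, hypothetical C sketch of the general pattern (OpenMP threads over the planes owned by each MPI rank, followed by an all-to-all exchange for the distributed transpose); the function names and data layout are assumptions, not the authors' code.

        #include <mpi.h>
        #include <complex.h>
        #include <stddef.h>

        /* Placeholder 1-D transform; a real code would call an FFT library here. */
        static void fft_1d_inplace(double complex *line, int n)
        {
            (void)line; (void)n;
        }

        /* One stage of a slab-decomposed 3-D FFT: each MPI rank owns nx_local x-y planes. */
        static void fft3d_stage(double complex *data, double complex *recvbuf,
                                int nx_local, int ny, int nz, MPI_Comm comm)
        {
            /* Shared-memory (OpenMP) parallelism over the locally owned lines. */
            #pragma omp parallel for collapse(2)
            for (int i = 0; i < nx_local; i++)
                for (int j = 0; j < ny; j++)
                    fft_1d_inplace(&data[((size_t)i * ny + j) * nz], nz);

            /* Distributed-memory (MPI) redistribution so the remaining direction
               becomes local; further 1-D transforms would follow this exchange. */
            int nprocs;
            MPI_Comm_size(comm, &nprocs);
            int block = (int)((size_t)nx_local * ny * nz / nprocs);
            MPI_Alltoall(data, block, MPI_C_DOUBLE_COMPLEX,
                         recvbuf, block, MPI_C_DOUBLE_COMPLEX, comm);
        }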

  13. Automatic partitioning of unstructured meshes for the parallel solution of problems in computational mechanics

    NASA Technical Reports Server (NTRS)

    Farhat, Charbel; Lesoinne, Michel

    1993-01-01

    Most of the recently proposed computational methods for solving partial differential equations on multiprocessor architectures stem from the 'divide and conquer' paradigm and involve some form of domain decomposition. For those methods which also require grids of points or patches of elements, it is often necessary to explicitly partition the underlying mesh, especially when working with local memory parallel processors. In this paper, a family of cost-effective algorithms for the automatic partitioning of arbitrary two- and three-dimensional finite element and finite difference meshes is presented and discussed in view of a domain decomposed solution procedure and parallel processing. The influence of the algorithmic aspects of a solution method (implicit/explicit computations), and the architectural specifics of a multiprocessor (SIMD/MIMD, startup/transmission time), on the design of a mesh partitioning algorithm are discussed. The impact of the partitioning strategy on load balancing, operation count, operator conditioning, rate of convergence and processor mapping is also addressed. Finally, the proposed mesh decomposition algorithms are demonstrated with realistic examples of finite element, finite volume, and finite difference meshes associated with the parallel solution of solid and fluid mechanics problems on the iPSC/2 and iPSC/860 multiprocessors.

  14. Parallel processing of real-time dynamic systems simulation on OSCAR (Optimally SCheduled Advanced multiprocessoR)

    NASA Technical Reports Server (NTRS)

    Kasahara, Hironori; Honda, Hiroki; Narita, Seinosuke

    1989-01-01

    Parallel processing of real-time dynamic systems simulation on a multiprocessor system named OSCAR is presented. In the simulation of dynamic systems, generally, the same calculations are repeated every time step. However, the Do-all or Do-across techniques cannot be applied to parallel processing of the simulation, since there exist data dependencies from the end of an iteration to the beginning of the next iteration and, furthermore, data input and data output are required every sampling time period. Therefore, parallelism inside the calculation required for a single time step, or a large basic block which consists of arithmetic assignment statements, must be used. In the proposed method, near-fine-grain tasks, each of which consists of one or more floating point operations, are generated to extract the parallelism from the calculation and assigned to processors by using optimal static scheduling at compile time, in order to reduce the large run-time overhead caused by the use of near-fine-grain tasks. The practicality of the scheme is demonstrated on OSCAR (Optimally SCheduled Advanced multiprocessoR), which has been developed to exploit the advantageous features of static scheduling algorithms to the maximum extent.
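
    The compile-time assignment described above is not detailed in this record. The hypothetical C sketch below illustrates one common form of static list scheduling (assign each ready task to the processor that can finish it earliest); the priority ordering and data-dependency bookkeeping of the actual OSCAR compiler are not reproduced.

        #include <stdio.h>

        #define NTASKS 6
        #define NPROCS 3

        /* Cost (in cycles) of each near-fine-grain task; the tasks are assumed
         * already ordered by a priority such as critical-path length. */
        static const int cost[NTASKS] = {5, 3, 8, 2, 6, 4};

        int main(void)
        {
            int finish[NPROCS] = {0};   /* time at which each processor becomes free */
            int assigned[NTASKS];

            for (int t = 0; t < NTASKS; t++) {
                int best = 0;
                for (int p = 1; p < NPROCS; p++)   /* pick the earliest-finishing processor */
                    if (finish[p] < finish[best]) best = p;
                assigned[t] = best;
                finish[best] += cost[t];
            }

            for (int t = 0; t < NTASKS; t++)
                printf("task %d -> processor %d\n", t, assigned[t]);
            return 0;
        }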

  15. Scalable Multiprocessor for High-Speed Computing in Space

    NASA Technical Reports Server (NTRS)

    Lux, James; Lang, Minh; Nishimoto, Kouji; Clark, Douglas; Stosic, Dorothy; Bachmann, Alex; Wilkinson, William; Steffke, Richard

    2004-01-01

    A report discusses the continuing development of a scalable multiprocessor computing system for hard real-time applications aboard a spacecraft. "Hard real-time applications" signifies applications, like real-time radar signal processing, in which the data to be processed are generated at hundreds of pulses per second, each pulse requiring millions of arithmetic operations. In these applications, the digital processors must be tightly integrated with analog instrumentation (e.g., radar equipment), and data input/output must be synchronized with the analog instrumentation, controlled to within fractions of a microsecond. The scalable multiprocessor is a cluster of identical commercial-off-the-shelf generic DSP (digital-signal-processing) computers plus generic interface circuits, including analog-to-digital converters, all controlled by software. The processors are computers interconnected by high-speed serial links. Performance can be increased by adding hardware modules and correspondingly modifying the software. Work is distributed among the processors in a parallel or pipeline fashion by means of a flexible master/slave control and timing scheme. Each processor operates under its own local clock; synchronization is achieved by broadcasting master time signals to all the processors, which compute offsets between the master clock and their local clocks.
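
    The clock handling mentioned above is described only in prose. A minimal C sketch of the offset idea (each processor corrects its local clock by the difference observed when a master time broadcast arrives) is given below; all names are assumptions, not the report's design.

        #include <stdint.h>

        /* Offset between master and local clocks, updated on each master broadcast. */
        static int64_t clock_offset_ticks = 0;

        /* Called when a broadcast master timestamp is received. */
        void on_master_broadcast(int64_t master_ticks, int64_t local_ticks_at_receipt)
        {
            clock_offset_ticks = master_ticks - local_ticks_at_receipt;
        }

        /* Convert a local timestamp into the shared (master) timebase. */
        int64_t to_master_time(int64_t local_ticks)
        {
            return local_ticks + clock_offset_ticks;
        }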

  16. Process Management and Exception Handling in Multiprocessor Operating Systems Using Object-Oriented Design Techniques. Revised Sep. 1988

    NASA Technical Reports Server (NTRS)

    Russo, Vincent; Johnston, Gary; Campbell, Roy

    1988-01-01

    The programming of the interrupt handling mechanisms, process switching primitives, scheduling mechanism, and synchronization primitives of an operating system for a multiprocessor requires both efficient code, in order to support the needs of high-performance or real-time applications, and careful organization, to facilitate maintenance. Although many advantages have been claimed for object-oriented class hierarchical languages and their corresponding design methodologies, the application of these techniques to the design of the primitives within an operating system has not been widely demonstrated. To investigate the role of class hierarchical design in systems programming, the authors have constructed the Choices multiprocessor operating system architecture using the C++ programming language. During the implementation, it was found that many operating system design concerns can be represented advantageously using a class hierarchical approach, including: the separation of mechanism and policy; the organization of an operating system into layers, each of which represents an abstract machine; and the notions of process and exception management. In this paper, we discuss an implementation of the low-level primitives of this system and outline the strategy by which we developed our solution.

  17. Method for prefetching non-contiguous data structures

    DOEpatents

    Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton On Hudson, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Philip [Cortlandt Manor, NY; Hoenicke, Dirk [Ossining, NY; Ohmacht, Martin [Brewster, NY; Steinmacher-Burow, Burkhard D [Mount Kisco, NY; Takken, Todd E [Mount Kisco, NY; Vranas, Pavlos M [Bedford Hills, NY

    2009-05-05

    A low latency memory system access is provided in association with a weakly-ordered multiprocessor system. Each processor in the multiprocessor shares resources, and each shared resource has an associated lock within a locking device that provides support for synchronization between the multiple processors in the multiprocessor and the orderly sharing of the resources. A processor only has permission to access a resource when it owns the lock associated with that resource, and an attempt by a processor to own a lock requires only a single load operation, rather than a traditional atomic load followed by store, such that the processor only performs a read operation and the hardware locking device performs a subsequent write operation rather than the processor. A simple prefetching for non-contiguous data structures is also disclosed. A memory line is redefined so that, in addition to the normal physical memory data, every line includes a pointer that is large enough to point to any other line in the memory, wherein the pointers determine which memory line to prefetch rather than some other predictive algorithm. This enables hardware to effectively prefetch memory access patterns that are non-contiguous, but repetitive.
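
    The redefined memory line in this record lends itself to a short illustration. The C sketch below is a hypothetical software analogue (a line payload plus an embedded pointer naming the next line to prefetch), using the GCC/Clang __builtin_prefetch intrinsic; it is not the hardware line format claimed in the patent.

        #define LINE_BYTES 128

        /* A memory line extended with a prefetch pointer, as the record describes:
         * the pointer, not a predictor, names the next line to fetch. */
        typedef struct mem_line {
            unsigned char data[LINE_BYTES];   /* normal physical memory data */
            struct mem_line *next_prefetch;   /* line to prefetch after this one */
        } mem_line_t;

        /* Follow the embedded pointers to touch a non-contiguous but repetitive
         * access pattern ahead of the consumer. */
        void prefetch_chain(const mem_line_t *line, int depth)
        {
            while (line && depth-- > 0) {
                __builtin_prefetch(line->data, 0 /* read */, 1 /* low temporal locality */);
                line = line->next_prefetch;
            }
        }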

  18. Low latency memory access and synchronization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blumrich, Matthias A.; Chen, Dong; Coteus, Paul W.

    A low latency memory system access is provided in association with a weakly-ordered multiprocessor system. Each processor in the multiprocessor shares resources, and each shared resource has an associated lock within a locking device that provides support for synchronization between the multiple processors in the multiprocessor and the orderly sharing of the resources. A processor only has permission to access a resource when it owns the lock associated with that resource, and an attempt by a processor to own a lock requires only a single load operation, rather than a traditional atomic load followed by store, such that the processor only performs a read operation and the hardware locking device performs a subsequent write operation rather than the processor. A simple prefetching for non-contiguous data structures is also disclosed. A memory line is redefined so that, in addition to the normal physical memory data, every line includes a pointer that is large enough to point to any other line in the memory, wherein the pointers determine which memory line to prefetch rather than some other predictive algorithm. This enables hardware to effectively prefetch memory access patterns that are non-contiguous, but repetitive.

  19. Low latency memory access and synchronization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blumrich, Matthias A.; Chen, Dong; Coteus, Paul W.

    A low latency memory system access is provided in association with a weakly-ordered multiprocessor system. Each processor in the multiprocessor shares resources, and each shared resource has an associated lock within a locking device that provides support for synchronization between the multiple processors in the multiprocessor and the orderly sharing of the resources. A processor only has permission to access a resource when it owns the lock associated with that resource, and an attempt by a processor to own a lock requires only a single load operation, rather than a traditional atomic load followed by store, such that the processor only performs a read operation and the hardware locking device performs a subsequent write operation rather than the processor. A simple prefetching for non-contiguous data structures is also disclosed. A memory line is redefined so that, in addition to the normal physical memory data, every line includes a pointer that is large enough to point to any other line in the memory, wherein the pointers determine which memory line to prefetch rather than some other predictive algorithm. This enables hardware to effectively prefetch memory access patterns that are non-contiguous, but repetitive.

  20. Rapid qualification of CSP assemblies by increase of ramp rates and cycling temperature ranges

    NASA Technical Reports Server (NTRS)

    Ghaffarian, R.; Kim, N.; Rose, D.; Hunter, B.; Devitt, K.; Long, T.

    2001-01-01

    Team members representing government agencies and private companies have joined together to pool in-kind resources for developing the quality and reliability of chip scale packages (CSPs) for a variety of projects.

  1. A novel readout integrated circuit for ferroelectric FPA detector

    NASA Astrophysics Data System (ADS)

    Bai, Piji; Li, Lihua; Ji, Yulong; Zhang, Jia; Li, Min; Liang, Yan; Hu, Yanbo; Li, Songying

    2017-11-01

    Uncooled infrared detectors have some advantages, such as low cost, light weight, low power consumption, and superior reliability, compared with cryogenically cooled ones. Ferroelectric uncooled focal plane arrays (FPAs) are being developed for their AC response and high reliability. As a key part of the ferroelectric assembly, the ROIC determines the performance of the assembly. A top-down design model for an uncooled ferroelectric readout integrated circuit (ROIC) has been developed. Based on the optical, thermal, and electrical properties of the ferroelectric detector, the RTIA readout integrated circuit is designed. The noise bandwidth of the RTIA readout circuit has been developed and analyzed. A novel high-gain amplifier, a high-pass filter, and a low-pass filter circuit are designed on the ROIC. In order to improve the ferroelectric FPA package performance and decrease the package cost, a temperature sensor is designed on the ROIC chip. Finally, the novel RTIA ROIC is implemented in a 0.6 μm 2P3M CMOS silicon process. According to the experimental chip test results, the temporal root-mean-square (RMS) noise voltage is about 1.4 mV, the sensitivity of the on-chip temperature sensor is 0.6 mV/K from -40°C to 60°C, and the linearity of the ROIC chip is better than 99%. Based on the 320×240 RTIA ROIC, a 320×240 infrared ferroelectric FPA is fabricated and tested. Test results show that the 320×240 RTIA ROIC meets the demands of the infrared ferroelectric FPA.

  2. A new statistical methodology predicting chip failure probability considering electromigration

    NASA Astrophysics Data System (ADS)

    Sun, Ted

    In this research thesis, we present a new approach to analyzing chip reliability subject to electromigration (EM); the fundamental causes of EM and the EM phenomena observed in different materials are also presented. This new approach utilizes the statistical nature of EM failure in order to assess overall EM risk. It includes within-die temperature variations from the chip's temperature map, extracted by an Electronic Design Automation (EDA) tool, to estimate the failure probability of a design. Both the power estimation and the thermal analysis are performed in the EDA flow. We first used the traditional EM approach to analyze the design, which involves 6 metal and 5 via layers, with a single temperature across the entire chip. Next, we used the same traditional approach but with a realistic temperature map. The traditional EM analysis approach, the approach coupled with a temperature map, and the comparison between the results with and without the temperature map are presented in this research. A comparison between these two results confirms that using a temperature map yields a less pessimistic estimation of the chip's EM risk. Finally, we employed the statistical methodology we developed, considering a temperature map and different use-condition voltages and frequencies, to estimate the overall failure probability of the chip. The statistical model established considers the scaling work with the usage of the traditional Black equation and four major conditions. The statistical result comparisons are within our expectations. The results of this statistical analysis confirm that the chip-level failure probability is higher i) at higher use-condition frequencies for all use-condition voltages, and ii) when a single temperature instead of a temperature map across the chip is considered. In this thesis, I start with an overall review of current design types, common flows, and the necessary verification and reliability-checking steps used in the IC design industry. Furthermore, the important concepts of "scripting automation," which is used in the integration of the diversified EDA tools in this research work, are described in detail with several examples, and my completed code is included in the appendix for reference. This organization of the thesis should give readers a thorough understanding of the research work, from the automation of EDA tools to the statistical data generation, from the nature of EM to the statistical model construction, and the comparisons between the traditional EM analysis and the statistical EM analysis approaches.
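
    The abstract names Black's equation and a per-region temperature map but gives no formulas. The hedged C sketch below shows one conventional way such pieces might be combined (Black's MTTF per region, an exponential failure probability at a mission time, and a series combination across regions); all parameter values and the series assumption are illustrative, not the thesis's model.

        #include <math.h>
        #include <stdio.h>

        #define KB_EV 8.617e-5   /* Boltzmann constant, eV/K */

        /* Black's equation: MTTF = A * J^-n * exp(Ea / (kB * T)). */
        static double black_mttf(double A, double J, double n, double Ea, double T_kelvin)
        {
            return A * pow(J, -n) * exp(Ea / (KB_EV * T_kelvin));
        }

        int main(void)
        {
            /* Illustrative per-region temperatures (K) from a hypothetical temperature map,
               with placeholder constants for the Black-equation parameters. */
            const double T_map[4] = {358.0, 368.0, 378.0, 388.0};
            const double A = 1.0e3, J = 1.0e6, n = 2.0, Ea = 0.9, t_mission = 1.0e5;

            double p_survive = 1.0;
            for (int i = 0; i < 4; i++) {
                double mttf = black_mttf(A, J, n, Ea, T_map[i]);
                double p_fail_i = 1.0 - exp(-t_mission / mttf);  /* exponential lifetime assumption */
                p_survive *= (1.0 - p_fail_i);                   /* series (weakest-link) combination */
            }
            printf("chip-level failure probability = %.3e\n", 1.0 - p_survive);
            return 0;
        }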

  3. Rapid and reliable QuEChERS-based LC-MS/MS method for determination of acrylamide in potato chips and roasted coffee

    NASA Astrophysics Data System (ADS)

    Stefanović, S.; Đorđevic, V.; Jelušić, V.

    2017-09-01

    The aim of this paper is to verify the performance characteristics and fitness for purpose of a rapid and simple QuEChERS-based LC-MS/MS method for determination of acrylamide in potato chips and coffee. LC-MS/MS is by far the most suitable analytical technique for acrylamide measurements, given its inherent sensitivity and selectivity, as well as its capability of analyzing the underivatized molecule. Acrylamide in roasted coffee and potato chips was extracted with a water:acetonitrile mixture using NaCl and MgSO4. Cleanup was carried out with MgSO4 and PSA. The obtained results were satisfactory. Recoveries were in the range of 85-112%, interlaboratory reproducibility (Cv) was 5.8-7.6%, and linearity (R2) was in the range of 0.995-0.999. The LoQ was 35 μg kg⁻¹ for coffee and 20 μg kg⁻¹ for potato chips. The performance characteristics of the method are compliant with the criteria for analytical method validation. The presented method for quantitative determination of acrylamide in roasted coffee and potato chips is fit for the purposes of self-control in the food industry as well as regulatory controls carried out by governmental agencies.

  4. Reliability Analysis/Assessment of Advanced Technologies

    DTIC Science & Technology

    1990-05-01

    Only fragments of this record's reference list are legible in the source, including: Reliability Physics 1980, IEEE, p. 165; RADC-TR-83-244; Towner, Janet M., et al., "Aluminum Electromigration Under Pulsed D.C. Conditions"; Duvvury, Redwine, Kitagawa, Haas, Chuang, Beydler, and Hyslop, "Impact of Hot Carriers on DRAM Circuits," 1987 IEEE/IRPS; and Cahoon, Thornewell, Tsai, et al., "Substrate for Large Silicon Chip and Full Wafer Packaging," Semiconductor International, pp. 149-156, April 1980.

  5. Analyzing System on A Chip Single Event Upset Responses using Single Event Upset Data, Classical Reliability Models, and Space Environment Data

    NASA Technical Reports Server (NTRS)

    Berg, Melanie; LaBel, Kenneth; Campola, Michael; Xapsos, Michael

    2017-01-01

    We are investigating the application of classical reliability performance metrics combined with standard single event upset (SEU) analysis data. We expect to relate SEU behavior to system performance requirements. Our proposed methodology will provide better prediction of SEU responses in harsh radiation environments, with confidence metrics. Keywords: single event upset (SEU), single event effect (SEE), field programmable gate array devices (FPGAs).

  6. Single-use thermoplastic microfluidic burst valves enabling on-chip reagent storage

    PubMed Central

    Rahmanian, Omid D.

    2014-01-01

    A simple and reliable method for fabricating single-use normally closed burst valves in thermoplastic microfluidic devices is presented, using a process flow that is readily integrated into established workflows for the fabrication of thermoplastic microfluidics. An experimental study of valve performance reveals the relationships between valve geometry and burst pressure. The technology is demonstrated in a device employing multiple valves engineered to actuate at different inlet pressures that can be generated using integrated screw pumps. On-chip storage and reconstitution of fluorescein salt sealed within defined reagent chambers are demonstrated. By taking advantage of the low gas and water permeability of cyclic olefin copolymer, the robust burst valves allow on-chip hermetic storage of reagents, making the technology well suited for the development of integrated and disposable assays for use at the point of care. PMID:25972774

  7. Evaluation of advanced microelectronic fluxless solder-bump contacts for hybrid microcircuits

    NASA Technical Reports Server (NTRS)

    Mandal, R. P.

    1976-01-01

    Technology for interconnecting monolithic integrated circuit chips with other components is investigated. The advantages and disadvantages of the current flip-chip approach as compared to other interconnection methods are outlined. A fluxless solder-bump contact technology is evaluated. Multiple solder-bump contacts were formed on silicon integrated circuit chips. The solder-bumps, comprised of a rigid nickel under layer and a compliant solder overlayer, were electroformed onto gold device pads with the aid of thick dry film photomasks. Different solder alloys and the use of conductive epoxy for bonding were explored. Fluxless solder-bump bond quality and reliability were evaluated by measuring the effects of centrifuge, thermal cycling, and high temperature storage on bond visual characteristics, bond electrical continuity, and bond shear tests. The applicability and suitability of this technology for hybrid microelectronic packaging is discussed.

  8. Potentiometric chip-based multipumping flow system for the simultaneous determination of fluoride, chloride, pH, and redox potential in water samples.

    PubMed

    Chango, Gabriela; Palacio, Edwin; Cerdà, Víctor

    2018-08-15

    A simple potentiometric chip-based multipumping flow system (MPFS) has been developed for the simultaneous determination of fluoride, chloride, pH, and redox potential in water samples. The proposed system was developed using a poly(methyl methacrylate) microfluidic chip, exploiting the advantages of flow techniques with potentiometric detection. For this purpose, an automatic system has been designed and built by optimizing the variables involved in the process, such as pH, ionic strength, stirring, and sample volume. This system was applied successfully to water samples, yielding a versatile system with an analysis frequency of 12 samples per hour. Good correlation between the chloride and fluoride concentrations measured with the ISEs and by ion chromatography suggests satisfactory reliability of the system. Copyright © 2018 Elsevier B.V. All rights reserved.

  9. Electromigration in solder joints and solder lines

    NASA Astrophysics Data System (ADS)

    Gan, H.; Choi, W. J.; Xu, G.; Tu, K. N.

    2002-06-01

    Electromigration may affect the reliability of flip-chip solder joints. Eutectic solder is a two-phase alloy, so its electromigration behavior is different from that in aluminum or copper interconnects. In addition, a flip-chip solder joint has a built-in current-crowding configuration to enhance electromigration failure. To better understand electromigration in SnPb and lead-free solder alloys, the authors prepared solder lines in v-grooves etched on Si (001). This article discusses the results of those tests and compares the electromigration failure modes of eutectic SnPb and SnAgCu flip-chip solder joints along with the mean-time-to-failure.

  10. Enzymic colorimetry-based DNA chip: a rapid and accurate assay for detecting mutations for clarithromycin resistance in the 23S rRNA gene of Helicobacter pylori.

    PubMed

    Xuan, Shi-Hai; Zhou, Yu-Gui; Shao, Bo; Cui, Ya-Lin; Li, Jian; Yin, Hong-Bo; Song, Xiao-Ping; Cong, Hui; Jing, Feng-Xiang; Jin, Qing-Hui; Wang, Hui-Min; Zhou, Jie

    2009-11-01

    Macrolide drugs, such as clarithromycin (CAM), are a key component of many combination therapies used to eradicate Helicobacter pylori. However, resistance to CAM is increasing in H. pylori and is becoming a serious problem in H. pylori eradication therapy. CAM resistance in H. pylori is mostly due to point mutations (A2142G/C, A2143G) in the peptidyltransferase-encoding region of the 23S rRNA gene. In this study an enzymic colorimetry-based DNA chip was developed to analyse single-nucleotide polymorphisms of the 23S rRNA gene to determine the prevalence of mutations in CAM-related resistance in H. pylori-positive patients. The results of the colorimetric DNA chip were confirmed by direct DNA sequencing. In 63 samples, the incidence of the A2143G mutation was 17.46 % (11/63). The results of the colorimetric DNA chip were concordant with DNA sequencing in 96.83 % of results (61/63). The colorimetric DNA chip could detect wild-type and mutant signals at every site, even at a DNA concentration of 1.53 × 10² copies μl⁻¹. Thus, the colorimetric DNA chip is a reliable assay for rapid and accurate detection of mutations in the 23S rRNA gene of H. pylori that lead to CAM-related resistance, directly from gastric tissues.

  11. Amine coupling versus biotin capture for the assessment of sulfonamide as ligands of hCA isoforms.

    PubMed

    Rogez-Florent, Tiphaine; Goossens, Laurence; Drucbert, Anne-Sophie; Duban-Deweer, Sophie; Six, Perrine; Depreux, Patrick; Danzé, Pierre-Marie; Goossens, Jean-François; Foulon, Catherine

    2016-10-15

    This work was dedicated to the development of a reliable SPR method allowing the simultaneous and quick determination of the affinity and selectivity of designed sulfonamide derivatives for hCAIX and hCAXII versus hCAII, in order to provide an efficient tool to discover drugs for anticancer therapy of solid tumors. We performed for the first time a comparison of two immobilization approaches for the hCA isoforms. The first relies on an amine coupling strategy, using a CM7 chip to obtain higher immobilization levels than with a CM5 chip and, consequently, affinity values with higher precision (CV% < 10%). The second corresponds to capture of the proteins on a streptavidin chip, named CAP chip, after optimization of the biotinylation conditions (amine versus carboxyl coupling, biotin-to-protein ratio). Using the amine coupling approach, only the hCAII and hCAXII isoforms were efficiently biotinylated, reaching immobilization levels relevant for affinity studies (3000 RU and 2700 RU, respectively). For hCAIX, despite a successful biotinylation, capture on the CAP chip was a failure. Finally, the concordance between the affinities obtained for the three derivatives to the CA isozymes on both chips has allowed validation of the approaches for further screening of new derivatives. Copyright © 2016 Elsevier Inc. All rights reserved.

  12. Performance issues for domain-oriented time-driven distributed simulations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.

    1987-01-01

    It has long been recognized that simulations form an interesting and important class of computations that may benefit from distributed or parallel processing. Since the point of parallel processing is improved performance, the recent proliferation of multiprocessors requires that we consider the performance issues that naturally arise when attempting to implement a distributed simulation. Three such issues are: (1) the problem of mapping the simulation onto the architecture, (2) the possibilities for performing redundant computation in order to reduce communication, and (3) the avoidance of deadlock due to distributed contention for message-buffer space. These issues are discussed in the context of a battlefield simulation implemented on a medium-scale multiprocessor message-passing architecture.

  13. MPF: A portable message passing facility for shared memory multiprocessors

    NASA Technical Reports Server (NTRS)

    Malony, Allen D.; Reed, Daniel A.; Mcguire, Patrick J.

    1987-01-01

    The design, implementation, and performance evaluation of a message passing facility (MPF) for shared memory multiprocessors are presented. The MPF is based on a message passing model conceptually similar to conversations. Participants (parallel processors) can enter or leave a conversation at any time. The message passing primitives for this model are implemented as a portable library of C function calls. The MPF is currently operational on a Sequent Balance 21000, and several parallel applications were developed and tested. Several simple benchmark programs are presented to establish interprocess communication performance for common patterns of interprocess communication. Finally, performance figures are presented for two parallel applications, linear systems solution, and iterative solution of partial differential equations.
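
    The MPF primitives are described above only by analogy to conversations. The sketch below is a hypothetical, minimal shared-memory mailbox in C using POSIX threads, intended only to illustrate the kind of send/receive primitives such a library exposes; the names and semantics are assumptions, not the MPF API.

        #include <pthread.h>
        #include <string.h>

        #define MSG_BYTES 256

        /* A one-slot mailbox shared by the participants of a "conversation". */
        typedef struct {
            pthread_mutex_t lock;
            pthread_cond_t  not_empty, not_full;
            int             full;
            char            msg[MSG_BYTES];
        } mailbox_t;

        void mbox_init(mailbox_t *m)
        {
            pthread_mutex_init(&m->lock, NULL);
            pthread_cond_init(&m->not_empty, NULL);
            pthread_cond_init(&m->not_full, NULL);
            m->full = 0;
        }

        void mbox_send(mailbox_t *m, const char *msg)
        {
            pthread_mutex_lock(&m->lock);
            while (m->full)                          /* wait until the slot is free */
                pthread_cond_wait(&m->not_full, &m->lock);
            strncpy(m->msg, msg, MSG_BYTES - 1);
            m->msg[MSG_BYTES - 1] = '\0';
            m->full = 1;
            pthread_cond_signal(&m->not_empty);
            pthread_mutex_unlock(&m->lock);
        }

        void mbox_recv(mailbox_t *m, char *out)
        {
            pthread_mutex_lock(&m->lock);
            while (!m->full)                         /* wait until a message arrives */
                pthread_cond_wait(&m->not_empty, &m->lock);
            strcpy(out, m->msg);
            m->full = 0;
            pthread_cond_signal(&m->not_full);
            pthread_mutex_unlock(&m->lock);
        }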

  14. Partitioning and packing mathematical simulation models for calculation on parallel computers

    NASA Technical Reports Server (NTRS)

    Arpasi, D. J.; Milner, E. J.

    1986-01-01

    The development of multiprocessor simulations from a serial set of ordinary differential equations describing a physical system is described. Degrees of parallelism (i.e., coupling between the equations) and their impact on parallel processing are discussed. The problem of identifying computational parallelism within sets of closely coupled equations that require the exchange of current values of variables is described. A technique is presented for identifying this parallelism and for partitioning the equations for parallel solution on a multiprocessor. An algorithm which packs the equations into a minimum number of processors is also described. The results of the packing algorithm when applied to a turbojet engine model are presented in terms of processor utilization.
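
    The packing step is only named in the abstract. The sketch below shows a hypothetical first-fit-decreasing packing of equation workloads into processors of fixed capacity, which is one standard way a minimum-processor packing is approximated; the paper's actual algorithm and cost model are not reproduced.

        #include <stdio.h>
        #include <stdlib.h>

        #define NEQNS 8
        #define CAPACITY 10.0   /* per-processor work budget per time step (arbitrary units) */

        static int cmp_desc(const void *a, const void *b)
        {
            double d = *(const double *)b - *(const double *)a;
            return (d > 0) - (d < 0);
        }

        int main(void)
        {
            double work[NEQNS] = {4.0, 7.0, 2.5, 5.5, 3.0, 6.0, 1.5, 4.5};
            double load[NEQNS] = {0};   /* at most one processor per equation is ever opened */
            int nprocs = 0;

            qsort(work, NEQNS, sizeof(double), cmp_desc);      /* largest equations first */
            for (int e = 0; e < NEQNS; e++) {
                int p = 0;
                while (p < nprocs && load[p] + work[e] > CAPACITY)  /* first fit */
                    p++;
                if (p == nprocs) nprocs++;                          /* open a new processor */
                load[p] += work[e];
            }
            printf("equations packed into %d processors\n", nprocs);
            return 0;
        }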

  15. Event parallelism: Distributed memory parallel computing for high energy physics experiments

    NASA Astrophysics Data System (ADS)

    Nash, Thomas

    1989-12-01

    This paper describes the present and expected future development of distributed memory parallel computers for high energy physics experiments. It covers the use of event-parallel microprocessor farms, particularly at Fermilab, including both ACP multiprocessors and farms of MicroVAXes. These systems have proven very cost-effective in the past. A case is made for moving to the more open environment of UNIX and RISC processors. The 2nd Generation ACP Multiprocessor System, which is based on powerful RISC systems, is described. Given the promise of still more extraordinary increases in processor performance, a new emphasis on point-to-point, rather than bussed, communication will be required. Developments in this direction are described.

  16. A multiarchitecture parallel-processing development environment

    NASA Technical Reports Server (NTRS)

    Townsend, Scott; Blech, Richard; Cole, Gary

    1993-01-01

    A description is given of the hardware and software of a multiprocessor test bed - the second generation Hypercluster system. The Hypercluster architecture consists of a standard hypercube distributed-memory topology, with multiprocessor shared-memory nodes. By using standard, off-the-shelf hardware, the system can be upgraded to use rapidly improving computer technology. The Hypercluster's multiarchitecture nature makes it suitable for researching parallel algorithms in computational field simulation applications (e.g., computational fluid dynamics). The dedicated test-bed environment of the Hypercluster and its custom-built software allows experiments with various parallel-processing concepts such as message passing algorithms, debugging tools, and computational 'steering'. Such research would be difficult, if not impossible, to achieve on shared, commercial systems.

  17. A measurement-based performability model for a multiprocessor system

    NASA Technical Reports Server (NTRS)

    Hsueh, M. C.; Iyer, Ravi K.; Trivedi, K. S.

    1987-01-01

    A measurement-based performability model based on real error data collected on a multiprocessor system is described. Model development, from the raw error data to the estimation of cumulative reward, is described. Both normal and failure behavior of the system are characterized. The measured data show that the holding times in key operational and failure states are not simple exponentials and that a semi-Markov process is necessary to model the system behavior. A reward function, based on the service rate and the error rate in each state, is then defined in order to estimate the performability of the system and to depict the cost of different failure types and recovery procedures.
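
    The reward structure named in this record is standard in performability modeling. As a reading aid, a common formulation is shown below with assumed symbols (state reward rates r_i, steady-state probabilities pi_i, accumulated reward Y(t)); the paper's specific reward definition from service and error rates is not restated here.

        Y(t) = \int_0^t r_{X(s)}\, ds,
        \qquad
        \lim_{t \to \infty} \frac{E[Y(t)]}{t} = \sum_i \pi_i\, r_i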

  18. Advanced Initiation Systems Manufacturing Level 2 Milestone Completion Summary

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chow, R; Schmidt, M

    2009-10-01

    Milestone Description - Advanced Initiation Systems Detonator Design and Prototype. Milestone Grading Criteria - Design a new-generation chip slapper detonator and manufacture a prototype using advanced manufacturing processes, such as all-dry chip metallization and solvent-less flyer coatings. The advanced processes have been developed for manufacturing detonators with high material compatibility and reliability to support future LEPs, e.g. the B61, and new weapons systems. Perform velocimetry measurements to determine slapper velocity as a function of flight distance. A prototype detonator assembly and stripline was designed for low-energy chip slappers. Pictures of the prototype detonator and stripline are shown. All-dry manufacturing processes were used to address compatibility issues. KCP metallized the chips in a physical vapor deposition system through precision-aligned shadow masks. LLNL deposited a solvent-less polyimide flyer with a process called SLIP, which stands for solvent-less vapor deposition followed by in-situ polymerization. LANL manufactured the high-surface-area (HSA) high explosive (HE) pellets. Test fires of two chip slapper designs, radius and bowtie, were performed at LLNL in the High Explosives Application Facility (HEAF). Test fires with HE were conducted to establish the threshold firing voltages. Pictures of the chip slappers before and after test fires are shown. Velocimetry tests were then performed to obtain slapper velocities at or above the threshold firing voltages. Figure 5 shows the slapper velocity as a function of distance and time at the threshold voltage, for both radius and bowtie bridge designs. Both designs were successful at initiating the HE at low energy levels. Summary of accomplishments: (1) All-dry process for chip manufacture developed; (2) Solvent-less process for slapper materials developed; (3) High-surface-area explosive pellets developed; (4) High-performance chip slappers developed; (5) Low-energy chip slapper detonator designs; and (6) Low-voltage-threshold chip slapper detonator demonstrated.

  19. Healing of voids in the aluminum metallization of integrated circuit chips

    NASA Technical Reports Server (NTRS)

    Cuddihy, Edward F.; Lawton, Russell A.; Gavin, Thomas R.

    1990-01-01

    The thermal stability of GaAs modulation-doped field effect transistors (MODFETs) is evaluated in order to identify failure mechanisms and validate the reliability of these devices. The transistors were exposed to thermal step-stress and characterized at ambient temperatures to indicate device reliability, especially that of the transistor ohmic contacts with and without molybdenum diffusion barriers. The devices without molybdenum exhibited significant transconductance degradation. MODFETs with molybdenum diffusion barriers were tolerant to temperatures above 300°C. This tolerance indicates that thermally activated failure mechanisms are slow at operational temperatures. Therefore, high-reliability MODFET-based circuits are possible.

  20. Instrumentation, performance visualization, and debugging tools for multiprocessors

    NASA Technical Reports Server (NTRS)

    Yan, Jerry C.; Fineman, Charles E.; Hontalas, Philip J.

    1991-01-01

    The need for computing power has forced a migration from serial computation on a single processor to parallel processing on multiprocessor architectures. However, without effective means to monitor (and visualize) program execution, debugging and tuning parallel programs become intractably difficult as program complexity increases with the number of processors. Research on performance evaluation tools for multiprocessors is being carried out at ARC. Besides investigating new techniques for instrumenting, monitoring, and presenting the state of parallel program execution in a coherent and user-friendly manner, prototypes of software tools are being incorporated into the run-time environments of various hardware testbeds to evaluate their impact on user productivity. Our current tool set, the Ames Instrumentation Systems (AIMS), incorporates features from various software systems developed in academia and industry. The execution of FORTRAN programs on the Intel iPSC/860 can be automatically instrumented and monitored. Performance data collected in this manner can be displayed graphically on workstations supporting X-Windows. We have successfully compared various parallel algorithms for computational fluid dynamics (CFD) applications in collaboration with scientists from the Numerical Aerodynamic Simulation Systems Division. By performing these comparisons, we show that performance monitors and debuggers such as AIMS are practical and can illuminate the complex dynamics that occur within parallel programs.
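    As a schematic stand-in for the kind of source-level instrumentation a monitor such as AIMS inserts (this is not AIMS itself, and the event format is invented for illustration), the sketch below timestamps routine entry and exit into an event trace that a visualization tool could later replay.

    ```python
    import functools
    import time

    trace = []   # (event, routine, timestamp) records; a toy stand-in for a monitor's event log

    def instrument(fn):
        """Wrap a routine so that its entry and exit are timestamped into the trace."""
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            trace.append(("enter", fn.__name__, time.perf_counter()))
            try:
                return fn(*args, **kwargs)
            finally:
                trace.append(("exit", fn.__name__, time.perf_counter()))
        return wrapper

    @instrument
    def relax(grid):
        return [(a + b) / 2.0 for a, b in zip(grid, grid[1:])]   # dummy work standing in for a solver sweep

    relax(list(range(10000)))
    for event, name, t in trace:
        print(f"{t:.6f}  {event:<5}  {name}")
    ```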

  1. Efficient partitioning and assignment of programs for multiprocessor execution

    NASA Technical Reports Server (NTRS)

    Standley, Hilda M.

    1993-01-01

    The general problem studied is that of segmenting or partitioning programs for distribution across a multiprocessor system. Efficient partitioning and assignment of program elements are of great importance, since the time consumed in this overhead activity may easily dominate the computation, effectively eliminating any gains made by the use of parallelism. In this study, the partitioning of sequentially structured programs (written in FORTRAN) is evaluated. Heuristics developed for similar applications are examined. Finally, a model of queueing networks with finite queues is developed, which may be used to analyze shared-memory multiprocessor architectures in connection with the partitioning problem. The properties of sequentially written programs form obstacles to large-scale (at the procedure or subroutine level) parallelization. Data dependencies of even the minutest nature, reflecting the sequential development of the program, severely limit parallelism. The design of heuristic algorithms is tied to the experience gained in the parallel splitting. Parallelism obtained through the physical separation of data has seen some success, especially at the data element level. Data parallelism on a grander scale requires models that accurately reflect the effects of blocking caused by finite queues. A model for the approximation of the performance of finite queueing networks is developed. This model makes use of the decomposition approach combined with the efficiency of product-form solutions.
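    The abstract does not state a particular algorithm, so the sketch below is a generic greedy heuristic of the kind studied for partitioning and assignment: program segments, with estimated compute costs and pairwise communication costs (all hypothetical), are placed heaviest-first on the processor that minimizes the segment's load plus the communication it would incur with segments already placed elsewhere.

    ```python
    def greedy_assign(compute, comm, p):
        """compute: {segment: cost}; comm: {(a, b): cost paid only if a and b land on different processors}."""
        load = [0.0] * p
        placement = {}
        for seg in sorted(compute, key=compute.get, reverse=True):   # heaviest segments first
            def penalty(proc):
                cut = sum(c for (a, b), c in comm.items()
                          if (a == seg and b in placement and placement[b] != proc)
                          or (b == seg and a in placement and placement[a] != proc))
                return load[proc] + compute[seg] + cut
            best = min(range(p), key=penalty)
            placement[seg] = best
            load[best] += compute[seg]
        return placement, load

    compute = {"A": 8.0, "B": 6.0, "C": 5.0, "D": 3.0}               # hypothetical segment costs
    comm = {("A", "B"): 4.0, ("B", "C"): 1.0, ("C", "D"): 2.0}       # hypothetical communication costs
    print(greedy_assign(compute, comm, p=2))
    ```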

  2. Design of a high-efficiency train headlamp with low power consumption using dual half-parabolic aluminized reflectors.

    PubMed

    Liang, Wei-Lun; Su, Guo-Dung J

    2018-02-20

    We propose a train headlamp system using dual half-circular parabolic aluminized reflectors. Each half-circular reflector contains five high-efficiency and small-package light-emitting diode (LED) chips, and the halves are 180° rotationally symmetric. For traffic safety, the headlamp satisfies the Code of Federal Regulations. To predict the pattern of illumination, an analytical derivation is developed for the optical path of a ray that is perpendicular to and emitted from the center of an LED chip. This ray represents the main ray emitted from the LED chip and is located at the maximum illuminance of the spot projected by the LED source onto a screen. We then analyze the design systematically to determine the locations of the LED chips in the reflector that minimize electricity consumption while satisfying reliability constraints associated with traffic safety. Compared to a typical train headlamp system with an incandescent or halogen lamp needing several hundred watts, the proposed system only uses 20.18 W to achieve the luminous intensity requirements.
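    The paper's derivation for the dual half-parabolic geometry is not reproduced in the abstract; the sketch below only verifies the basic property that makes a parabolic aluminized reflector useful in a headlamp: a ray leaving a source placed at the focus reflects parallel to the optical axis. The focal length and strike point are arbitrary illustrative numbers, not the paper's design values.

    ```python
    import numpy as np

    def reflect(d, n):
        """Specular reflection of direction d about the (normalized) surface normal n."""
        n = n / np.linalg.norm(n)
        return d - 2.0 * np.dot(d, n) * n

    f = 25.0                                   # focal length of the parabola y = x^2 / (4 f), in mm (illustrative)
    x0 = 40.0                                  # lateral coordinate of the strike point on the reflector
    P = np.array([x0, x0**2 / (4.0 * f)])      # strike point on the parabola
    F = np.array([0.0, f])                     # focus, where the LED chip centre is assumed to sit
    incident = P - F                           # main ray from the chip centre to the reflector
    normal = np.array([-x0 / (2.0 * f), 1.0])  # surface normal of y = x^2 / (4 f) at x0
    out = reflect(incident, normal)
    print(out / np.linalg.norm(out))           # ~[0, 1]: the reflected ray leaves parallel to the optical axis
    ```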

  3. Portable and Reliable Surface-Enhanced Raman Scattering Silicon Chip for Signal-On Detection of Trace Trinitrotoluene Explosive in Real Systems.

    PubMed

    Chen, Na; Ding, Pan; Shi, Yu; Jin, Tengyu; Su, Yuanyuan; Wang, Houyu; He, Yao

    2017-05-02

    There is an increasing interest in the development of surface-enhanced Raman scattering (SERS) sensors for rapid and accurate on-site detection of hidden explosives. However, portable SERS methods for trace explosive detection in real systems remain scarce, mainly due to their relatively poor reliability and portability. Herein, we present the first demonstration of a portable silicon-based SERS analytical platform for signal-on detection of trace trinitrotoluene (TNT) explosives, which is made of a silver nanoparticle (AgNP)-decorated silicon wafer chip (0.5 cm × 0.5 cm). In principle, under 514 nm excitation, the Raman signals of p-aminobenzenethiol (PABT) modified on the AgNP surface can be largely lit up due to the formation of the electronic resonance-active TNT-PABT complex. In addition, the plasmon resonances induced by the surface of the AgNPs and the silicon substrate also contribute to the total SERS enhancement. For quantitative evaluation, the as-prepared chip features ultrahigh sensitivity [limit of detection down to ∼1 pM (∼45.4 fg/cm²)] and adaptable reproducibility (relative standard deviation less than 15%) in the detection of TNT standard solutions. More importantly, the developed chip couples well with a hand-held Raman spectroscopic device using 785 nm excitation, suitable for qualitative analysis of trace TNT even at the ∼10⁻⁸ M level from environmental samples including lake water, soil, envelopes, and liquor with a short data acquisition time (∼1 min). Furthermore, TNT vapors diffusing from TNT residues (∼10⁻⁶ M) can be detected by using such a portable device, indicating its feasibility in the determination of hidden samples.

  4. Effects of PCB Pad Metal Finishes on the Cu-Pillar/Sn-Ag Micro Bump Joint Reliability of Chip-on-Board (COB) Assembly

    NASA Astrophysics Data System (ADS)

    Kim, Youngsoon; Lee, Seyong; Shin, Ji-won; Paik, Kyung-Wook

    2016-06-01

    While solder bumps have been used as the bump structure to form the interconnection during the last few decades, the continuing scaling down of devices has led to a change in the bump structure to Cu-pillar/Sn-Ag micro-bumps. Cu-pillar/Sn-Ag micro-bump interconnections differ from conventional solder bump interconnections in terms of their assembly processing and reliability. A thermo-compression bonding method with pre-applied b-stage non-conductive films has been adopted to form solder joints between Cu-pillar/Sn-Ag micro bumps and printed circuit board test vehicles, using various pad metal finishes. As a result, various interfacial intermetallic compound (IMC) reactions and stress concentrations occur at the Cu-pillar/Sn-Ag micro-bump joints. Therefore, it is necessary to investigate the influence of pad metal finishes on the structural reliability of fine-pitch Cu-pillar/Sn-Ag micro-bump flip chip packaging. In this study, four different pad surface finishes (thin Ni ENEPIG, OSP, ENEPIG, ENIG) were evaluated in terms of their interconnection reliability by a thermal cycle (T/C) test up to 2000 cycles at temperatures ranging from -55°C to 125°C and a high-temperature storage test up to 1000 h at 150°C. The contact resistances of the Cu-pillar/Sn-Ag micro bumps showed significant differences after the T/C reliability test in the following order: thin Ni ENEPIG > OSP > ENEPIG, where the thin Ni ENEPIG pad metal finish provided the best Cu-pillar/Sn-Ag micro-bump interconnection in terms of bump joint reliability. The various IMCs formed in the bump joint areas can account for the main failure mechanism.

  5. A General Reliability Model for Ni-BaTiO3-Based Multilayer Ceramic Capacitors

    NASA Technical Reports Server (NTRS)

    Liu, Donhang

    2014-01-01

    The evaluation of multilayer ceramic capacitors (MLCCs) with Ni electrode and BaTiO3 dielectric material for potential space project applications requires an in-depth understanding of their reliability. A general reliability model for Ni-BaTiO3 MLCCs is developed and discussed. The model consists of three parts: a statistical distribution; an acceleration function that describes how a capacitor's reliability life responds to external stresses; and an empirical function that defines the contribution of the structural and constructional characteristics of a multilayer capacitor device, such as the number of dielectric layers N, dielectric thickness d, average grain size, and capacitor chip size A. Application examples are also discussed based on the proposed reliability model for Ni-BaTiO3 MLCCs.

  6. A General Reliability Model for Ni-BaTiO3-Based Multilayer Ceramic Capacitors

    NASA Technical Reports Server (NTRS)

    Liu, Donhang

    2014-01-01

    The evaluation of multilayer ceramic capacitors (MLCCs) with Ni electrode and BaTiO3 dielectric material for potential space project applications requires an in-depth understanding of the MLCCs' reliability. A general reliability model for Ni-BaTiO3 MLCCs is developed and discussed in this paper. The model consists of three parts: a statistical distribution; an acceleration function that describes how a capacitor's reliability life responds to external stresses; and an empirical function that defines the contribution of the structural and constructional characteristics of a multilayer capacitor device, such as the number of dielectric layers N, dielectric thickness d, average grain size r, and capacitor chip size A. Application examples are also discussed based on the proposed reliability model for Ni-BaTiO3 MLCCs.
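    The paper's exact functional forms are not given in the abstract, so the sketch below only mirrors the three-part structure it describes, with illustrative stand-ins: a Weibull life distribution for the statistical part, a Prokopowicz-Vaskas-style voltage/temperature term for the acceleration function, and a purely hypothetical structural factor built from the layer count N, dielectric thickness d, grain size r, and chip size A. Every exponent and parameter value here is a placeholder, not the model derived in the paper.

    ```python
    import numpy as np

    K_BOLTZMANN_EV = 8.617e-5   # Boltzmann constant, eV/K

    def acceleration_factor(V_use, T_use, V_test, T_test, n=3.0, Ea=1.3):
        """Voltage power law combined with an Arrhenius temperature term (illustrative exponents;
        temperatures in kelvin)."""
        return (V_test / V_use) ** n * np.exp((Ea / K_BOLTZMANN_EV) * (1.0 / T_use - 1.0 / T_test))

    def structural_factor(N, d_um, r_um, A_mm2, N0=100, d0=5.0, r0=0.3, A0=4.0):
        """Hypothetical construction term: life improves with more grains per dielectric layer (d/r)
        and degrades with total active area (N * A); both exponents are placeholders."""
        return ((d_um / r_um) / (d0 / r0)) ** 2 / ((N * A_mm2) / (N0 * A0))

    def characteristic_life(t_test, V_use, T_use, V_test, T_test, N, d_um, r_um, A_mm2):
        """Scale a characteristic life measured under accelerated test back to use conditions."""
        return t_test * acceleration_factor(V_use, T_use, V_test, T_test) \
                      * structural_factor(N, d_um, r_um, A_mm2)

    def weibull_reliability(t, eta, beta=2.0):
        """Statistical part: Weibull survival function with characteristic life eta and shape beta."""
        return np.exp(-(t / eta) ** beta)

    eta_use = characteristic_life(t_test=500.0, V_use=50.0, T_use=358.0, V_test=200.0, T_test=398.0,
                                  N=300, d_um=3.0, r_um=0.3, A_mm2=3.2)
    print("R(10 years) ~", weibull_reliability(t=10 * 8760.0, eta=eta_use))
    ```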

  7. Simulation of cooling efficiency via miniaturised channels in multilayer LTCC for power electronics

    NASA Astrophysics Data System (ADS)

    Pietrikova, Alena; Girasek, Tomas; Lukacs, Peter; Welker, Tilo; Müller, Jens

    2017-03-01

    The aim of this paper is a detailed investigation, by simulation software, of the thermal resistance, coolant flow and distribution, and thermal distribution inside multilayer LTCC substrates with embedded channels for power electronic devices. For this purpose, four different internal channel structures in the multilayer LTCC substrates were designed and simulated. The impact of the volume flow, the channel structures, and the power loss of the chip was simulated, calculated, and analyzed using the simulation software Mentor Graphics FloEFD. The structure, size, and location of the channels have a significant impact on the thermal resistance, the coolant pressure, and the effectiveness of cooling the power components (chips) placed on top of the LTCC substrate. The main contribution of this paper is the thermal analysis, optimization, and comparison of four different cooling channels embedded in the LTCC multilayer structure. The paper investigates the effect of the volume flow in the cooling channels on achieving the lowest thermal resistance of an LTCC substrate loaded with power chips, and it also shows the impact of the first chip's thermal load on the second chip. If realized in practice, this new technology could ensure effective cooling and increased reliability of high-power modules.

  8. Flip-chip fabrication of integrated micromirror arrays using a novel latching off-chip hinge mechanism

    NASA Astrophysics Data System (ADS)

    Michalicek, M. Adrian; Bright, Victor M.

    2001-10-01

    This paper presents the design, fabrication, modeling, and testing of various arrays of cantilever micromirror devices integrated atop CMOS control electronics. The upper layers of the arrays are prefabricated in the MUMPs process and then flip-chip transferred to CMOS receiving modules using a novel latching off-chip hinge mechanism. This mechanism allows the micromirror arrays to be released, rotated off the edge of the host module, and then bonded to the receiving module using a standard probe station. The hinge mechanism supports the arrays by tethers that are severed to free the arrays once bonded. The resulting devices are inherently planarized since the bottom of the first releasable MUMPs layer becomes the surface of the integrated mirror. The working devices are formed by mirror surfaces bonded to address electrodes fabricated above static memory cells on the CMOS module. These arrays demonstrate highly desirable features such as compatible address potentials, less than 2 nm of RMS roughness, approximately 1 micrometer of lateral position accuracy, and the unique ability to metallize reflective surfaces without masking. Ultimately, the off-chip hinge mechanism enables very low-cost, simple, reliable, repeatable, and accurate assembly of advanced MEMS and integrated microsystems without specialized equipment or complex procedures.

  9. The Extended Core Coax: A novel nanoarchitecture for lab-on-a-chip electrochemical diagnostics

    NASA Astrophysics Data System (ADS)

    Valera, Amy E.; D'Imperio, Luke; Burns, Michael J.; Naughton, Michael J.; Chiles, Thomas C.

    We report a novel nanoarchitecture, the Extended Core Coax (ECC), that has applicability for the detection of biomarkers in lab-on-a-chip diagnostic devices. The ECC is capable of providing accessible, highly sensitive, and specific disease diagnosis at the point of care. The architecture is a vertically oriented nanocoax comprising a gold inner metal core that extends 200 nm above a chrome outer metal shield, separated by a dielectric annulus. Each ECC chip contains 7 discrete sensing arrays, 0.49 mm² in size, containing 35,000 nanoscale coaxes wired in parallel. Previous non-extended nanocoaxial architectures have demonstrated a limit of detection (LOD) of 2 ng/mL of cholera toxin using an off-chip setup. This sensitivity compares favorably to the standard optical ELISA used in clinical settings. The ECC matches this LOD and additionally offers the benefit of specific and reliable biofunctionalization on the extended gold core. Thus, the ECC is an attractive candidate for development as a full lab-on-a-chip biosensor for the detection of infectious disease biomarkers, such as cholera toxin, through tethering of biomarker recognition proteins, such as antibodies, directly on the device. This work was supported by the National Institutes of Health (National Cancer Institute award No. CA137681 and National Institute of Allergy and Infectious Diseases award No. AI100216).

  10. Ultra-dense magnetoresistive mass memory

    NASA Technical Reports Server (NTRS)

    Daughton, J. M.; Sinclair, R.; Dupuis, T.; Brown, J.

    1992-01-01

    This report details the progress and accomplishments of Nonvolatile Electronics (NVE), Inc., on the design of the wafer-scale MRAM mass memory system during the fifth quarter of the project. NVE has made significant progress this quarter on the one-megabit design in several different areas. A test chip, which will verify a working GMR bit with the dimensions required by the 1 Meg chip, has been designed, laid out, and is currently being processed in the NVE labs. This test chip will allow electrical specifications, tolerances, and processing issues to be finalized before construction of the actual chip, thus providing a greater assurance of success of the final 1 Meg design. A model has been developed to accurately simulate the parasitic effects of unselected sense lines. This model gives NVE the ability to perform accurate simulations of the array electronics and to test different design concepts. Much of the circuit design for the 1 Meg chip has been completed and simulated, and these designs are included. Progress has been made in the wafer-scale design area to verify the reliable operation of the 16 K macrocell. This is currently being accomplished with the design and construction of two stand-alone test systems which will perform life tests and gather data on reliability and wearout mechanisms for analysis.

  11. Reusable and Mediator-Free Cholesterol Biosensor Based on Cholesterol Oxidase Immobilized onto TGA-SAM Modified Smart Bio-Chips

    PubMed Central

    Rahman, Mohammed M.

    2014-01-01

    A reusable and mediator-free cholesterol biosensor based on cholesterol oxidase (ChOx) was fabricated on a self-assembled monolayer (SAM) of thioglycolic acid (TGA) (covalent enzyme immobilization by the dropping method) using bio-chips. Cholesterol was detected with the modified bio-chip (Gold/Thioglycolic-acid/Cholesterol-oxidase, i.e., Au/TGA/ChOx) by the reliable cyclic voltammetric (CV) technique at room conditions. The Au/TGA/ChOx modified bio-chip sensor demonstrates good linearity (1.0 nM to 1.0 mM; R = 0.9935), a low detection limit (∼0.42 nM, SNR ∼3), high sensitivity (∼74.3 µA µM⁻¹ cm⁻²), a small required sample volume (50.0 µL), good stability, and reproducibility. To the best of our knowledge, this is the first report of a cholesterol biosensor based on the Au/TGA/ChOx chip assembly that combines very high sensitivity, a low detection limit, and low required sample volumes. This facile approach was investigated for biomedical application to real samples at room conditions, and the Au/TGA/ChOx assembly points toward selective cholesterol biosensors that can offer analytical access to a large group of enzymes for a wide range of biomedical applications in health-care fields. PMID:24949733

  12. On-chip supercapacitors with ultrahigh volumetric performance based on electrochemically co-deposited CuO/polypyrrole nanosheet arrays

    NASA Astrophysics Data System (ADS)

    Qian, Tao; Zhou, Jinqiu; Xu, Na; Yang, Tingzhou; Shen, Xiaowei; Liu, Xuejun; Wu, Shishan; Yan, Chenglin

    2015-10-01

    We introduce a new method for fabricating unique on-chip supercapacitors based on CuO/polypyrrole core/shell nanosheet arrays by means of direct electrochemical co-deposition on interdigital-like electrodes. The prepared all-solid-state device demonstrates an exceptionally high specific capacitance of 1275.5 F cm⁻³ (∼40 times larger than that of CuO-only supercapacitors) and a high energy density of 28.35 mWh cm⁻³, both of which are significantly greater than those of other solid-state supercapacitors. More importantly, the device maintains approximately 100% capacity retention at 2.5 A cm⁻³ after 3000 cycles. The in situ co-deposition of CuO/polypyrrole nanosheets on the interdigital substrate enables effective charge transport, electrode fabrication integrity, and device integration. Because of their high energy density, power density, and cycling stability, these newly developed on-chip supercapacitors permit fast, reliable applications in portable and miniaturized electronic devices.

  13. Microcontroller-based real-time QRS detection.

    PubMed

    Sun, Y; Suppappola, S; Wrublewski, T A

    1992-01-01

    The authors describe the design of a system for real-time detection of QRS complexes in the electrocardiogram based on a single-chip microcontroller (Motorola 68HC811). A systematic analysis of the instrumentation requirements for QRS detection and of the various design techniques is also given. Detection algorithms using different nonlinear transforms for the enhancement of QRS complexes are evaluated by using the ECG database of the American Heart Association. The results show that the nonlinear transform involving multiplication of three adjacent, sign-consistent differences in the time domain gives a good performance and a quick response. When implemented with an appropriate sampling rate, this algorithm is also capable of rejecting pacemaker spikes. The eight-bit single-chip microcontroller provides sufficient throughput and shows a satisfactory performance. Implementation of multiple detection algorithms in the same system improves flexibility and reliability. The low chip count in the design also favors maintainability and cost-effectiveness.
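    As an illustration of the nonlinear transform the abstract credits with the best performance (the product of three adjacent, sign-consistent first differences), the sketch below implements one plausible reading of that rule plus a simple threshold with a refractory period. The sampling rate, threshold rule, and refractory value are placeholders, and all microcontroller-specific details are omitted.

    ```python
    import numpy as np

    def enhance_qrs(ecg):
        """Product of three adjacent first differences, kept only when all three share a sign."""
        d = np.diff(ecg)
        out = np.zeros(len(ecg))
        for i in range(2, len(d)):
            a, b, c = d[i - 2], d[i - 1], d[i]
            if (a > 0 and b > 0 and c > 0) or (a < 0 and b < 0 and c < 0):
                out[i + 1] = abs(a * b * c)
        return out

    def detect_beats(ecg, fs=250.0, refractory_s=0.25):
        """Threshold the enhanced signal; ignore crossings inside the refractory window."""
        y = enhance_qrs(ecg)
        threshold = 8.0 * np.mean(y)            # crude adaptive threshold (placeholder)
        refractory = int(refractory_s * fs)
        beats, last = [], -refractory
        for i, v in enumerate(y):
            if v > threshold and i - last >= refractory:
                beats.append(i)
                last = i
        return beats

    # Synthetic test: 10 s of low-level noise with a crude QRS-like bump once per second.
    fs = 250
    rng = np.random.default_rng(2)
    ecg = 0.02 * rng.normal(size=10 * fs)
    pulse = np.array([0.2, 0.5, 0.9, 1.4, 0.9, 0.5, 0.2])
    for k in range(0, ecg.size - pulse.size, fs):
        ecg[k:k + pulse.size] += pulse
    print(detect_beats(ecg, fs=fs))             # roughly one detection per simulated beat
    ```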

  14. Wide-field high-speed space-division multiplexing optical coherence tomography using an integrated photonic device

    PubMed Central

    Huang, Yongyang; Badar, Mudabbir; Nitkowski, Arthur; Weinroth, Aaron; Tansu, Nelson; Zhou, Chao

    2017-01-01

    Space-division multiplexing optical coherence tomography (SDM-OCT) is a recently developed parallel OCT imaging method that achieves a multi-fold improvement in imaging speed. However, the assembly of the fiber-optic components used in the first prototype system was labor-intensive and susceptible to errors. Here, we demonstrate a high-speed SDM-OCT system using an integrated photonic chip that can be reliably manufactured with high precision and low per-unit cost. A three-layer cascade of 1 × 2 splitters was integrated in the photonic chip to split the incident light into 8 parallel imaging channels with ~3.7 mm optical delay in air between each channel. High-speed imaging (~1 s/volume) of porcine eyes ex vivo and wide-field imaging (~18.0 × 14.3 mm²) of human fingers in vivo were demonstrated with the chip-based SDM-OCT system. PMID:28856055

  15. Evaluation of a Programmable Voltage-Controlled MEMS Oscillator, Type SiT3701, Over a Wide Temperature Range

    NASA Technical Reports Server (NTRS)

    Patterson, Richard; Hammoud, Ahmad

    2009-01-01

    Semiconductor chips based on MEMS (Micro-Electro-Mechanical Systems) technology, such as sensors, transducers, and actuators, are becoming widely used in today's electronics due to their high performance, low power consumption, tolerance to shock and vibration, and immunity to electrostatic discharge. In addition, the MEMS fabrication process allows for the miniaturization of individual chips as well as the integration of various electronic circuits into one module, such as a system-on-a-chip. These measures would simplify overall system design, reduce parts count and interfaces, improve reliability, and reduce cost; and they would meet the requirements of systems destined for use in space exploration missions. In this work, the performance of a recently developed MEMS voltage-controlled oscillator was evaluated over a wide temperature range. Operation of this new commercial-off-the-shelf (COTS) device was also assessed under thermal cycling to address some operational conditions of the space environment.

  16. Validation of a Brief Structured Interview: The Children's Interview for Psychiatric Syndromes (ChIPS).

    PubMed

    Young, Matthew E; Bell, Ziv E; Fristad, Mary A

    2016-12-01

    Evidence-based assessment is important in the treatment of childhood psychopathology. While researchers and clinicians frequently use structured diagnostic interviews to ensure reliability, the most commonly used instrument, the Schedule for Affective Disorders and Schizophrenia for School Aged Children (K-SADS), is too long for most clinical applications. The Children's Interview for Psychiatric Syndromes (ChIPS/P-ChIPS) is a highly structured brief diagnostic interview. The present study compared K-SADS and ChIPS/P-ChIPS diagnoses in an outpatient clinical sample of 50 parent-child pairs aged 7-14. For most diagnoses, agreement was moderate to high both between the instruments and with consensus clinical diagnoses. The ChIPS was significantly briefer to administer than the K-SADS. Interviewer experience level and participant demographics did not appear to affect agreement. Results provide further evidence for the validity of the ChIPS and support its use in clinical and research settings.

  17. On-chip supercapacitors with ultrahigh volumetric performance based on electrochemically co-deposited CuO/polypyrrole nanosheet arrays.

    PubMed

    Qian, Tao; Zhou, Jinqiu; Xu, Na; Yang, Tingzhou; Shen, Xiaowei; Liu, Xuejun; Wu, Shishan; Yan, Chenglin

    2015-10-23

    We introduce a new method for fabricating unique on-chip supercapacitors based on CuO/polypyrrole core/shell nanosheet arrays by means of direct electrochemical co-deposition on interdigital-like electrodes. The prepared all-solid-state device demonstrates an exceptionally high specific capacitance of 1275.5 F cm(-3) (∼40 times larger than that of CuO-only supercapacitors) and a high energy density of 28.35 mWh cm(-3), both of which are significantly greater than those of other solid-state supercapacitors. More importantly, the device maintains approximately 100% capacity retention at 2.5 A cm(-3) after 3000 cycles. The in situ co-deposition of CuO/polypyrrole nanosheets on the interdigital substrate enables effective charge transport, electrode fabrication integrity, and device integration. Because of their high energy density, power density, and cycling stability, these newly developed on-chip supercapacitors permit fast, reliable applications in portable and miniaturized electronic devices.

  18. Fault Tolerant Characteristics of Artificial Neural Network Electronic Hardware

    NASA Technical Reports Server (NTRS)

    Zee, Frank

    1995-01-01

    The fault tolerant characteristics of analog-VLSI artificial neural network (with 32 neurons and 532 synapses) chips are studied by exposing them to high energy electrons, high energy protons, and gamma ionizing radiations under biased and unbiased conditions. The biased chips became nonfunctional after receiving a cumulative dose of less than 20 krads, while the unbiased chips only started to show degradation with a cumulative dose of over 100 krads. As the total radiation dose increased, all the components demonstrated graceful degradation. The analog sigmoidal function of the neuron became steeper (increase in gain), current leakage from the synapses progressively shifted the sigmoidal curve, and the digital memory of the synapses and the memory addressing circuits began to gradually fail. From these radiation experiments, we can learn how to modify certain designs of the neural network electronic hardware without using radiation-hardening techniques to increase its reliability and fault tolerance.

  19. Tracking Silent Hypersensitivity Reactions to Asparaginase during Leukemia Therapy Using Single-Chip Indirect Plasmonic and Fluorescence Immunosensing.

    PubMed

    Charbonneau, David M; Breault-Turcot, Julien; Sinnett, Daniel; Krajinovic, Maja; Leclerc, Jean-Marie; Masson, Jean-François; Pelletier, Joelle N

    2017-12-22

    Microbial asparaginase is an essential component of chemotherapy for the treatment of childhood acute lymphoblastic leukemia (cALL). Silent hypersensitivity reactions to this microbial enzyme need to be monitored accurately during treatment to avoid adverse effects of the drug and its silent inactivation. Here, we present a dual-response anti-asparaginase sensor that combines indirect SPR and fluorescence on a single chip to perform ELISA-type immunosensing, and we correlate the measurements with classical ELISA. Analysis of serum samples from children undergoing cALL therapy revealed a clear correlation between single-chip indirect SPR/fluorescence immunosensing and the ELISA used in clinical settings (R² > 0.9). We also report that the portable SPR/fluorescence system had a better sensitivity than classical ELISA to detect antibodies in clinical samples with low antigenicity. This work demonstrates the reliability of dual sensing for monitoring clinically relevant antibody titers in clinical serum samples.

  20. Ultra-compact 32 × 32 strictly-non-blocking Si-wire optical switch with fan-out LGA interposer.

    PubMed

    Tanizawa, Ken; Suzuki, Keijiro; Toyama, Munehiro; Ohtsuka, Minoru; Yokoyama, Nobuyuki; Matsumaro, Kazuyuki; Seki, Miyoshi; Koshino, Keiji; Sugaya, Toshio; Suda, Satoshi; Cong, Guangwei; Kimura, Toshio; Ikeda, Kazuhiro; Namiki, Shu; Kawashima, Hitoshi

    2015-06-29

    We demonstrate a 32 × 32 path-independent-insertion-loss optical path switch that integrates 1024 thermooptic Mach-Zehnder switches and 961 intersections on a small, 11 × 25 mm² die. The switch is fabricated on a 300-mm-diameter silicon-on-insulator wafer by a complementary metal-oxide semiconductor-compatible process with advanced ArF immersion lithography. For reliable electrical packaging, the switch chip is flip-chip bonded to a ceramic interposer that arranges the electrodes in a 0.5-mm pitch land grid array. The on-chip loss is measured to be 15.8 ± 1.0 dB, and successful switching is demonstrated for digital-coherent 43-Gb/s QPSK signals. The total crosstalk of the switch is estimated to be less than -20 dB at the center wavelength of 1545 nm. The bandwidth narrowing caused by dimensional errors that arise during fabrication is discussed.

  1. CSP Manufacturing Challenges and Assembly Reliability

    NASA Technical Reports Server (NTRS)

    Ghaffarian, Reza

    2000-01-01

    Although the term CSP is widely used by industry, from suppliers to users, its implied definition has evolved as the technology has matured. There is an "expert definition" (a package that is up to 1.5 times the die size) as well as an "interim definition". CSPs are miniature new packages that industry is starting to implement, and there are many unresolved technical issues associated with their implementation. For example, in early 1997, packages with 1 mm pitch and lower were the dominant CSPs, whereas in early 1998 packages with 0.8 mm pitch and lower became the norm for CSPs. Other changes included the use of flip chip die rather than wire bond in CSPs. Nonetheless, the emerging CSPs are competing with bare die assemblies and are becoming the package of choice for size-reduction applications. These packages provide the benefits of the small size and performance of bare die or flip chip, with the advantages of standard die packages. The JPL-led MicrotypeBGA Consortium of enterprises, representing government agencies and private companies, has joined together to pool in-kind resources for developing the quality and reliability of chip scale packages (CSPs) for a variety of projects. This talk will specifically cover the experience of our consortium with technology implementation challenges, including the design and build of both standard and microvia boards, the assembly of two types of test vehicles, and the most current environmental thermal cycling test results.

  2. Flexible Chip Scale Package and Interconnect for Implantable MEMS Movable Microelectrodes for the Brain

    PubMed Central

    Jackson, Nathan; Muthuswamy, Jit

    2009-01-01

    We report here a novel approach called MEMS microflex interconnect (MMFI) technology for packaging a new generation of Bio-MEMS devices that involve movable microelectrodes implanted in brain tissue. MMFI addresses the need for (i) operating space for movable parts and (ii) flexible interconnects for mechanical isolation. We fabricated a thin polyimide substrate with embedded bond-pads, vias, and conducting traces for the interconnect with a backside dry etch, so that the flexible substrate can act as a thin-film cap for the MEMS package. A double gold stud bump rivet bonding mechanism was used to form electrical connections to the chip and also to provide a spacing of approximately 15–20 µm for the movable parts. The MMFI approach achieved a chip scale package (CSP) that is lightweight, biocompatible, having flexible interconnects, without an underfill. Reliability tests demonstrated minimal increases of 0.35 mΩ, 0.23 mΩ and 0.15 mΩ in mean contact resistances under high humidity, thermal cycling, and thermal shock conditions respectively. High temperature tests resulted in an increase in resistance of > 90 mΩ when aluminum bond pads were used, but an increase of ~ 4.2 mΩ with gold bond pads. The mean-time-to-failure (MTTF) was estimated to be at least one year under physiological conditions. We conclude that MMFI technology is a feasible and reliable approach for packaging and interconnecting Bio-MEMS devices. PMID:20160981

  3. Standard semiconductor packaging for high-reliability low-cost MEMS applications

    NASA Astrophysics Data System (ADS)

    Harney, Kieran P.

    2005-01-01

    Microelectronic packaging technology has evolved over the years in response to the needs of IC technology. The fundamental purpose of the package is to provide protection for the silicon chip and to provide electrical connection to the circuit board. Major change has been witnessed in packaging and today wafer level packaging technology has further revolutionized the industry. MEMS (Micro Electro Mechanical Systems) technology has created new challenges for packaging that do not exist in standard ICs. However, the fundamental objective of MEMS packaging is the same as traditional ICs, the low cost and reliable presentation of the MEMS chip to the next level interconnect. Inertial MEMS is one of the best examples of the successful commercialization of MEMS technology. The adoption of MEMS accelerometers for automotive airbag applications has created a high volume market that demands the highest reliability at low cost. The suppliers to these markets have responded by exploiting standard semiconductor packaging infrastructures. However, there are special packaging needs for MEMS that cannot be ignored. New applications for inertial MEMS devices are emerging in the consumer space that adds the imperative of small size to the need for reliability and low cost. These trends are not unique to MEMS accelerometers. For any MEMS technology to be successful the packaging must provide the basic reliability and interconnection functions, adding the least possible cost to the product. This paper will discuss the evolution of MEMS packaging in the accelerometer industry and identify the main issues that needed to be addressed to enable the successful commercialization of the technology in the automotive and consumer markets.

  4. Standard semiconductor packaging for high-reliability low-cost MEMS applications

    NASA Astrophysics Data System (ADS)

    Harney, Kieran P.

    2004-12-01

    Microelectronic packaging technology has evolved over the years in response to the needs of IC technology. The fundamental purpose of the package is to provide protection for the silicon chip and to provide electrical connection to the circuit board. Major change has been witnessed in packaging and today wafer level packaging technology has further revolutionized the industry. MEMS (Micro Electro Mechanical Systems) technology has created new challenges for packaging that do not exist in standard ICs. However, the fundamental objective of MEMS packaging is the same as traditional ICs, the low cost and reliable presentation of the MEMS chip to the next level interconnect. Inertial MEMS is one of the best examples of the successful commercialization of MEMS technology. The adoption of MEMS accelerometers for automotive airbag applications has created a high volume market that demands the highest reliability at low cost. The suppliers to these markets have responded by exploiting standard semiconductor packaging infrastructures. However, there are special packaging needs for MEMS that cannot be ignored. New applications for inertial MEMS devices are emerging in the consumer space that adds the imperative of small size to the need for reliability and low cost. These trends are not unique to MEMS accelerometers. For any MEMS technology to be successful the packaging must provide the basic reliability and interconnection functions, adding the least possible cost to the product. This paper will discuss the evolution of MEMS packaging in the accelerometer industry and identify the main issues that needed to be addressed to enable the successful commercialization of the technology in the automotive and consumer markets.

  5. Portable programming on parallel/networked computers using the Application Portable Parallel Library (APPL)

    NASA Technical Reports Server (NTRS)

    Quealy, Angela; Cole, Gary L.; Blech, Richard A.

    1993-01-01

    The Application Portable Parallel Library (APPL) is a subroutine-based library of communication primitives that is callable from applications written in FORTRAN or C. APPL provides a consistent programmer interface to a variety of distributed and shared-memory multiprocessor MIMD machines. The objective of APPL is to minimize the effort required to move parallel applications from one machine to another, or to a network of homogeneous machines. APPL encompasses many of the message-passing primitives that are currently available on commercial multiprocessor systems. This paper describes APPL (version 2.3.1) and its usage, reports the status of the APPL project, and indicates possible directions for the future. Several applications using APPL are discussed, as well as performance and overhead results.

  6. High-performance multiprocessor architecture for a 3-D lattice gas model

    NASA Technical Reports Server (NTRS)

    Lee, F.; Flynn, M.; Morf, M.

    1991-01-01

    The lattice gas method has recently emerged as a promising discrete particle simulation method in areas such as fluid dynamics. We present a very high-performance scalable multiprocessor architecture, called ALGE, proposed for the simulation of a realistic 3-D lattice gas model, Henon's 24-bit FCHC isometric model. Each of these VLSI processors is as powerful as a CRAY-2 for this application. ALGE is scalable in the sense that it achieves linear speedup for both fixed and increasing problem sizes with more processors. The core computation of a lattice gas model consists of many repetitions of two alternating phases: particle collision and propagation. Functional decomposition by symmetry group and virtual move are the respective keys to efficient implementation of collision and propagation.
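    The 24-bit FCHC model and the ALGE hardware are well beyond a short sketch, so the code below uses the much simpler 2-D HPP lattice gas only to make the collide/propagate alternation concrete: head-on pairs rotate by 90 degrees in the collision phase, every particle then hops one site in the propagation phase, and particle number is conserved throughout. The lattice size and fill fraction are arbitrary.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    H, W = 64, 64
    # Occupation numbers for the four HPP velocities: index 0 = east, 1 = west, 2 = north, 3 = south.
    n = rng.random((4, H, W)) < 0.2

    def collide(n):
        e, w, no, so = n
        head_on_ew = e & w & ~no & ~so      # exactly two particles colliding head-on along x
        head_on_ns = no & so & ~e & ~w      # exactly two particles colliding head-on along y
        return np.stack([
            (e & ~head_on_ew) | head_on_ns,     # east output
            (w & ~head_on_ew) | head_on_ns,     # west output
            (no & ~head_on_ns) | head_on_ew,    # north output
            (so & ~head_on_ns) | head_on_ew,    # south output
        ])

    def propagate(n):
        e, w, no, so = n
        return np.stack([
            np.roll(e, 1, axis=1),    # east-movers shift one column right
            np.roll(w, -1, axis=1),   # west-movers shift one column left
            np.roll(no, -1, axis=0),  # north-movers shift one row up
            np.roll(so, 1, axis=0),   # south-movers shift one row down (periodic boundaries throughout)
        ])

    mass0 = n.sum()
    for _ in range(100):
        n = propagate(collide(n))
    assert n.sum() == mass0               # both phases conserve particle number
    ```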

  7. Shared Memory Parallelization of an Implicit ADI-type CFD Code

    NASA Technical Reports Server (NTRS)

    Hauser, Th.; Huang, P. G.

    1999-01-01

    A parallelization study designed for ADI-type algorithms is presented using the OpenMP specification for shared-memory multiprocessor programming. Details of optimizations specifically addressed to cache-based computer architectures are described, and performance measurements for the single- and multiprocessor implementations are summarized. The paper demonstrates that optimization of memory access on a cache-based computer architecture controls the performance of the computational algorithm. A hybrid MPI/OpenMP approach is proposed for clusters of shared-memory machines to further enhance the parallel performance. The method is applied to develop a new LES/DNS code, named LESTool. A preliminary DNS calculation of a fully developed channel flow at a friction Reynolds number Re_tau = 180 has shown good agreement with existing data.
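    The paper's OpenMP code is not reproduced in the abstract; as a reminder of the per-line work an ADI half-step performs (and why it parallelizes naturally, since the grid lines are independent of one another), the sketch below is a standard Thomas solve for a single tridiagonal line system. In a shared-memory version each thread would simply own a subset of such lines; that distribution is not shown here.

    ```python
    import numpy as np

    def thomas(a, b, c, d):
        """Solve one tridiagonal system: a = sub-diagonal, b = diagonal, c = super-diagonal,
        d = right-hand side (all length n; a[0] and c[-1] are unused)."""
        n = len(d)
        cp, dp = np.empty(n), np.empty(n)
        cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
        for i in range(1, n):                      # forward elimination
            m = b[i] - a[i] * cp[i - 1]
            cp[i] = c[i] / m
            dp[i] = (d[i] - a[i] * dp[i - 1]) / m
        x = np.empty(n)
        x[-1] = dp[-1]
        for i in range(n - 2, -1, -1):             # back substitution
            x[i] = dp[i] - cp[i] * x[i + 1]
        return x

    # Check against a dense solve on a small diffusion-like line system.
    n = 8
    a = np.full(n, -1.0); b = np.full(n, 2.5); c = np.full(n, -1.0)
    rhs = np.arange(1.0, n + 1.0)
    A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
    assert np.allclose(thomas(a, b, c, rhs), np.linalg.solve(A, rhs))
    ```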

  8. The performance of disk arrays in shared-memory database machines

    NASA Technical Reports Server (NTRS)

    Katz, Randy H.; Hong, Wei

    1993-01-01

    In this paper, we examine how disk arrays and shared-memory multiprocessors lead to an effective method for constructing database machines for general-purpose complex query processing. We show that disk arrays can lead to cost-effective storage systems if they are configured from suitably small form-factor disk drives. We introduce the storage system metric data temperature as a way to evaluate how well a disk configuration can sustain its workload, and we show that disk arrays can sustain the same data temperature as a more expensive mirrored-disk configuration. We use the metric to evaluate the performance of disk arrays in XPRS, an operational shared-memory multiprocessor database system being developed at the University of California, Berkeley.

  9. Dynamic programming on a shared-memory multiprocessor

    NASA Technical Reports Server (NTRS)

    Edmonds, Phil; Chu, Eleanor; George, Alan

    1993-01-01

    Three new algorithms for solving dynamic programming problems on a shared-memory parallel computer are described. All three algorithms attempt to balance the work load while keeping synchronization cost low. In particular, for a multiprocessor having p processors, an analysis of the best algorithm shows that the arithmetic cost is O(n³/6p) and the synchronization cost is O(|log_C n|) if p is much less than n, where C = (2p-1)/(2p+1) and n is the size of the problem. The low synchronization cost is important for machines where synchronization is expensive. Analysis and experiments show that the best algorithm is effective in balancing the work load and producing high efficiency.
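    The abstract quotes the cost terms but not their constants or units, so the sketch below simply evaluates them as relative counts to show how the arithmetic term shrinks with p while the synchronization term grows slowly; treating each synchronization as unit cost is an arbitrary assumption.

    ```python
    import math

    def cost_terms(n, p):
        """Relative cost terms quoted in the abstract (unitless counts, not timings)."""
        arithmetic = n ** 3 / (6 * p)
        C = (2 * p - 1) / (2 * p + 1)
        synchronization = abs(math.log(n) / math.log(C))   # |log_C n|
        return arithmetic, synchronization

    n = 512
    for p in (2, 4, 16, 64):                               # p is assumed much smaller than n
        arith, sync = cost_terms(n, p)
        print(f"p={p:3d}  arithmetic ~ {arith:12,.0f}  synchronization ~ {sync:6.1f}")
    ```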

  10. Ultra-Sensitive Lab-on-a-Chip Detection of Sudan I in Food using Plasmonics-Enhanced Diatomaceous Thin Film.

    PubMed

    Kong, Xianming; Squire, Kenny; Chong, Xinyuan; Wang, Alan X

    2017-09-01

    Sudan I is a carcinogenic compound containing an azo group that has been illegally used as an adulterant in food products to impart a bright red color to foods. In this paper, we develop a facile lab-on-a-chip device for instant, ultra-sensitive detection of Sudan I from real food samples using a plasmonics-enhanced diatomaceous thin film, which can simultaneously perform on-chip separation using thin layer chromatography (TLC) and highly specific sensing using surface-enhanced Raman scattering (SERS) spectroscopy. Diatomite is a naturally created photonic-crystal biosilica with periodic pores and was used both as the stationary phase of the TLC plate and as a photonic crystal to enhance the SERS sensitivity. The on-chip chromatography capability of the TLC plate was verified by isolating Sudan I in a mixture solution containing Rhodamine 6G, while SERS sensing was achieved by spraying gold colloidal nanoparticles onto the sensing spot. Such a plasmonics-enhanced diatomaceous film can effectively detect Sudan I with more than a 10-fold improvement in Raman signal intensity over commercial silica gel TLC plates. We applied this lab-on-a-chip device to real food samples and successfully detected Sudan I in chili sauce and chili oil down to 1 ppm, or 0.5 ng/spot. This on-chip TLC-SERS biosensor based on diatomite biosilica can function as a cost-effective, ultra-sensitive, and reliable technology for screening Sudan I and many other illicit ingredients to enhance food safety.

  11. An easy to assemble microfluidic perfusion device with a magnetic clamp

    PubMed Central

    Tkachenko, Eugene; Gutierrez, Edgar; Ginsberg, Mark H.; Groisman, Alex

    2009-01-01

    We have built and characterized a magnetic clamp for reversible sealing of PDMS microfluidic chips against cover glasses with cell cultures, and a microfluidic chip for experiments on the shear stress response of endothelial cells. The magnetic clamp exerts a reproducible uniform pressure on the microfluidic chip, achieving fast and reliable sealing for liquid pressures up to 40 kPa inside the chip with <10% deformation of the microchannels and minimal variations of the substrate shear stress in perfusion flow. The microfluidic chip has 8 test regions with the substrate shear stress varying by a factor of 2 between adjacent regions, thus covering a 128-fold range from low venous to arterial. The perfusion is driven by differential pressure, which makes it possible to create pulsatile flows mimicking pulsing in the vasculature. The setup is tested by 15-40 hour perfusions over endothelial monolayers with shear stress in the range of 0.07-9 dyn/cm². Excellent cell viability at all shear stresses and alignment of cells along the flow at high shear stresses are repeatedly observed. A scratch wound healing assay under a shear flow is demonstrated, and cell migration velocities are measured. Transfection of cells with a fluorescent protein is performed, and migrating fluorescent cells are imaged at high resolution under shear flow in real time. The magnetic clamp can be closed with minimal mechanical perturbation to cells on the substrate and used with a variety of microfluidic chips for experiments with adherent and non-adherent cells. PMID:19350090

  12. Research of the small satellite data management system

    NASA Astrophysics Data System (ADS)

    Yu, Xiaozhou; Zhou, Fengqi; Zhou, Jun

    2007-11-01

    Small satellites combine light weight, small volume, and low launch cost, and they are a promising approach to realizing future space missions. A detailed study of the data management system has been carried out using a new reconfiguration method based on a System on Programmable Chip (SOPC). In contrast to the common satellite structure, the Central Terminal Unit (CTU), the Remote Terminal Unit (RTU), and the Serial Data Bus (SDB) of the data management system are all integrated in a single chip. Thus the reliability of the satellite is greatly improved. At the same time, the data management system offers powerful performance owing to the processing ability of modern FPGAs.

  13. Validity and Reliability of Perinatal Biomarkers after Storage as Dry Blood Spots on Paper

    PubMed Central

    Mihalopoulos, Nicole L.; Phillips, Terry M.; Slater, Hillarie; Thomson, J. Anne; Varner, Michael W.; Moyer-Mileur, Laurie J.

    2013-01-01

    Objective: To validate the use of chip-based immunoaffinity capillary electrophoresis on dry blood spot samples (DBSS) to measure obesity-related cytokines. Methods: Chip-based immunoaffinity capillary electrophoresis was used to measure adiponectin, leptin, and insulin in serum and DBSS in pregnant women, cord blood, and infant heelstick samples at birth and 6 weeks. Concordance of measurements was determined with Pearson's correlation. Results: We report high concordance between results obtained from serum and DBSS, with the exception of cord blood specimens. Conclusions: Ease of sample collection and storage makes DBSS an optimal method for use in studies involving neonates and young children. PMID:21735507

  14. Scalable parallel communications

    NASA Technical Reports Server (NTRS)

    Maly, K.; Khanna, S.; Overstreet, C. M.; Mukkamala, R.; Zubair, M.; Sekhar, Y. S.; Foudriat, E. C.

    1992-01-01

    Coarse-grain parallelism in networking (that is, the use of multiple protocol processors running replicated software sending over several physical channels) can be used to provide gigabit communications for a single application. Since parallel network performance is highly dependent on real issues such as hardware properties (e.g., memory speeds and cache hit rates), operating system overhead (e.g., interrupt handling), and protocol performance (e.g., effect of timeouts), we have performed detailed simulations studies of both a bus-based multiprocessor workstation node (based on the Sun Galaxy MP multiprocessor) and a distributed-memory parallel computer node (based on the Touchstone DELTA) to evaluate the behavior of coarse-grain parallelism. Our results indicate: (1) coarse-grain parallelism can deliver multiple 100 Mbps with currently available hardware platforms and existing networking protocols (such as Transmission Control Protocol/Internet Protocol (TCP/IP) and parallel Fiber Distributed Data Interface (FDDI) rings); (2) scale-up is near linear in n, the number of protocol processors, and channels (for small n and up to a few hundred Mbps); and (3) since these results are based on existing hardware without specialized devices (except perhaps for some simple modifications of the FDDI boards), this is a low cost solution to providing multiple 100 Mbps on current machines. In addition, from both the performance analysis and the properties of these architectures, we conclude: (1) multiple processors providing identical services and the use of space division multiplexing for the physical channels can provide better reliability than monolithic approaches (it also provides graceful degradation and low-cost load balancing); (2) coarse-grain parallelism supports running several transport protocols in parallel to provide different types of service (for example, one TCP handles small messages for many users, other TCP's running in parallel provide high bandwidth service to a single application); and (3) coarse grain parallelism will be able to incorporate many future improvements from related work (e.g., reduced data movement, fast TCP, fine-grain parallelism) also with near linear speed-ups.
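    As a toy picture of the coarse-grain idea (a single application's stream divided over several physical channels, each nominally handled by its own protocol processor), the sketch below stripes a byte stream across k channels and reassembles it at the receiver; the replicated TCP/IP or FDDI protocol stacks that would run per channel are not modeled.

    ```python
    def stripe(message: bytes, channels: int) -> list[bytes]:
        """Split a message into near-equal stripes, one per channel/protocol processor."""
        stride = -(-len(message) // channels)           # ceiling division
        return [message[i * stride:(i + 1) * stride] for i in range(channels)]

    def reassemble(stripes: list[bytes]) -> bytes:
        """Receiver side: concatenate the stripes back in channel order."""
        return b"".join(stripes)

    payload = bytes(range(256)) * 100                   # ~25.6 kB dummy application message
    parts = stripe(payload, channels=4)
    assert reassemble(parts) == payload
    print([len(p) for p in parts])                      # roughly equal per-channel loads
    ```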

  15. The Software Correlator of the Chinese VLBI Network

    NASA Technical Reports Server (NTRS)

    Zheng, Weimin; Quan, Ying; Shu, Fengchun; Chen, Zhong; Chen, Shanshan; Wang, Weihua; Wang, Guangli

    2010-01-01

    The software correlator of the Chinese VLBI Network (CVN) has played an irreplaceable role in the CVN routine data processing, e.g., in the Chinese lunar exploration project. This correlator will be upgraded to process geodetic and astronomical observation data. In the future, with several new stations joining the network, CVN will carry out crustal movement observations, quick UT1 measurements, astrophysical observations, and deep space exploration activities. For the geodetic or astronomical observations, we need a wide-band 10-station correlator. For spacecraft tracking, a real-time and highly reliable correlator is essential. To meet the scientific and navigation requirements of CVN, two parallel software correlators for multiprocessor environments are under development. A high-speed, 10-station prototype correlator using a mixed Pthreads and MPI (Message Passing Interface) parallel algorithm on a computer cluster platform is being developed. Another real-time software correlator for spacecraft tracking adopts thread-parallel technology and runs on SMP (Symmetric Multiple Processor) servers. Both correlators have the characteristics of flexible structure and scalability.
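    The CVN correlator's delay tracking, fringe rotation, and MPI/Pthreads distribution are not described in the abstract and are not shown here; the sketch below only illustrates the core FX operation such a software correlator parallelizes, namely channelizing two station streams with an FFT and accumulating their cross-spectrum. The channel count and test signals are arbitrary.

    ```python
    import numpy as np

    def fx_correlate(x, y, nchan=64):
        """FX-style correlation: FFT-channelize both streams, multiply one spectrum by the
        conjugate of the other, and average over time (cross-multiply and accumulate)."""
        n = (min(len(x), len(y)) // nchan) * nchan
        X = np.fft.rfft(np.asarray(x[:n]).reshape(-1, nchan), axis=1)
        Y = np.fft.rfft(np.asarray(y[:n]).reshape(-1, nchan), axis=1)
        return (X * np.conj(Y)).mean(axis=0)

    # Two "stations" seeing the same noise with an integer-sample delay show a linear phase slope.
    rng = np.random.default_rng(3)
    s = rng.normal(size=100_000)
    cross = fx_correlate(s[4:], s[:-4])                 # station 2 lags station 1 by 4 samples
    slope = np.mean(np.diff(np.unwrap(np.angle(cross))))
    print(slope)                                        # ~ 2*pi*4/64 ~ 0.39 rad per channel
    ```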

  16. Heterogeneous Integration for Reduced Phase Noise and Improved Reliability of Semiconductor Lasers

    NASA Astrophysics Data System (ADS)

    Srinivasan, Sudharsanan

    Significant savings in cost, power, and space are possible in existing optical data transmission networks, sensors, and metrology equipment through photonic integration. Photonic integration can be broadly classified into two categories, hybrid and monolithic integration. The former involves assembling multiple single-functionality optical devices together into a single package, including any optical coupling and/or electronic connections. On the other hand, monolithic integration assembles many devices or optical functionalities on a single chip so that all the optical connections are on chip and require no external alignment. This provides a substantial improvement in reliability and simplifies testing. Monolithic integration has been demonstrated on both indium phosphide (InP) and silicon (Si) substrates. Integration on larger 300 mm Si substrates can further bring down the cost and has been a major area of research in recent years. Furthermore, with increasing interest from industry, the hybrid silicon platform is emerging as a new technology for integrating various active and passive optical elements on a single chip. This is in the interest both of bringing down manufacturing cost through scaling, along with continued improvement in performance, and of producing multi-functional photonic integrated circuits (PICs). The goal of this work is twofold. First, we show four laser demonstrations that use the hybrid silicon platform to lower phase noise due to spontaneous emission, based on the following two techniques: confinement factor reduction and negative optical feedback. The first two demonstrations are of mode-locked lasers and the next two are of tunable lasers. Some of the key results include: (a) 14 dB white frequency noise reduction of a 20 GHz radio-frequency (RF) signal from a harmonically mode-locked long-cavity laser with greater than 55 dB supermode noise suppression, (b) 8 dB white frequency noise reduction from a colliding-pulse mode-locked laser by reducing the number of quantum wells and a further 6 dB noise reduction using coherent photon seeding from a long on-chip coupled cavity, (c) linewidth reduction of a tunable laser down to 160 kHz using negative optical feedback from coupled ring resonator mirrors, and (d) linewidth reduction of a widely tunable laser down to 50 kHz using an on-chip coupled-cavity feedback effect. Second, we present the results of a reliability study conducted to investigate the influence of molecular wafer bonding between Si and InP on the lifetime of distributed feedback lasers, a common laser source used in optical communication. No degradation in lasing threshold or slope efficiency was observed after aging the lasers for 5000 hrs at 70°C and 2500 hrs at 85°C. However, among the three chosen bonding interface layer options, the devices with an interface superlattice layer showed a higher yield for lasers and lower dark current values in the on-chip monitor photodiodes after aging.

  17. A SNP genotyping array for hexaploid oat

    USDA-ARS?s Scientific Manuscript database

    Recognizing a need in cultivated hexaploid oat (Avena sativa L.) for a reliable set of reference SNPs, we have developed a 6K BeadChip design containing 257 Infinium I and 5,486 Infinium II designs corresponding to 5,743 SNPs. Of those, 4,975 SNPs yielded successful assays after array manufacturing...

  18. Use of cermet thin film resistors with nitride passivated metal insulator field effect transistor

    NASA Technical Reports Server (NTRS)

    Brown, G. A.; Harrap, V.

    1971-01-01

    Film deposition of cermet resistors on the same chip as metal-nitride-oxide-silicon field effect transistors permits protection of contamination-sensitive active devices from contaminants produced in the cermet deposition and definition processes. Additional advantages include lower cost, greater reliability, and space savings.

  19. Neural network applications in telecommunications

    NASA Technical Reports Server (NTRS)

    Alspector, Joshua

    1994-01-01

    Neural network capabilities include automatic and organized handling of complex information, quick adaptation to continuously changing environments, nonlinear modeling, and parallel implementation. This viewgraph presentation presents Bellcore work on applications, learning chip computational function, learning system block diagram, neural network equalization, broadband access control, calling-card fraud detection, software reliability prediction, and conclusions.

  20. Chip-package nano-structured copper and nickel interconnections with metallic and polymeric bonding interfaces

    NASA Astrophysics Data System (ADS)

    Aggarwal, Ankur

    With the semiconductor industry racing toward a historic transition, nano chips with less than 45 nm features demand I/Os in excess of 20,000 that support computing speeds in terabits per second, with multi-core processors aggregately providing the highest bandwidth at the lowest power. On the other hand, emerging mixed-signal systems are driving the need for 3D packaging with embedded active components and ultra-short interconnections. Decreasing I/O pitch together with low cost, high electrical performance, and high reliability are the key technological challenges identified by the 2005 International Technology Roadmap for Semiconductors (ITRS). Being able to provide a several-fold increase in chip-to-package vertical interconnect density is essential for garnering the true benefits of nanotechnology that will utilize nano-scale devices. Electrical interconnections are multi-functional materials that must also be able to withstand complex, sustained, and cyclic thermo-mechanical loads. In addition, the materials must be environmentally friendly, corrosion resistant, thermally stable over a long time, and resistant to electro-migration. A major challenge is also to develop economic processes that can be integrated into the back end of the wafer foundry, i.e., with wafer-level packaging. Device-to-system board interconnections are typically accomplished today with either wire bonding or solders. Both of these are incremental and run into either electrical or mechanical barriers as they are extended to higher densities of interconnections. Downscaling the traditional solder bump interconnect will not satisfy the thermo-mechanical reliability requirements at very fine pitches of the order of 30 microns and less. Alternate interconnection approaches such as compliant interconnects typically require lengthy connections and are therefore limited in terms of electrical properties, although they are expected to meet the mechanical requirements. A novel chip-package interconnection technology is developed to address the IC packaging requirements beyond the ITRS projections and to introduce innovative design and fabrication concepts that will further advance the performance of the chip, the package, and the system board. The nano-structured interconnect technology simultaneously packages all the ICs intact in wafer form, with a quantum jump in the number of interconnections and the lowest electrical parasitics. The intrinsic properties of nano materials also enable several orders of magnitude higher interconnect densities with the best mechanical properties for the highest reliability, and yet provide higher current and heat transfer densities. Nano-structured interconnects provide the ability to assemble the packaged parts on the system board without the use of underfill materials and to enable advanced analog/digital testing, reliability testing, and burn-in at the wafer level. This thesis investigates the electrical and mechanical performance of nanostructured interconnections through modeling and test vehicle fabrication. The analytical models evaluate the performance improvements over solder and compliant interconnections. Test vehicles with nano-interconnections were fabricated using low-cost electro-deposition techniques and assembled with various bonding interfaces. Interconnections were fabricated at 200 micron pitch to compare with existing solder joints and at 50 micron pitch to demonstrate fabrication processes at fine pitches.
Experimental and modeling results show that the proposed nano-interconnections could enhance the reliability and potentially meet all the system performance requirements for the emerging micro/nano-systems.

  1. The clinical performance evaluation of novel protein chips for eleven biomarkers detection and the diagnostic model study.

    PubMed

    Luo, Yuan; Zhu, Xu; Zhang, Pengjun; Shen, Qian; Wang, Zi; Wen, Xinyu; Wang, Ling; Gao, Jing; Dong, Jin; Yang, Caie; Wu, Tangming; Zhu, Zheng; Tian, Yaping

    2015-01-01

    We aimed to develop and validate two novel protein chips, which are based on microarray chemiluminescence immunoassay and can simultaneously detect 11 biomarkers, and then to evaluate their clinical diagnostic value by comparison with traditional methods. The protein chips were evaluated for limit of detection, specificity, common interferences, linearity, precision and accuracy. The 11 biomarkers were simultaneously detected by traditional methods and by the protein chips in 3683 samples, comprising 1723 cancer patients, 1798 benign disease patients and 162 healthy controls. After assay validation, the protein chips demonstrated high sensitivity, high specificity, good linearity, low imprecision and freedom from common interferences. Compared with the traditional methods, the protein chips showed good correlation for the detection of all 13 kinds of biomarkers (r≥0.935, P<0.001). For specific cancer detection, there were no statistically significant differences between the traditional methods and the novel protein chips, except that the male protein chip showed significantly better diagnostic value for NSE detection (P=0.004) but significantly worse value for pro-GRP detection (P=0.012), while the female chip showed significantly better diagnostic value for pro-GRP detection (P=0.005). Furthermore, both the male and female multivariate diagnostic models had significantly better diagnostic value than single detection of PG I, PG II, pro-GRP, NSE and CA125 (P<0.05). In addition, the male models had significantly better diagnostic value than single CA199 and free-PSA (P<0.05), while the female models showed significantly better diagnostic value than single CA724 and β-HCG (P<0.05). For total disease detection, the AUC of the multivariate logistic regression was 0.981 (95% CI: 0.975-0.987) for males and 0.836 (95% CI: 0.798-0.874) for females; for total cancer detection, the corresponding AUCs were 0.691 (95% CI: 0.666-0.717) and 0.753 (95% CI: 0.731-0.775), respectively. The newly designed protein chips are simple, multiplexed and reliable clinical assays, and the multi-parameter diagnostic models based on them could significantly improve clinical performance.
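
    The multivariate diagnostic models above combine several biomarker measurements into one score whose performance is summarized by the area under the ROC curve (AUC). The sketch below shows, in outline only, how such a model can be fitted and scored; the data are synthetic and the 11 random columns merely stand in for the real marker panel, which the abstract does not provide.

      # Minimal sketch: fit a multivariate logistic regression on biomarker values
      # and report its ROC AUC, as in the diagnostic models described above.
      # All data below are synthetic placeholders, not the study's measurements.
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(0)
      n = 500
      X = rng.normal(size=(n, 11))           # 11 biomarker values per sample
      y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n) > 0).astype(int)  # cancer vs. non-cancer

      model = LogisticRegression(max_iter=1000).fit(X, y)
      scores = model.predict_proba(X)[:, 1]  # combined diagnostic score per sample
      print("AUC of the multivariate model:", round(roc_auc_score(y, scores), 3))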

  2. The clinical performance evaluation of novel protein chips for eleven biomarkers detection and the diagnostic model study

    PubMed Central

    Luo, Yuan; Zhu, Xu; Zhang, Pengjun; Shen, Qian; Wang, Zi; Wen, Xinyu; Wang, Ling; Gao, Jing; Dong, Jin; Yang, Caie; Wu, Tangming; Zhu, Zheng; Tian, Yaping

    2015-01-01

    We aimed to develop and validate two novel protein chips, which are based on microarray chemiluminescence immunoassay and can simultaneously detect 11 biomarkers, and then to evaluate their clinical diagnostic value by comparison with traditional methods. The protein chips were evaluated for limit of detection, specificity, common interferences, linearity, precision and accuracy. The 11 biomarkers were simultaneously detected by traditional methods and by the protein chips in 3683 samples, comprising 1723 cancer patients, 1798 benign disease patients and 162 healthy controls. After assay validation, the protein chips demonstrated high sensitivity, high specificity, good linearity, low imprecision and freedom from common interferences. Compared with the traditional methods, the protein chips showed good correlation for the detection of all 13 kinds of biomarkers (r≥0.935, P<0.001). For specific cancer detection, there were no statistically significant differences between the traditional methods and the novel protein chips, except that the male protein chip showed significantly better diagnostic value for NSE detection (P=0.004) but significantly worse value for pro-GRP detection (P=0.012), while the female chip showed significantly better diagnostic value for pro-GRP detection (P=0.005). Furthermore, both the male and female multivariate diagnostic models had significantly better diagnostic value than single detection of PG I, PG II, pro-GRP, NSE and CA125 (P<0.05). In addition, the male models had significantly better diagnostic value than single CA199 and free-PSA (P<0.05), while the female models showed significantly better diagnostic value than single CA724 and β-HCG (P<0.05). For total disease detection, the AUC of the multivariate logistic regression was 0.981 (95% CI: 0.975-0.987) for males and 0.836 (95% CI: 0.798-0.874) for females; for total cancer detection, the corresponding AUCs were 0.691 (95% CI: 0.666-0.717) and 0.753 (95% CI: 0.731-0.775), respectively. The newly designed protein chips are simple, multiplexed and reliable clinical assays, and the multi-parameter diagnostic models based on them could significantly improve clinical performance. PMID:26884957

  3. FibroChip, a Functional DNA Microarray to Monitor Cellulolytic and Hemicellulolytic Activities of Rumen Microbiota

    PubMed Central

    Comtet-Marre, Sophie; Chaucheyras-Durand, Frédérique; Bouzid, Ourdia; Mosoni, Pascale; Bayat, Ali R.; Peyret, Pierre; Forano, Evelyne

    2018-01-01

    Ruminants fulfill their energy needs for growth primarily through microbial breakdown of plant biomass in the rumen. Several biotic and abiotic factors influence the efficiency of fiber degradation, which can ultimately impact animal productivity and health. To provide more insight into mechanisms involved in the modulation of fibrolytic activity, a functional DNA microarray targeting genes encoding key enzymes involved in cellulose and hemicellulose degradation by rumen microbiota was designed. Eight carbohydrate-active enzyme (CAZyme) families (GH5, GH9, GH10, GH11, GH43, GH48, CE1, and CE6) were selected which represented 392 genes from bacteria, protozoa, and fungi. The DNA microarray, designated as FibroChip, was validated using targets of increasing complexity and demonstrated sensitivity and specificity. In addition, FibroChip was evaluated for its explorative and semi-quantitative potential. Differential expression of CAZyme genes was evidenced in the rumen bacterium Fibrobacter succinogenes S85 grown on wheat straw or cellobiose. FibroChip was used to identify the expressed CAZyme genes from the targeted families in the rumen of a cow fed a mixed diet based on grass silage. Among expressed genes, those encoding GH43, GH5, and GH10 families were the most represented. Most of the F. succinogenes genes detected by the FibroChip were also detected following RNA-seq analysis of RNA transcripts obtained from the rumen fluid sample. Use of the FibroChip also indicated that transcripts of fiber degrading enzymes derived from eukaryotes (protozoa and anaerobic fungi) represented a significant proportion of the total microbial mRNA pool. FibroChip represents a reliable and high-throughput tool that enables researchers to monitor active members of fiber degradation in the rumen. PMID:29487591

  4. FibroChip, a Functional DNA Microarray to Monitor Cellulolytic and Hemicellulolytic Activities of Rumen Microbiota.

    PubMed

    Comtet-Marre, Sophie; Chaucheyras-Durand, Frédérique; Bouzid, Ourdia; Mosoni, Pascale; Bayat, Ali R; Peyret, Pierre; Forano, Evelyne

    2018-01-01

    Ruminants fulfill their energy needs for growth primarily through microbial breakdown of plant biomass in the rumen. Several biotic and abiotic factors influence the efficiency of fiber degradation, which can ultimately impact animal productivity and health. To provide more insight into mechanisms involved in the modulation of fibrolytic activity, a functional DNA microarray targeting genes encoding key enzymes involved in cellulose and hemicellulose degradation by rumen microbiota was designed. Eight carbohydrate-active enzyme (CAZyme) families (GH5, GH9, GH10, GH11, GH43, GH48, CE1, and CE6) were selected which represented 392 genes from bacteria, protozoa, and fungi. The DNA microarray, designated as FibroChip, was validated using targets of increasing complexity and demonstrated sensitivity and specificity. In addition, FibroChip was evaluated for its explorative and semi-quantitative potential. Differential expression of CAZyme genes was evidenced in the rumen bacterium Fibrobacter succinogenes S85 grown on wheat straw or cellobiose. FibroChip was used to identify the expressed CAZyme genes from the targeted families in the rumen of a cow fed a mixed diet based on grass silage. Among expressed genes, those encoding GH43, GH5, and GH10 families were the most represented. Most of the F. succinogenes genes detected by the FibroChip were also detected following RNA-seq analysis of RNA transcripts obtained from the rumen fluid sample. Use of the FibroChip also indicated that transcripts of fiber degrading enzymes derived from eukaryotes (protozoa and anaerobic fungi) represented a significant proportion of the total microbial mRNA pool. FibroChip represents a reliable and high-throughput tool that enables researchers to monitor active members of fiber degradation in the rumen.

  5. A miniature electronic nose system based on an MWNT-polymer microsensor array and a low-power signal-processing chip.

    PubMed

    Chiu, Shih-Wen; Wu, Hsiang-Chiu; Chou, Ting-I; Chen, Hsin; Tang, Kea-Tiong

    2014-06-01

    This article introduces a power-efficient, miniature electronic nose (e-nose) system. The e-nose system primarily comprises two self-developed chips: a multi-walled carbon nanotube (MWNT)-polymer-based microsensor array and a low-power signal-processing chip. The microsensor array was fabricated on a silicon wafer by using standard photolithography technology. It comprised eight interdigitated electrodes surrounded by SU-8 "walls," which restrained the material-solvent liquid in a defined area of 650 × 760 μm². To achieve a reliable sensor-manufacturing process, we used a two-layer deposition method, coating the MWNTs and the polymer film as the first and second layers, respectively. The low-power signal-processing chip included array data acquisition circuits and a signal-processing core. The MWNT-polymer microsensor array can directly connect with the array data acquisition circuits, which comprise sensor interface circuitry and an analog-to-digital converter; the signal-processing core consists of memory and a microprocessor. The core executes the program that classifies the odor data received from the array data acquisition circuits. The low-power signal-processing chip was designed and fabricated using the Taiwan Semiconductor Manufacturing Company 0.18-μm 1P6M standard complementary metal oxide semiconductor process. The chip consumes only 1.05 mW of power at supply voltages of 1 and 1.8 V for the array data acquisition circuits and the signal-processing core, respectively. The miniature e-nose system, which uses the microsensor array, the low-power signal-processing chip, and an embedded k-nearest-neighbor-based pattern recognition algorithm, was developed as a prototype that successfully recognized the complex odors of tincture, sorghum wine, sake, whisky, and vodka.
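
    The odor classification step mentioned above is an embedded k-nearest-neighbor algorithm running on the signal-processing core. A minimal k-NN sketch follows, with synthetic eight-channel responses standing in for the MWNT-polymer sensor readings; all names and data are illustrative rather than taken from the article.

      # Minimal k-nearest-neighbor sketch for odor classification from an
      # 8-channel sensor array; feature vectors and labels are synthetic.
      import numpy as np

      def knn_predict(train_x, train_y, query, k=3):
          # Euclidean distance from the query response to every stored response
          dists = np.linalg.norm(train_x - query, axis=1)
          nearest = train_y[np.argsort(dists)[:k]]
          # Majority vote among the k nearest training samples
          labels, counts = np.unique(nearest, return_counts=True)
          return labels[np.argmax(counts)]

      rng = np.random.default_rng(1)
      train_x = rng.normal(size=(50, 8))        # 50 stored responses, 8 sensors each
      train_y = rng.integers(0, 5, size=50)     # 5 odor classes (e.g. sake, whisky, ...)
      query = rng.normal(size=8)                # new, unknown odor response
      print("predicted odor class:", knn_predict(train_x, train_y, query))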

  6. Multiprocessor Neural Network in Healthcare.

    PubMed

    Godó, Zoltán Attila; Kiss, Gábor; Kocsis, Dénes

    2015-01-01

    A possible way of creating a multiprocessor artificial neural network is by the use of microcontrollers. The RISC processors' high performance and large number of I/O ports make them well suited to creating such a system. During our research, we wanted to see whether it is possible to efficiently create interaction between the artificial neural network and the natural nervous system. To achieve as much analogy to the living nervous system as possible, we created a frequency-modulated analog connection between the units. Our system is connected to the living nervous system through 128 microelectrodes. Two-way communication is provided through A/D conversion, which is even capable of testing psychopharmacons. The microcontroller-based analog artificial neural network can play a great role in medical signal processing, such as ECG and EEG.

  7. Characterizing parallel file-access patterns on a large-scale multiprocessor

    NASA Technical Reports Server (NTRS)

    Purakayastha, A.; Ellis, Carla; Kotz, David; Nieuwejaar, Nils; Best, Michael L.

    1995-01-01

    High-performance parallel file systems are needed to satisfy tremendous I/O requirements of parallel scientific applications. The design of such high-performance parallel file systems depends on a comprehensive understanding of the expected workload, but so far there have been very few usage studies of multiprocessor file systems. This paper is part of the CHARISMA project, which intends to fill this void by measuring real file-system workloads on various production parallel machines. In particular, we present results from the CM-5 at the National Center for Supercomputing Applications. Our results are unique because we collect information about nearly every individual I/O request from the mix of jobs running on the machine. Analysis of the traces leads to various recommendations for parallel file-system design.

  8. Dynamic modelling and estimation of the error due to asynchronism in a redundant asynchronous multiprocessor system

    NASA Technical Reports Server (NTRS)

    Huynh, Loc C.; Duval, R. W.

    1986-01-01

    The use of redundant asynchronous multiprocessor systems to achieve ultrareliable fault-tolerant control systems shows great promise. Development has been hampered by the inability to determine whether differences in the outputs of redundant CPUs are due to failures or to accrued error built up by slight differences in CPU clock intervals. This study derives an analytical dynamic model of the difference between redundant CPUs caused by differences in their clock intervals and uses this model with on-line parameter identification to identify those clock-interval differences. The methodology accurately tracks errors due to asynchronicity and generates an error signal with the effect of asynchronicity removed; this signal may be used to detect and isolate actual system failures.
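
    The central idea above is to model the growing divergence between redundant CPU outputs as a function of their unknown clock-interval difference and to identify that difference on line, so its effect can be removed before failure detection. A minimal sketch under a simple linear-drift assumption (the model form, noise level and variable names are illustrative, not the paper's exact formulation):

      # Minimal sketch: identify the relative clock drift between two redundant
      # CPUs from their observed output divergence, then remove its effect before
      # comparing outputs. Drift model and noise values are illustrative only.
      import numpy as np

      rng = np.random.default_rng(2)
      true_drift = 5e-6                      # seconds of divergence per second (unknown in practice)
      t = np.arange(1.0, 101.0)              # comparison instants
      observed_diff = true_drift * t + rng.normal(scale=1e-5, size=t.size)

      # Least-squares estimate of the drift rate: alpha = sum(t*d) / sum(t*t)
      alpha_hat = np.dot(t, observed_diff) / np.dot(t, t)

      # Residual with the asynchronism effect removed; large residuals would now
      # indicate an actual fault rather than clock divergence.
      residual = observed_diff - alpha_hat * t
      print("estimated drift:", alpha_hat, " max residual:", np.abs(residual).max())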

  9. Probabilistic evaluation of on-line checks in fault-tolerant multiprocessor systems

    NASA Technical Reports Server (NTRS)

    Nair, V. S. S.; Hoskote, Yatin V.; Abraham, Jacob A.

    1992-01-01

    The analysis of fault-tolerant multiprocessor systems that use concurrent error detection (CED) schemes is much more difficult than the analysis of conventional fault-tolerant architectures. Various analytical techniques have been proposed to evaluate CED schemes deterministically. However, these approaches are based on worst-case assumptions related to the failure of system components. Often, the evaluation results do not reflect the actual fault tolerance capabilities of the system. A probabilistic approach to evaluate the fault detecting and locating capabilities of on-line checks in a system is developed. The various probabilities associated with the checking schemes are identified and used in the framework of the matrix-based model. Based on these probabilistic matrices, estimates for the fault tolerance capabilities of various systems are derived analytically.

  10. Design and fabrication of a micron scale free-standing specimen for uniaxial micro-tensile tests

    NASA Astrophysics Data System (ADS)

    Tang, Jun; Wang, Hong; Li, Shi Chen; Liu, Rui; Mao, Sheng Ping; Li, Xue Ping; Zhang, Cong Chun; Ding, Guifu

    2009-10-01

    This paper presents a novel design and fabrication of test chips with a free-standing nickel specimen for micro uniaxial tensile testing. Fabricating the test chips on a quartz substrate significantly reduces the fabrication time, minimizes the number of process steps and eliminates the effect of wet anisotropic etching on the mechanical properties. The test chip can be gripped tightly by the test machine and aligned accurately in the pulling direction; furthermore, the approximately straight design of the specimen, rather than the traditional dog-bone structure, enables the strain to be measured directly by a displacement sensor. Both finite-element method (FEM) analysis and experimental results indicate the reliability of the new design, and the test chip can also be extended to other materials. The experimentally measured Young's modulus and ultimate tensile strength of the thin nickel film are approximately 94.5 GPa and 1.76 GPa, respectively. These results were substantially supported by experiments on larger-gauge specimens, electroplated under the same conditions, using a commercial dynamic mechanical analysis (DMA) instrument. The low Young's modulus and high ultimate tensile strength might be explained by the fine grain size of the electroplated structure.

  11. New Failure Mode of Flip-Chip Solder Joints Related to the Metallization of an Organic Substrate

    NASA Astrophysics Data System (ADS)

    Jang, J. W.; Yoo, S. J.; Hwang, H. I.; Yuk, S. Y.; Kim, C. K.; Kim, S. J.; Han, J. S.; An, S. H.

    2015-10-01

    We report a new failure phenomenon observed during flip-chip die attach. After reflow, flip-chip bumps were separated between the Al and Ti layers on the Si die side, mainly at the Si die corners. Transmission electron microscopy images revealed corrosion of the Al layer at the edge of the solder bump metallization. The corrosion at the metallization edge exhibited a notch shape with a high stress concentration factor. The organic substrate had Cu metallization with an organic solderable preservative (OSP) coating layer, in which a small amount of Cl ions was detected. A solder bump separation mechanism is suggested based on the reaction between Al and Cl, related to the flow of soldering flux. During reflow, the flux dissolves the Cl-containing OSP layer and flows up to the Al layer on the Si die side; the Cl-laden flux then reacts actively with Al, forming AlCl3. During cooling, solder bumps at the Si die corner separate at the location of the Al corrosion. This demonstrates that the chemistry of the substrate metallization can affect the thermomechanical reliability of flip-chip solder joints.

  12. Chip-based molecularly imprinted monolithic capillary array columns coated GO/SiO2 for selective extraction and sensitive determination of rhodamine B in chili powder.

    PubMed

    Zhai, Haiyun; Huang, Lu; Chen, Zuanguang; Su, Zihao; Yuan, Kaisong; Liang, Guohuan; Pan, Yufang

    2017-01-01

    A novel solid-phase extraction chip embedded with array columns of molecularly imprinted polymer-coated silanized graphene oxide (GO/SiO2-MISPE) was established to detect trace rhodamine B (RB) in chili powder. GO/SiO2-MISPE monolithic columns for RB detection were prepared by optimizing the supporting substrate, template, and polymerizing monomer under mild water-bath conditions. Adsorption capacity and specificity, which are critical properties for the application of the GO/SiO2-MISPE monolithic column, were investigated. The GO/SiO2-MIP was examined by scanning electron microscopy (SEM) and Fourier transform infrared spectroscopy. The recovery of RB ranged from 83.7% to 88.4%, the intraday and interday relative standard deviations ranged from 2.5% to 4.0%, and the enrichment factors were higher than 110-fold. The chip-based array columns effectively eliminated impurities in chili powder, indicating that the chip-based GO/SiO2-MISPE method is reliable for RB detection in food samples using high-performance liquid chromatography. Accordingly, this method has direct applications for monitoring potentially harmful dyes in processed food.

  13. About Small Streams and Shiny Rocks: Macromolecular Crystal Growth in Microfluidics

    NASA Technical Reports Server (NTRS)

    vanderWoerd, Mark; Ferree, Darren; Spearing, Scott; Monaco, Lisa; Molho, Josh; Spaid, Michael; Brasseur, Mike; Curreri, Peter A. (Technical Monitor)

    2002-01-01

    We are developing a novel technique with which we have grown diffraction-quality protein crystals in very small volumes, utilizing chip-based, microfluidic ("LabChip") technology. With this technology, volumes smaller than those achievable with any laboratory pipette can be dispensed with high accuracy. We have performed a feasibility study in which we crystallized several proteins with the aid of a LabChip device. The protein crystals are of excellent quality, as shown by X-ray diffraction. The advantages of this new technology include improved accuracy when dispensing small volumes, complete mixing of solution constituents without bubble formation, highly repeatable replication of recipes and growth conditions, and easy automation of the method. We have designed a first LabChip device specifically for protein crystallization in batch mode and can reliably dispense and mix from a range of solution constituents. We are currently testing this design. Upon completion, additional crystallization techniques, such as vapor diffusion and liquid-liquid diffusion, will be accommodated. Macromolecular crystallization using microfluidic technology is envisioned as a fully automated system, which will use the 'tele-science' concept of remote operation and will be developed into a research facility aboard the International Space Station.

  14. A Single Chip Automotive Control LSI Using SOI Bipolar Complementary MOS Double-Diffused MOS

    NASA Astrophysics Data System (ADS)

    Kawamoto, Kazunori; Mizuno, Shoji; Abe, Hirofumi; Higuchi, Yasushi; Ishihara, Hideaki; Fukumoto, Harutsugu; Watanabe, Takamoto; Fujino, Seiji; Shirakawa, Isao

    2001-04-01

    Using the example of an air bag controller, a single-chip solution for automotive sub-control systems is investigated, based on a combination of improved circuits, bipolar complementary metal oxide silicon double-diffused metal oxide silicon (BiCDMOS) technology and thick silicon-on-insulator (SOI). For the circuits, an automotive-specific reduced instruction set computer (RISC) central processing unit (CPU) and a novel, fully integrated system clock generator, the dividing digital phase-locked loop (DDPLL), are proposed. For the device technologies, the authors use SOI-BiCDMOS with trench dielectric isolation (TD), which enables the integration of various devices in an integrated circuit (IC) while avoiding parasitic mis-operation through ideal isolation. The structures of the SOI layer and the TD are optimized to obtain the desired device characteristics and high electromagnetic interference (EMI) immunity. While performing all the air bag system functions over a wide range of supply voltage and ambient temperature, the resulting single chip reduces the electronic part count to about half of that in conventional air bags. The combination of single-chip-oriented circuits and thick SOI-BiCDMOS technology offered in this work is valuable for reducing the size and improving the reliability of automotive electronic control units (ECUs).

  15. Reliability and Characteristics of Wafer-Level Chip-Scale Packages under Current Stress

    NASA Astrophysics Data System (ADS)

    Chen, Po-Ying; Kung, Heng-Yu; Lai, Yi-Shao; Hsiung Tsai, Ming; Yeh, Wen-Kuan

    2008-02-01

    In this work, we present a novel approach for elucidating the characteristics of wafer-level chip-scale packages (WLCSPs) under electromigration (EM) tests. The die in a WLCSP is attached directly to the substrate via soldered interconnects. As the die area available for power delivery shrinks, the solder bump volume also shrinks and the electron density carried by each interconnect increases. Bump current densities now approach 10⁶ A/cm², at which point EM becomes a significant reliability issue. EM failure depends on numerous factors, including the working temperature and the under-bump metallization (UBM) thickness. A new interconnection geometry has been adopted extensively, with moderate success, to overcome larger mismatches between the displacements of components during current and temperature changes. Both the environments and the testing parameters for WLCSPs are increasingly demanding. Although failure mechanisms are considered to have been eliminated, or at least made manageable, new package technologies are again challenging process, integrity and reliability. WLCSP technology was developed to eliminate the need for encapsulation and to ensure compatibility with surface-mount technology (SMT). The package has good handling properties but now faces serious reliability problems. In this work, we investigated the reliability of a WLCSP subjected to different accelerated current-stressing conditions at a fixed ambient temperature of 125 °C. A very strong correlation exists between the mean time to failure (MTTF) of the WLCSP test vehicle and the mean current density carried by a solder joint. A series of current densities was applied to the WLCSP architecture, and Black's power law was employed in a failure-mode simulation. Additionally, scanning electron microscopy (SEM) was used to determine the differences between the high- and low-current-density failure modes.
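
    Black's power law, used above for the failure-mode simulation, relates mean time to failure to current density J and absolute temperature T as MTTF = A·J^(-n)·exp(Ea/kT). The sketch below shows how an acceleration factor between stress and use conditions can be computed from it; the prefactor, exponent and activation energy are placeholders, not values reported in this work.

      # Minimal sketch of Black's equation for electromigration lifetime:
      #   MTTF = A * J**(-n) * exp(Ea / (k * T))
      # A, n and Ea below are placeholder values, not results from this study.
      import math

      K_BOLTZMANN_EV = 8.617e-5   # Boltzmann constant, eV/K

      def mttf_black(j_a_per_cm2, temp_k, a=1.0, n=2.0, ea_ev=0.8):
          return a * j_a_per_cm2 ** (-n) * math.exp(ea_ev / (K_BOLTZMANN_EV * temp_k))

      # Acceleration factor between a stress condition and a milder use condition:
      stress = mttf_black(1e6, 273.15 + 125)   # ~1e6 A/cm2 at 125 degC ambient
      use    = mttf_black(2e5, 273.15 + 85)    # lower current density at 85 degC
      print("acceleration factor:", use / stress)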

  16. Research of vibration controlling based on programmable logic controller for electrostatic precipitator

    NASA Astrophysics Data System (ADS)

    Zhang, Zisheng; Li, Yanhu; Li, Jiaojiao; Liu, Zhiqiang; Li, Qing

    2013-03-01

    In order to improve the reliability, stability and automation of an electrostatic precipitator (ESP), the vibration motor circuits for the ESP and the vibration-control ladder diagram program are investigated using a high-performance Schneider PLC and the Twidosoft programming software. Operational results show that, after adopting the PLC, the vibration motor runs automatically; compared with a traditional vibration control system based on a single-chip microcomputer, it has higher reliability, better stability and a higher dust removal rate, with dust emission concentrations ≤ 50 mg m⁻³, providing a new method for vibration control of ESPs.

  17. DNA Microarray for Rapid Detection and Identification of Food and Water Borne Bacteria: From Dry to Wet Lab.

    PubMed

    Ranjbar, Reza; Behzadi, Payam; Najafi, Ali; Roudi, Raheleh

    2017-01-01

    A rapid, accurate, flexible and reliable diagnostic method may significantly decrease the costs of diagnosis and treatment. Designing an appropriate microarray chip reduces noise and probable biases in the final result. The aim of this study was to design and construct a DNA microarray chip for rapid detection and identification of 10 important bacterial agents. In the present survey, 10 unique genomic regions corresponding to 10 pathogenic bacterial agents, including Escherichia coli (E. coli), Shigella boydii, Sh. dysenteriae, Sh. flexneri, Sh. sonnei, Salmonella typhi, S. typhimurium, Brucella sp., Legionella pneumophila, and Vibrio cholerae, were selected for designing specific long-oligo microarray probes. The in-silico work comprised use of the NCBI RefSeq database, the PanSeq and Gview servers, AlleleID 7.7 and Oligo Analyzer 3.1. The in-vitro part of the study comprised robotic microarray chip probe spotting, bacterial DNA extraction, DNA labeling, hybridization and microarray chip scanning. In the wet-lab section, tools and apparatus such as Nexterion® Slide E, a Qarray mini spotter, a NimbleGen kit, TrayMix™ S4, and an Innoscan 710 were used. A DNA microarray chip including 10 long-oligo microarray probes was designed and constructed for the detection and identification of 10 pathogenic bacteria, and it was able to identify all 10 bacterial agents tested simultaneously. A professional bioinformatician is needed as probe designer in order to design appropriate multifunctional microarray probes and increase the accuracy of the outcomes.

  18. Bubble-free on-chip continuous-flow polymerase chain reaction: concept and application.

    PubMed

    Wu, Wenming; Kang, Kyung-Tae; Lee, Nae Yoon

    2011-06-07

    Bubble formation inside a microscale channel is a significant problem in general microfluidic experiments. The problem becomes especially crucial when performing a polymerase chain reaction (PCR) on a chip which is subject to repetitive temperature changes. In this paper, we propose a bubble-free sample injection scheme applicable for continuous-flow PCR inside a glass/PDMS hybrid microfluidic chip, and attempt to provide a theoretical basis concerning bubble formation and elimination. Highly viscous paraffin oil plugs are employed in both the anterior and posterior ends of a sample plug, completely encapsulating the sample and eliminating possible nucleation sites for bubbles. In this way, internal channel pressure is increased, and vaporization of the sample is prevented, suppressing bubble formation. Use of an oil plug in the posterior end of the sample plug aids in maintaining a stable flow of a sample at a constant rate inside a heated microchannel throughout the entire reaction, as compared to using an air plug. By adopting the proposed sample injection scheme, we demonstrate various practical applications. On-chip continuous-flow PCR is performed employing genomic DNA extracted from a clinical single hair root sample, and its D1S80 locus is successfully amplified. Also, chip reusability is assessed using a plasmid vector. A single chip is used up to 10 times repeatedly without being destroyed, maintaining almost equal intensities of the resulting amplicons after each run, ensuring the reliability and reproducibility of the proposed sample injection scheme. In addition, the use of a commercially-available and highly cost-effective hot plate as a potential candidate for the heating source is investigated.

  19. Flexible Chip Scale Package and Interconnect for Implantable MEMS Movable Microelectrodes for the Brain.

    PubMed

    Jackson, Nathan; Muthuswamy, Jit

    2009-04-01

    We report here a novel approach called MEMS microflex interconnect (MMFI) technology for packaging a new generation of Bio-MEMS devices that involve movable microelectrodes implanted in brain tissue. MMFI addresses the need for (i) operating space for movable parts and (ii) flexible interconnects for mechanical isolation. We fabricated a thin polyimide substrate with embedded bond-pads, vias, and conducting traces for the interconnect with a backside dry etch, so that the flexible substrate can act as a thin-film cap for the MEMS package. A double gold stud bump rivet bonding mechanism was used to form electrical connections to the chip and also to provide a spacing of approximately 15-20 µm for the movable parts. The MMFI approach achieved a chip scale package (CSP) that is lightweight, biocompatible, having flexible interconnects, without an underfill. Reliability tests demonstrated minimal increases of 0.35 mΩ, 0.23 mΩ and 0.15 mΩ in mean contact resistances under high humidity, thermal cycling, and thermal shock conditions respectively. High temperature tests resulted in an increase in resistance of > 90 mΩ when aluminum bond pads were used, but an increase of ~ 4.2 mΩ with gold bond pads. The mean-time-to-failure (MTTF) was estimated to be at least one year under physiological conditions. We conclude that MMFI technology is a feasible and reliable approach for packaging and interconnecting Bio-MEMS devices.

  20. Advanced power analysis methodology targeted to the optimization of a digital pixel readout chip design and its critical serial powering system

    NASA Astrophysics Data System (ADS)

    Marconi, S.; Orfanelli, S.; Karagounis, M.; Hemperek, T.; Christiansen, J.; Placidi, P.

    2017-02-01

    A dedicated power analysis methodology, based on modern digital design tools and integrated with the VEPIX53 simulation framework developed within the RD53 collaboration, is being used to guide vital choices for the design and optimization of the next-generation ATLAS and CMS pixel chips and their critical serial powering circuit (shunt-LDO). Power consumption is studied at different stages of the design flow under different operating conditions. Significant effort is put into extensive investigations of dynamic power variations in relation to the decoupling seen by the powering network. Shunt-LDO simulations are also reported to demonstrate reliability at the system level.

  1. Field Evaluations of Systemic Insecticides for Control of Anoplophora glabripennis (Coleoptera: Cerambycidae) in China

    Treesearch

    Therese M. Poland; Robert A. Haack; Toby R. Petrice; Deborah L. Miller; Leah S. Bauer; Ruitong Gao

    2006-01-01

    Anoplophora glabripennis (Motschulsky) (Coleoptera: Cerambycidae), a pest native to China and Korea, was discovered in North America in 1996. Currently, the only reliable strategy available for eradication and control is to cut and chip all infested trees. We evaluated various doses of the systemic insecticides azadirachtin, emamectin benzoate,...

  2. Accelerated life testing effects on CMOS microcircuit characteristics

    NASA Technical Reports Server (NTRS)

    1979-01-01

    Modifications and additions to the present CMOS microcircuit fabrication process, designed to provide protective layers on the chip to guard against moisture and contaminants, were investigated. High and low temperature Si3N4 protective layers were tested on the CMOS microcircuits, and no conclusive improvements in device reliability characteristics were observed.

  3. Reliability of measurement and genotype x environment interaction for potato specific gravity

    USDA-ARS?s Scientific Manuscript database

    The dry matter content of potatoes used to make potato chips and French fries strongly influences fry oil absorption and texture of the finished product. Specific gravity (SpGr) is often used to assess the processing quality of potatoes tubers because of its strong correlation with dry matter conten...

  4. FERMI: a digital Front End and Readout MIcrosystem for high resolution calorimetry

    NASA Astrophysics Data System (ADS)

    Alexanian, H.; Appelquist, G.; Bailly, P.; Benetta, R.; Berglund, S.; Bezamat, J.; Blouzon, F.; Bohm, C.; Breveglieri, L.; Brigati, S.; Cattaneo, P. W.; Dadda, L.; David, J.; Engström, M.; Genat, J. F.; Givoletti, M.; Goggi, V. G.; Gong, S.; Grieco, G. M.; Hansen, M.; Hentzell, H.; Holmberg, T.; Höglund, I.; Inkinen, S. J.; Kerek, A.; Landi, C.; Ledortz, O.; Lippi, M.; Lofstedt, B.; Lund-Jensen, B.; Maloberti, F.; Mutz, S.; Nayman, P.; Piuri, V.; Polesello, G.; Sami, M.; Savoy-Navarro, A.; Schwemling, P.; Stefanelli, R.; Sundblad, R.; Svensson, C.; Torelli, G.; Vanuxem, J. P.; Yamdagni, N.; Yuan, J.; Ödmark, A.; Fermi Collaboration

    1995-02-01

    We present a digital solution for the front-end electronics of high resolution calorimeters at future colliders. It is based on analogue signal compression, high-speed A/D converters, a fully programmable pipeline and a digital signal processing (DSP) chain with local intelligence and system supervision. This digital solution is aimed at providing maximal front-end processing power by performing waveform analysis using DSP methods. For the system integration of the multichannel device, a silicon-on-silicon multi-chip module (MCM) has been adopted. This solution allows a high level of integration of complex analogue and digital functions, with excellent flexibility in mixing technologies for the different functional blocks. This type of multichip integration provides a high degree of reliability and programmability at both the function and the system level, with the additional possibility of customising the microsystem to detector-specific requirements. For enhanced reliability in high radiation environments, fault tolerance strategies, i.e. redundancy, reconfigurability, majority voting and coding for error detection and correction, are integrated into the design.

  5. Fabrication of Circuit QED Quantum Processors, Part 1: Extensible Footprint for a Superconducting Surface Code

    NASA Astrophysics Data System (ADS)

    Bruno, A.; Michalak, D. J.; Poletto, S.; Clarke, J. S.; Dicarlo, L.

    Large-scale quantum computation hinges on the ability to preserve and process quantum information with higher fidelity by increasing redundancy in a quantum error correction code. We present the realization of a scalable footprint for superconducting surface code based on planar circuit QED. We developed a tileable unit cell for surface code with all I/O routed vertically by means of superconducting through-silicon vias (TSVs). We address some of the challenges encountered during the fabrication and assembly of these chips, such as the quality of etch of the TSV, the uniformity of the ALD TiN coating conformal to the TSV, and the reliability of superconducting indium contact between the chips and PCB. We compare measured performance to a detailed list of specifications required for the realization of quantum fault tolerance. Our demonstration using centimeter-scale chips can accommodate the 50 qubits needed to target the experimental demonstration of small-distance logical qubits. Research funded by Intel Corporation and IARPA.

  6. Open-systems architecture of a standardized command interface chip-set for switching and control of a spacecraft power bus

    NASA Technical Reports Server (NTRS)

    Ruiz, Ian B.; Burke, Gary R.; Lung, Gerald; Whitaker, William D.; Nowicki, Robert M.

    2004-01-01

    The Jet Propulsion Laboratory (JPL) has developed a command interface chip-set that primarily consists of two mixed-signal ASICs: the Command Interface ASIC (CIA) and the Analog Interface ASIC (AIA). The open-systems architecture employed during the design of this chip-set enables its use both as an intelligent gateway between the system's flight computer and the control, actuation, and activation of the spacecraft's loads, valves, and pyrotechnics, respectively, and as the regulator of the spacecraft power bus. Furthermore, the architecture is highly adaptable and employs fault-tolerant design methods, enabling a host of other mission uses including reliable remote data collection. The objective of this design is both to provide a needed flight component that meets the stringent environmental requirements of current deep space missions and to add a new element to a growing library that can be used as a standard building block for future missions to the outer planets.

  7. NASA Workshop on Computational Structural Mechanics 1987, part 1

    NASA Technical Reports Server (NTRS)

    Sykes, Nancy P. (Editor)

    1989-01-01

    Topics in Computational Structural Mechanics (CSM) are reviewed. CSM parallel structural methods, a transputer finite element solver, architectures for multiprocessor computers, and parallel eigenvalue extraction are among the topics discussed.

  8. Genomic prediction using imputed whole-genome sequence data in Holstein Friesian cattle.

    PubMed

    van Binsbergen, Rianne; Calus, Mario P L; Bink, Marco C A M; van Eeuwijk, Fred A; Schrooten, Chris; Veerkamp, Roel F

    2015-09-17

    In contrast to currently used single nucleotide polymorphism (SNP) panels, the use of whole-genome sequence data is expected to enable the direct estimation of the effects of causal mutations on a given trait. This could lead to higher reliabilities of genomic predictions compared to those based on SNP genotypes. Also, at each generation of selection, recombination events between a SNP and a mutation can cause decay in reliability of genomic predictions based on markers rather than on the causal variants. Our objective was to investigate the use of imputed whole-genome sequence genotypes versus high-density SNP genotypes on (the persistency of) the reliability of genomic predictions using real cattle data. Highly accurate phenotypes based on daughter performance and Illumina BovineHD Beadchip genotypes were available for 5503 Holstein Friesian bulls. The BovineHD genotypes (631,428 SNPs) of each bull were used to impute whole-genome sequence genotypes (12,590,056 SNPs) using the Beagle software. Imputation was done using a multi-breed reference panel of 429 sequenced individuals. Genomic estimated breeding values for three traits were predicted using a Bayesian stochastic search variable selection (BSSVS) model and a genome-enabled best linear unbiased prediction model (GBLUP). Reliabilities of predictions were based on 2087 validation bulls, while the other 3416 bulls were used for training. Prediction reliabilities ranged from 0.37 to 0.52. BSSVS performed better than GBLUP in all cases. Reliabilities of genomic predictions were slightly lower with imputed sequence data than with BovineHD chip data. Also, the reliabilities tended to be lower for both sequence data and BovineHD chip data when relationships between training animals were low. No increase in persistency of prediction reliability using imputed sequence data was observed. Compared to BovineHD genotype data, using imputed sequence data for genomic prediction produced no advantage. To investigate the putative advantage of genomic prediction using (imputed) sequence data, a training set with a larger number of individuals that are distantly related to each other and genomic prediction models that incorporate biological information on the SNPs or that apply stricter SNP pre-selection should be considered.

  9. Job-mix modeling and system analysis of an aerospace multiprocessor.

    NASA Technical Reports Server (NTRS)

    Mallach, E. G.

    1972-01-01

    An aerospace guidance computer organization, consisting of multiple processors and memory units attached to a central time-multiplexed data bus, is described. A job mix for this type of computer is obtained by analysis of Apollo mission programs. Multiprocessor performance is then analyzed using: 1) queuing theory, under certain 'limiting case' assumptions; 2) Markov process methods; and 3) system simulation. Results of the analyses indicate: 1) Markov process analysis is a useful and efficient predictor of simulation results; 2) efficient job execution is not seriously impaired even when the system is so overloaded that new jobs are inordinately delayed in starting; 3) job scheduling is significant in determining system performance; and 4) a system having many slow processors may or may not perform better than a system of equal power having few fast processors, but will not perform significantly worse.
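
    The queueing-theory analysis above treats the pool of processors served by the shared bus as a multi-server queue fed by the Apollo-derived job mix. As a rough illustration of that style of 'limiting case' analysis, the sketch below evaluates an M/M/c model with the Erlang-C formula; the arrival rate, service rate and processor count are illustrative and are not taken from the paper.

      # Minimal M/M/c queueing sketch: probability of queueing (Erlang C) and mean
      # waiting time for a pool of identical processors. Rates are illustrative.
      import math

      def erlang_c(arrival_rate, service_rate, servers):
          a = arrival_rate / service_rate          # offered load in Erlangs
          rho = a / servers                        # per-server utilization (must be < 1)
          summation = sum(a ** k / math.factorial(k) for k in range(servers))
          top = a ** servers / (math.factorial(servers) * (1 - rho))
          return top / (summation + top)

      def mean_wait(arrival_rate, service_rate, servers):
          pw = erlang_c(arrival_rate, service_rate, servers)
          return pw / (servers * service_rate - arrival_rate)

      # e.g. 8 jobs/ms arriving, each needing 1 ms of processor time, 10 processors
      print("mean queueing delay (ms):", mean_wait(8.0, 1.0, 10))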

  10. A macro-micro robot for precise force applications

    NASA Technical Reports Server (NTRS)

    Marzwell, Neville I.; Wang, Yulun

    1993-01-01

    This paper describes an 8 degree-of-freedom macro-micro robot capable of performing tasks which require accurate force control. Applications such as polishing, finishing, grinding, deburring, and cleaning are a few examples of tasks which need this capability. Currently these tasks are either performed manually or with dedicated machinery because of the lack of a flexible and cost effective tool, such as a programmable force-controlled robot. The basic design and control of the macro-micro robot is described in this paper. A modular high-performance multiprocessor control system was designed to provide sufficient compute power for executing advanced control methods. An 8 degree of freedom macro-micro mechanism was constructed to enable accurate tip forces. Control algorithms based on the impedance control method were derived, coded, and load balanced for maximum execution speed on the multiprocessor system.
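
    The impedance control method referenced above commands the tool to behave like a programmable spring-damper about a desired trajectory, which keeps contact forces bounded during polishing or deburring. A minimal one-axis sketch follows; the gains and signals are illustrative placeholders, and the paper's actual macro-micro control structure is considerably more elaborate.

      # Minimal one-axis impedance control sketch: command a force so the tool
      # behaves like a spring-damper around a desired trajectory. Gains are
      # illustrative placeholders, not values from the macro-micro robot.
      def impedance_force(x, x_dot, x_des, x_des_dot, k=500.0, b=40.0):
          """Return the commanded tool force for stiffness k and damping b."""
          return k * (x_des - x) + b * (x_des_dot - x_dot)

      # Example: tool is 2 mm short of the desired contact position and at rest.
      f_cmd = impedance_force(x=0.098, x_dot=0.0, x_des=0.100, x_des_dot=0.0)
      print("commanded force (N):", f_cmd)   # 500 * 0.002 = 1.0 N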

  11. Closed-form solutions of performability. [modeling of a degradable buffer/multiprocessor system

    NASA Technical Reports Server (NTRS)

    Meyer, J. F.

    1981-01-01

    Methods which yield closed form performability solutions for continuous valued variables are developed. The models are similar to those employed in performance modeling (i.e., Markovian queueing models) but are extended so as to account for variations in structure due to faults. In particular, the modeling of a degradable buffer/multiprocessor system is considered whose performance Y is the (normalized) average throughput rate realized during a bounded interval of time. To avoid known difficulties associated with exact transient solutions, an approximate decomposition of the model is employed permitting certain submodels to be solved in equilibrium. These solutions are then incorporated in a model with fewer transient states and by solving the latter, a closed form solution of the system's performability is obtained. In conclusion, some applications of this solution are discussed and illustrated, including an example of design optimization.

  12. Parallel algorithm of VLBI software correlator under multiprocessor environment

    NASA Astrophysics Data System (ADS)

    Zheng, Weimin; Zhang, Dong

    2007-11-01

    The correlator is the key signal processing equipment of a Very Long Baseline Interferometry (VLBI) synthetic aperture telescope. It receives the mass of data collected by the VLBI observatories and produces the visibility function of the target, which can be used for spacecraft positioning, baseline-length measurement, synthesis imaging, and other scientific applications. VLBI data correlation is both data-intensive and computation-intensive. This paper presents the algorithms of two parallel software correlators for multiprocessor environments. A near real-time correlator for spacecraft tracking adopts pipelining and thread-level parallelism and runs on SMP (symmetric multiprocessor) servers. Another high-speed prototype correlator, using a mixed Pthreads and MPI (Message Passing Interface) parallel algorithm, is realized on a small Beowulf cluster platform. Both correlators feature a flexible structure, scalability, and the ability to correlate data from 10 stations.
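
    At its core, a software correlator Fourier-transforms each station's sampled voltage stream and cross-multiplies the spectra to accumulate the visibility for every baseline; this per-baseline work is what the pipelined, threaded and MPI implementations above distribute across processors. The single-baseline sketch below assumes an FX-style correlator on synthetic signals and shows no parallelism; it is an illustration, not the authors' implementation.

      # Minimal single-baseline FX correlation sketch: FFT each station's samples,
      # multiply one spectrum by the conjugate of the other, and accumulate the
      # visibility spectrum. Signals are synthetic; no threading/MPI is shown.
      import numpy as np

      rng = np.random.default_rng(3)
      n_fft, n_blocks = 1024, 64
      common = rng.normal(size=n_fft * n_blocks)            # signal seen by both stations
      station1 = common + 0.5 * rng.normal(size=common.size)
      station2 = np.roll(common, 3) + 0.5 * rng.normal(size=common.size)  # 3-sample delay

      visibility = np.zeros(n_fft, dtype=complex)
      for b in range(n_blocks):
          s1 = np.fft.fft(station1[b * n_fft:(b + 1) * n_fft])
          s2 = np.fft.fft(station2[b * n_fft:(b + 1) * n_fft])
          visibility += s1 * np.conj(s2)                    # accumulate the cross-spectrum
      visibility /= n_blocks

      # The phase slope across frequency encodes the relative delay between stations.
      print("visibility amplitude at DC:", abs(visibility[0]))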

  13. HyperForest: A high performance multi-processor architecture for real-time intelligent systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garcia, P. Jr.; Rebeil, J.P.; Pollard, H.

    1997-04-01

    Intelligent Systems are characterized by the intensive use of computer power. The computer revolution of the last few years is what has made possible the development of the first generation of Intelligent Systems. Software for second generation Intelligent Systems will be more complex and will require more powerful computing engines in order to meet real-time constraints imposed by new robots, sensors, and applications. A multiprocessor architecture was developed that merges the advantages of message-passing and shared-memory structures: expandability and real-time compliance. The HyperForest architecture will provide an expandable real-time computing platform for computationally intensive Intelligent Systems and open the doors for the application of these systems to more complex tasks in environmental restoration and cleanup projects, flexible manufacturing systems, and DOE's own production and disassembly activities.

  14. Development and evaluation of a Fault-Tolerant Multiprocessor (FTMP) computer. Volume 2: FTMP software

    NASA Technical Reports Server (NTRS)

    Lala, J. H.; Smith, T. B., III

    1983-01-01

    The software developed for the Fault-Tolerant Multiprocessor (FTMP) is described. The FTMP executive is a timer-interrupt driven dispatcher that schedules iterative tasks which run at 3.125, 12.5, and 25 Hz. Major tasks which run under the executive include system configuration control, flight control, and display. The flight control task includes autopilot and autoland functions for a jet transport aircraft. System Displays include status displays of all hardware elements (processors, memories, I/O ports, buses), failure log displays showing transient and hard faults, and an autopilot display. All software is in a higher order language (AED, an ALGOL derivative). The executive is a fully distributed general purpose executive which automatically balances the load among available processor triads. Provisions for graceful performance degradation under processing overload are an integral part of the scheduling algorithms.

  15. Reliability and failure modes of implant-supported zirconium-oxide fixed dental prostheses related to veneering techniques

    PubMed Central

    Baldassarri, Marta; Zhang, Yu; Thompson, Van P.; Rekow, Elizabeth D.; Stappert, Christian F. J.

    2011-01-01

    Objectives: To compare the fatigue failure modes and reliability of hand-veneered and over-pressed implant-supported three-unit zirconium-oxide fixed dental prostheses (FDPs). Methods: Sixty-four custom-made zirconium-oxide abutments (n=32/group) and thirty-two zirconium-oxide FDP frameworks were CAD/CAM manufactured. Frameworks were veneered with hand-built-up or over-pressed porcelain (n=16/group). Step-stress accelerated life testing (SSALT) was performed in water, applying a distributed contact load at the buccal cusp-pontic area. Post-failure examinations were carried out using optical (polarized reflected light) and scanning electron microscopy (SEM) to visualize crack propagation and failure modes. Reliability was compared using cumulative-damage step-stress analysis (Alta-7-Pro, Reliasoft). Results: Crack propagation was observed in the veneering porcelain during fatigue. The majority of zirconium-oxide FDPs demonstrated porcelain chipping as the dominant failure mode; nevertheless, fracture of the zirconium-oxide frameworks was also observed. Over-pressed FDPs failed earlier, at a mean failure load of 696 ± 149 N, relative to hand-veneered FDPs at 882 ± 61 N (profile I). Weibull stress versus number-of-cycles versus unreliability curves were generated. The reliability (two-sided 90% confidence bounds) for a 400 N load at 100k cycles was 0.84 (0.98-0.24) for the hand-veneered FDPs and 0.50 (0.82-0.09) for their over-pressed counterparts. Conclusions: Both zirconium-oxide FDP systems were resistant under accelerated life testing. Over-pressed specimens were more susceptible to fatigue loading, with earlier veneer chipping. PMID:21557985
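
    The reliability figures quoted above come from fitting a Weibull life model to the step-stress fatigue data and evaluating the survival probability at a chosen load and number of cycles. A minimal sketch of a two-parameter Weibull reliability function follows; the shape and characteristic-life values are placeholders, not the fitted values from this study.

      # Minimal sketch: probability of survival at a given number of loading cycles
      # from a two-parameter Weibull model, R(t) = exp(-(t/eta)**beta).
      # beta (shape) and eta (characteristic life) are hypothetical placeholders.
      import math

      def weibull_reliability(cycles, beta, eta):
          return math.exp(-((cycles / eta) ** beta))

      beta, eta = 1.5, 400_000.0     # hypothetical shape and characteristic life (cycles)
      print("R at 100k cycles:", round(weibull_reliability(100_000, beta, eta), 3))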

  16. The effects of additives to SnAgCu alloys on microstructure and drop impact reliability of solder joints

    NASA Astrophysics Data System (ADS)

    Liu, Weiping; Lee, Ning-Cheng

    2007-07-01

    The impact reliability of solder joints in electronic packages is critical to the lifetime of electronic products, especially those portable devices using area array packages such as ball-grid array (BGA) and chip-scale packages (CSP). Currently, SnAgCu (SAC) solders are most widely used for lead-free applications. However, BGA and CSP solder joints using SAC alloys are fragile and prone to premature interfacial failure, especially under shock loading. To further enhance impact reliability, a family of SAC alloys doped with a small amount of additives such as Mn, Ce, Ti, Bi, and Y was developed. The effects of doping elements on drop test performance, creep resistance, and microstructure of the solder joints were investigated, and the solder joints made with the modified alloys exhibited significantly higher impact reliability.

  17. Tailoring microfluidic systems for organ-like cell culture applications using multiphysics simulations

    NASA Astrophysics Data System (ADS)

    Hagmeyer, Britta; Schütte, Julia; Böttger, Jan; Gebhardt, Rolf; Stelzle, Martin

    2013-03-01

    Replacing animal testing with in vitro cocultures of human cells is a long-term goal in pre-clinical drug tests used to gain reliable insight into drug-induced cell toxicity. However, current state-of-the-art 2D or 3D cell cultures aiming at mimicking human organs in vitro still lack organ-like morphology and perfusion and thus organ-like functions. To this end, microfluidic systems enable construction of cell culture devices which can be designed to more closely resemble the smallest functional unit of organs. Multiphysics simulations represent a powerful tool to study the various relevant physical phenomena and their impact on functionality inside microfluidic structures. This is particularly useful as it allows for assessment of system functions already during the design stage prior to actual chip fabrication. In the HepaChip®, dielectrophoretic forces are used to assemble human hepatocytes and human endothelial cells in liver sinusoid-like structures. Numerical simulations of flow distribution, shear stress, electrical fields and heat dissipation inside the cell assembly chambers as well as surface wetting and surface tension effects during filling of the microchannel network supported the design of this human-liver-on-chip microfluidic system for cell culture applications. Based on the device design resulting thereof, a prototype chip was injection-moulded in COP (cyclic olefin polymer). Functional hepatocyte and endothelial cell cocultures were established inside the HepaChip® showing excellent metabolic and secretory performance.

  18. Sequencing of real-world samples using a microfabricated hybrid device having unconstrained straight separation channels.

    PubMed

    Liu, Shaorong; Elkin, Christopher; Kapur, Hitesh

    2003-11-01

    We describe a microfabricated hybrid device that consists of a microfabricated chip containing multiple twin-T injectors attached to an array of capillaries that serve as the separation channels. A new fabrication process was employed to create two differently sized round channels in a chip. The twin-T injectors were formed by the smaller round channels, which match the bore of the separation capillaries, and the separation capillaries were attached to the injectors through the larger round channels, which match the outer diameter of the capillaries. This allows for a minimal dead volume and provides a robust chip/capillary interface. The hybrid design takes full advantage of the unique chip injection scheme for DNA sequencing, including sample stacking, sample purification and a uniform signal-intensity profile, while employing long, straight capillaries for the separations. In essence, the separation channel length is optimized for both speed and resolution, since it is unconstrained by chip size. To demonstrate the reliability and practicality of this hybrid device, we sequenced over 1000 real-world samples from Human Chromosome 5 and Ciona intestinalis, prepared at the Joint Genome Institute. We achieved an average Phred20 read length of 675 bases in about 70 min with a success rate of 91%. For similar samples on a MegaBACE 1000, the average Phred20 read length is about 550-600 bases in a 120 min separation, with a success rate of about 80-90%.

  19. Performance and evaluation of real-time multicomputer control systems

    NASA Technical Reports Server (NTRS)

    Shin, K. G.

    1985-01-01

    Three experiments on fault-tolerant multiprocessors (FTMP) were begun: (1) measurement of fault latency in FTMP; (2) validation and analysis of FTMP synchronization protocols; and (3) investigation of error propagation in FTMP.

  20. Study on vacuum packaging reliability of micromachined quartz tuning fork gyroscopes

    NASA Astrophysics Data System (ADS)

    Fan, Maoyan; Zhang, Lifang

    2017-09-01

    The packaging of micromachined quartz tuning fork gyroscopes by vacuum welding has been studied experimentally. The performance of the quartz tuning fork is influenced by the encapsulation shell, the encapsulation method and the fixation of the forks. Thick-film alloy solder is widely used in the package to avoid damaging the chip structure through heating at high temperature, which improves device performance and welding reliability. The results show that bases and lids plated with gold and nickel can significantly improve the airtightness and reliability of the vacuum package. Vacuum packaging is an effective way to reduce vibration damping, improve the quality factor and further enhance performance.

  1. Special Issue on a Fault Tolerant Network on Chip Architecture

    NASA Astrophysics Data System (ADS)

    Janidarmian, Majid; Tinati, Melika; Khademzadeh, Ahmad; Ghavibazou, Maryam; Fekr, Atena Roshan

    2010-06-01

    In this paper, a fast and efficient spare switch selection algorithm is presented for FERNA, a reliable NoC architecture in which a specific application is mapped onto a mesh topology. Based on the ring concept used in FERNA, the algorithm matches the results of an exhaustive search while requiring much less run time and improving two parameters: the response time of the system and the extra communication cost. The inputs of the FERNA algorithm for these two objectives are derived from transaction-level simulation using SystemC TLM and from a mathematical formulation, respectively. The results demonstrate that improving these parameters raises the reliability of the whole system, which is calculated analytically. The mapping algorithm is also investigated as a factor affecting the extra bandwidth requirement and system reliability.
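
    A generic sketch of the underlying optimisation (not FERNA's ring heuristic): choose a spare-switch position on the mesh that minimises the extra communication cost, here measured as Manhattan hop distance to the switches the spare may replace; the mesh size and fault-prone positions are illustrative assumptions.

      # Exhaustive spare-switch placement on an assumed 4x4 mesh; cost is the total
      # Manhattan hop distance from the spare to the switches it may have to replace.
      # FERNA's ring-based heuristic is not reproduced here.
      from itertools import product

      MESH = 4                                   # assumed mesh dimension
      FAULT_PRONE = [(0, 0), (1, 2), (3, 3)]     # assumed switches the spare must cover

      def extra_cost(spare, covered):
          return sum(abs(spare[0] - x) + abs(spare[1] - y) for x, y in covered)

      def exhaustive_best():
          candidates = [p for p in product(range(MESH), repeat=2) if p not in FAULT_PRONE]
          return min(candidates, key=lambda p: extra_cost(p, FAULT_PRONE))

      best = exhaustive_best()
      print(best, extra_cost(best, FAULT_PRONE))   # e.g. (0, 2) with a total cost of 7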

  2. Study on Temperature Control System Based on SG3525

    NASA Astrophysics Data System (ADS)

    Cheng, Cong; Zhu, Yifeng; Wu, Junfeng

    2017-12-01

    In this paper, a dry-bath approach is used: a heating plate heats the microfluidic chip directly, and the liquid sample inside the chip is heated through thermal conduction so that it is maintained at the target temperature. To improve the reliability of the whole instrument, a temperature control system based on the SG3525 is designed. The SG3525 is the core of the system; the PWM wave it generates drives a power transistor that heats the heating plate. A thermistor bridge circuit together with PID regulation ensures that the temperature is controlled at 37 °C with an accuracy of ±0.2 °C and a fluctuation of ±0.1 °C.
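
    A minimal sketch (assumed plant model and gains, not the paper's circuit) of the kind of PID regulation described above, driving a heater duty cycle toward a 37 °C set point:

      # Illustrative PID loop holding a heater at 37 C; the thermal plant model,
      # gains, and time step are assumptions, not the SG3525-based hardware design.
      SETPOINT_C = 37.0
      KP, KI, KD = 5.0, 0.2, 0.5   # assumed PID gains
      DT = 0.1                     # control period, seconds

      def simulate(steps=2000):
          temp, integral, prev_err = 25.0, 0.0, 0.0
          for _ in range(steps):
              err = SETPOINT_C - temp
              derivative = (err - prev_err) / DT
              u = KP * err + KI * integral + KD * derivative
              duty = max(0.0, min(1.0, u))
              if duty == u:            # simple anti-windup: integrate only when unsaturated
                  integral += err * DT
              prev_err = err
              # crude first-order thermal plant: full duty heats ~1 C/s, plus loss to ambient
              temp += DT * (1.0 * duty - 0.02 * (temp - 25.0))
          return temp

      print("steady-state temperature: %.2f C" % simulate())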

  3. GR712RC- Dual-Core Processor- Product Status

    NASA Astrophysics Data System (ADS)

    Sturesson, Fredrik; Habinc, Sandi; Gaisler, Jiri

    2012-08-01

    The GR712RC System-on-Chip (SoC) is a dual-core LEON3FT system suitable for advanced high-reliability space avionics. Fault tolerance features from Aeroflex Gaisler's GRLIB IP library and an implementation using the Ramon Chips RadSafe cell library enable superior radiation hardness. The GR712RC device has been designed to provide high processing power by including two LEON3FT 32-bit SPARC V8 processors, each with its own high-performance IEEE 754-compliant floating-point unit and SPARC reference memory management unit. This high processing power is combined with a large number of serial interfaces, ranging from high-speed links for data transfers to low-speed control buses for commanding and status acquisition.

  4. Single chip lidar with discrete beam steering by digital micromirror device.

    PubMed

    Smith, Braden; Hellman, Brandon; Gin, Adley; Espinoza, Alonzo; Takashima, Yuzuru

    2017-06-26

    A novel method of beam steering enables a large field of view and reliable single-chip light detection and ranging (lidar) by utilizing a mass-produced digital micromirror device (DMD). By using a short-pulsed laser, the micromirrors' rotation is effectively frozen in mid-transition, which forms a programmable blazed grating. The blazed grating efficiently redistributes the light into a single diffraction order among several. We demonstrated time-of-flight measurements for five discrete angles using this beam steering method with a nanosecond 905 nm laser and a Si avalanche diode. A distance accuracy of < 1 cm over a 1 m distance range, a 48° full field of view, and a measurement rate of 3.34k points/s are demonstrated.
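
    As a side note (not taken from the paper), pulsed time-of-flight ranging converts the round-trip delay directly into distance; the short sketch below shows the conversion and the timing resolution implied by the ~1 cm accuracy quoted above.

      # Pulsed time-of-flight ranging: distance = c * round_trip_time / 2.
      C = 299_792_458.0  # speed of light, m/s

      def distance_m(round_trip_s):
          return C * round_trip_s / 2.0

      print(distance_m(6.671e-9))   # a ~1 m target returns after ~6.67 ns
      print(2 * 0.01 / C)           # ~67 ps of round-trip time per centimetre of range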

  5. Hybrid macro-micro fluidics system for a chip-based biosensor

    NASA Astrophysics Data System (ADS)

    Tamanaha, C. R.; Whitman, L. J.; Colton, R. J.

    2002-03-01

    We describe the engineering of a hybrid fluidics platform for a chip-based biosensor system that combines high-performance microfluidics components with powerful, yet compact, millimeter-scale pump and valve actuators. The microfluidics system includes channels, valveless diffuser-based pumps, and pinch-valves that are cast into a poly(dimethylsiloxane) (PDMS) membrane and packaged along with the sensor chip into a palm-sized plastic cartridge. The microfluidics are driven by pump and valve actuators contained in an external unit (with a volume ~30 cm3) that interfaces kinematically with the PDMS microelements on the cartridge. The pump actuator is a simple-lever, flexure-hinge displacement amplifier that increases the motion of a piezoelectric stack. The valve actuators are an array of cantilevers operated by shape memory alloy wires. All components can be fabricated without the need for complex lithography or micromachining, and can be used with fluids containing micron-sized particulates. Prototypes have been modeled and tested to ensure the delivery of microliter volumes of fluid and the even dispersion of reagents over the chip sensing elements. With this hybrid approach to the fluidics system, the biochemical assay benefits from the many advantages of microfluidics yet we avoid the complexity and unknown reliability of immature microactuator technologies.

  6. Atom-chip-based interferometry with Bose-Einstein condensates

    NASA Astrophysics Data System (ADS)

    Gebbe, Martina; Abend, Sven; Gersemann, Matthias; Ahlers, Holger; Muentinga, Hauke; Herrmann, Sven; Laemmerzahl, Claus; Ertmer, Wolfgang; Rasel, Ernst M.; Quantus Collaboration

    2017-04-01

    Due to their small spatial and momentum widths, ultracold Bose-Einstein condensates (BEC) or even delta-kick collimated (DKC) atomic ensembles are very well suited for high precision atom interferometry and can measure, for example, inertial forces with high accuracy. We generate such an ensemble in a miniaturized atom-chip setup, where BEC generation and DKC can be performed in a fast and reliable way. Using the chip as a retroreflector, we have realized the first atom-chip-based gravimeter. All atom-optical operations including detection take place inside a volume of a one-centimeter cube. In order to investigate new geometries, we studied symmetric double Bragg diffraction as well as the coherent acceleration of atoms with Bloch oscillations. By combining both techniques we developed a novel relaunch mechanism, which we use to span a fountain geometry within our gravimeter. The relaunch increases the free fall time and thus enhances the device's sensitivity. Additionally, we employ these techniques to implement symmetric scalable large-momentum beam splitters. This work is supported by the CRC 1128 geo-Q and the DLR with funds provided by the Federal Ministry of Economic Affairs and Energy (BMWi) due to an enactment of the German Bundestag under Grant No. DLR 50WM1552-1557 (QUANTUS-IV-Fallturm).

  7. A novel model for simulating the racing effect in capillary-driven underfill process in flip chip

    NASA Astrophysics Data System (ADS)

    Zhu, Wenhui; Wang, Kanglun; Wang, Yan

    2018-04-01

    Underfill is typically applied in flip chips to increase the reliability of the electronic packages. In this paper, the evolution of the melt-front shape of the capillary-driven underfill flow is studied through 3D numerical analysis. Two different models, the prevailing surface force model and the capillary model based on the wetted-wall boundary condition, are introduced to test their applicability, where the level set method is used to track the interface of the two-phase flow. The comparison between the simulation results and experimental data indicates that the surface force model produces better predictions of the melt-front shape, especially in the central area of the flip chip. Nevertheless, the two models above cannot properly simulate the racing effect phenomenon that appears during underfill encapsulation. A novel ‘dynamic pressure boundary condition’ method is proposed based on the validated surface force model. Using this approach, the racing effect phenomenon is simulated with high precision. In addition, a linear relationship is derived from this model between the flow-front location at the edge of the flip chip and the filling time. Using the proposed approach, the impact of the underfill-dispensing length on the melt-front shape is also studied.

  8. A Boundary Scan Test Vehicle for Direct Chip Attach Testing

    NASA Technical Reports Server (NTRS)

    Parsons, Heather A.; DAgostino, Saverio; Arakaki, Genji

    2000-01-01

    To facilitate the new faster, better, and cheaper spacecraft designs, smaller, more mass-efficient avionics and instruments are using higher density electronic packaging technologies such as direct chip attach (DCA). For space flight applications, these technologies need to have demonstrated reliability and reasonably well defined fabrication and assembly processes before they will be accepted as baseline designs in new missions. As electronics shrink in size, not only can repair be more difficult, but "probing" circuitry can be very risky and it becomes increasingly more difficult to identify the specific source of a problem. To test and monitor these new technologies, the Direct Chip Attach Task, under NASA's Electronic Parts and Packaging Program (NEPP), chose the test methodology of boundary scan testing. The boundary scan methodology was developed for interconnect integrity and functional testing at hard-to-access electrical nodes. With boundary scan testing, active devices are used and failures can be identified to the specific device and lead. This technology permits the incorporation of "built-in test" into almost any circuit and thus gives detailed test access to highly integrated electronic assemblies. This presentation will describe boundary scan, discuss the development of the boundary scan test vehicle for DCA, and outline current plans for testing direct chip attach configurations.
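
    A toy model (an illustration under assumed nets and fault types, not NASA's DCA test vehicle) of how boundary scan localizes interconnect failures: known patterns are driven from one device's boundary cells, captured at the other device's cells, and any mismatching bit points to a specific net.

      # Toy boundary-scan interconnect test: drive patterns from chip A's output cells,
      # capture at chip B's input cells, and report the nets whose bits disagree.
      # The net list and fault behaviours are illustrative assumptions.
      NETS = ["DATA0", "DATA1", "DATA2", "CLK", "RESET"]

      def transmit(pattern, faults):
          """Model of the board interconnect; 'faults' maps a net to 'open' or 'short_to_gnd'."""
          captured = []
          for net, bit in zip(NETS, pattern):
              fault = faults.get(net)
              if fault == "open":
                  captured.append(1)            # assume a floating input reads high
              elif fault == "short_to_gnd":
                  captured.append(0)
              else:
                  captured.append(bit)
          return captured

      def run_test(faults):
          patterns = [[1, 0, 1, 0, 1], [0, 1, 0, 1, 0]]   # complementary test vectors
          suspects = set()
          for pattern in patterns:
              for net, sent, got in zip(NETS, pattern, transmit(pattern, faults)):
                  if sent != got:
                      suspects.add(net)         # failure identified to a specific net
          return suspects

      print(run_test({"DATA1": "open", "CLK": "short_to_gnd"}))   # {'DATA1', 'CLK'}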

  9. An integrated cell culture lab on a chip: modular microdevices for cultivation of mammalian cells and delivery into microfluidic microdroplets.

    PubMed

    Hufnagel, Hansjörg; Huebner, Ansgar; Gülch, Carina; Güse, Katharina; Abell, Chris; Hollfelder, Florian

    2009-06-07

    We present a modular system of microfluidic PDMS devices designed to incorporate the steps necessary for cell biological assays based on mammalian tissue culture 'on-chip'. The methods described herein include the on-chip immobilization and culturing of cells as well as their manipulation by transfection. Assessment of cell viability by flow cytometry suggests low attrition rates (<3%) and excellent growth properties in the device for up to 7 days for CHO-K1 cells. To demonstrate that key procedures from the repertoire of cell biology are possible in this format, transfection of a reporter gene (encoding green fluorescent protein) was carried out. The modular design enables efficient detachment and recollection of cells and allows assessment of the success of transfection achieved on-chip. The transfection levels (20%) are comparable to standard large-scale procedures and more than 500 cells could be transfected. Finally, cells are transferred into microfluidic microdroplets, where in principle a wide range of subsequent assays can be carried out at the single-cell level in droplet compartments. The procedures developed for this modular device layout further demonstrate that commonly used methods in cell biology involving mammalian cells can be reliably scaled down to allow single-cell investigations in picolitre volumes.

  10. A versatile electrophoresis-based self-test platform.

    PubMed

    Staal, Steven; Ungerer, Mathijn; Floris, Arjan; Ten Brinke, Hans-Willem; Helmhout, Roy; Tellegen, Marian; Janssen, Kjeld; Karstens, Erik; van Arragon, Charlotte; Lenk, Stefan; Staijen, Erik; Bartholomew, Jody; Krabbe, Hans; Movig, Kris; Dubský, Pavel; van den Berg, Albert; Eijkel, Jan

    2015-03-01

    This paper reports on recent research creating a family of electrophoresis-based point of care devices for the determination of a wide range of ionic analytes in various sample matrices. These devices are based on a first version for the point-of-care measurement of Li(+), reported in 2010 by Floris et al. (Lab Chip 2010, 10, 1799-1806). With respect to this device, significant improvements in accuracy, precision, detection limit, and reliability have been obtained especially by the use of multiple injections of one sample on a single chip and integrated data analysis. Internal and external validation by clinical laboratories for the determination of analytes in real patients by a self-test is reported. For Li(+) in blood better precision than the standard clinical determination for Li(+) was achieved. For Na(+) in human urine the method was found to be within the clinical acceptability limits. In a veterinary application, Ca(2+) and Mg(2+) were determined in bovine blood by means of the same chip, but using a different platform. Finally, promising preliminary results are reported with the Medimate platform for the determination of creatinine in whole blood and quantification of both cations and anions through replicate measurements on the same sample with the same chip. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. Compact high reliability fiber coupled laser diodes for avionics and related applications

    NASA Astrophysics Data System (ADS)

    Daniel, David R.; Richards, Gordon S.; Janssen, Adrian P.; Turley, Stephen E. H.; Stockton, Thomas E.

    1993-04-01

    This paper describes a newly developed compact high reliability fiber coupled laser diode which is capable of providing enhanced performance under extreme environmental conditions including a very wide operating temperature range. Careful choice of package materials to minimize thermal and mechanical stress, used with proven manufacturing methods, has resulted in highly stable coupling of the optical fiber pigtail to a high performance MOCVD-grown Multi-Quantum Well laser chip. Electro-optical characteristics over temperature are described together with a demonstration of device stability over a range of environmental conditions. Real time device lifetime data is also presented.

  12. The remote infrared remote control system based on LPC1114

    NASA Astrophysics Data System (ADS)

    Ren, Yingjie; Guo, Kai; Xu, Xinni; Sun, Dayu; Wang, Li

    2018-05-01

    To address shortcomings such as the short control range of traditional air-conditioner remote controllers currently on the market, and in line with the current 'Cloud + Terminal' smart-home model, an Internet-based smart home system is designed that makes full use of the simple and reliable features of the LPC1114 chip. The controller incorporates a temperature control module, a timing module, and other modules. In actual tests, the system achieved remote control of the air conditioning with good reliability and stability, bringing great convenience to people's lives.

  13. Energy Harvesting Chip and the Chip Based Power Supply Development for a Wireless Sensor Network.

    PubMed

    Lee, Dasheng

    2008-12-02

    In this study, an energy harvesting chip was developed to scavenge energy from artificial light to charge a wireless sensor node. The chip core is a miniature transformer with a nano-ferrofluid magnetic core. The chip-embedded transformer can convert energy harvested by its solar cell into a variable voltage output for driving multiple loads. This chip system yields a simple, small, and, more importantly, battery-less power supply solution. The sensor node is equipped with multiple sensors that can be enabled by the energy harvesting power supply to collect information about the degree of human body comfort. Compared with lab instruments, the nodes with temperature, humidity, and photo sensors driven by harvested energy achieved a measurement precision, expressed as a coefficient of variation, within 6% under low environmental light of 240 lux. The thermal comfort was affected by the air speed. A flow sensor on the sensor node was used to detect airflow speed. Due to its high power consumption, this sensor node was 15% less accurate than the instruments, but it still meets the requirements of predicted mean vote (PMV) analysis. The energy harvesting wireless sensor network (WSN) was deployed in a 24-hour convenience store to detect the thermal comfort degree under air conditioning control. During one year of operation, the sensor network powered by the energy harvesting chip retained normal functions and collected the PMV index of the store. According to one month of communication statistics, the packet loss rate (PLR) is 2.3%, which is as good as reported results for battery-powered WSNs. Referring to the electric power records, almost 54% of the energy can be saved by the feedback control of the energy harvesting sensor network. These results illustrate that scavenging energy not only creates a reliable power source for electronic devices such as wireless sensor nodes, but can also support energy savings through an energy-efficiency program.

  14. Energy Harvesting Chip and the Chip Based Power Supply Development for a Wireless Sensor Network

    PubMed Central

    Lee, Dasheng

    2008-01-01

    In this study, an energy harvesting chip was developed to scavenge energy from artificial light to charge a wireless sensor node. The chip core is a miniature transformer with a nano-ferrofluid magnetic core. The chip-embedded transformer can convert energy harvested by its solar cell into a variable voltage output for driving multiple loads. This chip system yields a simple, small, and, more importantly, battery-less power supply solution. The sensor node is equipped with multiple sensors that can be enabled by the energy harvesting power supply to collect information about the degree of human body comfort. Compared with lab instruments, the nodes with temperature, humidity, and photo sensors driven by harvested energy achieved a measurement precision, expressed as a coefficient of variation, within 6% under low environmental light of 240 lux. The thermal comfort was affected by the air speed. A flow sensor on the sensor node was used to detect airflow speed. Due to its high power consumption, this sensor node was 15% less accurate than the instruments, but it still meets the requirements of predicted mean vote (PMV) analysis. The energy harvesting wireless sensor network (WSN) was deployed in a 24-hour convenience store to detect the thermal comfort degree under air conditioning control. During one year of operation, the sensor network powered by the energy harvesting chip retained normal functions and collected the PMV index of the store. According to one month of communication statistics, the packet loss rate (PLR) is 2.3%, which is as good as reported results for battery-powered WSNs. Referring to the electric power records, almost 54% of the energy can be saved by the feedback control of the energy harvesting sensor network. These results illustrate that scavenging energy not only creates a reliable power source for electronic devices such as wireless sensor nodes, but can also support energy savings through an energy-efficiency program. PMID:27873953

  15. A Reliability Model for Ni-BaTiO3-Based (BME) Ceramic Capacitors

    NASA Technical Reports Server (NTRS)

    Liu, Donhang

    2014-01-01

    The evaluation of multilayer ceramic capacitors (MLCCs) with base-metal electrodes (BMEs) for potential NASA space project applications requires an in-depth understanding of their reliability. The reliability of an MLCC is defined as the ability of the dielectric material to retain its insulating properties under stated environmental and operational conditions for a specified period of time t. In this presentation, a general mathematical expression of a reliability model for a BME MLCC is developed and discussed. The reliability model consists of three parts: (1) a statistical distribution that describes the individual variation of properties in a test group of samples (Weibull, lognormal, normal, etc.); (2) an acceleration function that describes how a capacitor's reliability responds to external stresses such as applied voltage and temperature (all units in the test group should follow the same acceleration function if they share the same failure mode, independent of individual units); and (3) the effect and contribution of the structural and constructional characteristics of a multilayer capacitor device, such as the number of dielectric layers N, dielectric thickness d, average grain size r, and capacitor chip size S. In general, a two-parameter Weibull statistical distribution model is used in the description of a BME capacitor's reliability as a function of time. The acceleration function that relates a capacitor's reliability to external stresses depends on the failure mode. Two failure modes have been identified in BME MLCCs: catastrophic and slow degradation. A catastrophic failure is characterized by a time-accelerating increase in leakage current that is mainly due to existing processing defects (voids, cracks, delamination, etc.), i.e., extrinsic defects. A slow degradation failure is characterized by a near-linear increase in leakage current against the stress time; this is caused by the electromigration of oxygen vacancies (intrinsic defects). The two identified failure modes follow different acceleration functions. Catastrophic failures follow the traditional power-law relationship to the applied voltage. Slow degradation failures fit well to an exponential-law relationship to the applied electric field. Finally, the impact of capacitor structure on the reliability of BME capacitors is discussed with respect to the number of dielectric layers in an MLCC unit, the number of BaTiO3 grains per dielectric layer, and the chip size of the capacitor device.
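
    The sketch below (with illustrative parameter values, not fitted data from this work) combines the two-parameter Weibull distribution with a power-law voltage acceleration of the kind described for the catastrophic failure mode, projecting a characteristic life measured under accelerated stress to use conditions.

      # Two-parameter Weibull reliability with power-law voltage acceleration.
      # The shape parameter, power-law exponent, and test numbers are assumptions.
      import math

      BETA = 1.8          # assumed Weibull shape parameter
      N_EXP = 3.0         # assumed power-law voltage exponent

      def acceleration_factor(v_test, v_use, n=N_EXP):
          # AF = (V_test / V_use)^n for the power-law (catastrophic) failure mode
          return (v_test / v_use) ** n

      def weibull_reliability(t_hours, eta_hours, beta=BETA):
          # R(t) = exp(-(t / eta)^beta)
          return math.exp(-((t_hours / eta_hours) ** beta))

      # Characteristic life of 1000 h measured at twice rated voltage, projected to rated voltage:
      eta_use = 1000.0 * acceleration_factor(v_test=2.0, v_use=1.0)
      print("projected eta at use voltage: %.0f h" % eta_use)                     # 8000 h
      print("R(1000 h) at use voltage: %.3f" % weibull_reliability(1000.0, eta_use))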

  16. Stochastic airspace simulation tool development

    DOT National Transportation Integrated Search

    2009-10-01

    Modeling and simulation is often used to study : the physical world when observation may not be : practical. The overall goal of a recent and ongoing : simulation tool project has been to provide a : documented, lifecycle-managed, multi-processor : c...

  17. ARTS III/Parallel Processor Design Study

    DOT National Transportation Integrated Search

    1975-04-01

    It was the purpose of this design study to investigate the feasibility, suitability, and cost-effectiveness of augmenting the ARTS III failsafe/failsoft multiprocessor system with a form of parallel processor to accomodate a large growth in air traff...

  18. Evict on write, a management strategy for a prefetch unit and/or first level cache in a multiprocessor system with speculative execution

    DOEpatents

    Gara, Alan; Ohmacht, Martin

    2014-09-16

    In a multiprocessor system with at least two levels of cache, a speculative thread may run on a core processor in parallel with other threads. When the thread seeks to do a write to main memory, this access is written through the first-level cache to the second-level cache. After the write-through, the corresponding line is deleted from the first-level cache and/or prefetch unit, so that any further accesses to the same location in main memory have to be retrieved from the second-level cache. The second-level cache keeps track of multiple versions of data where more than one speculative thread is running in parallel, while the first-level cache does not hold any of the versions during speculation. A switch allows choosing between modes of operation of a speculation-blind first-level cache.
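
    A minimal sketch (a simplified illustration under assumptions, not the patented implementation) of the evict-on-write idea: a speculative store is written through to the second-level cache, and the matching first-level line is invalidated so later reads must go to the second level, which tracks the speculative versions.

      # Sketch of "evict on write": during speculation a store is written through to L2
      # and the corresponding L1 line is invalidated, so L1 never holds a speculative
      # version. The cache organisation and version tagging are simplified assumptions.
      class L2Cache:
          def __init__(self):
              self.versions = {}                       # addr -> {version_id: value}
          def write(self, addr, value, version):
              self.versions.setdefault(addr, {})[version] = value
          def read(self, addr, thread_id):
              by_version = self.versions.get(addr, {})
              return by_version.get(thread_id, by_version.get("committed"))

      class L1Cache:
          def __init__(self, l2):
              self.lines = {}                          # addr -> value (non-speculative only)
              self.l2 = l2
          def store(self, addr, value, thread_id, speculative):
              self.l2.write(addr, value, thread_id if speculative else "committed")
              if speculative:
                  self.lines.pop(addr, None)           # evict on write
              else:
                  self.lines[addr] = value
          def load(self, addr, thread_id):
              if addr in self.lines:
                  return self.lines[addr]              # non-speculative L1 hit
              return self.l2.read(addr, thread_id)     # miss: L2 resolves the version

      l2 = L2Cache()
      l1 = L1Cache(l2)
      l1.store(0x100, 7, thread_id="t1", speculative=False)
      l1.store(0x100, 8, thread_id="t1", speculative=True)
      print(l1.load(0x100, "t1"))   # 8: served from L2's speculative version
      print(l1.load(0x100, "t2"))   # 7: another thread still sees the committed value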

  19. Strategies for concurrent processing of complex algorithms in data driven architectures

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.; Mielke, Roland R.

    1988-01-01

    The purpose is to document research to develop strategies for concurrent processing of complex algorithms in data driven architectures. The problem domain consists of decision-free algorithms having large-grained, computationally complex primitive operations. Such algorithms are often found in signal processing and control applications. The anticipated multiprocessor environment is a data flow architecture containing between two and twenty computing elements. Each computing element is a processor with local program memory that communicates with a common global data memory. A new graph-theoretic model called ATAMM, which establishes rules for relating a decomposed algorithm to its execution in a data flow architecture, is presented. The ATAMM model is used to determine strategies to achieve optimum time performance and to develop a system diagnostic software tool. In addition, preliminary work on a new multiprocessor operating system based on the ATAMM specifications is described.
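
    Purely as an illustration of the data-flow execution style such a model targets (not the ATAMM rules themselves), the sketch below fires graph nodes as soon as all of their input tokens are available, using a small pool of workers; the graph and operations are made up for the example.

      # Toy data-flow execution: a node fires when all of its inputs carry tokens.
      # The graph, operations, and two-worker pool are illustrative assumptions.
      from concurrent.futures import ThreadPoolExecutor

      GRAPH = {                      # node -> (inputs, function)
          "a": ((), lambda: 2),
          "b": ((), lambda: 3),
          "c": (("a", "b"), lambda x, y: x + y),
          "d": (("c",), lambda x: x * 10),
      }

      def execute(graph, workers=2):
          tokens, remaining = {}, dict(graph)
          with ThreadPoolExecutor(max_workers=workers) as pool:
              while remaining:
                  ready = [n for n, (ins, _) in remaining.items()
                           if all(i in tokens for i in ins)]
                  futures = {n: pool.submit(remaining[n][1],
                                            *(tokens[i] for i in remaining[n][0]))
                             for n in ready}
                  for n, fut in futures.items():
                      tokens[n] = fut.result()
                      del remaining[n]
          return tokens

      print(execute(GRAPH))   # {'a': 2, 'b': 3, 'c': 5, 'd': 50}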

  20. A robot arm simulation with a shared memory multiprocessor machine

    NASA Technical Reports Server (NTRS)

    Kim, Sung-Soo; Chuang, Li-Ping

    1989-01-01

    A parallel processing scheme for a single chain robot arm is presented for high speed computation on a shared memory multiprocessor. A recursive formulation that is derived from a virtual work form of the d'Alembert equations of motion is utilized for robot arm dynamics. A joint drive system that consists of a motor rotor and gears is included in the arm dynamics model, in order to take into account gyroscopic effects due to the spinning of the rotor. The fine grain parallelism of mechanical and control subsystem models is exploited, based on independent computation associated with bodies, joint drive systems, and controllers. Efficiency and effectiveness of the parallel scheme are demonstrated through simulations of a telerobotic manipulator arm. Two different mechanical subsystem models, i.e., with and without gyroscopic effects, are compared, to show the trade-off between efficiency and accuracy.

  1. Strategies for concurrent processing of complex algorithms in data driven architectures

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.; Mielke, Roland R.

    1987-01-01

    The results of ongoing research directed at developing a graph theoretical model for describing data and control flow associated with the execution of large-grained algorithms in a spatially distributed computer environment are presented. This model is identified by the acronym ATAMM (Algorithm/Architecture Mapping Model). The purpose of such a model is to provide a basis for establishing rules for relating an algorithm to its execution in a multiprocessor environment. Specifications derived from the model lead directly to the description of a data flow architecture which is a consequence of the inherent behavior of the data and control flow described by the model. The purpose of the ATAMM-based architecture is to optimize computational concurrency in the multiprocessor environment and to provide an analytical basis for performance evaluation. The ATAMM model and architecture specifications are demonstrated on a prototype system for concept validation.

  2. Partitioning strategy for efficient nonlinear finite element dynamic analysis on multiprocessor computers

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Peters, Jeanne M.

    1989-01-01

    A computational procedure is presented for the nonlinear dynamic analysis of unsymmetric structures on vector multiprocessor systems. The procedure is based on a novel hierarchical partitioning strategy in which the response of the unsymmetric structure is approximated by a combination of symmetric and antisymmetric response vectors (modes), each obtained by using only a fraction of the degrees of freedom of the original finite element model. The three key elements of the procedure, which result in a high degree of concurrency throughout the solution process, are: (1) a mixed (or primitive variable) formulation with independent shape functions for the different fields; (2) operator splitting or restructuring of the discrete equations at each time step to delineate the symmetric and antisymmetric vectors constituting the response; and (3) a two-level iterative process for generating the response of the structure. An assessment is made of the effectiveness of the procedure on CRAY X-MP/4 computers.

  3. System architecture for asynchronous multi-processor robotic control system

    NASA Technical Reports Server (NTRS)

    Steele, Robert D.; Long, Mark; Backes, Paul

    1993-01-01

    The architecture for the Modular Telerobot Task Execution System (MOTES) as implemented in the Supervisory Telerobotics (STELER) Laboratory is described. MOTES is the software component of the remote site of a local-remote telerobotic system which is being developed for NASA for space applications, in particular Space Station Freedom applications. The system is being developed to provide control and supervised autonomous control to support both space based operation and ground-remote control with time delay. The local-remote architecture places task planning responsibilities at the local site and task execution responsibilities at the remote site. This separation allows the remote site to be designed to optimize task execution capability within a limited computational environment such as is expected in flight systems. The local site task planning system could be placed on the ground where few computational limitations are expected. MOTES is written in the Ada programming language for a multiprocessor environment.

  4. An Evaluation of Architectural Platforms for Parallel Navier-Stokes Computations

    NASA Technical Reports Server (NTRS)

    Jayasimha, D. N.; Hayder, M. E.; Pillay, S. K.

    1996-01-01

    We study the computational, communication, and scalability characteristics of a computational fluid dynamics application, which solves the time accurate flow field of a jet using the compressible Navier-Stokes equations, on a variety of parallel architecture platforms. The platforms chosen for this study are a cluster of workstations (the LACE experimental testbed at NASA Lewis), a shared memory multiprocessor (the Cray YMP), and distributed memory multiprocessors with different topologies - the IBM SP and the Cray T3D. We investigate the impact of various networks connecting the cluster of workstations on the performance of the application and the overheads induced by popular message passing libraries used for parallelization. The work also highlights the importance of matching the memory bandwidth to the processor speed for good single processor performance. By studying the performance of an application on a variety of architectures, we are able to point out the strengths and weaknesses of each of the example computing platforms.

  5. Parallelizing Navier-Stokes Computations on a Variety of Architectural Platforms

    NASA Technical Reports Server (NTRS)

    Jayasimha, D. N.; Hayder, M. E.; Pillay, S. K.

    1997-01-01

    We study the computational, communication, and scalability characteristics of a Computational Fluid Dynamics application, which solves the time accurate flow field of a jet using the compressible Navier-Stokes equations, on a variety of parallel architectural platforms. The platforms chosen for this study are a cluster of workstations (the LACE experimental testbed at NASA Lewis), a shared memory multiprocessor (the Cray YMP), distributed memory multiprocessors with different topologies-the IBM SP and the Cray T3D. We investigate the impact of various networks, connecting the cluster of workstations, on the performance of the application and the overheads induced by popular message passing libraries used for parallelization. The work also highlights the importance of matching the memory bandwidth to the processor speed for good single processor performance. By studying the performance of an application on a variety of architectures, we are able to point out the strengths and weaknesses of each of the example computing platforms.

  6. A lab-on-chip for malaria diagnosis and surveillance

    PubMed Central

    2014-01-01

    Background: Access to timely and accurate diagnostic tests has a significant impact in the management of diseases of global concern such as malaria. While molecular diagnostics satisfy this need effectively in developed countries, barriers in technology, reagent storage, cost and expertise have hampered the introduction of these methods in developing countries. In this study a simple, lab-on-chip PCR diagnostic was created for malaria that overcomes these challenges. Methods: The platform consists of a disposable plastic chip and a low-cost, portable, real-time PCR machine. The chip contains a desiccated hydrogel with reagents needed for Plasmodium specific PCR. Chips can be stored at room temperature and used on demand by rehydrating the gel with unprocessed blood, avoiding the need for sample preparation. These chips were run on a custom-built instrument containing a Peltier element for thermal cycling and a laser/camera setup for amplicon detection. Results: This diagnostic was capable of detecting all Plasmodium species with a limit of detection for Plasmodium falciparum of 2 parasites/μL of blood. This exceeds the sensitivity of microscopy, the current standard for diagnosis in the field, by ten to fifty-fold. In a blind panel of 188 patient samples from a hyper-endemic region of malaria transmission in Uganda, the diagnostic had high sensitivity (97.4%) and specificity (93.8%) versus conventional real-time PCR. The test also distinguished the two most prevalent malaria species in mixed infections, P. falciparum and Plasmodium vivax. A second blind panel of 38 patient samples was tested on a streamlined instrument with LED-based excitation, achieving a sensitivity of 96.7% and a specificity of 100%. Conclusions: These results describe the development of a lab-on-chip PCR diagnostic from initial concept to ready-for-manufacture design. This platform will be useful in front-line malaria diagnosis, elimination programmes, and clinical trials. Furthermore, test chips can be adapted to detect other pathogens for a differential diagnosis in the field. The flexibility, reliability, and robustness of this technology hold much promise for its use as a novel molecular diagnostic platform in developing countries. PMID:24885206
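
    For readers less familiar with the quoted metrics, sensitivity and specificity follow directly from confusion-matrix counts; the counts below are illustrative values chosen only to show the arithmetic, not the study's raw data.

      # Sensitivity and specificity from confusion-matrix counts (illustrative numbers).
      def sensitivity(tp, fn):
          return tp / (tp + fn)       # fraction of true positives that are detected

      def specificity(tn, fp):
          return tn / (tn + fp)       # fraction of true negatives correctly called negative

      print(round(sensitivity(tp=75, fn=2), 3))    # 0.974
      print(round(specificity(tn=105, fp=7), 3))   # 0.938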

  7. A Measurement and Simulation Based Methodology for Cache Performance Modeling and Tuning

    NASA Technical Reports Server (NTRS)

    Waheed, Abdul; Yan, Jerry; Saini, Subhash (Technical Monitor)

    1998-01-01

    We present a cache performance modeling methodology that facilitates the tuning of uniprocessor cache performance for applications executing on shared memory multiprocessors by accurately predicting the effects of source code level modifications. Measurements on a single processor are initially used for identifying parts of code where cache utilization improvements may significantly impact the overall performance. Cache simulation based on trace-driven techniques can be carried out without gathering detailed address traces. Minimal runtime information for modeling the cache performance of a selected code block includes: base virtual addresses of arrays, virtual addresses of variables, and loop bounds for that code block. The rest of the information is obtained from the source code. We show that the cache performance predictions are as reliable as those obtained through trace-driven simulations. This technique is particularly helpful for the exploration of various "what-if" scenarios regarding the cache performance impact of alternative code structures. We explain and validate this methodology using a simple matrix-matrix multiplication program. We then apply this methodology to predict and tune the cache performance of two realistic scientific applications taken from the Computational Fluid Dynamics (CFD) domain.
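
    A compact sketch in the spirit of the methodology (a direct-mapped cache model driven by the addresses a small code block touches); the cache geometry, array base addresses, and the naive matrix-multiply loop are illustrative assumptions, not the paper's tool.

      # Trace-driven cache estimate for a small code block (naive matrix multiply).
      # Direct-mapped cache with assumed geometry; addresses are synthesised from
      # array base addresses and loop bounds, as the methodology above requires.
      LINE = 64                      # bytes per cache line (assumed)
      SETS = 512                     # sets in an assumed 32 KB direct-mapped cache
      ELEM = 8                       # bytes per double-precision element

      def simulate(n, base_a=0x10000, base_b=0x80000, base_c=0xF0000):
          tags = [None] * SETS
          hits = misses = 0
          def access(addr):
              nonlocal hits, misses
              line = addr // LINE
              s, tag = line % SETS, line // SETS
              if tags[s] == tag:
                  hits += 1
              else:
                  tags[s] = tag
                  misses += 1
          for i in range(n):
              for j in range(n):
                  for k in range(n):
                      access(base_a + (i * n + k) * ELEM)   # A[i][k]
                      access(base_b + (k * n + j) * ELEM)   # B[k][j]
                      access(base_c + (i * n + j) * ELEM)   # C[i][j]
          return hits, misses

      h, m = simulate(64)
      print("hit ratio: %.3f" % (h / (h + m)))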

  8. Design of a fault tolerant airborne digital computer. Volume 1: Architecture

    NASA Technical Reports Server (NTRS)

    Wensley, J. H.; Levitt, K. N.; Green, M. W.; Goldberg, J.; Neumann, P. G.

    1973-01-01

    This volume is concerned with the architecture of a fault tolerant digital computer for an advanced commercial aircraft. All of the computations of the aircraft, including those presently carried out by analogue techniques, are to be carried out in this digital computer. Among the important qualities of the computer are the following: (1) The capacity is to be matched to the aircraft environment. (2) The reliability is to be selectively matched to the criticality and deadline requirements of each of the computations. (3) The system is to be readily expandable and contractible. (4) The design is to be appropriate to post-1975 technology. Three candidate architectures are discussed and assessed in terms of the above qualities. Of the three candidates, a newly conceived architecture, Software Implemented Fault Tolerance (SIFT), provides the best match to the above qualities. In addition, SIFT is particularly simple and believable. The other candidates, the Bus Checker System (BUCS), also newly conceived in this project, and the Hopkins multiprocessor, are potentially more efficient than SIFT in the use of redundancy, but otherwise are not as attractive.

  9. Characterizing parallel file-access patterns on a large-scale multiprocessor

    NASA Technical Reports Server (NTRS)

    Purakayastha, Apratim; Ellis, Carla Schlatter; Kotz, David; Nieuwejaar, Nils; Best, Michael

    1994-01-01

    Rapid increases in the computational speeds of multiprocessors have not been matched by corresponding performance enhancements in the I/O subsystem. To satisfy the large and growing I/O requirements of some parallel scientific applications, we need parallel file systems that can provide high-bandwidth and high-volume data transfer between the I/O subsystem and thousands of processors. Design of such high-performance parallel file systems depends on a thorough grasp of the expected workload. So far there have been no comprehensive usage studies of multiprocessor file systems. Our CHARISMA project intends to fill this void. The first results from our study involve an iPSC/860 at NASA Ames. This paper presents results from a different platform, the CM-5 at the National Center for Supercomputing Applications. The CHARISMA studies are unique because we collect information about every individual read and write request and about the entire mix of applications running on the machines. The results of our trace analysis lead to recommendations for parallel file system design. First, the file system should support efficient concurrent access to many files and I/O requests from many jobs under varying load conditions. Second, it must efficiently manage large files kept open for long periods. Third, it should expect to see small requests, predominantly sequential access patterns, application-wide synchronous access, no concurrent file-sharing between jobs, appreciable byte and block sharing between processes within jobs, and strong interprocess locality. Finally, the trace data suggest that node-level write caches and collective I/O request interfaces may be useful in certain environments.

  10. The Automated Instrumentation and Monitoring System (AIMS) reference manual

    NASA Technical Reports Server (NTRS)

    Yan, Jerry; Hontalas, Philip; Listgarten, Sherry

    1993-01-01

    Whether a researcher is designing the 'next parallel programming paradigm,' another 'scalable multiprocessor' or investigating resource allocation algorithms for multiprocessors, a facility that enables parallel program execution to be captured and displayed is invaluable. Careful analysis of execution traces can help computer designers and software architects to uncover system behavior and to take advantage of specific application characteristics and hardware features. A software tool kit that facilitates performance evaluation of parallel applications on multiprocessors is described. The Automated Instrumentation and Monitoring System (AIMS) has four major software components: a source code instrumentor, which automatically inserts active event recorders into the program's source code before compilation; a run-time performance-monitoring library, which collects performance data; a trace file animation and analysis tool kit, which reconstructs program execution from the trace file; and a trace post-processor, which compensates for data collection overhead. Besides being used as a prototype for developing new techniques for instrumenting, monitoring, and visualizing parallel program execution, AIMS is also being incorporated into the run-time environments of various hardware test beds to evaluate their impact on user productivity. Currently, AIMS instrumentors accept FORTRAN and C parallel programs written for Intel's NX operating system on the iPSC family of multicomputers. A run-time performance-monitoring library for the iPSC/860 is included in this release. We plan to release monitors for other platforms (such as PVM and TMC's CM-5) in the near future. Performance data collected can be graphically displayed on workstations (e.g., Sun SPARC and SGI) supporting X Windows (in particular, X11R5 and Motif 1.1.3).

  11. The Automated Instrumentation and Monitoring System (AIMS): Design and Architecture. 3.2

    NASA Technical Reports Server (NTRS)

    Yan, Jerry C.; Schmidt, Melisa; Schulbach, Cathy; Bailey, David (Technical Monitor)

    1997-01-01

    Whether a researcher is designing the 'next parallel programming paradigm', another 'scalable multiprocessor' or investigating resource allocation algorithms for multiprocessors, a facility that enables parallel program execution to be captured and displayed is invaluable. Careful analysis of such information can help computer and software architects to capture, and therefore exploit, behavioral variations among and within various parallel programs to take advantage of specific hardware characteristics. A software tool-set that facilitates performance evaluation of parallel applications on multiprocessors has been put together at NASA Ames Research Center under the sponsorship of NASA's High Performance Computing and Communications Program over the past five years. The Automated Instrumentation and Monitoring System (AIMS) has three major software components: a source code instrumentor, which automatically inserts active event recorders into program source code before compilation; a run-time performance monitoring library, which collects performance data; and a visualization tool-set, which reconstructs program execution based on the data collected. Besides being used as a prototype for developing new techniques for instrumenting, monitoring and presenting parallel program execution, AIMS is also being incorporated into the run-time environments of various hardware testbeds to evaluate their impact on user productivity. Currently, the execution of FORTRAN and C programs on the Intel Paragon and PALM workstations can be automatically instrumented and monitored. Performance data thus collected can be displayed graphically on various workstations. The process of performance tuning with AIMS will be illustrated using various NAS Parallel Benchmarks. This report includes a description of the internal architecture of AIMS and a listing of the source code.

  12. Evaluation of Commercial Automotive-Grade BME Capacitors

    NASA Technical Reports Server (NTRS)

    Liu, Donhang

    2014-01-01

    Three Ni-BaTiO3 ceramic capacitor lots with the same specification (chip size, capacitance, and rated voltage) and the same reliability level, made by three different manufacturers, were degraded using highly accelerated life stress testing (HALST) with the same temperature and applied voltage conditions. The reliability, as characterized by mean time to failure (MTTF), differed by more than one order of magnitude among the capacitor lots. A theoretical model based on the existence of depletion layers at grain boundaries and the entrapment of oxygen vacancies has been proposed to explain the MTTF difference among these BME capacitors. It is the conclusion of this model that reliability will not be improved simply by increasing the insulation resistance of a BME capacitor. Indeed, Ni-BaTiO3 ceramic capacitors with a smaller degradation rate constant K will always give rise to a longer reliability life.

  13. Evaluation of Commercial Automotive-Grade BME Capacitors

    NASA Technical Reports Server (NTRS)

    Liu, Donhang

    2014-01-01

    Three Ni-BaTiO3 ceramic capacitor lots with the same specification (chip size, capacitance, and rated voltage) and the same reliability level, made by three different manufacturers, were degraded using highly accelerated life stress testing (HALST) with the same temperature and applied voltage conditions. The reliability, as characterized by mean time to failure (MTTF), differed by more than one order of magnitude among the capacitor lots. A theoretical model based on the existence of depletion layers at grain boundaries and the entrapment of oxygen vacancies has been proposed to explain the MTTF difference among these BME capacitors. It is the conclusion of this model that reliability will not be improved simply by increasing the insulation resistance of a BME capacitor. Indeed, Ni-BaTiO3 ceramic capacitors with a smaller degradation rate constant K will always give rise to a longer reliability life

  14. [Design of blood-pressure parameter auto-acquisition circuit].

    PubMed

    Chen, Y P; Zhang, D L; Bai, H W; Zhang, D A

    2000-02-01

    This paper presents the design and realization of a blood-pressure parameter auto-acquisition circuit. The auto-acquisition of blood-pressure parameters, controlled by an 89C2051 single-chip microcomputer, is accomplished by collecting and processing the LCD driving signal. The circuit, which has been successfully applied in the home unit of a telemedicine system, is simple and reliable.

  15. Visual and x-ray inspection characteristics of eutectic and lead free assemblies

    NASA Technical Reports Server (NTRS)

    Ghaffarian, R.

    2003-01-01

    For high reliability applications, visual inspection has been the key technique for most conventional electronic package assemblies. Now, the use of x-ray techniques has become an additional inspection requirement for quality control and for detecting defects unique to the manufacturing of advanced electronic array packages such as ball grid arrays (BGAs) and chip scale packages (CSPs).

  16. Impedance feedback control of microfluidic valves for reliable post processing combinatorial droplet injection.

    PubMed

    Axt, Brant; Hsieh, Yi-Fan; Nalayanda, Divya; Wang, Tza-Huei

    2017-09-01

    Droplet microfluidics has found use in many biological assay applications as a means of high-throughput sample processing. One of the challenges of the technology, however, is the ability to control and merge droplets on demand as they flow through the microdevices. For the development of lab-on-chip devices, it is desirable to be able to combinatorially program additive mixing steps for more complex multistep and multiplex assays. Existing technologies to merge droplets are either passive in nature or require highly predictable droplet movement for feedforward control, making them vulnerable to errors during high-throughput operation. In this paper, we describe and demonstrate a microfluidic valve-based device for the purpose of combinatorial droplet injection at any stage in a multistep assay. Microfluidic valves are used to robustly control fluid flow, droplet generation, and droplet mixing in the device on demand, while on-chip impedance measurements taken in real time are used as feedback to accurately time the droplet injections. The presented system is contrasted with attempts without feedback and is shown to be 100% reliable over long durations. Additionally, content detection and discretionary injections are explored and successfully executed.
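
    A schematic version of such a feedback loop (with hypothetical sensor, valve, and threshold values standing in for the chip's hardware): the valve is actuated only when the real-time impedance reading indicates that a droplet is actually present at the junction.

      # Schematic feedback loop: fire the injection valve only when the measured
      # impedance indicates a droplet at the junction. The sensor read-out, valve
      # call, and threshold are hypothetical stand-ins, not the authors' firmware.
      import random
      import time

      DROPLET_THRESHOLD_OHM = 50_000     # assumed: impedance drops when a droplet arrives

      def read_impedance():
          # stand-in for the on-chip real-time impedance measurement
          return random.choice([120_000, 115_000, 45_000])

      def fire_injection_valve():
          print("inject")                # stand-in for the pneumatic valve pulse

      def run(cycles=10, period_s=0.01):
          for _ in range(cycles):
              if read_impedance() < DROPLET_THRESHOLD_OHM:
                  fire_injection_valve() # droplet detected: time the injection now
              time.sleep(period_s)

      run()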

  17. Reliability of Sn/Pb and Lead-Free (SnAgCu) Solders of Surface Mounted Miniaturized Passive Components for Extreme Temperature (-185 C to +125 C) Space Missions

    NASA Technical Reports Server (NTRS)

    Ramesham, Rajeshuni

    2011-01-01

    Surface mount electronic package test boards have been assembled using tin/lead (Sn/Pb) and lead-free (Pb-free or SnAgCu or SAC305) solders. The soldered surface mount packages include ball grid arrays (BGA), flat packs, various sizes of passive chip components, etc. They have been optically inspected after assembly and subsequently subjected to extreme temperature thermal cycling to assess their reliability for future deep space, long-term, extreme temperature environmental missions. In this study, the employed temperature range (-185°C to +125°C) covers military specifications (-55°C to +100°C), extreme cold Martian (-120°C to +115°C), asteroid Nereus (-180°C to +25°C) and JUNO (-150°C to +120°C) environments. The boards were inspected at room temperature and at various intervals as a function of extreme temperature thermal cycling and bake duration. Electrical resistance measurements made at room temperature are reported; the tests to date have shown some change in resistance as a function of extreme temperature thermal cycling, and some parts showed an increase in resistance. However, the change in interconnect resistance becomes more noticeable with an increasing number of thermal cycles. Further research work will be carried out to understand the reliability of packages under extreme temperature applications (-185°C to +125°C) via continuously monitoring the daisy chain resistance for BGAs, flat packs, leadless chip packages, etc. This paper will describe the experimental reliability results of miniaturized passive components (01005, 0201, 0402, 0603, 0805, and 1206) assembled using surface mounting processes with tin-lead and lead-free solder alloys under extreme temperature environments.

  18. Reliability of Sn/Pb and lead-free (SnAgCu) solders of surface mounted miniaturized passive components for extreme temperature (-185°C to +125°C) space missions

    NASA Astrophysics Data System (ADS)

    Ramesham, Rajeshuni

    2011-02-01

    Surface mount electronic package test boards have been assembled using tin/lead (Sn/Pb) and lead-free (Pb-free or SnAgCu or SAC305) solders. The soldered surface mount packages include ball grid arrays (BGA), flat packs, various sizes of passive chip components, etc. They have been optically inspected after assembly and subsequently subjected to extreme temperature thermal cycling to assess their reliability for future deep space, long-term, extreme temperature environmental missions. In this study, the employed temperature range (-185°C to +125°C) covers military specifications (-55°C to +100°C), extreme cold Martian (-120°C to +115°C), asteroid Nereus (-180°C to +25°C) and JUNO (-150°C to +120°C) environments. The boards were inspected at room temperature and at various intervals as a function of extreme temperature thermal cycling and bake duration. Electrical resistance measurements made at room temperature are reported and the tests to date have shown some change in resistance as a function of extreme temperature thermal cycling and some showed increase in resistance. However, the change in interconnect resistance becomes more noticeable with increasing number of thermal cycles. Further research work will be carried out to understand the reliability of packages under extreme temperature applications (-185°C to +125°C) via continuously monitoring the daisy chain resistance for BGA, Flat-packs, lead less chip packages, etc. This paper will describe the experimental reliability results of miniaturized passive components (01005, 0201, 0402, 0603, 0805, and 1206) assembled using surface mounting processes with tin-lead and lead-free solder alloys under extreme temperature environments.

  19. Scheduler-Conscious Synchronization.

    DTIC Science & Technology

    1994-12-01

    Technical Report TR 550, sponsored by the Office of Naval Research and ARPA Information Systems, Arlington, VA. Cited reference: Broughton, A New Approach to Exclusive Data Access in Shared Memory Multiprocessors, Technical Report UCRL-97663, Lawrence Livermore National Laboratory.

  20. Highway Traffic Simulations on Multi-Processor Computers

    DOT National Transportation Integrated Search

    1997-01-01

    A computer model has been developed to simulate highway traffic for various degrees of automation with a high degree of fidelity in regard to driver control and vehicle characteristics. The model simulates vehicle maneuvering in a multi-lane highway ...

  1. Rapid, sensitive, and reusable detection of glucose by a robust radiofrequency integrated passive device biosensor chip.

    PubMed

    Kim, Nam-Young; Adhikari, Kishor Kumar; Dhakal, Rajendra; Chuluunbaatar, Zorigt; Wang, Cong; Kim, Eun-Soo

    2015-01-15

    Tremendous demands for sensitive and reliable label-free biosensors have stimulated intensive research into developing miniaturized radiofrequency resonators for a wide range of biomedical applications. Here, we report the development of a robust, reusable radiofrequency resonator based integrated passive device biosensor chip fabricated on a gallium arsenide substrate for the detection of glucose in water-glucose solutions and sera. As a result of the highly concentrated electromagnetic energy between the two divisions of an intertwined spiral inductor coupled with an interdigital capacitor, the proposed glucose biosensor chip exhibits linear detection ranges with high sensitivity at center frequency. This biosensor, which has a sensitivity of up to 199 MHz/mgmL(-1) and a short response time of less than 2 sec, exhibited an ultralow detection limit of 0.033 μM and a reproducibility of 0.61% relative standard deviation. In addition, the quantities derived from the measured S-parameters, such as the propagation constant (γ), impedance (Z), resistance (R), inductance (L), conductance (G) and capacitance (C), enabled the effective multi-dimensional detection of glucose.
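
    One common conversion behind such derived quantities (offered as an assumption about how they may be obtained, not the authors' exact extraction): a measured one-port reflection coefficient S11 maps to an input impedance, whose real and imaginary parts give an equivalent series resistance and inductance at the measurement frequency.

      # Convert a one-port reflection coefficient S11 to input impedance, then to an
      # equivalent series R and L at the measurement frequency. A standard conversion
      # shown for illustration; the S11 value and frequency below are made up.
      import cmath
      import math

      Z0 = 50.0                                  # reference impedance, ohms

      def s11_to_impedance(s11):
          return Z0 * (1 + s11) / (1 - s11)

      def series_rl(s11, freq_hz):
          z = s11_to_impedance(s11)
          return z.real, z.imag / (2 * math.pi * freq_hz)   # R in ohms, L in henries

      s11 = cmath.rect(0.6, math.radians(45))    # illustrative measured reflection
      print(series_rl(s11, freq_hz=1.0e9))       # roughly (62.6 ohms, 13.2 nH)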

  2. An Implantable RFID Sensor Tag toward Continuous Glucose Monitoring.

    PubMed

    Xiao, Zhibin; Tan, Xi; Chen, Xianliang; Chen, Sizheng; Zhang, Zijian; Zhang, Hualei; Wang, Junyu; Huang, Yue; Zhang, Peng; Zheng, Lirong; Min, Hao

    2015-05-01

    This paper presents a wirelessly powered implantable electrochemical sensor tag for continuous blood glucose monitoring. The system is remotely powered by a 13.56-MHz inductive link and utilizes the ISO 15693 radio frequency identification (RFID) standard for communication. It provides reliable and accurate measurement of changing glucose levels. The sensor tag employs a long-term glucose sensor, a winding ferrite antenna, an RFID front-end, a potentiostat, a 10-bit sigma-delta analog-to-digital converter, an on-chip temperature sensor, and a digital baseband for protocol processing and control. A high-frequency external reader is used to power, command, and configure the sensor tag. The only off-chip support circuitry required is a tuned antenna and a glucose microsensor. The integrated chip, fabricated in a SMIC 0.13-μm CMOS process, occupies an area of 1.2 mm × 2 mm and consumes 50 μW. The power sensitivity of the whole system is -4 dBm. The sensor tag achieves a measured glucose range of 0-30 mM with a sensitivity of 0.75 nA/mM.

  3. Long-Term Stability of Mold Compounds and the Influence on Semiconductor Device Reliability

    NASA Astrophysics Data System (ADS)

    Mahler, Joachim; Mengel, Manfred

    2012-07-01

    Lifetimes of semiconductor devices are specified according to the products and their applications to ensure safe operation, for instance as part of an automobile product. The long-term stability of the device is strongly dependent on the chip encapsulation and its adhesion to the chip and substrate. Molded silicon strips that act as a model system for molded chips inside semiconductor devices were investigated. Four commercially available mold compounds were applied on silicon strips and stored over 5 years at room temperature (RT), and changes in the thermomechanical behavior were analyzed. After storage, all molded strips exhibited warpage reduction in the range of 11% to 14% at RT with respect to the initial warpage. The temperatures for the stress-free state also changed during storage and were located between 228°C and 235°C for each mold. Additional stress applied to the stored modules, by temperature cycling as well as high-temperature storage, increased the warpage of the molded silicon samples. For further interpretation of measured results, finite-element method calculations were performed.

  4. Rapid, Sensitive, and Reusable Detection of Glucose by a Robust Radiofrequency Integrated Passive Device Biosensor Chip

    PubMed Central

    Kim, Nam-Young; Adhikari, Kishor Kumar; Dhakal, Rajendra; Chuluunbaatar, Zorigt; Wang, Cong; Kim, Eun-Soo

    2015-01-01

    Tremendous demands for sensitive and reliable label-free biosensors have stimulated intensive research into developing miniaturized radiofrequency resonators for a wide range of biomedical applications. Here, we report the development of a robust, reusable radiofrequency resonator-based integrated passive device biosensor chip fabricated on a gallium arsenide substrate for the detection of glucose in water-glucose solutions and sera. As a result of the highly concentrated electromagnetic energy between the two divisions of an intertwined spiral inductor coupled with an interdigital capacitor, the proposed glucose biosensor chip exhibits linear detection ranges with high sensitivity at the center frequency. This biosensor, which has a sensitivity of up to 199 MHz/(mg/mL) and a short response time of less than 2 s, exhibited an ultralow detection limit of 0.033 μM and a reproducibility of 0.61% relative standard deviation. In addition, the quantities derived from the measured S-parameters, such as the propagation constant (γ), impedance (Z), resistance (R), inductance (L), conductance (G) and capacitance (C), enabled the effective multi-dimensional detection of glucose. PMID:25588958

  5. Improved color metrics in solid-state lighting via utilization of on-chip quantum dots

    NASA Astrophysics Data System (ADS)

    Mangum, Benjamin D.; Landes, Tiemo S.; Theobald, Brian R.; Kurtin, Juanita N.

    2017-02-01

    While Quantum Dots (QDs) have found commercial success in display applications, there are currently no widely available solid state lighting products making use of QD nanotechnology. In order to have real-world success in today's lighting market, QDs must be capable of being placed in on-chip configurations, as remote phosphor configurations are typically much more expensive. Here we demonstrate solid-state lighting devices made with on-chip QDs. These devices show robust reliability under both dry and wet high stress conditions. High color quality lighting metrics can easily be achieved using these narrow, tunable QD downconverters: CRI values of Ra > 90 as well as R9 values > 80 are readily available when combining QDs with green phosphors. Furthermore, we show that QDs afford a 15% increase in overall efficiency compared to traditional phosphor downconverted SSL devices. The fundamental limit of QD linewidth is examined through single particle QD emission studies. Using standard Cd-based QD synthesis, it is found that single particle linewidths of 20 nm FWHM represent a lower limit to the narrowness of QD emission in the near term.

  6. Validation of a Brief Structured Interview: The Children’s Interview for Psychiatric Syndromes (ChIPS)

    PubMed Central

    Young, Matthew E.; Bell, Ziv E.; Fristad, Mary A.

    2016-01-01

    Evidence-based assessment is important in the treatment of childhood psychopathology (Jensen-Doss, 2011). While researchers and clinicians frequently use structured diagnostic interviews to ensure reliability, the most commonly used instrument, the Schedule for Affective Disorders and Schizophrenia for School Aged Children (K-SADS; Kaufman et al., 1997), is too long for most clinical applications. The Children’s Interview for Psychiatric Syndromes (ChIPS/P-ChIPS; Weller, Weller, Rooney, & Fristad, 1999a; 1999b) is a highly structured brief diagnostic interview. The present study compared K-SADS and ChIPS/P-ChIPS diagnoses in an outpatient clinical sample of 50 parent-child pairs aged 7–14. Agreement for most diagnoses was moderate to high, both between instruments and with consensus clinical diagnoses. The ChIPS was significantly briefer to administer than the K-SADS. Interviewer experience level and participant demographics did not appear to affect agreement. Results provide further evidence for the validity of the ChIPS and support its use in clinical and research settings. PMID:27761777

  7. Space Gator: a giant leap for fiber optic sensing

    NASA Astrophysics Data System (ADS)

    Evenblij, R. S.; Leijtens, J. A. P.

    2017-11-01

    Fibre Optic Sensing is a rapidly growing application field for Photonics Integrated Circuits (PIC) technology. PIC technology is regarded as enabling for the required performance and miniaturization of next-generation fibre optic sensing instrumentation. So far, a number of Application Specific Photonics Integrated Circuits (ASPIC) based interrogator systems have been realized as operational system-on-chip devices. These circuits have shown that all basic building blocks work and that complete interrogator-on-chip solutions can be produced. Within the Saristu (FP7) project, several high-reliability solutions for fibre optic sensing in Aeronautics are being developed, combining the specifically required performance aspects for the different sensing applications: damage detection, impact detection, load monitoring and shape sensing (including redundancy aspects and time division features). Further developments based on these devices, taking into account specific space requirements (like radiation aspects), will lead to the Space Gator, a radiation-tolerant, highly integrated Fibre Bragg Grating (FBG) interrogator on chip. Once developed and qualified, the Space Gator will be a giant leap for fibre optic sensing in future space applications.

  8. Intelligent microchip networks: an agent-on-chip synthesis framework for the design of smart and robust sensor networks

    NASA Astrophysics Data System (ADS)

    Bosse, Stefan

    2013-05-01

    Sensorial materials consisting of high-density, miniaturized, and embedded sensor networks require new robust and reliable data processing and communication approaches. Structural health monitoring is one major field of application for sensorial materials. Each sensor node provides some kind of sensor, electronics, data processing, and communication, with a strong focus on microchip-level implementation to meet the goals of miniaturization and low-power energy environments, a prerequisite for autonomous behaviour and operation. Reliability requires robustness of the entire system in the presence of node, link, data processing, and communication failures. Interaction between nodes is required to manage and distribute information. One common interaction model is the mobile agent. An agent approach provides stronger autonomy than a traditional object or remote-procedure-call based approach. Agents can decide for themselves which actions are performed, and they are capable of flexible behaviour, reacting to the environment and to other agents, which provides some degree of robustness. Traditionally, multi-agent systems are abstract programming models that are implemented in software and executed on program-controlled computer architectures. This approach does not scale well to the microchip level: it requires fully equipped computers and communication structures, and the hardware architecture does not reflect the requirements for agent processing and interaction. We propose and demonstrate a novel design paradigm for reliable distributed data processing systems and a synthesis methodology and framework for multi-agent systems implementable entirely at the microchip level with resource- and power-constrained digital logic, supporting Agent-on-Chip (AoC) architectures. The agent behaviour and mobility are fully integrated on the microchip using pipelined communicating processes implemented with finite-state machines and register-transfer logic. The agent behaviour, interaction (communication), and mobility features are modelled and specified at a machine-independent abstract programming level using a state-based agent behaviour language (APL). With this APL, a high-level agent compiler is able to synthesize a hardware model (RTL, VHDL), a software model (C, ML), or a simulation model (XML) suitable for simulating a multi-agent system using the SeSAm simulator framework. Agent communication is provided by a simple tuple-space database implemented at node level, providing fault-tolerant access to global data. A novel synthesis development kit (SynDK) based on a graph-structured database approach is introduced to support the rapid development of compilers and synthesis tools, used for example for the design and implementation of the APL compiler.
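
    A minimal software model of the node-level tuple-space interface described above, with agents publishing ("out") and reading ("rd") small key/value tuples. The actual system is synthesized to register-transfer logic; this host-language sketch and all names in it are illustrative assumptions only.

      /* Minimal software model (not the synthesized RTL) of a node-level
       * tuple space: agents exchange global data as small key/value tuples. */
      #include <stdio.h>
      #include <string.h>

      #define MAX_TUPLES 16

      typedef struct { char key[8]; int value; int used; } tuple_t;

      static tuple_t space[MAX_TUPLES];

      /* out(): store a tuple, overwriting an existing entry with the same key. */
      static int ts_out(const char *key, int value)
      {
          int free_slot = -1;
          for (int i = 0; i < MAX_TUPLES; i++) {
              if (space[i].used && strcmp(space[i].key, key) == 0) {
                  space[i].value = value;
                  return 0;
              }
              if (!space[i].used && free_slot < 0)
                  free_slot = i;
          }
          if (free_slot < 0)
              return -1;                              /* tuple space full */
          strncpy(space[free_slot].key, key, sizeof space[free_slot].key - 1);
          space[free_slot].value = value;
          space[free_slot].used = 1;
          return 0;
      }

      /* rd(): non-destructive lookup; returns 0 on success. */
      static int ts_rd(const char *key, int *value)
      {
          for (int i = 0; i < MAX_TUPLES; i++)
              if (space[i].used && strcmp(space[i].key, key) == 0) {
                  *value = space[i].value;
                  return 0;
              }
          return -1;
      }

      int main(void)
      {
          int strain;
          ts_out("strain", 42);                       /* e.g. a sensor reading */
          if (ts_rd("strain", &strain) == 0)
              printf("strain = %d\n", strain);
          return 0;
      }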

  9. Silver flip chip interconnect technology and solid state bonding

    NASA Astrophysics Data System (ADS)

    Sha, Chu-Hsuan

    In this dissertation, fluxless transient liquid phase (TLP) bonding and solid state bonding between materials with mismatched thermal expansion have been developed using Ag-In binary systems, pure Au, pure Ag, and Cu-Ag composites. In contrast to the conventional soldering process, the fluxless bonding technique eliminates any corrosion and contamination problems caused by flux. Without flux, it is possible to fabricate high quality joints over large bonding areas where the flux is difficult to clean entirely. High quality joints are crucial when bonding thermally mismatched materials since shear stress develops in the bonded pair. Stress concentration at voids in joints could increase the breakage probability. In addition, intermetallic compound (IMC) formation between solder and underbump metallurgy (UBM) is essential for interconnect joint formation in the conventional soldering process. However, the interface between IMC and solder is shown to be the weak interface that tends to break first during thermal cycling and drop tests. In our solid state bonding technique, no IMC is involved in the bonds between Au and Au, Ag and Cu, or Ag and Au, so reliability issues related to IMCs or IMC growth are not a concern. To sum up, ductile bonding media, such as Ag or Au, and proper metallic layered structures are utilized in this research to produce high quality joints. The research starts with developing a low temperature fluxless bonding process using electroplated Ag/In/Ag multilayer structures between a Si chip and a 304 stainless steel (304SS) substrate. Because the thin outer Ag layer effectively protects the inner In layer from oxidation, the In layer dissolves the Ag layer and joins to the Ag layer on the to-be-bonded Si chip when the temperature reaches the reflow temperature of 166°C. The joints consist mainly of Ag-rich Ag-In solid solution and Ag2In. Using this fluxless bonding technique, two 304SS substrates can be bonded together as well. In high-magnification SEM images of the cross-section, no voids or gaps are observed. The new bonding technique presented should be valuable in packaging high power electronic devices for high temperature operation. It should also be useful for bonding two 304SS parts together at a low bonding temperature of 190°C. A solid state bonding technique is then introduced to bond semiconductor chips, such as Si, to common substrates, such as Cu or alumina, using pure Ag and Au at a temperature matching the typical reflow temperature used in the packaging industry, 260°C. In this work, we demonstrate the possibility of solid state bonding of Au to Au, Au to Ag, and Ag to Cu. The idea stems from the fact that Cu, Ag, and Au are located in the same column of the periodic table and therefore have similar electronic configurations, giving them a better chance of sharing electrons. In addition, Cu, Ag, and Au share the same crystal lattice, face-centered cubic. The detailed bonding mechanism is beyond the scope of this project; the bonding is established by the experimental results. Ag is chosen as the joint material because of its superior physical properties: it has the highest electrical and thermal conductivities among all metals, has low yield strength, and is relatively ductile. Au is considered as well because of its excellent ductility and fatigue resistance. Thus, the Ag or Au joints can deform to accommodate the shear strain caused by the CTE mismatch between Si and Cu.
Ag and Au have melting temperatures higher than 950°C, so pure Ag or Au joints are expected to withstand high operating temperatures. The resulting joints do not contain any intermetallic compound, so all reliability issues associated with intermetallic growth in commonly used solder joints no longer exist. We finally move to the application of solid state Ag bonding in flip chip interconnect design. At present, nearly all large-scale integrated circuit (IC) chips are packaged with flip-chip technology. This means that the chip is flipped over and the active (front) side is connected to the package using a large number of tiny solder joints, which provide mechanical support, electrical connection, and heat conduction. For chip-to-package level interconnects, a challenge is the severe mismatch in coefficient of thermal expansion (CTE) between chips and package substrates. The interconnect material thus needs to be compliant to deal with the CTE mismatch. At present, nearly all flip-chip interconnects in the electronics industry are made of lead-free Sn-based solders. Soft solders are chosen due to their high ductility, low yield strength, relatively low melting temperature, and reasonably good electrical and thermal conductivities. With the never-ending scaling-down trend, more and more transistors are placed on a Si chip of the same size. This results in larger pin-out numbers and smaller solder joints. According to the International Technology Roadmap for Semiconductors (ITRS), by 2018 the pitch of flip-chip interconnects will become smaller than 70 μm for high performance applications. Two problems occur. The first is an increase in shear strain. The aspect ratio of flip-chip joints is constrained to 0.7 because the joint goes through a molten phase in the reflow process. Therefore, smaller joints become shorter as well, resulting in larger shear strain arising from the CTE mismatch between Si chips and package substrates. The second is an increase in stress in the joints. Since the intermetallic (IMC) thickness in the joint does not scale down with joint size, the ratio of IMC thickness to joint height increases. This further enlarges the shear stress because the IMC does not deform as the soft solder does to accommodate the CTE mismatch. In this research, the smallest Ag flip chip interconnect joint we achieve is 15 μm in diameter. The ten advantages of Ag flip chip interconnect technology can be identified as: (a) high electrical conductivity, 7.7 times that of Pb-free solders; (b) high thermal conductivity, 5.2 times that of Pb-free solders; (c) completely fluxless processing; (d) no IMCs, so all reliability issues associated with IMCs and IMC growth are eliminated; (e) Ag is very ductile and can manage the CTE mismatch between chips and packages; (f) Ag joints can sustain very high operating temperatures because Ag has a high melting temperature of 961°C; (g) no molten phase is involved, so the bump better keeps its shape and geometry; (h) no molten phase is involved, so bridging of adjacent bumps is less likely to occur; (i) the aspect ratio of the bumps can be made greater than 1; and (j) the size of the bumps is limited only by the lithographic process. Cu-Ag composite flip chip interconnect joints are developed for three reasons. The first is lower material cost. The second is to strengthen the columns, because the yield strength of Cu is 6 times that of Ag. The third is to avoid possible Ag migration between Ag electrodes under voltage at temperatures above 250°C.
This Cu-Ag composite design presents a solution for continuing along the scale-down roadmap.
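
    The shear-strain argument above can be summarized by the standard first-order relation for the outermost joint (a textbook estimate, not a formula quoted from the dissertation):

      % \Delta\alpha : CTE mismatch between chip and substrate
      % \Delta T     : temperature excursion
      % L_{DNP}      : distance from the neutral point to the joint
      % h            : joint (bump) height
      \[
        \gamma \;\approx\; \frac{\Delta\alpha \,\Delta T \, L_{\mathrm{DNP}}}{h}
      \]

    Shrinking the pitch reduces h while L_DNP is fixed by the die size, which is why the strain grows as the joints get shorter, as argued in the abstract.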

  10. Behavior of stress generated in semiconductor chips with high-temperature joints: Influence of mechanical properties of joint materials

    NASA Astrophysics Data System (ADS)

    Ito, H.; Kuwahara, M.; Ohta, R.; Usui, M.

    2018-04-01

    High-temperature joint materials are indispensable to realizing next-generation power modules with high-output performance. However, crack initiation resulting from stress concentration in semiconductor chips joined with high-temperature joint materials remains a critical problem in high-temperature operation. Therefore, clarifying the quantitative influence of joint materials on the stress generated in chips is essential. This study investigates the stress behavior of chips joined by Ni-Sn solid-liquid interdiffusion (SLID), which results in a high-temperature joint material likely to generate cracks after joining or when under thermal cycling. The results are compared with those fabricated using three types of solders, Pb-10%Sn, Sn-0.7%Cu, and Sn-10%Sb (mass %), which are conventional joint materials with different melting points and mechanical properties. Using Ni-Sn SLID results in the generation of high compressive stress (500 MPa) without stress relaxation after the joining process in contrast to the case of solders in which the compressive stresses are low (<300 MPa) and decrease to still lower levels (<250 MPa). In addition, no stress relaxation occurs during thermal cycling when using Ni-Sn SLID, whereas stress relaxation is clearly observed during heating to 200 °C using solders. Different stress behaviors between Ni-Sn SLID and other joint materials are illustrated by their mechanical strength and resistance against plastic and creep deformation. These results suggest that stress relaxation in a chip is key in suppressing crack initiation in highly reliable modules during high-temperature operation.

  11. Chip Scale Package Integrity Assessment by Isothermal Aging

    NASA Technical Reports Server (NTRS)

    Ghaffarian, Reza

    1998-01-01

    Many aspects of chip scale package (CSP) technology, with a focus on assembly reliability characteristics, are being investigated by the JPL-led consortia. Three types of test vehicles were considered for evaluation, and currently two configurations have been built to optimize attachment processes. These test vehicles use numerous package types. To understand potential failure mechanisms of the packages, particularly solder ball attachment, the grid CSPs were subjected to environmental exposure. Package I/Os ranged from 40 to nearly 300. This paper presents shear test results and photomicrographs for CSPs as assembled and after up to 1,000 hours of isothermal aging, as well as tensile test results before and after 1,500 cycles in the range of -30/100 C. Results will be compared to BGAs subjected to the same isothermal aging environmental exposures.

  12. Microfluidics-Based Lab-on-Chip Systems in DNA-Based Biosensing: An Overview

    PubMed Central

    Dutse, Sabo Wada; Yusof, Nor Azah

    2011-01-01

    Microfluidics-based lab-on-chip (LOC) systems are an active research area that is revolutionising high-throughput sequencing for the fast, sensitive and accurate detection of a variety of pathogens. LOCs also serve as portable diagnostic tools. The devices provide optimum control of nanolitre volumes of fluids and integrate various bioassay operations that allow the devices to rapidly sense pathogenic threat agents for environmental monitoring. LOC systems, such as microfluidic biochips, offer advantages compared to conventional identification procedures that are tedious, expensive and time consuming. This paper aims to provide a broad overview of the need for devices that are easy to operate, sensitive, fast, portable and sufficiently reliable to be used as complementary tools for the control of pathogenic agents that damage the environment. PMID:22163925

  13. Fabricating microfluidic valve master molds in SU-8 photoresist

    NASA Astrophysics Data System (ADS)

    Dy, Aaron J.; Cosmanescu, Alin; Sluka, James; Glazier, James A.; Stupack, Dwayne; Amarie, Dragos

    2014-05-01

    Multilayer soft lithography has become a powerful tool in analytical chemistry, biochemistry, material and life sciences, and medical research. Complex fluidic micro-circuits require reliable components that integrate easily into microchips. We introduce two novel approaches to master mold fabrication for constructing in-line micro-valves using SU-8. Our fabrication techniques enable robust and versatile integration of many lab-on-a-chip functions including filters, mixers, pumps, stream focusing and cell-culture chambers, with in-line valves. SU-8 created more robust valve master molds than the conventional positive photoresists used in multilayer soft lithography, but maintained the advantages of biocompatibility and rapid prototyping. As an example, we used valve master molds made of SU-8 to fabricate PDMS chips capable of precisely controlling beads or cells in solution.

  14. Affordable MMICs for Air Force systems

    NASA Astrophysics Data System (ADS)

    Kemerley, Robert T.; Fayette, Daniel F.

    1991-05-01

    The paper deals with a program directed at demonstrating affordable MMIC chips - the microwave/mm-wave monolithic integrated circuit (MIMIC) program. Focus is placed on experiments involving the growth and characterization of III-V materials; the design, fabrication, and evaluation of ICs in the 1 to 60 GHz frequency range; and efforts related to reliability testing, failure analysis, and the generation of qualified manufacturers list procedures for GaAs MMICs and modules. Attributes of GaAs-technology devices - quality, reliability, and performance in select environments - are discussed, including the behavior of these structures over temperature ranges, electrostatic discharge sensitivity, and susceptibility to environmental stresses.

  15. New Surface-Enhanced Raman Sensing Chip Designed for On-Site Detection of Active Ricin in Complex Matrices Based on Specific Depurination.

    PubMed

    Tang, Ji-Jun; Sun, Jie-Fang; Lui, Rui; Zhang, Zong-Mian; Liu, Jing-Fu; Xie, Jian-Wei

    2016-01-27

    Quick and accurate on-site detection of active ricin has great practical significance for national security and defense. In this paper, optimized single-stranded oligodeoxynucleotides named poly(21dA), which function as a depurination substrate of active ricin, were screened and chemically attached to gold nanoparticles (AuNPs, ∼100 nm) via the Au-S bond [poly(21dA)-AuNPs]. Subsequently, poly(21dA)-AuNPs were assembled on a dihydrogen lipoic-acid-modified Si wafer (SH-Si), thus forming the specific surface-enhanced Raman spectroscopy (SERS) chip [poly(21dA)-AuNPs@SH-Si] for depurination of active ricin. Under optimized conditions, active ricin could specifically hydrolyze multiple adenines from poly(21dA) on the chip. This depurination-induced composition change could be conveniently monitored by measuring the distinct attenuation of the SERS signature corresponding to adenine. To improve the sensitivity of this method, a silver nanoshell was deposited on the post-reacted poly(21dA)-AuNPs, which lowered the limit of detection to 8.9 ng/mL. The utility of this well-controlled SERS chip was successfully demonstrated in food and biological matrices spiked with different concentrations of active ricin, showing it to be a very promising assay for reliable and rapid on-site detection of active ricin.

  16. Distillation and detection of SO2 using a microfluidic chip.

    PubMed

    Ju, Wei-Jhong; Fu, Lung-Ming; Yang, Ruey-Jen; Lee, Chia-Lun

    2012-02-07

    A miniaturized distillation system is presented for separating sulfurous acid (H(2)SO(3)) into sulfur dioxide (SO(2)) and water (H(2)O). The major components of the proposed system include a microfluidic distillation chip, a power control module, and a carrier gas pressure control module. The microfluidic chip is patterned using a commercial CO(2) laser and comprises a serpentine channel, a heating zone, a buffer zone, a cooling zone, and a collection tank. In the proposed device, the H(2)SO(3) solution is injected into the microfluidic chip and is separated into SO(2) and H(2)O via an appropriate control of the distillation time and temperature. The gaseous SO(2) is then transported into the collection chamber by the carrier gas and is mixed with DI water. Finally, the SO(2) concentration is deduced from the absorbance measurements obtained using a spectrophotometer. The experimental results show that a correlation coefficient of R(2) = 0.9981 and a distillation efficiency as high as 94.6% are obtained for H(2)SO(3) solutions with SO(2) concentrations in the range of 100-500 ppm. The SO(2) concentrations of two commercial red wines are successfully detected using the developed device. Overall, the results presented in this study show that the proposed system provides a compact and reliable tool for SO(2) concentration measurement purposes.

  17. Manual-slide-engaged paper chip for parallel SERS-immunoassay measurement of clenbuterol from swine hair.

    PubMed

    Zheng, Tingting; Gao, Zhigang; Luo, Yong; Liu, Xianming; Zhao, Weijie; Lin, Bingcheng

    2016-02-01

    Clenbuterol (CL), as a feed additive, has been banned in many countries due to its potential threat to human health. For the detection of CL, a fast, low-cost technique with high accuracy and specificity would be ideal for administrative on-field inspections. Among the attempts to develop a reliable detection tool for CL, a technique that combines surface-enhanced Raman spectroscopy (SERS) and immunoassay comes close to meeting these requirements. However, this method involves multiple interaction steps between the CL analyte, antibody, and antigen, and with a conventional setup the operation of the SERS immunoassay is unwieldy. In this paper, to facilitate more manageable sample manipulation for SERS-immunoassay measurement, a 3D paper chip is proposed. A switch-on-chip multilayered (abbreviated as SoCM-) microfluidic paper-based analysis device (μPad) was fabricated to provide operators with manual switches controlling the interactions between different microfluids. In addition, antigen was anchored in a pattern on a detection slip built into the main body of the SoCM-μPad. With this architecture, the multistep interactions between the CL analyte in swine hair extract, the SERS probe-modified antibody, and the antigen were managed for on-chip SERS-immunoassay detection. This makes the approach very attractive for fast, cheap, accurate, on-site specific detection of CL from real samples. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. Novel First-Level Interconnect Techniques for Flip Chip on MEMS Devices

    PubMed Central

    Sutanto, Jemmy; Anand, Sindhu; Patel, Chetan; Muthuswamy, Jit

    2013-01-01

    Flip-chip packaging is desirable for microelectromechanical systems (MEMS) devices because it reduces the overall package size and allows scaling up the number of MEMS chips through 3-D stacks. In this report, we demonstrate three novel techniques to create first-level interconnect (FLI) on MEMS: 1) Dip and attach technology for Ag epoxy; 2) Dispense technology for solder paste; 3) Dispense, pull, and attach technology (DPAT) for solder paste. The above techniques required no additional microfabrication steps, produced no visible surface contamination on the MEMS active structures, and generated high-aspect-ratio interconnects. The developed FLIs were successfully tested on MEMS moveable microelectrodes microfabricated by the SUMMiT V™ process, producing no apparent detrimental effect due to outgassing. The bumping processes were successfully applied on Al-deposited bond pads of 100 μm × 100 μm with an average bump height of 101.3 μm for Ag and 184.8 μm for solder (63Sn, 37Pb). DPAT for solder paste produced bumps with an aspect ratio of 1.8 or more. The average shear strengths of Ag and solder bumps were 78 MPa and 689 kPa, respectively. The electrical test on Ag bumps at 794 A/cm2 demonstrated reliable electrical interconnects with negligible resistance. These scalable FLI technologies are potentially useful for MEMS flip-chip packaging and 3-D stacking. PMID:24504168

  19. Nanophotonic lab-on-a-chip platforms including novel bimodal interferometers, microfluidics and grating couplers.

    PubMed

    Duval, Daphné; González-Guerrero, Ana Belén; Dante, Stefania; Osmond, Johann; Monge, Rosa; Fernández, Luis J; Zinoviev, Kirill E; Domínguez, Carlos; Lechuga, Laura M

    2012-05-08

    One of the main limitations for achieving truly lab-on-a-chip (LOC) devices for point-of-care diagnosis is the incorporation of the "on-chip" detection. Indeed, most state-of-the-art LOC devices require complex read-out instrumentation, losing the main advantages of portability and simplicity. In this context, we present our latest advances towards the achievement of a portable and label-free LOC platform with highly sensitive "on-chip" detection by using nanophotonic biosensors. Bimodal waveguide interferometers fabricated by standard silicon processes have been integrated with sub-micron grating couplers for efficient light in-coupling, showing a phase resolution of 6.6 × 10^-4 × 2π rad and a limit of detection of 3.3 × 10^-7 refractive index units (RIU) in bulk. A 3D network of SU-8 polymer microfluidics monolithically assembled at the wafer level was included, ensuring perfect sealing and compact packaging. To overcome some of the drawbacks inherent to interferometric read-outs, a novel all-optical wavelength modulation system has been implemented, providing a linear response and a direct read-out of the phase variation. The sensitivity, specificity and reproducibility of the wavelength-modulated BiMW sensor have been demonstrated through the label-free immunodetection of the human hormone hTSH at the picomolar level using a reliable biofunctionalization process.

  20. Capacitor bonding techniques and reliability. [thermal cycling tests

    NASA Technical Reports Server (NTRS)

    Kinser, D. L.; Graff, S. M.; Allen, R. V.; Caruso, S. V.

    1974-01-01

    The effect of thermal cycling on the mechanical failure of bonded ceramic chip capacitors mounted on alumina substrates is studied. It is shown that differential thermal expansion is responsible for the cumulative effects which lead to delayed failure of the capacitors. Harder or higher melting solders are found to be less susceptible to thermal cycling effects, although they are more likely to fail during initial processing operations.

  1. Prospects for Boiling of Subcooled Dielectric Liquids for Supercomputer Cooling

    NASA Astrophysics Data System (ADS)

    Zeigarnik, Yu. A.; Vasil'ev, N. V.; Druzhinin, E. A.; Kalmykov, I. V.; Kosoi, A. S.; Khodakov, K. A.

    2018-02-01

    It is shown experimentally that forced-convection boiling of the dielectric coolant Novec 649, subcooled relative to the saturation temperature, makes it possible to remove heat fluxes of up to 100 W/cm2 from the chip interface of modern supercomputers. This creates prerequisites for the application of dielectric liquids in the cooling systems of modern supercomputers with increased operating reliability requirements.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    McAdams, Brian J.; Pearson, Raymond A.

    With the continuing trend of decreasing feature sizes in flip-chip assemblies, the reliability tolerance to interfacial flaws is also decreasing. Small-scale disbonds will become more of a concern, pointing to the need for a better understanding of the initiation stage of interfacial delamination. With most accepted adhesion metric methodologies tailored to predict failure under the prior existence of a disbond, the study of the initiation phenomenon is open to development and standardization of new testing procedures. Traditional fracture mechanics approaches are not suitable, as the mathematics assume failure to originate at a disbond or crack tip. Disbond initiation is believed to first occur at free edges and corners, which act as high stress concentration sites and exhibit singular stresses similar to a crack tip, though less severe in intensity. As such, a 'fracture mechanics-like' approach may be employed which defines a material parameter--a critical stress intensity factor (K_c)--that can be used to predict when initiation of a disbond at an interface will occur. The factors affecting the adhesion of underfill/polyimide interfaces relevant to flip-chip assemblies were investigated in this study. The study consisted of two distinct parts: a comparison of the initiation and propagation phenomena and a comparison of the relationship between sub-critical and critical initiation of interfacial failure. The initiation of underfill interfacial failure was studied by characterizing failure at a free-edge with a critical stress intensity factor. In comparison with the interfacial fracture toughness testing, it was shown that a good correlation exists between the initiation and propagation of interfacial failures. Such a correlation justifies the continuing use of fracture mechanics to predict the reliability of flip-chip packages. The second aspect of the research involved fatigue testing of tensile butt joint specimens to determine lifetimes at sub-critical load levels. The results display an interfacial strength ranking similar to that observed during monotonic testing. The fatigue results indicate that monotonic fracture mechanics testing may be an adequate screening tool to help predict cyclic underfill failure; however lifetime data is required to predict reliability.

  3. Evaluation of the Cedar memory system: Configuration of 16 by 16

    NASA Technical Reports Server (NTRS)

    Gallivan, K.; Jalby, W.; Wijshoff, H.

    1991-01-01

    Some basic results on the performance of the Cedar multiprocessor system are presented. Empirical results on the 16 processor 16 memory bank system configuration, which show the behavior of the Cedar system under different modes of operation are presented.

  4. The " Swarm of Ants vs. Herd of Elephants" Debated Revisited: Performance Measurements of PVM-Overflow Across a Wide Spectrum of Architectures

    NASA Technical Reports Server (NTRS)

    Yan, Jerry C.; Jespersen, Dennis; Buning, Peter; Bailey, David (Technical Monitor)

    1996-01-01

    The Gordon Bell Prizes given out at Supercomputing every year include at least two categories: performance (highest GFLOP count) and price-performance (GFLOPs per million dollars) for real applications. In the past five years, the winners of the price-performance category all came from networks of workstations. This reflects three important facts: (1) supercomputers are still too expensive for the masses; (2) achieving high performance for real applications takes real work; and, most importantly, (3) it is possible to obtain acceptable performance for certain real applications on networks of workstations. With the continued advance of network technology as well as the increased performance of "desktop" workstations, the "Swarm of Ants vs. Herd of Elephants" debate, which began with vector multiprocessors (VPPs) against SIMD-type multiprocessors (e.g. the CM2), is now recast as VPPs against Symmetric Multiprocessors (SMPs, e.g. the SGI PowerChallenge). This paper reports on performance studies we performed solving a large-scale (2-million-grid-point) CFD problem involving a Boeing 747, based on a parallel version of OVERFLOW that utilizes message passing on PVM. A performance monitoring tool developed under NASA HPCC, called AIMS, was used to instrument and analyze the performance data thus obtained. We plan to compare performance data obtained across a wide spectrum of architectures, including the Cray C90, IBM SP2, and SGI Power Challenge cluster, as well as a group of workstations connected over a simple network. The metrics of comparison include speed-up, price-performance, throughput, and turn-around time. We also plan to present a plan of attack for the various issues that will make the execution of Grand Challenge Applications across the Global Information Infrastructure a reality.

  5. Improved Identification of Membrane Proteins by MALDI-TOF MS/MS Using Vacuum Sublimated Matrix Spots on an Ultraphobic Chip Surface

    PubMed Central

    Poetsch, Ansgar; Schlüsener, Daniela; Florizone, Christine; Eltis, Lindsay; Menzel, Christoph; Rögner, Matthias; Steinert, Kerstin; Roth, Udo

    2008-01-01

    Integral membrane proteins are notoriously difficult to identify and analyze by mass spectrometry because of their low abundance and limited number of trypsin cleavage sites. Our strategy to address this problem is based on a novel technology for MALDI-MS peptide sample preparation that increases the success rate of membrane protein identification by increasing the sensitivity of the MALDI-TOF system. For this, we used sample plates with predeposited matrix spots of CHCA crystals prepared by vacuum sublimation onto an extremely low wettable (ultraphobic) surface. In experiments using standard peptides, an up to 10-fold gain of sensitivity was found for on-chip preparations compared with classical dried-droplet preparations on a steel target. In order to assess the performance of the chips with membrane proteins, three model proteins (bacteriorhodopsin, subunit IV(a) of ATP synthase, and the cp47 subunit from photosystem II) were analyzed. To mimic realistic analysis conditions, purified proteins were separated by SDS-PAGE and digested with trypsin. The digest MALDI samples were prepared either by dried-droplet technique on steel plates using CHCA as matrix, or applied directly onto the matrix spots of the chip surface. Significantly higher signal-to-noise ratios were observed for all of the spectra resulting from on-chip preparations of different peptides. In a second series of experiments, the membrane proteome of Rhodococcus jostii RHA1 was investigated by AIEC/SDS-PAGE in combination with MALDI-TOF MS/MS. As in the first experiments, Coomassie-stained SDS-PAGE bands were digested and the two different preparation methods were compared. For preparations on the Mass·Spec·Turbo Chip, 43 of 60 proteins were identified, whereas only 30 proteins were reliably identified after classical sample preparation. Comparison of the obtained Mascot scores, which reflect the confidence level of the protein identifications, revealed that for 70% of the identified proteins, higher scores were obtained by on-chip sample preparation. Typically, this gain was a consequence of higher sequence coverage due to increased sensitivity. PMID:19137096

  6. Numerical simulation of CTE mismatch and thermal-structural stresses in the design of interconnects

    NASA Astrophysics Data System (ADS)

    Peter, Geoffrey John M.

    With ever-increasing chip complexity, interconnects have to be designed to meet new challenges. Advances in optical lithography have made chip feature sizes of 70 nm available today. With advances in Extreme Ultraviolet Lithography, X-ray Lithography, and Ion Projection Lithography, it is expected that the line width will further decrease to 20 nm or less. With the decrease in feature size, the number of active devices on the chip increases. With higher levels of circuit integration, the challenge is to dissipate the increased heat flux from the chip surface area. Thermal management considerations include coefficient of thermal expansion (CTE) matching to prevent failure between the chip and the board. This in turn calls for improved performance and reliability of electronic structural systems. Experience has shown that in most electronic systems, failures are mostly due to CTE mismatch between the chip, the board, and the solder joint (solder interconnect). The resulting high thermal-structural stress and strain due to CTE mismatch produce cracks in the solder joints, with eventual failure of the electronic component. In order to reduce the thermal stress between the chip, the board, and the solder joint, this dissertation examines the effect of inserting a wire bundle (wire interconnect) between the chip and the board. The flexibility of the wires or fibers reduces the stress at the rigid joints. Numerical simulations of two- and three-dimensional models of the solder and wire interconnects are examined. The numerical simulation is linear in nature and is based on linear isotropic material properties. The effect of different wire material properties is examined, and the effect of wire diameter is studied by varying it. A major cause of electronic equipment failure is fatigue caused by thermal cycling and vibration. Two-dimensional modal and harmonic analyses were performed for the wire interconnect and the solder interconnect. The numerical model, simulated using the ANSYS program, was validated against the numerical/experimental results of other published researchers. In addition, the results were cross-checked with the IDEAS program. A prototype non-working wire interconnect is proposed to emphasize practical application. The numerical analysis in this dissertation is based on a U.S. Patent granted to G. Peter(42).
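
    For a rough sense of the magnitudes involved, the sketch below evaluates the elementary fully-constrained estimate sigma ~ E * d_alpha * dT / (1 - nu) for a Si chip rigidly attached to an FR-4-like board. This is a back-of-the-envelope bound with assumed material constants, not the dissertation's finite-element model.

      /* First-order thermal mismatch stress estimate (illustrative only).
       * Material values below are rough, assumed figures. */
      #include <stdio.h>

      int main(void)
      {
          const double E_si    = 130e9;    /* Young's modulus of Si, Pa (approx.)  */
          const double nu_si   = 0.28;     /* Poisson's ratio of Si (approx.)      */
          const double cte_si  = 2.6e-6;   /* CTE of Si, 1/K (approx.)             */
          const double cte_pcb = 17.0e-6;  /* CTE of an FR-4 board, 1/K (approx.)  */
          const double dT      = 100.0;    /* temperature swing, K (assumed)       */

          double d_alpha = cte_pcb - cte_si;
          double sigma   = E_si * d_alpha * dT / (1.0 - nu_si);  /* biaxial case */

          printf("CTE mismatch: %.1f ppm/K, stress estimate: %.0f MPa\n",
                 d_alpha * 1e6, sigma / 1e6);
          return 0;
      }

    In practice the compliance of the joints (or of the proposed wire bundle) relaxes this fully-constrained value, which is exactly the effect the dissertation's simulations quantify.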

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bauer, R.; Ebersberger, B.; Kupfer, C.

    SnAg solder bump is one bump type which is used to replace eutectic SnPb bumps. In this work tests have been done to characterize the reliability properties of this bump type. Electromigration (EM) tests, which were accelerated by high current and high temperature, and high temperature storage (HTS) tests were performed. It was found that the reliability properties are sensitive to the material combinations in the interconnect stack. The interconnect stack includes substrate pad, pad finish, bump, underbump metallization (UBM) and the chip pad. Therefore separate test groups for SnAg bumps on Cu substrate pads with organic solderability preservative (OSP) finish and the identical bumps on pads with Ni/Au finish were used. In this paper the reliability test results and the corresponding failure analysis are presented. Some explanations about the differences in formation of intermetallic compounds (IMCs) are given.

  8. Parallel implementation and evaluation of motion estimation system algorithms on a distributed memory multiprocessor using knowledge based mappings

    NASA Technical Reports Server (NTRS)

    Choudhary, Alok Nidhi; Leung, Mun K.; Huang, Thomas S.; Patel, Janak H.

    1989-01-01

    Several static and dynamic load balancing techniques for vision systems are presented. These techniques are novel in the sense that they capture the computational requirements of a task by examining the data when it is produced. Furthermore, they can be applied to many vision systems because many algorithms in different systems are either the same or have similar computational characteristics. These techniques are evaluated by applying them to a parallel implementation of the algorithms in a motion estimation system on a hypercube multiprocessor system. The motion estimation system consists of the following steps: (1) extraction of features; (2) stereo match of images in one time instant; (3) time match of images from different time instants; (4) stereo match to compute final unambiguous points; and (5) computation of motion parameters. It is shown that the performance gains when these data decomposition and load balancing techniques are used are significant and that the overhead of using these techniques is minimal.

  9. Performance and economy of a fault-tolerant multiprocessor

    NASA Technical Reports Server (NTRS)

    Lala, J. H.; Smith, C. J.

    1979-01-01

    The FTMP (Fault-Tolerant Multiprocessor) is one of two central aircraft fault-tolerant architectures now in the prototype phase under NASA sponsorship. The intended application of the computer includes such critical real-time tasks as 'fly-by-wire' active control and completely automatic Category III landings of commercial aircraft. The FTMP architecture is briefly described and it is shown that it is a viable solution to the multi-faceted problems of safety, speed, and cost. Three job dispatch strategies are described, and their results with respect to job-starting delay are presented. The first strategy is a simple First-Come-First-Serve (FCFS) job dispatch executive. The other two schedulers are an adaptive FCFS and an interrupt driven scheduler. Three failure modes are discussed, and the FTMP survival probability in the face of random hard failures is evaluated. It is noted that the hourly cost of operating two FTMPs in a transport aircraft can be as little as one-to-two percent of the total flight-hour cost of the aircraft.
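
    The first of the three dispatch strategies mentioned above is the simplest to picture. The sketch below is a generic First-Come-First-Serve job queue, given purely as an illustration of the policy; it is not FTMP executive code and all names are invented.

      /* Generic FCFS dispatch queue (illustration of the policy only). */
      #include <stdio.h>

      #define QSIZE 8

      typedef struct { int jobs[QSIZE]; int head, tail, count; } fcfs_queue_t;

      static int enqueue(fcfs_queue_t *q, int job_id)
      {
          if (q->count == QSIZE)
              return -1;                   /* queue full: job-start delay grows */
          q->jobs[q->tail] = job_id;
          q->tail = (q->tail + 1) % QSIZE;
          q->count++;
          return 0;
      }

      /* Called by an idle processor: take the oldest waiting job. */
      static int dispatch(fcfs_queue_t *q, int *job_id)
      {
          if (q->count == 0)
              return -1;                   /* nothing to run */
          *job_id = q->jobs[q->head];
          q->head = (q->head + 1) % QSIZE;
          q->count--;
          return 0;
      }

      int main(void)
      {
          fcfs_queue_t q = {0};
          int id;
          enqueue(&q, 1);
          enqueue(&q, 2);
          while (dispatch(&q, &id) == 0)
              printf("dispatching job %d\n", id);
          return 0;
      }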

  10. A design fix to supervisory control for fault-tolerant scheduling of real-time multiprocessor systems with aperiodic tasks

    NASA Astrophysics Data System (ADS)

    Devaraj, Rajesh; Sarkar, Arnab; Biswas, Santosh

    2015-11-01

    In the article 'Supervisory control for fault-tolerant scheduling of real-time multiprocessor systems with aperiodic tasks', Park and Cho presented a systematic way of computing a largest fault-tolerant and schedulable language that provides information on whether the scheduler (i.e., supervisor) should accept or reject a newly arrived aperiodic task. The computation of such a language is mainly dependent on the task execution model presented in their paper. However, the task execution model is unable to capture the situation in which a processor fault occurs even before the task has arrived. Consequently, under a task execution model that does not capture this fact, a task may be assigned for execution on a faulty processor. This problem is illustrated with an appropriate example. The task execution model of Park and Cho is then modified to strengthen the requirement that none of the tasks are assigned for execution on a faulty processor.

  11. Second International Workshop on Software Engineering and Code Design in Parallel Meteorological and Oceanographic Applications

    NASA Technical Reports Server (NTRS)

    OKeefe, Matthew (Editor); Kerr, Christopher L. (Editor)

    1998-01-01

    This report contains the abstracts and technical papers from the Second International Workshop on Software Engineering and Code Design in Parallel Meteorological and Oceanographic Applications, held June 15-18, 1998, in Scottsdale, Arizona. The purpose of the workshop is to bring together software developers in meteorology and oceanography to discuss software engineering and code design issues for parallel architectures, including Massively Parallel Processors (MPP's), Parallel Vector Processors (PVP's), Symmetric Multi-Processors (SMP's), Distributed Shared Memory (DSM) multi-processors, and clusters. Issues to be discussed include: (1) code architectures for current parallel models, including basic data structures, storage allocation, variable naming conventions, coding rules and styles, i/o and pre/post-processing of data; (2) designing modular code; (3) load balancing and domain decomposition; (4) techniques that exploit parallelism efficiently yet hide the machine-related details from the programmer; (5) tools for making the programmer more productive; and (6) the proliferation of programming models (F--, OpenMP, MPI, and HPF).

  12. Sequoia: A fault-tolerant tightly coupled multiprocessor for transaction processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernstein, P.A.

    1988-02-01

    The Sequoia computer is a tightly coupled multiprocessor, and thus attains the performance advantages of this style of architecture. It avoids most of the fault-tolerance disadvantages of tight coupling by using a new fault-tolerance design. The Sequoia architecture is similar to other multimicroprocessor architectures, such as those of Encore and Sequent, in that it gives dozens of microprocessors shared access to a large main memory. It resembles the Stratus architecture in its extensive use of hardware fault-detection techniques. It resembles Stratus and Auragen in its ability to quickly recover all processes after a single point failure, transparently to the user. However, Sequoia is unique in its combination of a large-scale tightly coupled architecture with a hardware approach to fault tolerance. This article gives an overview of how the hardware architecture and operating systems (OS) work together to provide a high degree of fault tolerance with good system performance.

  13. Parallelization of NAS Benchmarks for Shared Memory Multiprocessors

    NASA Technical Reports Server (NTRS)

    Waheed, Abdul; Yan, Jerry C.; Saini, Subhash (Technical Monitor)

    1998-01-01

    This paper presents our experiences of parallelizing the sequential implementation of NAS benchmarks using compiler directives on SGI Origin2000 distributed shared memory (DSM) system. Porting existing applications to new high performance parallel and distributed computing platforms is a challenging task. Ideally, a user develops a sequential version of the application, leaving the task of porting to new generations of high performance computing systems to parallelization tools and compilers. Due to the simplicity of programming shared-memory multiprocessors, compiler developers have provided various facilities to allow the users to exploit parallelism. Native compilers on SGI Origin2000 support multiprocessing directives to allow users to exploit loop-level parallelism in their programs. Additionally, supporting tools can accomplish this process automatically and present the results of parallelization to the users. We experimented with these compiler directives and supporting tools by parallelizing sequential implementation of NAS benchmarks. Results reported in this paper indicate that with minimal effort, the performance gain is comparable with the hand-parallelized, carefully optimized, message-passing implementations of the same benchmarks.
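
    As an illustration of the directive-based, loop-level parallelism described above, the sketch below annotates an independent loop with an OpenMP directive, used here as a present-day analogue of the SGI multiprocessing directives; it is not code from the NAS benchmarks.

      /* Directive-based loop-level parallelism (OpenMP used as an analogue
       * of the SGI multiprocessing directives; illustrative only). */
      #include <stdio.h>

      #define N 1000000

      int main(void)
      {
          static double a[N], b[N], c[N];
          double sum = 0.0;

          for (int i = 0; i < N; i++) {      /* serial initialization */
              b[i] = i * 0.001;
              c[i] = 1.0;
          }

          /* Each iteration is independent, so the runtime may split the loop
           * across processors; the reduction clause handles the shared sum. */
          #pragma omp parallel for reduction(+:sum)
          for (int i = 0; i < N; i++) {
              a[i] = 0.5 * (b[i] + c[i]);
              sum += a[i];
          }

          printf("checksum = %f\n", sum);
          return 0;
      }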

  14. Multitasking runtime systems for the Cedar Multiprocessor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guzzi, M.D.

    1986-07-01

    The programming of a MIMD machine is more complex than for SISD and SIMD machines. The multiple computational resources of the machine must be made available to the programming language compiler and to the programmer so that multitasking programs may be written. This thesis will explore the additional complexity of programming a MIMD machine, the Cedar Multiprocessor specifically, and the multitasking runtime system necessary to provide multitasking resources to the user. First, the problem will be well defined: the Cedar machine, its operating system, the programming language, and multitasking concepts will be described. Second, a solution to the problem, calledmore » macrotasking, will be proposed. This solution provides multitasking facilities to the programmer at a very coarse level with many visible machine dependencies. Third, an alternate solution, called microtasking, will be proposed. This solution provides multitasking facilities of a much finer grain. This solution does not depend so rigidly on the specific architecture of the machine. Finally, the two solutions will be compared for effectiveness. 12 refs., 16 figs.« less

  15. Operating experience with a VMEbus multiprocessor system for data acquisition and reduction in nuclear physics

    NASA Astrophysics Data System (ADS)

    Kutt, P. H.; Balamuth, D. P.

    1989-10-01

    Summary form only given, as follows. A multiprocessor system based on commercially available VMEbus components has been developed for the acquisition and reduction of event-mode data in nuclear physics experiments. The system contains seven 68000 CPUs and 14 Mbyte of memory. A minimal operating system handles data transfer and task allocation, and a compiler for a specially designed event analysis language produces code for the processors. The system has been in operation for four years at the University of Pennsylvania Tandem Accelerator Laboratory. Computation rates over three times that of a MicroVAX II have been achieved at a fraction of the cost. The use of WORM optical disks for event recording allows the processing of gigabyte data sets without operator intervention. A more powerful system is being planned which will make use of recently developed RISC (reduced instruction set computer) processors to obtain an order of magnitude increase in computing power per node.

  16. USC orthogonal multiprocessor for image processing with neural networks

    NASA Astrophysics Data System (ADS)

    Hwang, Kai; Panda, Dhabaleswar K.; Haddadi, Navid

    1990-07-01

    This paper presents the architectural features and imaging applications of the Orthogonal MultiProcessor (OMP) system, which is under construction at the University of Southern California with research funding from NSF and assistance from several industrial partners. The prototype OMP is being built with 16 Intel i860 RISC microprocessors and 256 parallel memory modules using custom-designed spanning buses, which are 2-D interleaved and orthogonally accessed without conflicts. The 16-processor OMP prototype is targeted to achieve 430 MIPS and 600 Mflops, which have been verified by simulation experiments based on the design parameters used. The prototype OMP machine will be initially applied for image processing, computer vision, and neural network simulation applications. We summarize important vision and imaging algorithms that can be restructured with neural network models. These algorithms can efficiently run on the OMP hardware with linear speedup. The ultimate goal is to develop a high-performance Visual Computer (Viscom) for integrated low- and high-level image processing and vision tasks.

  17. OPAD-EDIFIS Real-Time Processing

    NASA Technical Reports Server (NTRS)

    Katsinis, Constantine

    1997-01-01

    The Optical Plume Anomaly Detection (OPAD) detects engine hardware degradation of flight vehicles through identification and quantification of elemental species found in the plume by analyzing the plume emission spectra in a real-time mode. Real-time performance of OPAD relies on extensive software which must report metal amounts in the plume faster than once every 0.5 sec. OPAD software previously written by NASA scientists performed most necessary functions at speeds which were far below what is needed for real-time operation. The research presented in this report improved the execution speed of the software by optimizing the code without changing the algorithms and converting it into a parallelized form which is executed in a shared-memory multiprocessor system. The resulting code was subjected to extensive timing analysis. The report also provides suggestions for further performance improvement by (1) identifying areas of algorithm optimization, (2) recommending commercially available multiprocessor architectures and operating systems to support real-time execution and (3) presenting an initial study of fault-tolerance requirements.

  18. Parallel Navier-Stokes computations on shared and distributed memory architectures

    NASA Technical Reports Server (NTRS)

    Hayder, M. Ehtesham; Jayasimha, D. N.; Pillay, Sasi Kumar

    1995-01-01

    We study a high order finite difference scheme to solve the time accurate flow field of a jet using the compressible Navier-Stokes equations. As part of our ongoing efforts, we have implemented our numerical model on three parallel computing platforms to study the computational, communication, and scalability characteristics. The platforms chosen for this study are a cluster of workstations connected through fast networks (the LACE experimental testbed at NASA Lewis), a shared memory multiprocessor (the Cray YMP), and a distributed memory multiprocessor (the IBM SP1). Our focus in this study is on the LACE testbed. We present some results for the Cray YMP and the IBM SP1 mainly for comparison purposes. On the LACE testbed, we study: (1) the communication characteristics of Ethernet, FDDI, and the ALLNODE networks and (2) the overheads induced by the PVM message passing library used for parallelizing the application. We demonstrate that clustering of workstations is effective and has the potential to be computationally competitive with supercomputers at a fraction of the cost.
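
    Message-passing overhead of the kind attributed to PVM above is commonly quantified with a round-trip ("ping-pong") measurement between two processes. The sketch below illustrates the idea using MPI purely as a widely available stand-in for the PVM layer used in the study; the buffer size and repetition count are arbitrary.

      /* Round-trip latency measurement between ranks 0 and 1 (run with >= 2). */
      #include <mpi.h>
      #include <stdio.h>

      int main(int argc, char **argv)
      {
          int rank, i, reps = 1000;
          char buf[1024] = {0};
          double t0, t1;

          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);

          MPI_Barrier(MPI_COMM_WORLD);
          t0 = MPI_Wtime();
          for (i = 0; i < reps; i++) {
              if (rank == 0) {
                  MPI_Send(buf, sizeof buf, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                  MPI_Recv(buf, sizeof buf, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                           MPI_STATUS_IGNORE);
              } else if (rank == 1) {
                  MPI_Recv(buf, sizeof buf, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                           MPI_STATUS_IGNORE);
                  MPI_Send(buf, sizeof buf, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
              }
          }
          t1 = MPI_Wtime();

          if (rank == 0)
              printf("avg round trip: %g us\n", (t1 - t0) / reps * 1e6);

          MPI_Finalize();
          return 0;
      }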

  19. Solving Partial Differential Equations in a data-driven multiprocessor environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gaudiot, J.L.; Lin, C.M.; Hosseiniyar, M.

    1988-12-31

    Partial differential equations can be found in a host of engineering and scientific problems. The emergence of new parallel architectures has spurred research in the definition of parallel PDE solvers. Concurrently, highly programmable systems such as data-flow architectures have been proposed for the exploitation of large scale parallelism. The implementation of some Partial Differential Equation solvers (such as the Jacobi method) on a tagged-token data-flow graph is demonstrated here. Asynchronous methods (chaotic relaxation) are studied and new scheduling approaches (the Token No-Labeling scheme) are introduced in order to support the implementation of the asynchronous methods in a data-driven environment. New high-level data-flow language program constructs are introduced in order to handle chaotic operations. Finally, the performance of the program graphs is demonstrated by a deterministic simulation of a message passing data-flow multiprocessor. An analysis of the overhead in the data-flow graphs is undertaken to demonstrate the limits of parallel operations in data-flow PDE program graphs.

  20. Proceedings of the second SISAL users' conference

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feo, J T; Frerking, C; Miller, P J

    1992-12-01

    This report contains papers on the following topics: a SISAL code for computing the Fourier transform on S_N; five ways to fill your knapsack; simulating material dislocation motion in SISAL; CANDIS as an interface for SISAL; parallelisation and performance of the Burg algorithm on a shared-memory multiprocessor; use of genetic algorithm in SISAL to solve the file design problem; implementing FFTs in SISAL; programming and evaluating the performance of signal processing applications in the SISAL programming environment; SISAL and Von Neumann-based languages: translation and intercommunication; an IF2 code generator for the ADAM architecture; program partitioning for NUMA multiprocessor computer systems; mapping functional parallelism on distributed memory machines; implicit array copying: prevention is better than cure; mathematical syntax for SISAL; an approach for optimizing recursive functions; implementing arrays in SISAL 2.0; FOL: an object oriented extension to the SISAL language; TWINE: a portable, extensible SISAL execution kernel; and investigating the memory performance of the optimizing SISAL compiler.

  1. Compilation time analysis to minimize run-time overhead in preemptive scheduling on multiprocessors

    NASA Astrophysics Data System (ADS)

    Wauters, Piet; Lauwereins, Rudy; Peperstraete, J.

    1994-10-01

    This paper describes a scheduling method for hard real-time Digital Signal Processing (DSP) applications implemented on a multiprocessor. Because of the very high operating frequencies of DSP applications (typically hundreds of kHz), run-time overhead should be kept as small as possible. Since static scheduling introduces very little run-time overhead, it is used as much as possible. Dynamic pre-emption of tasks is allowed if and only if it leads to better performance in spite of the extra run-time overhead. We essentially combine static scheduling with dynamic pre-emption using static priorities. Since we are dealing with hard real-time applications, we must be able to guarantee at compile time that all timing requirements will be satisfied at run time. We show that our method performs at least as well as any static scheduling method. It also reduces the total number of dynamic pre-emptions compared with run-time methods such as deadline monotonic scheduling.
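
    The paper's compile-time analysis is not reproduced in the abstract; the sketch below only illustrates the classical fixed-priority response-time test that such an analysis builds on, for a hypothetical periodic task set (worst-case execution time C, period T, deadlines assumed equal to periods).

        # Classical response-time analysis for fixed-priority preemptive scheduling:
        # R_i = C_i + sum over higher-priority tasks j of ceil(R_i / T_j) * C_j.
        # The task set below is hypothetical; deadlines are assumed equal to periods.
        import math

        def response_times(tasks):
            """tasks: list of (C, T) sorted by priority, highest first."""
            results = []
            for i, (c_i, t_i) in enumerate(tasks):
                r = c_i
                while True:
                    interference = sum(math.ceil(r / t_j) * c_j for c_j, t_j in tasks[:i])
                    r_next = c_i + interference
                    if r_next == r or r_next > t_i:   # fixed point reached, or deadline missed
                        break
                    r = r_next
                results.append((r_next, r_next <= t_i))
            return results

        tasks = [(1, 4), (2, 6), (3, 12)]
        for (r, ok), (c, t) in zip(response_times(tasks), tasks):
            print(f"C={c} T={t}  worst-case response={r}  schedulable={ok}")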

  2. Estimating minimal important differences for several scales assessing function and quality of life in patients with attention-deficit/hyperactivity disorder.

    PubMed

    Hodgkins, Paul; Lloyd, Andrew; Erder, M Haim; Setyawan, Juliana; Weiss, Margaret D; Sasané, Rahul; Nafees, Beenish

    2017-02-01

    Defining the minimal important difference (MID) is critical to interpreting patient-reported outcomes data and treatment efficacy in clinical trials. This study estimates the MID for the Weiss Functional Impairment Rating Scale-Parent Report (WFIRS-P) and the Child Health and Illness Profile-Parent Report (CHIP-CE-PRF76) among parents of young people with attention-deficit/hyperactivity disorder (ADHD) in the UK. Parents of children (6-12 years; n=100) and adolescents (13-17 years; n=117) with ADHD completed a socio-demographic form, the CHIP-CE-PRF76, the WFIRS-P, and the Pediatric Quality of Life scale at baseline and 4 weeks later. At follow-up, a subset of parents completed anchor questions measuring change in the child/adolescent from baseline. MIDs were estimated using anchor-based and distribution-based methods, separately for children and adolescents. The MID estimates for overall change in the WFIRS-P total score ranged from 11.31 (standard error of measurement) to 13.47 (anchor) for the total sample. The range of MID estimates for the CHIP-CE-PRF76 varied by domain: 6.80-7.41 (satisfaction), 6.18-7.34 (comfort), 5.60-6.72 (resilience), 6.06-7.57 (risk avoidance), and 4.00-5.63 (achievement) for the total sample. Overall, MID estimates for the WFIRS-P and CHIP-CE-PRF76 were slightly higher for adolescents than for children. This study estimated MIDs for these instruments using several methods. The observed convergence of the MID estimates increases confidence in their reliability and could assist clinicians and decision makers in deriving meaningful interpretations of observed changes in the WFIRS-P and CHIP-CE in clinical trials and practice.
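
    As a worked illustration of the two estimation families named above, the sketch below computes a distribution-based MID (one standard error of measurement) and an anchor-based MID (mean absolute change in a "minimally improved" group) on synthetic scores; the reliability value and the group labels are assumptions, not study data.

        # Synthetic illustration of distribution-based and anchor-based MID estimates.
        import numpy as np

        rng = np.random.default_rng(1)
        baseline = rng.normal(50, 12, size=200)            # WFIRS-P-like total scores
        follow_up = baseline - rng.normal(8, 10, size=200)
        change = follow_up - baseline
        anchor = rng.choice(["no change", "minimally improved", "much improved"],
                            size=200, p=[0.40, 0.35, 0.25])

        reliability = 0.90                                  # assumed scale reliability
        sem_mid = baseline.std(ddof=1) * np.sqrt(1 - reliability)          # 1 SEM
        anchor_mid = np.abs(change[anchor == "minimally improved"]).mean()

        print(f"distribution-based MID (1 SEM): {sem_mid:.2f}")
        print(f"anchor-based MID:               {anchor_mid:.2f}")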

  3. Systematic analysis of CMOS-micromachined inductors with application to mixer matching circuits

    NASA Astrophysics Data System (ADS)

    Wu, Jerry Chun-Li

    The growing demand for consumer voice and data communication systems and military communication applications has created a need for low-power, low-cost, high-performance radio-frequency (RF) front-ends. To achieve this goal, bringing passive components, especially inductors, to silicon is imperative. On-chip passive components such as inductors and capacitors generally enhance the reliability and efficiency of silicon-integrated RF cells. They can provide circuit solutions with superior performance and contribute to a higher level of integration. With passive components on chip, there is a great opportunity to have transformers, filters, and matching networks on chip. However, inductors on silicon have a low quality factor (Q) due to both substrate and metal loss. This dissertation presents a systematic analysis of inductors fabricated using standard complementary metal-oxide-semiconductor (CMOS) and micro-electro-mechanical systems (MEMS) technologies. We report system-on-chip inductor modeling, simulation, and measurements of effective inductance and quality factor. In this analysis methodology, a number of systematic simulations are performed on regular and micromachined inductors with different parameters such as spiral topology, number of turns, outer diameter, thickness, and the percentage of substrate removed by micromachining. Three different novel support structures for micromachined spiral inductors are proposed, analyzed, and implemented for larger suspended inductors. The sensitivity to the support structure and to the degree of substrate etching achieved by post-processing is illustrated. The results provide guidelines for selecting inductor parameters, post-processing methodologies, and spiral supports to meet the RF design specifications and the stability requirements for mobile communication. The proposed CMOS-micromachined inductor is used in a low-cost double-balanced Gilbert mixer with an on-chip matching network. The integrated mixer inductor was implemented and tested to prove the concept.
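
    The dissertation's extraction procedure is not given in the abstract; the sketch below only shows the textbook way effective inductance and quality factor are read off a simulated one-port impedance (L_eff = Im(Z)/omega, Q = Im(Z)/Re(Z)), using a toy series R-L branch in parallel with a parasitic capacitance as stand-in data.

        # Toy one-port inductor model: series R-L branch in parallel with a parasitic C.
        # Effective inductance and Q are extracted from the simulated impedance.
        import numpy as np

        L, R, C = 5e-9, 4.0, 50e-15                 # hypothetical 5 nH, 4 ohm, 50 fF
        freq = np.logspace(8, 10, 400)              # 100 MHz to 10 GHz
        w = 2 * np.pi * freq

        z_coil = R + 1j * w * L                     # spiral branch
        z_par = 1.0 / (1j * w * C)                  # parasitic (substrate/overlap) branch
        z = z_coil * z_par / (z_coil + z_par)       # parallel combination

        L_eff = np.imag(z) / w                      # effective (apparent) inductance
        Q = np.imag(z) / np.real(z)                 # one-port quality factor

        k = int(np.argmax(Q))
        print(f"peak Q = {Q[k]:.1f} at {freq[k] / 1e9:.2f} GHz, "
              f"L_eff there = {L_eff[k] * 1e9:.2f} nH")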

  4. Recent developments in microfluidic large scale integration.

    PubMed

    Araci, Ismail Emre; Brisk, Philip

    2014-02-01

    In 2002, Thorsen et al. integrated thousands of micromechanical valves on a single microfluidic chip and demonstrated that the control of fluidic networks can be simplified through multiplexors [1]. This enabled the realization of highly parallel and automated fluidic processes with a substantial sample-economy advantage. Moreover, the fabrication of these devices by multilayer soft lithography was easy and reliable, which contributed to the power of the technology: microfluidic large scale integration (mLSI). Since then, mLSI has found use in a wide variety of applications in biology and chemistry. In the meantime, efforts to improve the technology have been ongoing. These efforts mostly focus on novel materials, components, micromechanical valve actuation methods, and chip architectures for mLSI. In this review, these technological advances are discussed and recent examples of mLSI applications are summarized. Copyright © 2013 Elsevier Ltd. All rights reserved.

  5. 5-Gb/s 0.18-μm CMOS 2:1 multiplexer with integrated clock extraction

    NASA Astrophysics Data System (ADS)

    Changchun, Zhang; Zhigong, Wang; Si, Shi; Peng, Miao; Ling, Tian

    2009-09-01

    A 5-Gb/s 2:1 MUX (multiplexer) with an on-chip integrated clock extraction circuit, which provides automatic phase alignment (APA), has been designed and fabricated in SMIC's 0.18 μm CMOS technology. The chip area is 670 × 780 μm2. At a single supply voltage of 1.8 V, the total power consumption is 112 mW with an input sensitivity of less than 50 mV and a single-ended output swing above 300 mV. The measurement results show that the IC can work reliably at any input data rate between 1.8 and 2.6 Gb/s with no need for external components, a reference clock, or phase alignment between data and clock. It can be used in a parallel optical-fiber data interconnection system.

  6. FPGA-based distributed computing microarchitecture for complex physical dynamics investigation.

    PubMed

    Borgese, Gianluca; Pace, Calogero; Pantano, Pietro; Bilotta, Eleonora

    2013-09-01

    In this paper, we present a distributed computing system, called DCMARK, aimed at solving the partial differential equations at the basis of many investigation fields, such as solid state physics, nuclear physics, and plasma physics. This distributed architecture is based on the cellular neural network paradigm, which allows us to divide the solution of the differential equation system into many parallel integration operations to be executed by a custom multiprocessor system. We push the number of processors to the limit of one processor for each equation. In order to test this idea, we choose to implement DCMARK on a single FPGA, designing the single processor so as to minimize its hardware requirements and to obtain a large number of easily interconnected processors. This approach is particularly suited to studying the properties of 1-, 2- and 3-D locally interconnected dynamical systems. In order to test the computing platform, we implement a 200-cell Korteweg-de Vries (KdV) equation solver and perform a comparison between simulations conducted on a high performance PC and on our system. Since our distributed architecture takes a constant computing time to solve the equation system, independently of the number of dynamical elements (cells) of the CNN array, it reduces the processing time more than other similar systems in the literature. To ensure a high level of reconfigurability, we design a compact system on programmable chip managed by a softcore processor, which controls the fast data/control communication between our system and a PC host. An intuitive graphical user interface allows us to change the calculation parameters and plot the results.
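
    The FPGA implementation itself cannot be reproduced here; the sketch below is a serial NumPy version of a Korteweg-de Vries integrator (the Zabusky-Kruskal leapfrog scheme for u_t + u*u_x + d^2*u_xxx = 0 with periodic boundaries) that makes the locality explicit: each grid cell needs only its four nearest neighbours, which is what lets DCMARK assign one processor per cell. The parameters are the classic 1965 ones, not necessarily those used on the chip.

        # Serial sketch of a KdV solver (Zabusky-Kruskal leapfrog scheme) with
        # periodic boundaries; np.roll stands in for neighbour-to-neighbour links.
        import numpy as np

        N, delta = 256, 0.022
        dx, dt = 2.0 / N, 1.0e-5
        x = np.arange(N) * dx

        def rhs(u):
            up1, um1 = np.roll(u, -1), np.roll(u, 1)
            up2, um2 = np.roll(u, -2), np.roll(u, 2)
            nonlinear = (up1 + u + um1) * (up1 - um1) / (3.0 * dx)
            dispersive = delta**2 * (up2 - 2*up1 + 2*um1 - um2) / dx**3
            return -(nonlinear + dispersive)

        u_prev = np.cos(np.pi * x)                 # initial condition
        u = u_prev + dt * rhs(u_prev)              # one Euler step to start the leapfrog

        for step in range(20000):                  # integrate to t = 0.2
            u_prev, u = u, u_prev + 2.0 * dt * rhs(u)

        print("u range after integration:", round(float(u.min()), 3), round(float(u.max()), 3))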

  7. Advanced Infrared Photodetectors (Materials Review)

    DTIC Science & Technology

    1993-12-01

    ...Telluride; DMS: Dilute Magnetic Semiconductor; FOV: Field of View; FPP: Focal Plane Processing; IR: Infrared; LPE: Liquid Phase Epitaxy; LWIR: Long Wave Infrared. ...operation is normal. Photoconductive (PC) cadmium mercury telluride (CdxHg1-xTe, x = 0.167) has a LWIR cutoff at room temperature; however, operation is... reliability, lightweight, on-chip clocks and bias circuits. An initial use of FPP is nonuniformity correction (NUC), since spatial response nonuniformity is...

  8. Remote monitoring and fault recovery for FPGA-based field controllers of telescope and instruments

    NASA Astrophysics Data System (ADS)

    Zhu, Yuhua; Zhu, Dan; Wang, Jianing

    2012-09-01

    As their size and functionality increase, modern telescopes widely use a control architecture of a central control unit plus field controllers. An FPGA-based field controller has the advantage of being field programmable, which provides great convenience for modifying the software and hardware of the control system and gives a good platform for implementing new control schemes. Because of the many controlled nodes and the poor working environment at scattered locations, the reliability and stability of the field controllers must be fully considered. This paper mainly describes how we use FPGA-based field controllers and remote Ethernet access to construct a multi-node monitoring system. When a failure appears, the FPGA chip first performs self-recovery in accordance with pre-defined recovery strategies. If the chip cannot be restored, remote reconstruction of the field controller can be carried out over the network. The paper also introduces the network-based remote reconstruction of the controller, the system structure and transport protocol, and the implementation methods. The hardware and software design approach based on the FPGA is given. After actual operation on large telescopes, the desired results have been achieved. The improvement increases system reliability and reduces the maintenance workload, showing good prospects for application and popularization.

  9. Impulse radio ultra wideband wireless transmission of dopamine concentration levels recorded by fast-scan cyclic voltammetry.

    PubMed

    Ebrazeh, Ali; Bozorgzadeh, Bardia; Mohseni, Pedram

    2015-01-01

    This paper demonstrates the feasibility of utilizing the impulse radio ultra wideband (IR-UWB) signaling technique for reliable, wireless transmission of dopamine concentration levels recorded by fast-scan cyclic voltammetry (FSCV) at a carbon-fiber microelectrode (CFM) to address the problem of elevated data rates in high-channel-count neurochemical monitoring. Utilizing an FSCV-sensing chip fabricated in AMS 0.35 μm 2P/4M CMOS, a 3-5-GHz IR-UWB transceiver (TRX) chip fabricated in TSMC 90 nm 1P/9M RF CMOS, and two off-chip, miniature, UWB antennae, wireless transfer of pseudo-random binary sequence (PRBS) data at 50 Mbps over a distance of <1 m is first shown with bit-error rates (BER) < 10(-3). Further, IR-UWB wireless transmission of dopamine concentration levels prerecorded with FSCV at a CFM during flow injection analysis (FIA) is demonstrated with a transmitter (TX) power dissipation of only ~4.4 μW from 1.2 V, representing two orders of magnitude reduction in TX power consumption compared to that of a conventional frequency-shift-keyed (FSK) link operating at ~433 MHz.

  10. Microcontact Printing of Thiol-Functionalized Ionic Liquid Microarrays for "Membrane-less" and "Spill-less" Gas Sensors.

    PubMed

    Gondosiswanto, Richard; Gunawan, Christian A; Hibbert, David B; Harper, Jason B; Zhao, Chuan

    2016-11-16

    Lab-on-a-chip systems have gained significant interest for both chemical synthesis and assays at the micro-to-nanoscale with a unique set of benefits. However, solvent volatility represents one of the major hurdles to the reliability and reproducibility of the lab-on-a-chip devices for large-scale applications. Here we demonstrate a strategy of combining nonvolatile and functionalized ionic liquids with microcontact printing for fabrication of "wall-less" microreactors and microfluidics with high reproducibility and high throughput. A range of thiol-functionalized ionic liquids have been synthesized and used as inks for microcontact printing of ionic liquid microdroplet arrays onto gold chips. The covalent bonds formed between the thiol-functionalized ionic liquids and the gold substrate offer enhanced stability of the ionic liquid microdroplets, compared to conventional nonfunctionalized ionic liquids, and these microdroplets remain stable in a range of nonpolar and polar solvents, including water. We further demonstrate the use of these open ionic liquid microarrays for fabrication of "membrane-less" and "spill-less" gas sensors with enhanced reproducibility and robustness. Ionic-liquid-based microarray and microfluidics fabricated using the described microcontact printing may provide a versatile platform for a diverse number of applications at scale.

  11. Design of automatic curtain controlled by wireless based on single chip 51 microcomputer

    NASA Astrophysics Data System (ADS)

    Han, Dafeng; Chen, Xiaoning

    2017-08-01

    In order to realize wireless control of domestic intelligent curtains, a wireless intelligent curtain control system based on the 51 single-chip microcomputer has been designed in this paper. The intelligent curtain can work in manual, automatic and sleep modes, which can be cycled through by push buttons or a mobile-phone APP. A photosensitive resistor module and a pyroelectric infrared sensor collect the indoor light level and detect whether anyone is in the room; after processing by the single-chip microcomputer, the motor drive module controls the forward and reverse rotation of the asynchronous motor, realizing intelligent opening and closing of the curtain. The motor can be stopped by the switch, and the curtain opening/closing and a timed switch can be controlled through the buttons and the mobile-phone APP. The light intensity, working mode, curtain state and system time are displayed on an LCD1602. Practical testing shows that the system has high reliability and security, and with the popularity and development of smart homes the design has broad market prospects.
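
    The 8051 firmware is not described in detail in the abstract; the following behavioural sketch only mirrors the decision flow stated above (mode selection, then light level and presence deciding open/close). The mode names, thresholds and hysteresis band are assumptions for illustration.

        # Behavioural sketch (not the 8051 firmware) of the described decision logic.
        # Thresholds, mode names and the open/close policy are illustrative assumptions.
        def curtain_command(mode, light_level, person_present,
                            open_threshold=600, close_threshold=300):
            if mode == "manual":
                return "hold"                      # wait for button / phone APP command
            if mode == "sleep":
                return "close"
            # automatic mode
            if person_present and light_level >= open_threshold:
                return "open"
            if (not person_present) or light_level <= close_threshold:
                return "close"
            return "hold"                          # inside the hysteresis band

        print(curtain_command("automatic", 750, True))    # -> open
        print(curtain_command("automatic", 200, True))    # -> close
        print(curtain_command("sleep", 750, True))        # -> close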

  12. Efficient fiber-coupled single-photon source based on quantum dots in a photonic-crystal waveguide

    PubMed Central

    Daveau, Raphaël S.; Balram, Krishna C.; Pregnolato, Tommaso; Liu, Jin; Lee, Eun H.; Song, Jin D.; Verma, Varun; Mirin, Richard; Nam, Sae Woo; Midolo, Leonardo; Stobbe, Søren; Srinivasan, Kartik; Lodahl, Peter

    2017-01-01

    Many photonic quantum information processing applications would benefit from a high brightness, fiber-coupled source of triggered single photons. Here, we present a fiber-coupled photonic-crystal waveguide single-photon source relying on evanescent coupling of the light field from a tapered out-coupler to an optical fiber. A two-step approach is taken where the performance of the tapered out-coupler is recorded first on an independent device containing an on-chip reflector. Reflection measurements establish that the chip-to-fiber coupling efficiency exceeds 80 %. The detailed characterization of a high-efficiency photonic-crystal waveguide extended with a tapered out-coupling section is then performed. The corresponding overall single-photon source efficiency is 10.9 % ± 2.3 %, which quantifies the success probability to prepare an exciton in the quantum dot, couple it out as a photon in the waveguide, and subsequently transfer it to the fiber. The applied out-coupling method is robust, stable over time, and broadband over several tens of nanometers, which makes it a highly promising pathway to increase the efficiency and reliability of planar chip-based single-photon sources. PMID:28584859

  13. Failure Analysis Study and Long-Term Reliability of Optical Assemblies with End-Face Damage

    NASA Technical Reports Server (NTRS)

    Kichak, Robert A.; Ott, Melanie N.; Leidecker, Henning W.; Chuska, Richard F.; Greenwell, Christopher J.

    2008-01-01

    In June 2005, the NESC received a multi-faceted request to determine the long-term reliability of fiber optic termini on the ISS that exhibited flaws resulting from workmanship below best practices. There was a lack of data relating fiber optic workmanship to the long-term reliability of optical fiber assemblies in a harsh environment. A fiber optic defect analysis was requested which would find and/or create the various types of chips, spalls, scratches, etc., that were identified by ISS personnel. Once the defects and their causes were identified, the next step would be to perform long-term reliability testing of similar assemblies with similar defects. The goal of the defect analysis would be to observe and document the defects for deterioration of fiber optic performance. Though this report mostly discusses what has been determined to be evidence of poor manufacturing processes, it also concludes that the majority of the damage could have been avoided with a rigorous process in place.

  14. Thermal Cycle Reliability and Failure Mechanisms of CCGA and PBGA Assemblies with and without Corner Staking

    NASA Technical Reports Server (NTRS)

    Ghaffarian, Reza

    2008-01-01

    Area array packages (AAPs) with 1.27 mm pitch have been the packages of choice for commercial applications; they are now starting to be implemented for use in military and aerospace applications. The thermal cycling characteristics of plastic ball grid array (PBGA) and chip scale package assemblies, because of their wide usage in commercial applications, have been extensively reported in the literature. Thermal cycling represents the on-off environmental condition for most electronic products and is therefore a key factor that defines reliability. However, very limited data are available on the thermal cycling behavior of ceramic packages commonly used for aerospace applications. For high reliability applications, numerous AAPs are available with an identical design pattern in both ceramic and plastic packages. This paper compares the assembly reliability of ceramic and plastic packages with identical inputs/outputs (I/Os) and pattern. The ceramic package was in the form of a ceramic column grid array (CCGA) with a 560-I/O peripheral array and the identical pad design as its plastic counterpart.

  15. On-chip photonic microsystem for optical signal processing based on silicon and silicon nitride platforms

    NASA Astrophysics Data System (ADS)

    Li, Yu; Li, Jiachen; Yu, Hongchen; Yu, Hai; Chen, Hongwei; Yang, Sigang; Chen, Minghua

    2018-04-01

    The explosive growth of data centers, cloud computing and various smart devices is limited by the current state of microelectronics, both in terms of speed and heat generation. Benefiting from large bandwidth, promising low power consumption and passive calculation capability, experts believe that integrated photonics-based signal processing and transmission technologies can break the bottleneck of microelectronics technology. In recent years, integrated photonics has become increasingly reliable, and access to advanced fabrication processes has been offered by various foundries. In this paper, we review our recent work on integrated optical signal processing systems. We study three different kinds of on-chip signal processors and use these devices to build microsystems for the fields of microwave photonics, optical communications and spectrum sensing. A microwave photonics front-end receiver was demonstrated with a full-band signal processing range (L-band to W-band). A fully integrated microwave photonics transceiver without an on-chip laser was realized on silicon photonics, covering signal frequencies of up to 10 GHz. An all-optical orthogonal frequency division multiplexing (OFDM) de-multiplexer was also demonstrated and used for an OFDM communication system at a rate of 64 Gbps. Finally, we show our work on a monolithic integrated spectrometer with a high resolution of about 20 pm at a central wavelength of 1550 nm. These proposed on-chip signal processing systems have potential applications in the fields of radar, 5G wireless communication, wearable devices and optical access networks.

  16. Microfluidic Biosensing Systems Using Magnetic Nanoparticles

    PubMed Central

    Giouroudi, Ioanna; Keplinger, Franz

    2013-01-01

    In recent years, there has been rapidly growing interest in developing handheld, sensitive and cost-effective on-chip biosensing systems that directly translate the presence of certain bioanalytes (e.g., biomolecules, cells and viruses) into an electronic signal. The impressive and rapid progress in micro- and nanotechnology as well as in biotechnology enables the integration of a variety of analytical functions in a single chip. All necessary sample handling and analysis steps are then performed within the chip. Microfluidic systems for biomedical analysis usually consist of a set of units, which guarantees the manipulation, detection and recognition of bioanalytes in a reliable and flexible manner. Additionally, the use of magnetic fields for performing the aforementioned tasks has been steadily gaining interest. This is because magnetic fields can be well tuned and applied either externally or from a directly integrated solution in the biosensing system. In combination with these applied magnetic fields, magnetic nanoparticles are utilized. Some of the merits of magnetic nanoparticles are the possibility of manipulating them inside microfluidic channels by utilizing high gradient magnetic fields, their detection by integrated magnetic microsensors, and their flexibility due to functionalization by means of surface modification and specific binding. Their multi-functionality is what makes them ideal candidates as the active component in miniaturized on-chip biosensing systems. In this review, focus will be given to the type of biosensing systems that use microfluidics in combination with magnetoresistive sensors and detect the presence of bioanalytes tagged with magnetic nanoparticles. PMID:24022689

  17. Three-dimensional fit-to-flow microfluidic assembly.

    PubMed

    Chen, Arnold; Pan, Tingrui

    2011-12-01

    Three-dimensional microfluidics holds great promise for large-scale integration of versatile, digitalized, and multitasking fluidic manipulations for biological and clinical applications. Successful translation of microfluidic toolsets to these purposes faces persistent technical challenges, such as reliable system-level packaging, device assembly and alignment, and the world-to-chip interface. In this paper, we extended our previously established fit-to-flow (F2F) world-to-chip interconnection scheme to a complete system-level assembly strategy that addresses three-dimensional microfluidic integration on demand. The modular F2F assembly consists of an interfacial chip, pluggable alignment modules, and multiple monolithic layers of microfluidic channels, through which convoluted three-dimensional microfluidic networks can be easily assembled and readily sealed with the capability of reconfigurable fluid flow. The monolithic laser-micromachining process simplifies and standardizes the fabrication of single-layer pluggable polymeric modules, which can be mass-produced like the well-known Lego® building blocks. In addition, interlocking features are implemented between the plug-and-play microfluidic chips and the complementary alignment modules through the F2F assembly, resulting in facile and secure alignment with an average misalignment of 45 μm. Importantly, the 3D multilayer microfluidic assembly has a sealing performance comparable to that of conventional single-layer devices, providing an average leakage pressure of 38.47 kPa. The modular reconfigurability of the system-level reversible packaging concept has been demonstrated by re-routing microfluidic flows through interchangeable modular microchannel layers.

  18. Accuracy of genomic predictions in Gyr (Bos indicus) dairy cattle.

    PubMed

    Boison, S A; Utsunomiya, A T H; Santos, D J A; Neves, H H R; Carvalheiro, R; Mészáros, G; Utsunomiya, Y T; do Carmo, A S; Verneque, R S; Machado, M A; Panetto, J C C; Garcia, J F; Sölkner, J; da Silva, M V G B

    2017-07-01

    Genomic selection may accelerate genetic progress in breeding programs of indicine breeds when compared with traditional selection methods. We present results of genomic predictions in Gyr (Bos indicus) dairy cattle of Brazil for milk yield (MY), fat yield (FY), protein yield (PY), and age at first calving using information from bulls and cows. Four different single nucleotide polymorphism (SNP) chips were studied. Additionally, the effect of the use of imputed data on genomic prediction accuracy was studied. A total of 474 bulls and 1,688 cows were genotyped with the Illumina BovineHD (HD; San Diego, CA) and BovineSNP50 (50K) chip, respectively. Genotypes of cows were imputed to HD using FImpute v2.2. After quality check of data, 496,606 markers remained. The HD markers present on the GeneSeek SGGP-20Ki (15,727; Lincoln, NE), 50K (22,152), and GeneSeek GGP-75Ki (65,018) were subset and used to assess the effect of lower SNP density on accuracy of prediction. Deregressed breeding values were used as pseudophenotypes for model training. Data were split into reference and validation sets to mimic a forward prediction scheme. The reference population consisted of animals whose birth year was ≤2004 and consisted of either only bulls (TR1) or a combination of bulls and dams (TR2), whereas the validation set consisted of younger bulls (born after 2004). Genomic BLUP was used to estimate genomic breeding values (GEBV), and the reliability of GEBV (R2PEV) was based on the prediction error variance (PEV) approach. Reliability of GEBV ranged from ∼0.46 (FY and PY) to 0.56 (MY) with TR1 and from 0.51 (PY) to 0.65 (MY) with TR2. When averaged across all traits, R2PEV was substantially higher (R2PEV of TR1 = 0.50 and TR2 = 0.57) compared with reliabilities of parent averages (0.35) computed from pedigree data and based on diagonals of the coefficient matrix (prediction error variance approach). Reliability was similar for all 4 marker panels using either TR1 or TR2, except that the imputed HD cow data set led to an inflation of reliability. A reduced panel of ∼15K markers resulted in reliabilities similar to using HD markers. Reliability of GEBV could be increased by enlarging the limited bull reference population with cow information. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
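
    As a minimal sketch of the genomic BLUP machinery referred to above (a VanRaden genomic relationship matrix, the mixed-model equations, and reliability derived from prediction error variance), the fragment below runs on simulated genotypes with a single record per animal; the variance components and marker counts are arbitrary, and the real analyses use deregressed proofs and far larger data.

        # GBLUP sketch on simulated data: VanRaden G matrix, mixed-model equations
        # for y = 1*mu + g + e with g ~ N(0, G*sigma2_g), and reliability from PEV.
        import numpy as np

        rng = np.random.default_rng(7)
        n_animals, n_snp = 200, 1000
        M = rng.binomial(2, 0.3, size=(n_animals, n_snp)).astype(float)   # 0/1/2 genotypes
        p = M.mean(axis=0) / 2.0
        Z = M - 2.0 * p                                    # centred genotypes
        G = Z @ Z.T / (2.0 * np.sum(p * (1.0 - p)))        # VanRaden method 1
        G += np.eye(n_animals) * 1e-3                      # stabilise the inverse

        sigma2_g, sigma2_e = 1.0, 2.0                      # assumed variance components
        lam = sigma2_e / sigma2_g
        true_g = rng.multivariate_normal(np.zeros(n_animals), sigma2_g * G)
        y = 10.0 + true_g + rng.normal(0.0, np.sqrt(sigma2_e), n_animals)

        ones = np.ones((n_animals, 1))
        C = np.block([[ones.T @ ones, ones.T],
                      [ones, np.eye(n_animals) + lam * np.linalg.inv(G)]])
        rhs = np.concatenate([[y.sum()], y])
        sol = np.linalg.solve(C, rhs)
        gebv = sol[1:]

        pev = np.diag(np.linalg.inv(C))[1:] * sigma2_e     # prediction error variance
        reliability = 1.0 - pev / (np.diag(G) * sigma2_g)

        print(f"mean reliability: {reliability.mean():.2f}, "
              f"corr(GEBV, true g): {np.corrcoef(gebv, true_g)[0, 1]:.2f}")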

  19. The Experimental Mathematician: The Pleasure of Discovery and the Role of Proof

    ERIC Educational Resources Information Center

    Borwein, Jonathan M.

    2005-01-01

    The emergence of powerful mathematical computing environments, the growing availability of correspondingly powerful (multi-processor) computers and the pervasive presence of the Internet allow mathematicians, students and teachers to proceed heuristically and "quasi-inductively." We may increasingly use symbolic and numeric computation,…

  20. Experience with a UNIX based batch computing facility for H1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerhards, R.; Kruener-Marquis, U.; Szkutnik, Z.

    1994-12-31

    A UNIX based batch computing facility for the H1 experiment at DESY is described. The ultimate goal is to replace the DESY IBM mainframe by a multiprocessor SGI Challenge series computer, using the UNIX operating system, for most of the computing tasks in H1.

  1. Model Checking, Abstraction, and Compositional Verification

    DTIC Science & Technology

    1993-07-01

    the Galois connections used by Bensalem et al. [6], and also has some relation to Kurshan's automata homomorphisms [62]. (Actually, we can impose a... multiprocessor simulation model. ACM Transactions on Computer Systems, 4(4):273-298, November 1986. [41] D. L. Beatty, R. E. Bryant, and C.-J. Seger

  2. Developing Software to Use Parallel Processing Effectively

    DTIC Science & Technology

    1988-10-01

    Experience, Vol. 15(6), June 1985, p. 53. Gajski85: Gajski, Daniel D. and Jih-Kwon Peir, "Essential Issues in Multiprocessor Systems", IEEE Computer, June... Treleaven (eds.), Springer-Verlag, pp. 213-225 (June 1987). Kuck83: David Kuck, Duncan Lawrie, Ron Cytron, Ahmed Sameh and Daniel Gajski, The Architecture and

  3. Expert Systems on Multiprocessor Architectures. Volume 2. Technical Reports

    DTIC Science & Technology

    1991-06-01

    Report RC 12936 (#58037), IBM T. J. Watson Research Center, July 1987. Alan Jay Smith, Cache Memories, Computing Surveys, 14(3):473-530... basic-shared is an instrument for a shared-memory design. The component panels are processor-qload-scrolling-bar-panel, memory-qload-scrolling-bar-panel

  4. Reliable bonding using indium-based solders

    NASA Astrophysics Data System (ADS)

    Cheong, Jongpil; Goyal, Abhijat; Tadigadapa, Srinivas; Rahn, Christopher

    2004-01-01

    Low temperature bonding techniques with high bond strengths and reliability are required for the fabrication and packaging of MEMS devices. Indium and indium-tin based bonding processes are explored for the fabrication of a flextensional MEMS actuator, which requires the integration of lead zirconate titanate (PZT) substrate with a silicon micromachined structure at low temperatures. The developed technique can be used either for wafer or chip level bonding. The lithographic steps used for the patterning and delineation of the seed layer limit the resolution of this technique. Using this technique, reliable bonds were achieved at a temperature of 200°C. The bonds yielded an average tensile strength of 5.41 MPa and 7.38 MPa for samples using indium and indium-tin alloy solders as the intermediate bonding layers respectively. The bonds (with line width of 100 microns) showed hermetic sealing capability of better than 10-11 mbar-l/s when tested using a commercial helium leak tester.

  5. Reliable bonding using indium-based solders

    NASA Astrophysics Data System (ADS)

    Cheong, Jongpil; Goyal, Abhijat; Tadigadapa, Srinivas; Rahn, Christopher

    2003-12-01

    Low temperature bonding techniques with high bond strengths and reliability are required for the fabrication and packaging of MEMS devices. Indium and indium-tin based bonding processes are explored for the fabrication of a flextensional MEMS actuator, which requires the integration of lead zirconate titanate (PZT) substrate with a silicon micromachined structure at low temperatures. The developed technique can be used either for wafer or chip level bonding. The lithographic steps used for the patterning and delineation of the seed layer limit the resolution of this technique. Using this technique, reliable bonds were achieved at a temperature of 200°C. The bonds yielded an average tensile strength of 5.41 MPa and 7.38 MPa for samples using indium and indium-tin alloy solders as the intermediate bonding layers respectively. The bonds (with line width of 100 microns) showed hermetic sealing capability of better than 10-11 mbar-l/s when tested using a commercial helium leak tester.

  6. Tunable nanoblock lasers and stretching sensors.

    PubMed

    Lu, T W; Wang, C; Hsiao, C F; Lee, P T

    2016-09-22

    Reconfigurable, reliable, and robust nanolasers with wavelengths tunable in the telecommunication bands are currently being sought for use as flexible light sources in photonic integrated circuits. Here, we propose and demonstrate tunable nanolasers based on 1D nanoblocks embedded within stretchable polydimethylsiloxane. Our lasers show a large wavelength tunability of 7.65 nm per 1% elongation. Moreover, this tunability is reconfigurable and reliable under repeated stretching/relaxation tests. By applying excessive stretching, wide wavelength tuning over a range of 80 nm (spanning the S, C, and L telecommunication bands) is successfully demonstrated. Furthermore, as a stretching sensor, an enhanced wavelength response of 9.9 nm per 1% elongation is obtained via the signal differential from two nanoblock lasers positioned perpendicular to each other. The minimum detectable elongation is as small as 0.056%. Nanoblock lasers can function as reliable tunable light sources in telecommunications and as highly sensitive on-chip structural deformation sensors.

  7. Laser as a Tool to Study Radiation Effects in CMOS

    NASA Astrophysics Data System (ADS)

    Ajdari, Bahar

    Energetic particles from cosmic ray or terrestrial sources can strike sensitive areas of CMOS devices and cause soft errors. Understanding the effects of such interactions is crucial as the device technology advances, and chip reliability has become more important than ever. Particle accelerator testing has been the standard method to characterize the sensitivity of chips to single event upsets (SEUs). However, because of their costs and availability limitations, other techniques have been explored. Pulsed laser has been a successful tool for characterization of SEU behavior, but to this day, laser has not been recognized as a comparable method to beam testing. In this thesis, I propose a methodology of correlating laser soft error rate (SER) to particle beam gathered data. Additionally, results are presented showing a temperature dependence of SER and the "neighbor effect" phenomenon where due to the close proximity of devices a "weakening effect" in the ON state can be observed.

  8. Nanofiber Anisotropic Conductive Films (ACF) for Ultra-Fine-Pitch Chip-on-Glass (COG) Interconnections

    NASA Astrophysics Data System (ADS)

    Lee, Sang-Hoon; Kim, Tae-Wan; Suk, Kyung-Lim; Paik, Kyung-Wook

    2015-11-01

    Nanofiber anisotropic conductive films (ACF) were invented, by adapting nanofiber technology to ACF materials, to overcome the limitations of ultra-fine-pitch interconnection packaging, i.e. shorts and open circuits as a result of the narrow space between bumps and electrodes. For nanofiber ACF, poly(vinylidene fluoride) (PVDF) and poly(butylene succinate) (PBS) polymers were used as nanofiber polymer materials. For PVDF and PBS nanofiber ACF, conductive particles of diameter 3.5 μm were incorporated into nanofibers by electrospinning. In ultra-fine-pitch chip-on-glass assembly, insulation was significantly improved by using nanofiber ACF, because nanofibers inside the ACF suppressed the mobility of conductive particles, preventing them from flowing out during the bonding process. Capture of conductive particles was increased from 31% (conventional ACF) to 65%, and stable electrical properties and reliability were achieved by use of nanofiber ACF.

  9. Self-Patterning of Silica/Epoxy Nanocomposite Underfill by Tailored Hydrophilic-Superhydrophobic Surfaces for 3D Integrated Circuit (IC) Stacking.

    PubMed

    Tuan, Chia-Chi; James, Nathan Pataki; Lin, Ziyin; Chen, Yun; Liu, Yan; Moon, Kyoung-Sik; Li, Zhuo; Wong, C P

    2017-03-15

    As microelectronics are trending toward smaller packages and integrated circuit (IC) stacks nowadays, underfill, the polymer composite filled in between the IC chip and the substrate, becomes increasingly important for interconnection reliability. However, traditional underfills cannot meet the requirements for low-profile and fine pitch in high density IC stacking packages. Post-applied underfills have difficulties in flowing into the small gaps between the chip and the substrate, while pre-applied underfills face filler entrapment at bond pads. In this report, we present a self-patterning underfilling technology that uses selective wetting of underfill on Cu bond pads and Si3N4 passivation via surface energy engineering. This novel process, fully compatible with the conventional underfilling process, eliminates the issue of filler entrapment in typical pre-applied underfilling process, enabling high density and fine pitch IC die bonding.

  10. MEAs and 3D nanoelectrodes: electrodeposition as tool for a precisely controlled nanofabrication.

    PubMed

    Weidlich, Sabrina; Krause, Kay J; Schnitker, Jan; Wolfrum, Bernhard; Offenhäusser, Andreas

    2017-01-31

    Microelectrode arrays (MEAs) are gaining increasing importance for the investigation of signaling processes between electrogenic cells. However, efficient cell-chip coupling for robust and long-term electrophysiological recording and stimulation still remains a challenge. A possible approach for the improvement of the cell-electrode contact is the utilization of three-dimensional structures. In recent years, various 3D electrode geometries have been developed, but we are still lacking a fabrication approach that enables the formation of different 3D structures on a single chip in a controlled manner. This, however, is needed to enable a direct and reliable comparison of the recording capabilities of the different structures. Here, we present a method for a precisely controlled deposition of nanoelectrodes, enabling the fabrication of multiple, well-defined types of structures on our 64 electrode MEAs towards a rapid-prototyping approach to 3D electrodes.

  11. A novel simple external fixation for securing silicone stent in patients with upper tracheal stenosis

    PubMed Central

    Lin, Xiaoxiao; Ye, Min; Li, Yuping

    2018-01-01

    Upper tracheal stenosis is considered a potentially life-threatening condition. Silicone stenting is an attractive treatment option for patients with upper tracheal stenosis. However, its use has been compromised by a major complication, stent migration. In this report, we introduce a novel external fixation of the silicone stent which needs only one puncture site and involves a silicon chip as an anchoring device. All equipment and materials, including the silicon chip, are available in a routine bronchoscopy suite. The method has been successfully performed in three patients with upper tracheal stenosis at our institution. The patients were monitored for over 20 months after the intervention, and no spontaneous stent migration occurred. We therefore believe this is a simple and reliable approach for improving the outcome of silicone stenting in patients with upper tracheal stenosis, and that it should be introduced into clinical practice.

  12. Fast single run of vanilla fingerprint markers on microfluidic-electrochemistry chip for confirmation of common frauds.

    PubMed

    Avila, Mónica; Zougagh, Mohammed; Escarpa, Alberto; Ríos, Angel

    2009-10-01

    A new strategy based on the fast separation of the fingerprint markers of Vanilla planifolia extracts and vanilla-related samples on a microfluidic-electrochemistry chip is proposed. This methodology allowed the detection of all markers required for confirmation of common frauds in this field. The elution order was strategically connected with sequential sample screening and analyte confirmation steps: first, ethyl vanillin was detected to distinguish natural from adulterated samples; second, vanillin, the prominent marker in V. planifolia but frequently added in its synthetic form; and third, the final detection of the fingerprint markers (p-hydroxybenzaldehyde, vanillic acid, and p-hydroxybenzoic acid) of V. planifolia for confirmation purposes. The reliability of the proposed methodology was demonstrated by confirming the natural or non-natural origin of vanilla in V. planifolia extracts and other selected food samples containing this flavor.

  13. Real-time Image Processing for Microscopy-based Label-free Imaging Flow Cytometry in a Microfluidic Chip.

    PubMed

    Heo, Young Jin; Lee, Donghyeon; Kang, Junsu; Lee, Keondo; Chung, Wan Kyun

    2017-09-14

    Imaging flow cytometry (IFC) is an emerging technology that acquires single-cell images at high throughput for analysis of a cell population. The rich information that comes from the high sensitivity and spatial resolution of a single-cell microscopic image is beneficial for single-cell analysis in various biological applications. In this paper, we present a fast image-processing pipeline (R-MOD: Real-time Moving Object Detector) based on deep learning for high-throughput microscopy-based label-free IFC in a microfluidic chip. The R-MOD pipeline acquires all single-cell images of cells in flow and identifies them in real time with minimal hardware consisting of a microscope and a high-speed camera. Experiments show that R-MOD is fast and accurate (500 fps and 93.3% mAP), and it is expected to be used as a powerful tool for biomedical and clinical applications.

  14. Vacuum-assisted cell loading enables shear-free mammalian microfluidic culture

    PubMed Central

    Kolnik, Martin; Tsimring, Lev S; Hasty, Je

    2012-01-01

    Microfluidic perfusion cultures for mammalian cells provide a novel means for probing single-cell behavior but require the management of culture parameters such as flow-induced shear stress. Methods to eliminate shear stress generally focus on capturing cells in regions with high resistance to fluid flow. Here, we present a novel trapping design to easily and reliably load a high density of cells into culture chambers that are extremely isolated from potentially damaging flow effects. We utilize a transient on-chip vacuum to remove air from the culture chambers and rapidly replace the volume with a liquid cell suspension. We demonstrate the ability of this simple and robust method to load and culture three commonly used cell lines. We show how the incorporation of an on-chip function generator can be used for dynamic stimulation of cells during long-term continuous perfusion culture. PMID:22961584

  15. [Lab-on-a-chip systems in the point-of-care diagnostics].

    PubMed

    Szabó, Barnabás; Borbíró, András; Fürjes, Péter

    2015-12-27

    The need in modern medicine for near-patient diagnostics that can accelerate therapeutic decisions and possibly replace laboratory measurements is growing significantly. Reliable and cost-effective bioanalytical measurement systems are required which - acting as a micro-laboratory - contain integrated biomolecular recognition, sensing, signal processing and complex microfluidic sample preparation modules. These micro- and nanofabricated Lab-on-a-chip systems open new perspectives in the diagnostic supply chain, since they are capable even of quantitative, high-precision and immediate analysis of special disease-specific molecular markers or their combinations from a single drop of sample. Accordingly, crucial requirements regarding the instruments and the analytical methods are high selectivity, an extremely low detection limit, short response time and integrability into healthcare information networks. All these features can shorten the hierarchical examination chain and revolutionize laboratory diagnostics, creating a brand new situation in therapeutic intervention.

  16. Thermal fatigue life evaluation of SnAgCu solder joints in a multi-chip power module

    NASA Astrophysics Data System (ADS)

    Barbagallo, C.; Malgioglio, G. L.; Petrone, G.; Cammarata, G.

    2017-05-01

    For power devices, thermal fatigue induced by thermal cycling has been prioritized as an important reliability concern. The main target of this work is to apply a numerical procedure to assess the fatigue life of lead-free solder joints, which in general represent the weakest part of electronic modules. Starting from a real multi-chip power module, FE-based models were built up under different modelling conditions in order to simulate, on the one hand, the worst working condition for the module and, on the other hand, the module placed in a climatic test chamber undergoing thermal cycles. Simulations were carried out in both steady and transient conditions in order to estimate the module thermal maps, the stress-strain distributions and the effective plastic strain distributions, and finally to assess the number of cycles to failure of the constitutive solder layers.
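
    The abstract does not state which fatigue model or constants were used; purely as an illustration of the final step (mapping a per-cycle plastic strain range from the FE results to cycles to failure), the fragment below applies a generic Coffin-Manson relation with placeholder constants.

        # Illustrative Coffin-Manson estimate: delta_eps_p / 2 = eps_f * (2*N_f)**c.
        # eps_f and c are generic placeholder constants, not the paper's values.
        def cycles_to_failure(delta_eps_p, eps_f=0.325, c=-0.5):
            return 0.5 * (delta_eps_p / (2.0 * eps_f)) ** (1.0 / c)

        for delta_eps_p in (0.005, 0.01, 0.02):    # plastic strain range per cycle (from FEA)
            n_f = cycles_to_failure(delta_eps_p)
            print(f"delta_eps_p = {delta_eps_p:.3f}  ->  N_f ~ {n_f:.0f} cycles")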

  17. A multiplexed chip-based assay system for investigating the functional development of human skeletal myotubes in vitro.

    PubMed

    Smith, A S T; Long, C J; Pirozzi, K; Najjar, S; McAleer, C; Vandenburgh, H H; Hickman, J J

    2014-09-20

    This report details the development of a non-invasive in vitro assay system for investigating the functional maturation and performance of human skeletal myotubes. Data is presented demonstrating the survival and differentiation of human myotubes on microscale silicon cantilevers in a defined, serum-free system. These cultures can be stimulated electrically and the resulting contraction quantified using modified atomic force microscopy technology. This system provides a higher degree of sensitivity for investigating contractile waveforms than video-based analysis, and represents the first system capable of measuring the contractile activity of individual human muscle myotubes in a reliable, high-throughput and non-invasive manner. The development of such a technique is critical for the advancement of body-on-a-chip platforms toward application in pre-clinical drug development screens. Copyright © 2014 Elsevier B.V. All rights reserved.

  18. Quantitative trait loci markers derived from whole genome sequence data increases the reliability of genomic prediction.

    PubMed

    Brøndum, R F; Su, G; Janss, L; Sahana, G; Guldbrandtsen, B; Boichard, D; Lund, M S

    2015-06-01

    This study investigated the effect on the reliability of genomic prediction when a small number of significant variants from single marker analysis based on whole genome sequence data were added to the regular 54k single nucleotide polymorphism (SNP) array data. The extra markers were selected with the aim of augmenting the custom low-density Illumina BovineLD SNP chip (San Diego, CA) used in the Nordic countries. The single-marker analysis was done breed-wise on all 16 index traits included in the breeding goals for Nordic Holstein, Danish Jersey, and Nordic Red cattle plus the total merit index itself. Depending on the trait's economic weight, 15, 10, or 5 quantitative trait loci (QTL) were selected per trait per breed and 3 to 5 markers were selected to tag each QTL. After removing duplicate markers (same marker selected for more than one trait or breed) and filtering for high pairwise linkage disequilibrium and assaying performance on the array, a total of 1,623 QTL markers were selected for inclusion on the custom chip. Genomic prediction analyses were performed for Nordic and French Holstein and Nordic Red animals using either a genomic BLUP or a Bayesian variable selection model. When using the genomic BLUP model including the QTL markers in the analysis, reliability was increased by up to 4 percentage points for production traits in Nordic Holstein animals, up to 3 percentage points for Nordic Reds, and up to 5 percentage points for French Holstein. Smaller gains of up to 1 percentage point was observed for mastitis, but only a 0.5 percentage point increase was seen for fertility. When using a Bayesian model accuracies were generally higher with only 54k data compared with the genomic BLUP approach, but increases in reliability were relatively smaller when QTL markers were included. Results from this study indicate that the reliability of genomic prediction can be increased by including markers significant in genome-wide association studies on whole genome sequence data alongside the 54k SNP set. Copyright © 2015 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  19. A simple microfluidic platform for rapid and efficient production of the radiotracer [18F]fallypride.

    PubMed

    Zhang, Xin; Liu, Fei; Knapp, Karla-Anne; Nickels, Michael L; Manning, H Charles; Bellan, Leon M

    2018-05-01

    Herein, we report the development of a simple, high-throughput and efficient microfluidic system for synthesizing radioactive [18F]fallypride, a PET imaging radiotracer widely used in medical research. The microfluidic chip contains all essential modules required for the synthesis and purification of radioactive fallypride. The radiochemical yield of the tracer is sufficient for multiple animal injections for preclinical imaging studies. To produce the on-chip concentration and purification columns, we employ a simple "trapping" mechanism by inserting rows of square pillars with predefined gaps near the outlet of microchannel. Microspheres with appropriate functionality are suspended in solution and loaded into the microchannels to form columns for radioactivity concentration and product purification. Instead of relying on complicated flow control elements (e.g., micromechanical valves requiring complex external pneumatic actuation), external valves are utilized to control transfer of the reagents between different modules. The on-chip ion exchange column can efficiently capture [18F]fluoride with negligible loss (∼98% trapping efficiency), and subsequently release a burst of concentrated [18F]fluoride to the reaction cavity. A thin layer of PDMS with a small hole in the center facilitates rapid and reliable water evaporation (with the aid of azeotropic distillation and nitrogen flow) while reducing fluoride loss. During the solvent exchange and fluorination reaction, the entire chip is uniformly heated to the desired temperature using a hot plate. All aspects of the [18F]fallypride synthesis were monitored by high-performance liquid chromatography (HPLC) analysis, resulting in labelling efficiency in fluorination reaction ranging from 67-87% (n = 5). Moreover, after isolating unreacted [18F]fluoride, remaining fallypride precursor, and various by-products via an on-chip purification column, the eluted [18F]fallypride is radiochemically pure and of a sufficient quantity to allow for PET imaging (∼5 mCi). Finally, a positron emission tomography (PET) image of a rat brain injected with ∼300 μCi [18F]fallypride produced by our microfluidic chip is provided, demonstrating the utility of the product produced by the microfluidic reactor. With a short synthesis time (∼60 min) and a highly integrated on-chip modular configuration that allows for concentration, reaction, and product purification, our microfluidic chip offers numerous exciting advantages with the potential for applications in radiochemical research and clinical production. Moreover, due to its simplicity and potential for automation, we anticipate it may be easily integrated into a clinical environment.

  20. Adaptive Code Division Multiple Access Protocol for Wireless Network-on-Chip Architectures

    NASA Astrophysics Data System (ADS)

    Vijayakumaran, Vineeth

    Massive levels of integration following Moore's Law ushered in a paradigm shift in the way on-chip interconnections are designed. With higher and higher numbers of cores on the same die, traditional bus-based interconnections are no longer a scalable communication infrastructure. On-chip networks were proposed to enable a scalable plug-and-play mechanism for interconnecting hundreds of cores on the same chip. Wired interconnects between the cores in a traditional Network-on-Chip (NoC) system become a bottleneck as the number of cores increases, raising the latency and energy required to transmit signals over them. Hence, many alternative emerging interconnect technologies have been proposed, namely 3D, photonic and multi-band RF interconnects. Although they provide better connectivity, higher speed and higher bandwidth compared to wired interconnects, they also face challenges with heat dissipation and manufacturing difficulties. On-chip wireless interconnects are another proposed alternative that needs no physical interconnection layout, as data travels over the wireless medium. They are integrated into a hybrid NoC architecture consisting of both wired and wireless links, which provides higher bandwidth, lower latency, lower area overhead and reduced energy dissipation in communication. However, as the bandwidth of the wireless channels is limited, an efficient media access control (MAC) scheme is required to enhance the utilization of the available bandwidth. This thesis proposes using a multiple access mechanism such as Code Division Multiple Access (CDMA) to enable multiple transmitter-receiver pairs to send data over the wireless channel simultaneously. It is shown that such a hybrid wireless NoC with an efficient CDMA-based MAC protocol can significantly increase the performance of the system while lowering the energy dissipation in data transfer. In this work the wireless NoC with the proposed CDMA-based MAC protocol is shown to outperform wired counterparts and several other wireless architectures proposed in the literature in terms of bandwidth and packet energy dissipation. Significant gains were observed in packet energy dissipation and bandwidth even when scaling the system to higher numbers of cores. Non-uniform traffic simulations showed that the proposed CDMA-WiNoC was consistent in bandwidth across all traffic patterns. It is also shown that the CDMA-based MAC scheme does not introduce additional reliability concerns in data transfer over the on-chip wireless interconnects.
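
    The thesis's protocol details are not reproduced in the abstract; the sketch below only demonstrates the underlying CDMA mechanism it builds on: each transmitter spreads its bit with a distinct orthogonal Walsh (Hadamard) code, the spread signals superpose on the shared wireless channel, and each receiver recovers its own bit by correlating against its code.

        # Textbook CDMA over a shared channel with orthogonal Walsh (Hadamard) codes.
        # This illustrates the multiple-access mechanism, not the thesis's MAC protocol.
        import numpy as np
        from scipy.linalg import hadamard

        n_pairs = 8
        codes = hadamard(n_pairs)                    # rows are mutually orthogonal +/-1 codes

        bits = np.array([1, 0, 1, 1, 0, 0, 1, 0])    # one data bit per transmitter
        symbols = 2 * bits - 1                       # map {0,1} -> {-1,+1}

        channel = symbols @ codes                    # superposition of all spread signals
        recovered = (channel @ codes.T) / n_pairs    # each receiver correlates with its code
        decoded = (recovered > 0).astype(int)

        print("sent:   ", bits.tolist())
        print("decoded:", decoded.tolist())          # matches, thanks to orthogonality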
