The 1991 3rd NASA Symposium on VLSI Design
NASA Technical Reports Server (NTRS)
Maki, Gary K.
1991-01-01
Papers from the symposium are presented, organized into the following sessions: (1) featured presentations 1; (2) very large scale integration (VLSI) circuit design; (3) VLSI architectures 1; (4) featured presentations 2; (5) neural networks; (6) VLSI architectures 2; (7) featured presentations 3; (8) verification 1; (9) analog design; (10) verification 2; (11) design innovations 1; (12) asynchronous design; and (13) design innovations 2.
NASA Technical Reports Server (NTRS)
Windley, P. J.
1991-01-01
In this paper we explore the specification and verification of VLSI designs. The paper focuses on abstract specification and verification of functionality using mathematical logic, as opposed to low-level Boolean equivalence verification such as that done using BDDs and model checking. Specification and verification, sometimes called formal methods, are one tool for increasing computer dependability in the face of an exponentially increasing testing effort.
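As a toy illustration of the distinction drawn above (an assumption of this note, not an example from the paper): the specification of an adder is ordinary arithmetic, the implementation is a gate-level structure, and verification shows the two agree. A HOL-style proof would establish this symbolically; the sketch below merely checks it exhaustively for a 2-bit ripple-carry adder.

```python
# Illustrative sketch: relating a structural gate-level implementation to an
# abstract functional specification. Real HOL proofs are symbolic; here the
# relation is checked exhaustively for a 2-bit ripple-carry adder.

def full_adder(a, b, cin):
    """Structural model: XOR/AND/OR gates only."""
    s = a ^ b ^ cin
    cout = (a & b) | (a & cin) | (b & cin)
    return s, cout

def adder2_impl(a1, a0, b1, b0):
    """Two full adders wired as a ripple-carry chain."""
    s0, c0 = full_adder(a0, b0, 0)
    s1, c1 = full_adder(a1, b1, c0)
    return c1, s1, s0

def adder2_spec(a, b):
    """Abstract specification: ordinary integer addition."""
    return a + b

# Correctness statement: for all inputs, the implementation's output,
# interpreted as a number, equals the specification's output.
for a in range(4):
    for b in range(4):
        c, s1, s0 = adder2_impl(a >> 1, a & 1, b >> 1, b & 1)
        assert 4 * c + 2 * s1 + s0 == adder2_spec(a, b)
print("implementation meets specification on all inputs")
```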
HDL to verification logic translator
NASA Technical Reports Server (NTRS)
Gambles, J. W.; Windley, P. J.
1992-01-01
The increasingly large number of transistors possible in VLSI circuits compounds the difficulty of ensuring correct designs. As the number of test cases required to exhaustively simulate a circuit design explodes, a better method is required to confirm the absence of design faults. Formal verification methods provide a way to prove, using logic, that a circuit structure correctly implements its specification. Before verification is accepted by VLSI design engineers, the stand-alone verification tools in use in the research community must be integrated with the CAD tools used by the designers. One problem facing the acceptance of formal verification into circuit design methodology is that the structural circuit descriptions used by the designers are not appropriate for verification work, and those required for verification lack some of the features needed for design. We offer a solution to this dilemma: an automatic translation from the designers' HDL models into definitions for the higher-order logic (HOL) verification system. The translated definitions become the low-level basis of circuit verification, which in turn increases the designer's confidence in the correctness of higher-level behavioral models.
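The following sketch suggests the flavor of such a translation; the netlist format and the HOL-style output syntax are simplified assumptions for illustration, not the translator described in the paper. Each component instance becomes a predicate and the wiring becomes shared variables, with internal wires existentially quantified, the standard relational style for hardware in HOL.

```python
# Minimal sketch (assumed netlist format and simplified HOL syntax):
# turn a structural HDL-like netlist into a relational HOL-style definition.

netlist = {
    "name": "HALF_ADDER",
    "ports": ["a", "b", "sum", "carry"],
    "internal": [],                      # internal wires, if any
    "instances": [
        ("XOR2", ["a", "b", "sum"]),
        ("AND2", ["a", "b", "carry"]),
    ],
}

def to_hol(net):
    ports = " ".join(net["ports"])
    # Each component instance becomes a predicate applied to its pins;
    # the circuit is the conjunction of its components.
    body = " /\\ ".join(
        f"{kind} {' '.join(pins)}" for kind, pins in net["instances"]
    )
    if net["internal"]:
        # Hide internal wires behind existential quantification.
        hidden = " ".join(net["internal"])
        body = f"? {hidden}. {body}"
    return f"|- {net['name']} {ports} = {body}"

print(to_hol(netlist))
# |- HALF_ADDER a b sum carry = XOR2 a b sum /\ AND2 a b carry
```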
Highly efficient simulation environment for HDTV video decoder in VLSI design
NASA Astrophysics Data System (ADS)
Mao, Xun; Wang, Wei; Gong, Huimin; He, Yan L.; Lou, Jian; Yu, Lu; Yao, Qingdong; Pirsch, Peter
2002-01-01
As the complexity of VLSI grows, particularly for SoC (System on Chip) designs such as an MPEG-2 video decoder with HDTV scalability, simulation and verification of the full design, even at the behavioral level in HDL, often prove very slow and costly, and full verification is difficult to perform until late in the design process. These tasks therefore become the bottleneck of the HDTV video decoder design procedure and strongly influence its time-to-market. In this paper, the architecture of the hardware/software interface of an HDTV video decoder is studied, and a Hardware-Software Mixed Simulation (HSMS) platform, based on the MPEG-2 video decoding algorithm, is proposed to detect and correct errors in the early design stages. The application of HSMS to the target system is achieved by employing several introduced approaches, which speed up the simulation and verification task without decreasing performance.
CMOS VLSI Layout and Verification of a SIMD Computer
NASA Technical Reports Server (NTRS)
Zheng, Jianqing
1996-01-01
The CMOS VLSI layout and verification of a 3 x 3 processor parallel computer have been completed. The layout was done using the MAGIC tool and the verification using HSPICE. Suggestions for expanding the computer into a million-processor network are presented. Many problems that might be encountered when implementing a massively parallel computer are discussed.
Submicron Systems Architecture Project
1981-11-01
This project is concerned with the architecture, design, and testing of VLSI systems. The principal activities in this report period include: the Tree Machine; COPE, the Homogeneous Machine; Computational Arrays; Switch-Level Model for MOS Logic Design; Testing; Local Network and Designer Workstations; Self-timed Systems; Characterization of Deadlock-Free Resource Contention; Concurrency Algebra; and Language Design and Logic for Program Verification.
An Integrated Unix-based CAD System for the Design and Testing of Custom VLSI Chips
NASA Technical Reports Server (NTRS)
Deutsch, L. J.
1985-01-01
A computer aided design (CAD) system that is being used at the Jet Propulsion Laboratory for the design of custom and semicustom very large scale integrated (VLSI) chips is described. The system consists of a Digital Equipment Corporation VAX computer with the UNIX operating system and a collection of software tools for the layout, simulation, and verification of microcircuits. Most of these tools were written by the academic community and are, therefore, available to JPL at little or no cost. Some small pieces of software have been written in-house in order to make all the tools interact with each other with a minimal amount of effort on the part of the designer.
NASA Technical Reports Server (NTRS)
Probst, D.; Jensen, L.
1991-01-01
Delay-insensitive VLSI systems have a certain appeal on the ground due to difficulties with clocks; they are even more attractive in space. We address the question: is it possible to control the state explosion arising from various sources during automatic verification (model checking) of delay-insensitive systems? State explosion due to concurrency is handled by introducing a partial-order representation for systems and defining system correctness as a simple relation between two partial orders on the same set of system events (a graph problem). State explosion due to nondeterminism (chiefly arbitration) is handled when the system to be verified has a clean, finite recurrence structure. Backwards branching is a further optimization. The heart of this approach is the ability, during model checking, to discover a compact finite presentation of the verified system without prior composition of system components. The fully implemented POM verification system has polynomial space and time performance on traditional asynchronous-circuit benchmarks that are exponential in space and time for other verification systems. We also sketch the generalization of this approach to handle delay-constrained VLSI systems.
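A toy rendering of a "simple relation between two partial orders" (the specific relation used by POM is not reproduced here; subset-after-transitive-closure is an assumption for illustration): every ordering constraint demanded by the specification must also hold in the implementation.

```python
# Toy illustration, not the POM tool itself: systems as partial orders over
# the same event set; correctness as a subset test on transitively closed
# edge sets.

def transitive_closure(events, edges):
    reach = {e: {t for s, t in edges if s == e} for e in events}
    changed = True
    while changed:
        changed = False
        for e in events:
            extra = set().union(*(reach[n] for n in reach[e])) - reach[e] if reach[e] else set()
            if extra:
                reach[e] |= extra
                changed = True
    return {(s, t) for s in events for t in reach[s]}

events = {"req", "grant", "ack"}
spec_order = {("req", "grant"), ("grant", "ack")}
impl_order = {("req", "grant"), ("grant", "ack"), ("req", "ack")}

spec_tc = transitive_closure(events, spec_order)
impl_tc = transitive_closure(events, impl_order)
print("implementation respects spec ordering:", spec_tc <= impl_tc)  # True
```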
An engineering methodology for implementing and testing VLSI (Very Large Scale Integrated) circuits
NASA Astrophysics Data System (ADS)
Corliss, Walter F., II
1989-03-01
The engineering methodology for producing a fully tested VLSI chip from a design layout is presented. A 16-bit correlator, NPS CORN88, that was previously designed was used as a vehicle to demonstrate this methodology. The study of the design and simulation tools, MAGIC and MOSSIM II, was the focus of the design and validation process. The design was then implemented and the chip was fabricated by MOSIS. This fabricated chip was then used to develop a testing methodology for using the digital test facilities at NPS. NPS CORN88 was the first full-custom VLSI chip designed at NPS to be tested with the NPS digital analysis system, a Tektronix DAS 9100 series tester. The capabilities and limitations of these test facilities are examined. NPS CORN88 test results are included to demonstrate the capabilities of the digital test system. A translator, MOS2DAS, was developed to convert the MOSSIM II simulation program to the input files required by the DAS 9100 device verification software, 91DVS. Finally, a tutorial for using the digital test facilities, including the DAS 9100 and associated support equipment, is included as an appendix.
Parallel algorithms for placement and routing in VLSI design. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Brouwer, Randall Jay
1991-01-01
The computational requirements for high quality synthesis, analysis, and verification of very large scale integration (VLSI) designs have rapidly increased with the fast growing complexity of these designs. Research in the past has focused on the development of heuristic algorithms, special purpose hardware accelerators, or parallel algorithms for the numerous design tasks to decrease the time required for solution. Two new parallel algorithms are proposed for two VLSI synthesis tasks, standard cell placement and global routing. The first algorithm, a parallel algorithm for global routing, uses hierarchical techniques to decompose the routing problem into independent routing subproblems that are solved in parallel. Results are then presented which compare the routing quality to the results of other published global routers and which evaluate the speedups attained. The second algorithm, a parallel algorithm for cell placement and global routing, hierarchically integrates a quadrisection placement algorithm, a bisection placement algorithm, and the previous global routing algorithm. Unique partitioning techniques are used to decompose the various stages of the algorithm into independent tasks which can be evaluated in parallel. Finally, results are presented which evaluate the various algorithm alternatives and compare the algorithm performance to other placement programs. Measurements are presented on the parallel speedups available.
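The decomposition idea can be suggested with a toy sketch (the grids, blockages, and the Lee-style BFS router below are assumptions for illustration, not the dissertation's algorithms): once hierarchical decomposition has produced routing subproblems that share no resources, each can be handed to an independent worker.

```python
# Hedged sketch of hierarchical decomposition into independent routing
# subproblems solved in parallel, using a toy Lee-style (BFS) maze router.
from collections import deque
from multiprocessing import Pool

def bfs_route(args):
    (w, h), blocked, src, dst = args
    prev, frontier = {src: None}, deque([src])
    while frontier:
        cur = frontier.popleft()
        if cur == dst:                       # back-trace the found route
            path, node = [], dst
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        x, y = cur
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nxt[0] < w and 0 <= nxt[1] < h
                    and nxt not in blocked and nxt not in prev):
                prev[nxt] = cur
                frontier.append(nxt)
    return None                              # unroutable in this subregion

if __name__ == "__main__":
    # Two subproblems produced by a (hypothetical) hierarchical decomposition;
    # they share no cells, so they can be routed in parallel.
    subproblems = [
        ((8, 8), {(3, y) for y in range(6)}, (0, 0), (7, 7)),
        ((8, 8), {(4, y) for y in range(2, 8)}, (7, 0), (0, 7)),
    ]
    with Pool(2) as pool:
        for path in pool.map(bfs_route, subproblems):
            print(path)
```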
Formal Multilevel Hierarchical Verification of Synchronous MOS VLSI Circuits.
1987-06-01
The verifier's operations depend on whether it is performing flat verification or hierarchical verification. The primary operation of Silica Pithecus when performing flat verification is showing that ... signals never arise. The primary operation of Silica Pithecus when performing hierarchical verification is processing constraints to show they hold.
The Fifth NASA Symposium on VLSI Design
NASA Technical Reports Server (NTRS)
1993-01-01
The fifth annual NASA Symposium on VLSI Design had 13 sessions including Radiation Effects, Architectures, Mixed Signal, Design Techniques, Fault Testing, Synthesis, Signal Processing, and other Featured Presentations. The symposium provides insights into developments in VLSI and digital systems which can be used to increase data systems performance. The presentations share insights into next generation advances that will serve as a basis for future VLSI design.
Parallel processing for digital picture comparison
NASA Technical Reports Server (NTRS)
Cheng, H. D.; Kou, L. T.
1987-01-01
In picture processing an important problem is to identify two digital pictures of the same scene taken under different lighting conditions. This kind of problem can be found in remote sensing, satellite signal processing, and related areas. The identification can be done by transforming the gray levels so that the gray-level histograms of the two pictures are closely matched. The transformation problem can be solved by using the packing method. The researchers propose a VLSI architecture consisting of m x n processing elements with extensive parallel and pipelining computation capabilities to speed up the transformation, with time complexity O(max(m, n)), where m and n are the numbers of gray levels of the input picture and the reference picture, respectively. Using a uniprocessor and a dynamic programming algorithm, the time complexity would be O(m^3 n). The algorithm partition problem, an important issue in VLSI design, is discussed. Verification of the proposed architecture is also given.
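The gray-level transformation itself can be sketched in a few lines (a hedged illustration using a standard cumulative-histogram construction; the paper's packing method and the systolic m x n array are not reproduced here):

```python
# Illustrative sketch of histogram matching: map the input picture's gray
# levels so its histogram approximates the reference picture's, via CDFs.
# Array shapes and gray-level counts are assumptions.
import numpy as np

def match_histogram(input_img, reference_img, levels=256):
    in_hist = np.bincount(input_img.ravel(), minlength=levels)
    ref_hist = np.bincount(reference_img.ravel(), minlength=levels)
    in_cdf = np.cumsum(in_hist) / input_img.size
    ref_cdf = np.cumsum(ref_hist) / reference_img.size
    # For each input level, pick the reference level with the nearest CDF.
    mapping = np.searchsorted(ref_cdf, in_cdf).clip(0, levels - 1)
    return mapping[input_img]

rng = np.random.default_rng(0)
a = rng.integers(0, 128, size=(64, 64))    # darker scene
b = rng.integers(64, 256, size=(64, 64))   # same scene, brighter lighting
print(match_histogram(a, b).mean(), b.mean())  # means now comparable
```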
vPELS: An E-Learning Social Environment for VLSI Design with Content Security Using DRM
ERIC Educational Resources Information Center
Dewan, Jahangir; Chowdhury, Morshed; Batten, Lynn
2014-01-01
This article provides a proposal for a personal e-learning system (vPELS, where "v" stands for VLSI: very large scale integrated circuit) architecture in the context of a social network environment for VLSI design. The main objective of vPELS is to develop individual skills on a specific subject--say, VLSI--and share resources with peers.…
A single VLSI chip for computing syndromes in the (255, 223) Reed-Solomon decoder
NASA Technical Reports Server (NTRS)
Hsu, I. S.; Truong, T. K.; Shao, H. M.; Deutsch, L. J.
1986-01-01
A description of a single VLSI chip for computing syndromes in the (255, 223) Reed-Solomon decoder is presented. The architecture that leads to this single VLSI chip design makes use of the dual basis multiplication algorithm. The same architecture can be applied to design VLSI chips to compute various kinds of number theoretic transforms.
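To make the computation concrete, here is a minimal software sketch of syndrome evaluation for RS(255, 223) over GF(2^8). The field polynomial and the polynomial-basis arithmetic are assumptions for illustration; the chip itself uses the dual-basis multiplication algorithm mentioned above.

```python
# Syndrome computation sketch: S_j = r(alpha^j) for j = 1..2t; all syndromes
# are zero iff no detectable error. Field polynomial x^8+x^4+x^3+x^2+1 and
# generator alpha = x are assumptions.

def gf_mul(a, b, poly=0x11d):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= poly
        b >>= 1
    return r

def syndromes(received, two_t=32):
    out, alpha_j = [], 1
    for _ in range(two_t):
        alpha_j = gf_mul(alpha_j, 2)      # next power of alpha
        s = 0
        for c in reversed(received):      # Horner evaluation of r(x)
            s = gf_mul(s, alpha_j) ^ c
        out.append(s)
    return out

word = [0] * 255                 # the all-zero word is a codeword
print(any(syndromes(word)))      # False: no error detected
word[10] ^= 0x5a                 # inject a single-byte error
print(any(syndromes(word)))      # True: nonzero syndromes flag the error
```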
Parallel optimization algorithms and their implementation in VLSI design
NASA Technical Reports Server (NTRS)
Lee, G.; Feeley, J. J.
1991-01-01
Two new parallel optimization algorithms based on the simplex method are described. They may be executed by a SIMD parallel processor architecture and be implemented in VLSI design. Several VLSI design implementations are introduced. An application example is reported to demonstrate that the algorithms are effective.
Architecture for VLSI design of Reed-Solomon encoders
NASA Technical Reports Server (NTRS)
Liu, K. Y.
1981-01-01
The logic structure of a universal VLSI chip called the symbol-slice Reed-Solomon (RS) encoder chip is discussed. An RS encoder can be constructed by cascading and properly interconnecting a group of such VLSI chips. As a design example, it is shown that a (255,223) RS encoder requiring around 40 discrete CMOS ICs may be replaced by an RS encoder consisting of four identical interconnected VLSI RS encoder chips. Besides the size advantage, the VLSI RS encoder also has the potential advantages of requiring less power and having higher reliability.
NASA Technical Reports Server (NTRS)
Szatkowski, G. P.
1983-01-01
A computer simulation system has been developed for the Space Shuttle's advanced Centaur liquid fuel booster rocket, in order to conduct systems safety verification and flight operations training. This simulation utility is designed to analyze functional system behavior by integrating control avionics with mechanical and fluid elements, and is able to emulate any system operation, from simple relay logic to complex VLSI components, with wire-by-wire detail. A novel graphics data entry system offers a pseudo-wire wrap data base that can be easily updated. Visual subsystem operations can be selected and displayed in color on a six-monitor graphics processor. System timing and fault verification analyses are conducted by injecting component fault modes and min/max timing delays, and then observing system operation through a red line monitor.
The 1992 4th NASA SERC Symposium on VLSI Design
NASA Technical Reports Server (NTRS)
Whitaker, Sterling R.
1992-01-01
Papers from the fourth annual NASA Symposium on VLSI Design, co-sponsored by the IEEE, are presented. Each year this symposium is organized by the NASA Space Engineering Research Center (SERC) at the University of Idaho and is held in conjunction with a quarterly meeting of the NASA Data System Technology Working Group (DSTWG). One task of the DSTWG is to develop new electronic technologies that will meet next generation electronic data system needs. The symposium provides insights into developments in VLSI and digital systems which can be used to increase data systems performance. The NASA SERC is proud to offer, at its fourth symposium on VLSI design, presentations by an outstanding set of individuals from national laboratories, the electronics industry, and universities. These speakers share insights into next generation advances that will serve as a basis for future VLSI design.
NASA Space Engineering Research Center Symposium on VLSI Design
NASA Technical Reports Server (NTRS)
Maki, Gary K.
1990-01-01
The NASA Space Engineering Research Center (SERC) is proud to offer, at its second symposium on VLSI design, presentations by an outstanding set of individuals from national laboratories and the electronics industry. These featured speakers share insights into next generation advances that will serve as a basis for future VLSI design. Questions of reliability in the space environment along with new directions in CAD and design are addressed by the featured speakers.
1987-06-01
...evaluation and chip layout planning for VLSI digital systems. A high-level applicative (functional) language, implemented at UCLA, allows combining of... The complexity of VLSI requires the application of CAD tools at all levels of the design process. In order to be effective, these tools must be adaptive to the specific design. In this project we studied a design method based on the use of applicative languages.
Complex VLSI Feature Comparison for Commercial Microelectronics Verification
2014-03-27
...used for high-performance consumer microelectronics. Volume is a significant factor in constraining the technology limit for defense circuits, but it... ...surveyed in a 2010 Department of Commerce report found counterfeit chips difficult to identify due to improved fabrication quality in overseas counterfeit...
A procedural method for the efficient implementation of full-custom VLSI designs
NASA Technical Reports Server (NTRS)
Belk, P.; Hickey, N.
1987-01-01
An embedded language system for the layout of very large scale integration (VLSI) circuits is examined. It is shown that through the judicious use of this system, a large variety of circuits can be designed with circuit density and performance comparable to traditional full-custom design methods, but with design costs more comparable to semi-custom design methods. The high performance of this methodology is attributable to the flexibility of procedural descriptions of VLSI layouts and to a number of automatic and semi-automatic tools within the system.
VLSI design of a single chip reed-solomon encoder
DOE Office of Scientific and Technical Information (OSTI.GOV)
Truong, T.K.; Deutsch, L.J.; Reed, I.S.
A design for a single chip implementation of a Reed-Solomon encoder is presented. The architecture that leads to this single VLSI chip design makes use of a bit serial finite field multiplication algorithm.
NASA Space Engineering Research Center for VLSI System Design
NASA Technical Reports Server (NTRS)
1993-01-01
This annual report outlines the activities of the past year at the NASA SERC on VLSI Design. Highlights for this year include the following: a significant breakthrough was achieved in utilizing commercial IC foundries for producing flight electronics; the first two flight qualified chips were designed, fabricated, and tested and are now being delivered into NASA flight systems; and a new technology transfer mechanism has been established to transfer VLSI advances into NASA and commercial systems.
Laser Direct Routing for High Density Interconnects
NASA Astrophysics Data System (ADS)
Moreno, Wilfrido Alejandro
The laser restructuring of electronic circuits fabricated using standard Very Large Scale Integration (VLSI) process techniques is an excellent alternative that allows low-cost, quick-turnaround production with full circuit similarity between the laser-restructured prototype and the customized product for mass production. Laser Restructurable VLSI (LRVLSI) would allow design engineers the capability to interconnect cells that implement generic logic functions and signal processing schemes to achieve a higher level of design complexity. LRVLSI of a particular circuit at the wafer or packaged-chip level is accomplished using an integrated, computer-controlled laser system to create low-electrical-resistance links between conductors and to cut conductor lines. An infrastructure for rapid prototyping and quick turnaround using laser restructuring of VLSI circuits was developed to meet three main parallel objectives: to pursue research on novel interconnect technologies using LRVLSI, to develop the capability of operating in a quick-turnaround mode, and to maintain standardization and compatibility with commercially available equipment for feasible technology transfer. The system was designed to possess a high degree of flexibility, high data quality, total controllability, full documentation, short downtime, a user-friendly operator interface, automation, historical record keeping, and error indication and logging. A specially designed chip, "SLINKY," was used as the test vehicle for the complete characterization of the laser restructuring system. With the use of Design of Experiments techniques, the Lateral Diffused Link (LDL), developed originally at MIT Lincoln Laboratories, was completely characterized, and for the first time a set of optimum process parameters was obtained. With the designed infrastructure fully operational, the priority objective was the search for a substitute for the high-resistance, high-leakage-to-substrate, and relatively low-density Lateral Diffused Link. A high-density Laser Vertical Link with resistance values below 10 ohms was developed, studied, and tested using design-of-experiments methodologies. The vertical link offers excellent advantages in the area of quick prototyping of electronic circuits but, even more importantly, because its characteristics are similar to those of a foundry-produced via, it gives quick transfer from the prototype system verification stage to the mass production stage.
On testing VLSI chips for the big Viterbi decoder
NASA Technical Reports Server (NTRS)
Hsu, I. S.
1989-01-01
A general technique that can be used in testing very large scale integrated (VLSI) chips for the Big Viterbi Decoder (BVD) system is described. The test technique is divided into functional testing and fault-coverage testing. The purpose of functional testing is to verify that the design works functionally. Functional test vectors are converted from outputs of software simulations which simulate the BVD functionally. Fault-coverage testing is used to detect and, in some cases, to locate faulty components caused by bad fabrication. This type of testing is useful in screening out bad chips. Finally, design for testability, which is included in the BVD VLSI chip design, is described in considerable detail. Both the observability and controllability of a VLSI chip are greatly enhanced by including the design for the testability feature.
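As a toy illustration of the functional-testing half of this flow (the vector format, the device model, and all names below are assumptions, not the BVD test setup): golden vectors produced by a software simulation are replayed against the device, and any mismatch flags the chip.

```python
# Sketch of a functional-test harness: replay golden vectors exported from a
# software simulation against the device under test, report mismatches.

def run_functional_test(apply_vector, golden_vectors):
    """apply_vector: callable driving the device-under-test with one input
    vector and returning its observed outputs."""
    failures = []
    for i, (stimulus, expected) in enumerate(golden_vectors):
        observed = apply_vector(stimulus)
        if observed != expected:
            failures.append((i, stimulus, expected, observed))
    return failures

# Toy device-under-test: a 4-bit incrementer with a stuck-at-1 fault on bit 2.
dut = lambda x: ((x + 1) % 16) | 0b0100
golden = [(x, (x + 1) % 16) for x in range(16)]
bad = run_functional_test(dut, golden)
print(f"{len(bad)} of {len(golden)} vectors failed")  # nonzero: chip rejected
```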
The VLSI design of a single chip Reed-Solomon encoder
NASA Technical Reports Server (NTRS)
Truong, T. K.; Deutsch, L. J.; Reed, I. S.
1982-01-01
A design for a single chip implementation of a Reed-Solomon encoder is presented. The architecture that leads to this single VLSI chip design makes use of a bit serial finite field multiplication algorithm.
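The bit-serial structure can be sketched in software. The chip's algorithm is often credited to a dual-basis scheme; the sketch below instead uses plain polynomial-basis arithmetic with an assumed field polynomial, purely to convey the one-multiplier-bit-per-clock organization.

```python
# Hedged sketch of bit-serial multiplication in GF(2^8): one bit of the
# multiplier is consumed per "clock". Field polynomial is an assumption.
POLY = 0x11d  # x^8 + x^4 + x^3 + x^2 + 1

def xtime(a):
    """Multiply by x and reduce: one shift-register step."""
    a <<= 1
    return a ^ POLY if a & 0x100 else a

def bit_serial_mul(a, b):
    acc = 0
    for i in range(7, -1, -1):     # one multiplier bit per clock, MSB first
        acc = xtime(acc)
        if (b >> i) & 1:
            acc ^= a               # conditionally add the multiplicand
    return acc

assert bit_serial_mul(0x80, 0x02) == 0x1d   # x^7 * x = x^8 = x^4+x^3+x^2+1
assert all(bit_serial_mul(a, 1) == a for a in range(256))  # multiplying by 1
```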
NASA Space Engineering Research Center for VLSI systems design
NASA Technical Reports Server (NTRS)
1991-01-01
This annual review reports the center's activities and findings on very large scale integration (VLSI) systems design for 1990, including project status, financial support, publications, the NASA Space Engineering Research Center (SERC) Symposium on VLSI Design, research results, and outreach programs. Processor chips completed or under development are listed. Research results summarized include a design technique to harden complementary metal oxide semiconductors (CMOS) memory circuits against single event upset (SEU); improved circuit design procedures; and advances in computer aided design (CAD), communications, computer architectures, and reliability design. Also described is a high school teacher program that exposes teachers to the fundamentals of digital logic design.
A special purpose silicon compiler for designing supercomputing VLSI systems
NASA Technical Reports Server (NTRS)
Venkateswaran, N.; Murugavel, P.; Kamakoti, V.; Shankarraman, M. J.; Rangarajan, S.; Mallikarjun, M.; Karthikeyan, B.; Prabhakar, T. S.; Satish, V.; Venkatasubramaniam, P. R.
1991-01-01
Design of general/special purpose supercomputing VLSI systems for numeric algorithm execution involves tackling two important aspects, namely their computational and communication complexities. Development of software tools for designing such systems itself becomes complex. Hence a novel design methodology has to be developed. For designing such complex systems a special purpose silicon compiler is needed in which: the computational and communicational structures of different numeric algorithms should be taken into account to simplify the silicon compiler design; the approach is macrocell based; and the software tools at different levels (algorithm down to the VLSI circuit layout) should get integrated. In this paper a special purpose silicon (SPS) compiler based on PACUBE macrocell VLSI arrays for designing supercomputing VLSI systems is presented. It is shown that turn-around time and silicon real estate get reduced over silicon compilers based on PLAs, SLAs, and gate arrays. The first two silicon compiler characteristics mentioned above enable the SPS compiler to perform systolic mapping (at the macrocell level) of algorithms whose computational structures are of GIPOP (generalized inner product outer product) form. Direct systolic mapping on PLAs, SLAs, and gate arrays is very difficult as they are microcell based. A novel GIPOP processor is under development using this special purpose silicon compiler.
On the VLSI design of a pipeline Reed-Solomon decoder using systolic arrays
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shao, H.M.; Reed, I.S.
A new VLSI design of a pipeline Reed-Solomon decoder is presented. The transform decoding technique used in a previous paper is replaced by a time domain algorithm through a detailed comparison of their VLSI implementations. A new architecture that implements the time domain algorithm permits efficient pipeline processing with reduced circuitry. Erasure correction capability is also incorporated with little additional complexity. By using a multiplexing technique, a new implementation of Euclid's algorithm maintains the throughput rate with less circuitry. Such improvements result in both enhanced capability and significant reduction in silicon area, therefore making it possible to build a pipeline Reed-Solomon decoder on a single VLSI chip.
Digital MOS integrated circuits
NASA Astrophysics Data System (ADS)
Elmasry, M. I.
MOS in digital circuit design is considered along with aspects of digital VLSI, taking into account a comparison of MOSFET logic circuits, 1-micrometer MOSFET VLSI technology, a generalized guide for MOSFET miniaturization, processing technologies, novel circuit structures for VLSI, and questions of circuit and system design for VLSI. MOS memory cells and circuits are discussed, giving attention to a survey of high-density dynamic RAM cell concepts, one-device cells for dynamic random-access memories, variable-resistance polysilicon for high-density CMOS RAM, high-performance MOS EPROMs using a stacked-gate cell, and the optimization of the latching pulse for dynamic flip-flop sensors. Programmable logic arrays are considered along with digital signal processors, microprocessors, static RAMs, and dynamic RAMs.
The Area-Time Complexity of Sorting.
1984-12-01
...suggests a classification of keys into short (k < log n), long (k > 2 log n), and of medium length. Optimal or near-optimal designs of VLSI sorters are... (Contents include chapters on parallel algorithms for sorting, parallel architectures, and optimal VLSI sorters for keys of length k of order log n.)
Architecture for VLSI design of Reed-Solomon encoders
NASA Technical Reports Server (NTRS)
Liu, K. Y.
1982-01-01
A description is given of the logic structure of the universal VLSI symbol-slice Reed-Solomon (RS) encoder chip, from a group of which an RS encoder may be constructed through cascading and proper interconnection. As a design example, it is shown that an RS encoder presently requiring approximately 40 discrete CMOS ICs may be replaced by an RS encoder consisting of four identical, interconnected VLSI RS encoder chips, offering in addition to greater compactness both a lower power requirement and greater reliability.
Architecture for VLSI design of Reed-Solomon encoders
NASA Astrophysics Data System (ADS)
Liu, K. Y.
1982-02-01
A description is given of the logic structure of the universal VLSI symbol-slice Reed-Solomon (RS) encoder chip, from a group of which an RS encoder may be constructed through cascading and proper interconnection. As a design example, it is shown that an RS encoder presently requiring approximately 40 discrete CMOS ICs may be replaced by an RS encoder consisting of four identical, interconnected VLSI RS encoder chips, offering in addition to greater compactness both a lower power requirement and greater reliability.
Research in the design of high-performance reconfigurable systems
NASA Technical Reports Server (NTRS)
Mcewan, S. D.; Spry, A. J.
1985-01-01
Computer aided design and computer aided manufacturing have the potential for greatly reducing the cost and lead time in the development of VLSI components. This potential paves the way for the design and fabrication of a wide variety of economically feasible high level functional units. It was observed that current computer systems have only a limited capacity to absorb new VLSI component types other than memory, microprocessors, and a relatively small number of other parts. The first purpose is to explore a system design which is capable of effectively incorporating a considerable number of VLSI part types and will both increase the speed of computation and reduce the attendant programming effort. A second purpose is to explore design techniques for VLSI parts which when incorporated by such a system will result in speeds and costs which are optimal. The proposed work may lay the groundwork for future efforts in the extensive simulation and measurements of the system's cost effectiveness and lead to prototype development.
Design of Tactile Sensor Using Dynamic Wafer Technology Based on VLSI Technique
2001-10-25
VLSI (Very Large Scale Integration) Design Tools, Reference Manual, Release 3.0.
1985-08-01
generators/mult prior to running mult. The generated layout is output in directory 1ca in caesar cells with names of the form "caesarame*oca. Mut is a cft ...vlsa) spice(1.vlsi), User’s Guide to AML VLSI Dodgen Tools Reference Manual, UW/NW VLSI Consortium, University of Washington, (Christopher Terman, MIT...of the form ’caesarname..ca. Muls is a cft -based program and therefore also produces *.bd fiIls ’Caesaramew may not begin with the string mule. The
A single chip VLSI Reed-Solomon decoder
NASA Technical Reports Server (NTRS)
Shao, H. M.; Truong, T. K.; Hsu, I. S.; Deutsch, L. J.; Reed, I. S.
1986-01-01
A new VLSI design of a pipeline Reed-Solomon decoder is presented. The transform decoding technique used in a previous design is replaced by a time domain algorithm. A new architecture that implements such an algorithm permits efficient pipeline processing with minimum circuitry. A systolic array is also developed to perform erasure corrections in the new design. A modified form of Euclid's algorithm is implemented by a new architecture that maintains the throughput rate with less circuitry. Such improvements result in both enhanced capability and a significant reduction in silicon area, therefore making it possible to build a pipeline (31,15) RS decoder on a single VLSI chip.
Functional Abstraction from Structure in VLSI Simulation Models,
1987-05-01
...wide variety of powerful tools, designed around the Y model proposed by Gajski and Kuhn [11]. The heart of the system is the data representation... "Functional Models for VLSI Design," 20th IEEE Design Automation Conference (DAC), 1983, paper 32.2, pp. 506-514... [11] Gajski, Daniel D., Kuhn, Robert H.
On a High-Performance VLSI Solution to Database Problems.
1981-08-01
...offer such attractive features as automatic verification and maintenance of semantic integrity, and usage of views as abstraction and authorization... of course, is the waste of too much potential resource. The global database may contain information for many different users and applications. In processing... working on, this may cause no damage at all, but some waste of space. Therefore one solution may perhaps be to do nothing to prevent its occurrence...
Phase-synchroniser based on gm-C all-pass filter chain with sliding mode control
NASA Astrophysics Data System (ADS)
Mitić, Darko B.; Jovanović, Goran S.; Stojčev, Mile K.; Antić, Dragan S.
2015-03-01
Phase-synchronisers have many applications in VLSI circuit designs. They are used in CMOS RF circuits including phase (de)modulators, phase recovery circuits, multiphase synthesis, etc. In this article, a phase-synchroniser based on gm-C all-pass filter chain with sliding mode control is presented. The filter chain provides good controllable delay characteristics over the full range of phase and frequency regulation, without deterioration of input signal amplitude and waveform, while the sliding mode control enables us to achieve fast and predetermined finite locking time. IHP 0.25 µm SiGe BiCMOS technology has been used in design and verification processes. The circuit operates in the frequency range from 33 MHz up to 150 MHz. Simulation results indicate that it is possible to achieve very fast synchronisation time period, which is approximately four time intervals of the input signal during normal operation, and 20 time intervals during power-on.
On the VLSI design of a pipeline Reed-Solomon decoder using systolic arrays
NASA Technical Reports Server (NTRS)
Shao, H. M.; Deutsch, L. J.; Reed, I. S.
1987-01-01
A new very large scale integration (VLSI) design of a pipeline Reed-Solomon decoder is presented. The transform decoding technique used in a previous article is replaced by a time domain algorithm through a detailed comparison of their VLSI implementations. A new architecture that implements the time domain algorithm permits efficient pipeline processing with reduced circuitry. Erasure correction capability is also incorporated with little additional complexity. By using a multiplexing technique, a new implementation of Euclid's algorithm maintains the throughput rate with less circuitry. Such improvements result in both enhanced capability and significant reduction in silicon area.
On the VLSI design of a pipeline Reed-Solomon decoder using systolic arrays
NASA Technical Reports Server (NTRS)
Shao, Howard M.; Reed, Irving S.
1988-01-01
A new very large scale integration (VLSI) design of a pipeline Reed-Solomon decoder is presented. The transform decoding technique used in a previous article is replaced by a time domain algorithm through a detailed comparison of their VLSI implementations. A new architecture that implements the time domain algorithm permits efficient pipeline processing with reduced circuitry. Erasure correction capability is also incorporated with little additional complexity. By using a multiplexing technique, a new implementation of Euclid's algorithm maintains the throughput rate with less circuitry. Such improvements result in both enhanced capability and significant reduction in silicon area.
Integrating Fingerprint Verification into the Smart Card-Based Healthcare Information System
NASA Astrophysics Data System (ADS)
Moon, Daesung; Chung, Yongwha; Pan, Sung Bum; Park, Jin-Won
2009-12-01
As VLSI technology has been improved, a smart card employing 32-bit processors has been released, and more personal information such as medical, financial data can be stored in the card. Thus, it becomes important to protect personal information stored in the card. Verification of the card holder's identity using a fingerprint has advantages over the present practices of Personal Identification Numbers (PINs) and passwords. However, the computational workload of fingerprint verification is much heavier than that of the typical PIN-based solution. In this paper, we consider three strategies to implement fingerprint verification in a smart card environment and how to distribute the modules of fingerprint verification between the smart card and the card reader. We first evaluate the number of instructions of each step of a typical fingerprint verification algorithm, and estimate the execution time of several cryptographic algorithms to guarantee the security/privacy of the fingerprint data transmitted in the smart card with the client-server environment. Based on the evaluation results, we analyze each scenario with respect to the security level and the real-time execution requirements in order to implement fingerprint verification in the smart card with the client-server environment.
Performance, Resources, and Complexity: A Systematic Approach to Microarchitectural Design
1989-05-01
VLSI design in general -- microprocessor design in particular -- has been treated more like an art than a science in the past. The goal of this thesis is to explain the science of VLSI design to someone who wants...
Novel Highly Parallel and Systolic Architectures Using Quantum Dot-Based Hardware
NASA Technical Reports Server (NTRS)
Fijany, Amir; Toomarian, Benny N.; Spotnitz, Matthew
1997-01-01
VLSI technology has made possible the integration of a massive number of components (processors, memory, etc.) into a single chip. In VLSI design, memory and processing power are relatively cheap, and the main emphasis of the design is on reducing the overall interconnection complexity, since data routing costs dominate the power, time, and area required to implement a computation. Communication is costly because wires occupy the most space on a circuit and can also degrade clock time. In fact, much of the complexity (and hence the cost) of VLSI design results from minimization of data routing. The main difficulty in VLSI routing is due to the fact that crossing of the lines carrying data, instructions, control, etc. is not possible in a plane. Thus, in order to meet this constraint, VLSI design aims at keeping the architecture highly regular, with local and short interconnections. As a result, while the high level of integration has opened the way for massively parallel computation, practical and full exploitation of such a capability in many applications of interest has been hindered by the constraints on interconnection patterns. More precisely, the use of only localized communication significantly simplifies the design of the interconnection architecture, but at the expense of a somewhat restricted class of applications. For example, there are currently commercially available products integrating hundreds of simple processor elements within a single chip. However, the lack of an adequate interconnection pattern among these processing elements makes them inefficient for exploiting a large degree of parallelism in many applications.
Towards an Analogue Neuromorphic VLSI Instrument for the Sensing of Complex Odours
NASA Astrophysics Data System (ADS)
Ab Aziz, Muhammad Fazli; Harun, Fauzan Khairi Che; Covington, James A.; Gardner, Julian W.
2011-09-01
Almost all electronic nose instruments reported today employ pattern recognition algorithms written in software and run on digital processors, e.g. microprocessors, microcontrollers, or FPGAs. Conversely, in this paper we describe the analogue VLSI implementation of an electronic nose through the design of a neuromorphic olfactory chip. The modelling, design, and fabrication of the chip have already been reported. Here a smart interface has been designed and characterised for this neuromorphic chip. Thus we can demonstrate the functionality of the analogue VLSI neuromorphic chip, producing differing principal-neuron firing patterns in response to real sensor response data. Further work is directed towards integrating 9 separate neuromorphic chips to create a large neuronal network to solve more complex olfactory problems.
Compact Interconnection Networks Based on Quantum Dots
NASA Technical Reports Server (NTRS)
Fijany, Amir; Toomarian, Nikzad; Modarress, Katayoon; Spotnitz, Matthew
2003-01-01
Architectures that would exploit the distinct characteristics of quantum-dot cellular automata (QCA) have been proposed for digital communication networks that connect advanced digital computing circuits. In comparison with networks of wires in conventional very-large-scale integrated (VLSI) circuitry, the networks according to the proposed architectures would be more compact. The proposed architectures would make it possible to implement complex interconnection schemes that are required for some advanced parallel-computing algorithms and that are difficult (and in many cases impractical) to implement in VLSI circuitry. The difficulty of implementation in VLSI and the major potential advantage afforded by QCA were described previously in Implementing Permutation Matrices by Use of Quantum Dots (NPO-20801), NASA Tech Briefs, Vol. 25, No. 10 (October 2001), page 42. To recapitulate: Wherever two wires in a conventional VLSI circuit cross each other and are required not to be in electrical contact with each other, there must be a layer of electrical insulation between them. This, in turn, makes it necessary to resort to a noncoplanar and possibly a multilayer design, which can be complex, expensive, and even impractical. As a result, much of the cost of designing VLSI circuits is associated with minimization of data routing and assignment of layers to minimize crossing of wires. Heretofore, these considerations have impeded the development of VLSI circuitry to implement complex, advanced interconnection schemes. On the other hand, with suitable design and under suitable operating conditions, QCA-based signal paths can be allowed to cross each other in the same plane without adverse effect. In principle, this characteristic could be exploited to design compact, coplanar, simple (relative to VLSI) QCA-based networks to implement complex, advanced interconnection schemes. The proposed architectures require two advances in QCA-based circuitry beyond basic QCA-based binary-signal wires described in the cited prior article. One of these advances would be the development of QCA-based wires capable of bidirectional transmission of signals. The other advance would be the development of QCA circuits capable of high-impedance state outputs. The high-impedance states would be utilized along with the 0- and 1-state outputs of QCA.
VLSI 'smart' I/O module development
NASA Astrophysics Data System (ADS)
Kirk, Dan
The developmental history, design, and operation of the MIL-STD-1553A/B discrete and serial module (DSM) for the U.S. Navy AN/AYK-14(V) avionics computer are described and illustrated with diagrams. The ongoing preplanned product improvement for the AN/AYK-14(V) includes five dual-redundant MIL-STD-1553 channels based on DSMs. The DSM is a front-end processor for transferring data to and from a common memory, sharing memory with a host processor to provide improved 'smart' input/output performance. Each DSM comprises three hardware sections: three VLSI-6000 semicustomized CMOS arrays, memory units to support the arrays, and buffers and resynchronization circuits. The DSM hardware module design, VLSI-6000 design tools, controlware and test software, and checkout procedures (using a hardware simulator) are characterized in detail.
A Coherent VLSI Design Environment
1987-12-31
...During the contract the total research volume in VLSI rose from an estimated $3,000,000 to over $10,000,000, and a state-of-the-art VLSI fabrication facility costing... A seminar program included John Melngailis, "Submicron Structures Research at M.I.T.," and Dimitri A. Antoniadis, "Status of the M.I.T. LSI Fabrication Facility." Contributions were made by Prof. Antoniadis and, to a small degree, Prof. Glasser. Objective: to develop techniques for fabricating integrated...
A Systolic VLSI Design of a Pipeline Reed-solomon Decoder
NASA Technical Reports Server (NTRS)
Shao, H. M.; Truong, T. K.; Deutsch, L. J.; Yuen, J. H.; Reed, I. S.
1984-01-01
A pipeline structure of a transform decoder similar to a systolic array was developed to decode Reed-Solomon (RS) codes. An important ingredient of this design is a modified Euclidean algorithm for computing the error locator polynomial. The computation of inverse field elements is completely avoided in this modification of Euclid's algorithm. The new decoder is regular and simple, and naturally suitable for VLSI implementation.
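The inversion-free trick can be sketched compactly (a hedged illustration, not the paper's exact formulation): one Euclid division step is replaced by a cross-multiplication that cancels the leading term using only field multiplications. The full decoder also tracks the multiplier polynomials that yield the error locator; only the degree-reduction kernel is shown, over GF(2^4) with an assumed field polynomial.

```python
# Inversion-free Euclid step: A <- lc(B)*A + lc(A)*x^(dA-dB)*B over GF(2^m)
# (addition is XOR), which cancels A's leading term with no field inverse.

def gf_mul(a, b, poly=0b10011):          # GF(2^4), x^4 + x + 1 assumed
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0b10000:
            a ^= poly
        b >>= 1
    return r

def deg(p):          # polynomials are coefficient lists, lowest degree first
    d = len(p) - 1
    while d > 0 and p[d] == 0:
        d -= 1
    return d

def cross_step(A, B):
    """One inversion-free reduction: returns A' with a cancelled lead term."""
    dA, dB = deg(A), deg(B)
    lcA, lcB = A[dA], B[dB]
    out = [gf_mul(lcB, c) for c in A]              # lc(B) * A
    for i in range(dB + 1):                        # + lc(A) * x^(dA-dB) * B
        out[i + dA - dB] ^= gf_mul(lcA, B[i])
    return out

A = [1, 3, 0, 7, 9]   # 9x^4 + 7x^3 + 3x + 1
B = [5, 0, 2, 6]      # 6x^3 + 2x^2 + 5
A2 = cross_step(A, B)
print(deg(A2) < deg(A), A2[deg(A)] == 0)  # leading term cancelled: True True
```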
A VLSI design of a pipeline Reed-Solomon decoder
NASA Technical Reports Server (NTRS)
Shao, H. M.; Truong, T. K.; Deutsch, L. J.; Yuen, J. H.; Reed, I. S.
1985-01-01
A pipeline structure of a transform decoder similar to a systolic array was developed to decode Reed-Solomon (RS) codes. An important ingredient of this design is a modified Euclidean algorithm for computing the error locator polynomial. The computation of inverse field elements is completely avoided in this modification of Euclid's algorithm. The new decoder is regular and simple, and naturally suitable for VLSI implementation.
Testing Methods for Integrated Circuit Chips.
1986-03-27
Figure 2.14: VLSI Tester Block Diagram (registers, memory, and test control). ...(1) a general-purpose processor with a standard bus interface serves as the test controller, and (2) a custom VLSI test controller interfacing directly... Provision for the functional testing of fabricated VLSI chips frequently involves as much design effort as the original...
Large-Constraint-Length, Fast Viterbi Decoder
NASA Technical Reports Server (NTRS)
Collins, O.; Dolinar, S.; Hsu, In-Shek; Pollara, F.; Olson, E.; Statman, J.; Zimmerman, G.
1990-01-01
Scheme for efficient interconnection makes VLSI design feasible. Concept for fast Viterbi decoder provides for processing of convolutional codes of constraint length K up to 15 and rates of 1/2 to 1/6. Fully parallel (but bit-serial) architecture developed for decoder of K = 7 implemented in single dedicated VLSI circuit chip. Contains six major functional blocks. VLSI circuits perform branch metric computations, add-compare-select operations, and then store decisions in traceback memory. Traceback processor reads appropriate memory locations and puts out decoded bits. Used as building block for decoders of larger K.
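The three hardware blocks named above map directly onto the classic algorithm. The toy software analogue below assumes a K = 3, rate-1/2 code with generators 7 and 5 (octal) for brevity, while the article's decoder handles K up to 15.

```python
# Toy Viterbi decoder: branch metrics, add-compare-select, traceback.
G = (0b111, 0b101)                     # generator polynomials (7, 5 octal)

def parity(x):
    return bin(x).count("1") & 1

def encode(bits):
    state, out = 0, []
    for b in bits:
        reg = (b << 2) | state         # current bit plus two previous bits
        out += [parity(reg & g) for g in G]
        state = reg >> 1
    return out

def decode(received):
    metrics, decisions = {0: 0}, []
    for i in range(0, len(received), 2):
        r = received[i:i + 2]
        new, dec = {}, {}
        for state, m in metrics.items():
            for b in (0, 1):                           # hypothesised input bit
                reg = (b << 2) | state
                branch = sum(parity(reg & g) != x      # branch metric
                             for g, x in zip(G, r))
                nxt, cand = reg >> 1, m + branch
                if nxt not in new or cand < new[nxt]:  # compare-select
                    new[nxt], dec[nxt] = cand, (state, b)
        metrics = new
        decisions.append(dec)
    state, bits = min(metrics, key=metrics.get), []    # traceback
    for dec in reversed(decisions):
        state, b = dec[state]
        bits.append(b)
    return bits[::-1]

msg = [1, 0, 1, 1, 0, 0, 1]
channel = encode(msg)
channel[3] ^= 1                        # one corrupted channel bit
print(decode(channel) == msg)          # True: the error is corrected
```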
VLSI Design of SVM-Based Seizure Detection System With On-Chip Learning Capability.
Feng, Lichen; Li, Zunchao; Wang, Yuanfa
2018-02-01
A portable automatic seizure detection system is very convenient for epilepsy patients to carry. In order to make the system on-chip trainable with high efficiency and attain high detection accuracy, this paper presents a very large scale integration (VLSI) design based on the nonlinear support vector machine (SVM). The proposed design mainly consists of a feature extraction (FE) module and an SVM module. The FE module performs the three-level Daubechies discrete wavelet transform to fit the physiological bands of the electroencephalogram (EEG) signal and extracts time-frequency domain features reflecting the nonstationary signal properties. The SVM module integrates the modified sequential minimal optimization algorithm with a table-driven Gaussian kernel to enable efficient on-chip learning. The presented design is verified on an Altera Cyclone II field-programmable gate array and tested using two publicly available EEG datasets. Experiment results show that the designed VLSI system improves the detection accuracy and training efficiency.
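A software analogue of this two-module pipeline can be sketched with standard libraries (the synthetic "EEG" epochs, db4 wavelet choice, and energy/deviation features are assumptions; the chip uses a modified SMO trainer and a table-driven kernel rather than scikit-learn):

```python
# Sketch: three-level Daubechies DWT features feeding a Gaussian-kernel SVM.
import numpy as np
import pywt
from sklearn.svm import SVC

rng = np.random.default_rng(1)

def features(epoch):
    # Three-level DWT -> one approximation + three detail sub-bands,
    # summarised by their energy and standard deviation.
    bands = pywt.wavedec(epoch, "db4", level=3)
    return np.array([v for b in bands for v in (np.sum(b**2), np.std(b))])

def make_epoch(seizure):
    t = np.linspace(0, 1, 256)
    x = rng.normal(0, 1, 256)
    if seizure:                        # crude stand-in for rhythmic activity
        x += 3 * np.sin(2 * np.pi * 5 * t)
    return x

X = np.array([features(make_epoch(s)) for s in ([0] * 100 + [1] * 100)])
y = np.array([0] * 100 + [1] * 100)

clf = SVC(kernel="rbf", gamma="scale").fit(X[::2], y[::2])   # train on half
print("held-out accuracy on toy data:", clf.score(X[1::2], y[1::2]))
```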
NASA Technical Reports Server (NTRS)
Tobagi, Fouad A.; Dalgic, Ismail; Pang, Joseph
1990-01-01
The design and implementation of interface units for high-speed Fiber Optic Local Area Networks and Broadband Integrated Services Digital Networks are discussed. In recent years, a number of network adapters designed to support high-speed communications have emerged. This approach to the design of a high-speed network interface unit was to implement packet-processing functions in hardware, using VLSI technology. The VLSI hardware implementation of a buffer management unit, which is required in such architectures, is described.
A cost-effective methodology for the design of massively-parallel VLSI functional units
NASA Technical Reports Server (NTRS)
Venkateswaran, N.; Sriram, G.; Desouza, J.
1993-01-01
In this paper we propose a generalized methodology for the design of cost-effective massively-parallel VLSI Functional Units. This methodology is based on a technique of generating and reducing a massive bit-array on the mask-programmable PAcube VLSI array. This methodology unifies (maintains identical data flow and control) the execution of complex arithmetic functions on PAcube arrays. It is highly regular, expandable, and uniform with respect to problem size and wordlength, thereby reducing the communication complexity. The memory-functional unit interface is regular and expandable. Using this technique, functional units of dedicated processors can be mask-programmed on the naked PAcube arrays, reducing the turn-around time. The production cost of such dedicated processors can be drastically reduced, since the naked PAcube arrays can be mass-produced. Analysis of the performance of functional units designed by our method yields promising results.
Hybrid VLSI/QCA Architecture for Computing FFTs
NASA Technical Reports Server (NTRS)
Fijany, Amir; Toomarian, Nikzad; Modarres, Katayoon; Spotnitz, Matthew
2003-01-01
A data-processor architecture that would incorporate elements of both conventional very-large-scale integrated (VLSI) circuitry and quantum-dot cellular automata (QCA) has been proposed to enable the highly parallel and systolic computation of fast Fourier transforms (FFTs). The proposed circuit would complement the QCA-based circuits described in several prior NASA Tech Briefs articles, namely Implementing Permutation Matrices by Use of Quantum Dots (NPO-20801), Vol. 25, No. 10 (October 2001), page 42; Compact Interconnection Networks Based on Quantum Dots (NPO-20855), Vol. 27, No. 1 (January 2003), page 32; and Bit-Serial Adder Based on Quantum Dots (NPO-20869), Vol. 27, No. 1 (January 2003), page 35. The cited prior articles described the limitations of VLSI circuitry and the major potential advantage afforded by QCA. To recapitulate: In a VLSI circuit, signal paths that are required not to interact with each other must not cross in the same plane. In contrast, for reasons too complex to describe in the limited space available for this article, suitably designed and operated QCA-based signal paths that are required not to interact with each other can nevertheless be allowed to cross each other in the same plane without adverse effect. In principle, this characteristic could be exploited to design compact, coplanar, simple (relative to VLSI) QCA-based networks to implement complex, advanced interconnection schemes.
Princeton VLSI Project: Semi-Annual Report.
1982-11-01
...already fully defined the new language, and implementation is now under way [7]. The new language differs from its predecessor in two essential ways. First, it is based on... Our main thesis is that the VLSI design task can be profitably thought of as a programming task, as opposed to a geometric editing task. We believe...
Using Ant Colony Optimization for Routing in VLSI Chips
NASA Astrophysics Data System (ADS)
Arora, Tamanna; Moses, Melanie
2009-04-01
Rapid advances in VLSI technology have increased the number of transistors that fit on a single chip to about two billion. A frequent problem in the design of such high-performance, high-density VLSI layouts is that of routing the wires that connect such large numbers of components. Most wire-routing problems are computationally hard. The quality of any routing algorithm is judged by the extent to which it satisfies routing constraints and design objectives. Some of the broader design objectives include minimizing total routed wire length and minimizing total capacitance induced in the chip, both of which serve to minimize the power consumed by the chip. Ant Colony Optimization (ACO) algorithms provide a multi-agent framework for combinatorial optimization by combining memory, stochastic decisions, and strategies of collective and distributed learning by ant-like agents. This paper applies ACO to the NP-hard problem of finding optimal routes for interconnect routing on VLSI chips. The constraints on interconnect routing are used by the ants as heuristics which guide their search process. We found that ACO algorithms were able to successfully incorporate multiple constraints and route interconnects on a suite of benchmark chips. On average, the algorithm routed with total wire length 5.5% less than other established routing algorithms.
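The essential ACO loop is easy to sketch (the graph, parameters, and edge costs below are invented for illustration; the paper's router optimizes wire length and capacitance under routing constraints, not just path cost):

```python
# Hedged ACO sketch: ants build candidate routes stochastically, shorter
# routes deposit more pheromone, and pheromone evaporates each iteration.
import random

random.seed(7)
edges = {("a", "b"): 2, ("a", "c"): 1, ("b", "d"): 1, ("c", "d"): 4,
         ("c", "e"): 1, ("e", "d"): 1}
graph = {}
for (u, v), w in edges.items():
    graph.setdefault(u, []).append((v, w))

pher = {e: 1.0 for e in edges}          # initial pheromone on every edge

def walk(src, dst, alpha=1.0, beta=1.0):
    node, path, cost = src, [], 0.0
    while node != dst:
        opts = graph[node]
        # Transition weight: pheromone^alpha * (1/edge_cost)^beta.
        weights = [pher[(node, v)] ** alpha * (1.0 / w) ** beta
                   for v, w in opts]
        v, w = random.choices(opts, weights=weights)[0]
        path.append((node, v))
        cost += w
        node = v
    return path, cost

best = None
for _ in range(30):                                   # iterations
    tours = [walk("a", "d") for _ in range(10)]       # 10 ants
    for e in pher:
        pher[e] *= 0.9                                # evaporation
    for path, cost in tours:
        for e in path:
            pher[e] += 1.0 / cost                     # deposit: shorter is more
        if best is None or cost < best[1]:
            best = (path, cost)
print(best)   # expect one of the cost-3 routes, e.g. a-b-d
```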
The VLSI design of an error-trellis syndrome decoder for certain convolutional codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Jensen, J. M.; Hsu, I.-S.; Truong, T. K.
1986-01-01
A recursive algorithm using the error-trellis decoding technique is developed to decode convolutional codes (CCs). An example, illustrating the very large scale integration (VLSI) architecture of such a decoder, is given for a dual-K CC. It is demonstrated that such a decoder can be realized readily on a single chip with metal-nitride-oxide-semiconductor technology.
The VLSI design of error-trellis syndrome decoding for convolutional codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Jensen, J. M.; Truong, T. K.; Hsu, I. S.
1985-01-01
A recursive algorithm using the error-trellis decoding technique is developed to decode convolutional codes (CCs). An example, illustrating the very large scale integration (VLSI) architecture of such a decoder, is given for a dual-K CC. It is demonstrated that such a decoder can be realized readily on a single chip with metal-nitride-oxide-semiconductor technology.
On VLSI Design of Rank-Order Filtering using DCRAM Architecture
Lin, Meng-Chun; Dung, Lan-Rong
2009-01-01
This paper addresses the VLSI design of rank-order filtering (ROF) with a maskable memory for real-time speech and image processing applications. Based on a generic bit-sliced ROF algorithm, the proposed design uses a specially defined memory, called the dual-cell random-access memory (DCRAM), to realize the major operations of ROF: threshold decomposition and polarization. Using the memory-oriented architecture, the proposed ROF processor benefits from high flexibility, low cost, and high speed. The DCRAM can perform bit-sliced reads, partial writes, and pipelined processing. The bit-sliced read and partial write are driven by maskable registers. With recursive execution of the bit-sliced read and partial write, the DCRAM can effectively realize ROF in terms of cost and speed. The proposed design has been implemented using TSMC 0.18 μm 1P6M technology. As shown by the physical implementation results, the core size is 356.1 × 427.7 μm² and the VLSI implementation of ROF can operate at 256 MHz with a 1.8 V supply. PMID: 19865599
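Threshold decomposition, the first of the two ROF operations named above, has a very direct software rendering (window size, rank, and gray-level count below are assumptions): the output sample equals the number of threshold levels at which enough window samples are high.

```python
# Threshold-decomposition view of rank-order filtering: at each threshold t,
# binarize the window; the output is the count of thresholds where at least
# (N - rank + 1) samples are >= t.

def rank_order_filter(window, rank, levels=256):
    """rank = 1 gives the minimum, rank = len(window) the maximum."""
    need = len(window) - rank + 1     # samples that must be >= threshold
    return sum(1 for t in range(1, levels)
               if sum(x >= t for x in window) >= need)

window = [12, 200, 45, 45, 7]
assert rank_order_filter(window, 1) == 7      # minimum
assert rank_order_filter(window, 3) == 45     # median
assert rank_order_filter(window, 5) == 200    # maximum
```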
NASA Astrophysics Data System (ADS)
Lee, El-Hang; Lee, S. G.; O, B. H.; Park, S. G.; Noh, H. S.; Kim, K. H.; Song, S. H.
2006-09-01
A collective overview and review is presented of the original work conducted on the theory, design, fabrication, and integration of micro/nano-scale optical wires and photonic devices for applications in newly conceived photonic systems called "optical printed circuit boards" (O-PCBs) and "VLSI photonic integrated circuits" (VLSI-PIC). These are aimed at compact, high-speed, multi-functional, intelligent, light-weight, low-energy, environmentally friendly, low-cost, and high-volume applications to complement or surpass the capabilities of electrical PCBs (E-PCBs) and/or VLSI electronic integrated circuit (VLSI-IC) systems. These consist of 2-dimensional or 3-dimensional planar arrays of micro/nano-optical wires and circuits to perform the functions of all-optical sensing, storing, transporting, processing, switching, routing, and distributing optical signals on flat modular boards or substrates. The integrated optical devices include micro/nano-scale waveguides, lasers, detectors, switches, sensors, directional couplers, multi-mode interference devices, ring resonators, photonic crystal devices, plasmonic devices, and quantum devices, made of polymer, silicon, and other semiconductor materials. For VLSI photonic integration, photonic crystals and plasmonic structures have been used. Scientific and technological issues concerning the processes of miniaturization, interconnection, and integration of these systems, as applicable to board-to-board, chip-to-chip, and intra-chip integration, are discussed along with applications for future computers, telecommunications, and sensor systems. Visions and challenges toward these goals are also discussed.
Artificial immune system algorithm in VLSI circuit configuration
NASA Astrophysics Data System (ADS)
Mansor, Mohd. Asyraf; Sathasivam, Saratha; Kasihmuddin, Mohd Shareduwan Mohd
2017-08-01
In artificial intelligence, the artificial immune system is a robust bio-inspired heuristic method, extensively used in solving constraint optimization, anomaly detection, and pattern recognition problems. This paper discusses the implementation and performance of an artificial immune system (AIS) algorithm integrated with Hopfield neural networks for VLSI circuit configuration based on 3-Satisfiability problems. Specifically, we emphasize the clonal selection technique in our binary artificial immune system algorithm. We restrict our logic construction to 3-Satisfiability (3-SAT) clauses in order to fit the transistor configuration in the VLSI circuit. The core impetus of this research is to find an ideal hybrid model to assist in VLSI circuit configuration. In this paper, we compared the artificial immune system (AIS) algorithm (HNN-3SATAIS) with a brute-force algorithm incorporated with a Hopfield neural network (HNN-3SATBF). Microsoft Visual C++ 2013 was used as the platform for training, simulating and validating the performance of the proposed network. The results show that HNN-3SATAIS outperformed HNN-3SATBF in terms of circuit accuracy and CPU time. Thus, HNN-3SATAIS can be used to detect early errors in VLSI circuit design.
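The clonal selection loop at the heart of such an algorithm can be sketched as follows; the clause encoding, population size, clone counts and mutation schedule are illustrative placeholders, not the paper's HNN-3SATAIS parameters.

```python
import random

# Toy clonal-selection search for a 3-SAT instance (a sketch of the AIS idea,
# not the paper's HNN-3SATAIS hybrid). Each clause is three signed literals:
# +i means variable i, -i means its negation (variables are 1-based).
clauses = [(1, -2, 3), (-1, 2, 4), (2, -3, -4), (-2, 3, -4)]
n_vars = 4

def fitness(a):  # number of satisfied clauses; a[i] is the value of var i+1
    return sum(any(a[abs(l) - 1] == (l > 0) for l in c) for c in clauses)

pop = [[random.random() < 0.5 for _ in range(n_vars)] for _ in range(20)]
for _ in range(200):
    pop.sort(key=fitness, reverse=True)
    best = pop[:5]
    clones = []
    for rank, ab in enumerate(best):
        for _ in range(5 - rank):               # better antibodies get more clones
            c = ab[:]
            rate = (rank + 1) / n_vars          # hypermutation: worse -> more flips
            for i in range(n_vars):
                if random.random() < rate:
                    c[i] = not c[i]
            clones.append(c)
    pop = best + clones + [[random.random() < 0.5 for _ in range(n_vars)]
                           for _ in range(3)]   # fresh receptors for diversity
    if fitness(pop[0]) == len(clauses):
        break
print(max(map(fitness, pop)), "of", len(clauses), "clauses satisfied")
```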
Periodically Self Restoring Redundant Systems for VLSI Based Highly Reliable Design,
1984-01-01
A fault-tolerance technique for realizing highly reliable computer systems for critical real-time control applications is described. However, VLSI technology has imposed … failed modules are discarded from the vote, as in the classical "static" voted redundancy … redundant modules are failure-free … number of interconnections … for applications requiring high modular complexity …
Real-Time Reed-Solomon Decoder
NASA Technical Reports Server (NTRS)
Maki, Gary K.; Cameron, Kelly B.; Owsley, Patrick A.
1994-01-01
A generic Reed-Solomon decoder fast enough to correct errors in real time in practical applications is designed to be implemented in fewer and smaller very-large-scale integrated (VLSI) circuit chips, and is configured to operate in a pipelined manner. One outstanding aspect of the decoder design is that the Euclid multiplier and divider modules contain Galois-field multipliers configured as combinational-logic cells, which operate at speeds greater than older multipliers. The cellular configuration is highly regular and requires little interconnection area, making it ideal for implementation in extraordinarily dense VLSI circuitry. A flight-electronics single-chip version of this technology has been implemented and is available.
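For reference, the arithmetic such a Galois-field multiplier cell performs can be modeled behaviorally in a few lines; the field polynomial 0x11D below is a common choice for (255, 223) RS codecs and is an assumption here, not a detail taken from the flight design.

```python
def gf256_mul(a, b, poly=0x11D):
    """Carry-less multiply in GF(2^8) with reduction by a field polynomial.

    Behavioral model of a combinational multiplier cell: eight shift/XOR
    stages, each conditionally adding (XOR) a partial product and reducing
    modulo the field polynomial. poly = x^8+x^4+x^3+x^2+1 is a common
    choice for RS(255, 223); the flight design's polynomial is assumed.
    """
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a          # GF(2) addition is XOR
        b >>= 1
        a <<= 1
        if a & 0x100:       # degree-8 overflow: reduce by the field polynomial
            a ^= poly
    return p

assert gf256_mul(0x02, 0x80) == 0x1D   # x * x^7 = x^8 = x^4+x^3+x^2+1
```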
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chiang, Patrick
2014-01-31
The research goal of this CAREER proposal is to develop energy-efficient, VLSI interconnect circuits and systems that will facilitate future massively-parallel, high-performance computing. Extreme-scale computing will exhibit massive parallelism on multiple vertical levels, from thousands of computational units on a single processor to thousands of processors in a single data center. Unfortunately, the energy required to communicate between these units at every level (on chip, off-chip, off-rack) will be the critical limitation to energy efficiency. Therefore, the PI's career goal is to become a leading researcher in the design of energy-efficient VLSI interconnect for future computing systems.
Cascade Back-Propagation Learning in Neural Networks
NASA Technical Reports Server (NTRS)
Duong, Tuan A.
2003-01-01
The cascade back-propagation (CBP) algorithm is the basis of a conceptual design for accelerating learning in artificial neural networks. The neural networks would be implemented as analog very-large-scale integrated (VLSI) circuits, and circuits to implement the CBP algorithm would be fabricated on the same VLSI circuit chips with the neural networks. Heretofore, artificial neural networks have learned slowly because it has been necessary to train them via software, for lack of a good on-chip learning technique. The CBP algorithm is an on-chip technique that provides for continuous learning in real time. Artificial neural networks are trained by example: A network is presented with training inputs for which the correct outputs are known, and the algorithm strives to adjust the weights of synaptic connections in the network to make the actual outputs approach the correct outputs. The input data are generally divided into three parts. Two of the parts, called the "training" and "cross-validation" sets, respectively, must be such that the corresponding input/output pairs are known. During training, the cross-validation set enables verification of the status of the input-to-output transformation learned by the network to avoid over-learning. The third part of the data, termed the "test" set, consists of the inputs that are required to be transformed into outputs; this set may or may not include the training set and/or the cross-validation set. Proposed neural-network circuitry for on-chip learning would be divided into two distinct networks: one for training and one for validation. Both networks would share the same synaptic weights.
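The three-way data split and over-learning check described above can be illustrated with a minimal software sketch; the linear model, learning rate and patience threshold are placeholder choices, not part of the CBP design.

```python
# Sketch of the three-way data split described above: train on one set,
# watch the cross-validation set to stop before over-learning, then apply
# the frozen weights to the test set. Model and data here are placeholders.
import random

def train_step(w, batch, lr=0.1):      # placeholder gradient step on y = w*x
    for x, y in batch:
        w -= lr * (w * x - y) * x
    return w

def loss(w, data):
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

data = [(x, 2.0 * x + random.gauss(0, 0.1))
        for x in (random.uniform(-1, 1) for _ in range(300))]
train, cross_val, test = data[:200], data[200:250], data[250:]

w, best_w, best_cv, patience = 0.0, 0.0, float("inf"), 0
for epoch in range(100):
    w = train_step(w, train)
    cv = loss(w, cross_val)
    if cv < best_cv:
        best_cv, best_w, patience = cv, w, 0
    else:
        patience += 1
        if patience >= 5:              # cross-validation loss stopped improving
            break
print("test loss with early-stopped weights:", loss(best_w, test))
```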
A subthreshold aVLSI implementation of the Izhikevich simple neuron model.
Rangan, Venkat; Ghosh, Abhishek; Aparin, Vladimir; Cauwenberghs, Gert
2010-01-01
We present a circuit architecture for compact analog VLSI implementation of the Izhikevich neuron model, which efficiently describes a wide variety of neuron spiking and bursting dynamics using two state variables and four adjustable parameters. Log-domain circuit design utilizing MOS transistors in subthreshold results in high energy efficiency, with less than 1pJ of energy consumed per spike. We also discuss the effects of parameter variations on the dynamics of the equations, and present simulation results that replicate several types of neural dynamics. The low power operation and compact analog VLSI realization make the architecture suitable for human-machine interface applications in neural prostheses and implantable bioelectronics, as well as large-scale neural emulation tools for computational neuroscience.
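The dynamics that the subthreshold circuits realize are those of the published Izhikevich simple model; a forward-Euler software rendering, with standard regular-spiking parameter values rather than values taken from the chip, is sketched below.

```python
# Forward-Euler simulation of the Izhikevich simple model the aVLSI circuit
# implements: v' = 0.04v^2 + 5v + 140 - u + I, u' = a(bv - u), with reset
# v <- c, u <- u + d when v reaches +30 mV. The parameter values below give
# regular spiking; the chip realizes these dynamics in subthreshold
# log-domain circuits rather than in software.
a, b, c, d = 0.02, 0.2, -65.0, 8.0       # regular-spiking cortical neuron
v, u = -65.0, b * -65.0
dt, I = 0.5, 10.0                        # ms time step, constant input current
spike_times = []
for step in range(2000):                 # 2000 * 0.5 ms = 1 s of model time
    v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
    u += dt * a * (b * v - u)
    if v >= 30.0:                        # spike: record and reset both states
        spike_times.append(step * dt)
        v, u = c, u + d
print(f"{len(spike_times)} spikes in 1 s")
```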
The VLSI design of a Reed-Solomon encoder using Berlekamp's bit-serial multiplier algorithm
NASA Technical Reports Server (NTRS)
Truong, T. K.; Deutsch, L. J.; Reed, I. S.; Hsu, I. S.; Wang, K.; Yeh, C. S.
1982-01-01
Realization of a bit-serial multiplication algorithm for the encoding of Reed-Solomon (RS) codes on a single VLSI chip using NMOS technology is demonstrated to be feasible. A dual-basis (255, 223) RS code over a Galois field is used. The conventional RS encoder for long codes often requires look-up tables to perform the multiplication of two field elements. Berlekamp's algorithm requires only shifting and exclusive-OR operations.
VLSI architectures for computing multiplications and inverses in GF(2m)
NASA Technical Reports Server (NTRS)
Wang, C. C.; Truong, T. K.; Shao, H. M.; Deutsch, L. J.; Omura, J. K.
1985-01-01
Finite field arithmetic logic is central in the implementation of Reed-Solomon coders and in some cryptographic algorithms. There is a need for good multiplication and inversion algorithms that are easily realized on VLSI chips. Massey and Omura recently developed a new multiplication algorithm for Galois fields based on a normal basis representation. A pipeline structure is developed to realize the Massey-Omura multiplier in the finite field GF(2m). With the simple squaring property of the normal-basis representation used together with this multiplier, a pipeline architecture is also developed for computing inverse elements in GF(2m). The designs developed for the Massey-Omura multiplier and the computation of inverse elements are regular, simple, expandable and, therefore, naturally suitable for VLSI implementation.
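The "simple squaring property" exploited for inversion is the identity a^(-1) = a^(2^m - 2) = a^2 · a^4 ⋯ a^(2^(m-1)), i.e. m-1 squarings and m-2 multiplications. The sketch below demonstrates the identity in GF(2^8) with a polynomial-basis software multiplier standing in for the Massey-Omura cell; in a normal basis each squaring would be a mere cyclic shift.

```python
# Inversion in GF(2^m) from the squaring trick the abstract refers to:
# a^(-1) = a^(2^m - 2) = a^2 * a^4 * ... * a^(2^(m-1)).
# The polynomial-basis multiplier below (field polynomial 0x11D, an assumed
# common choice) is only a software stand-in for the Massey-Omura cell.
def gf256_mul(a, b, poly=0x11D):
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= poly
    return p

def gf256_inv(a, m=8):
    sq = gf256_mul(a, a)            # a^2
    result = sq
    for _ in range(m - 2):          # fold in a^4, a^8, ..., a^(2^(m-1))
        sq = gf256_mul(sq, sq)
        result = gf256_mul(result, sq)
    return result

x = 0x53
assert gf256_mul(x, gf256_inv(x)) == 1
```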
VLSI architectures for computing multiplications and inverses in GF(2m)
NASA Technical Reports Server (NTRS)
Wang, C. C.; Truong, T. K.; Shao, H. M.; Deutsch, L. J.; Omura, J. K.; Reed, I. S.
1983-01-01
Finite field arithmetic logic is central in the implementation of Reed-Solomon coders and in some cryptographic algorithms. There is a need for good multiplication and inversion algorithms that are easily realized on VLSI chips. Massey and Omura recently developed a new multiplication algorithm for Galois fields based on a normal basis representation. A pipeline structure is developed to realize the Massey-Omura multiplier in the finite field GF(2m). With the simple squaring property of the normal-basis representation used together with this multiplier, a pipeline architecture is also developed for computing inverse elements in GF(2m). The designs developed for the Massey-Omura multiplier and the computation of inverse elements are regular, simple, expandable and, therefore, naturally suitable for VLSI implementation.
Noise-margin limitations on gallium-arsenide VLSI
NASA Technical Reports Server (NTRS)
Long, Stephen I.; Sundaram, Mani
1988-01-01
Two factors which limit the complexity of GaAs MESFET VLSI circuits are considered. Power dissipation sets an upper complexity limit for a given logic circuit implementation and thermal design. Uniformity of device characteristics and the circuit configuration determines the electrical functional yield. Projection of VLSI complexity based on these factors indicates that logic chips of 15,000 gates are feasible with the most promising static circuits if a maximum power dissipation of 5 W per chip is assumed. While lower power per gate and therefore more gates per chip can be obtained by using a popular E/D FET circuit, yields are shown to be small when practical device parameter tolerances are applied. Further improvements in materials, devices, and circuits will be needed to extend circuit complexity to the range currently dominated by silicon.
VLSI architectures for computing multiplications and inverses in GF(2m).
Wang, C C; Truong, T K; Shao, H M; Deutsch, L J; Omura, J K; Reed, I S
1985-08-01
Finite field arithmetic logic is central in the implementation of Reed-Solomon coders and in some cryptographic algorithms. There is a need for good multiplication and inversion algorithms that can be easily realized on VLSI chips. Massey and Omura recently developed a new multiplication algorithm for Galois fields based on a normal basis representation. In this paper, a pipeline structure is developed to realize the Massey-Omura multiplier in the finite field GF(2m). With the simple squaring property of the normal basis representation used together with this multiplier, a pipeline architecture is developed for computing inverse elements in GF(2m). The designs developed for the Massey-Omura multiplier and the computation of inverse elements are regular, simple, expandable, and therefore, naturally suitable for VLSI implementation.
Sequence invariant state machines
NASA Technical Reports Server (NTRS)
Whitaker, S.; Manjunath, S.
1990-01-01
A synthesis method and new VLSI architecture are introduced to realize sequential circuits that have the ability to implement any state machine having N states and m inputs, regardless of the actual sequence specified in the flow table. A design method is proposed that utilizes binary tree structured (BTS) logic to implement regular and dense circuits. A given state sequence can be programmed with power supply connections or dynamically reallocated if stored in a register. Arbitrary flow table sequences can be modified or programmed to dynamically alter the function of the machine. This allows VLSI controllers to be designed with the programmability of a general purpose processor but with the compact size and performance of dedicated logic.
Sequence-invariant state machines
NASA Technical Reports Server (NTRS)
Whitaker, Sterling R.; Manjunath, Shamanna K.; Maki, Gary K.
1991-01-01
A synthesis method and an MOS VLSI architecture are presented to realize sequential circuits that have the ability to implement any state machine having N states and m inputs, regardless of the actual sequence specified in the flow table. The design method utilizes binary tree structured (BTS) logic to implement regular and dense circuits. The desired state sequence can be hardwired with power supply connections or can be dynamically reallocated if stored in a register. This allows programmable VLSI controllers to be designed with a compact size and performance approaching that of dedicated logic. Results of ICV implementations are reported and an example sequence-invariant state machine is contrasted with implementations based on traditional methods.
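A software analogue of the sequence-invariant idea: the step logic is fixed for any N-state, m-input machine, while the flow table is data that can be rewritten at run time. The class and table values below are illustrative only, not the BTS hardware realization.

```python
# Software analogue of a sequence-invariant machine: the "hardware" (the
# step function) is fixed for any N-state, m-input machine, and the flow
# table lives in a register file that can be rewritten at any time.
class SequenceInvariantFSM:
    def __init__(self, n_states, n_inputs):
        # transition register: next_state[state][input], initially self-loops
        self.table = [[s for _ in range(n_inputs)] for s in range(n_states)]
        self.state = 0

    def program(self, table):        # reallocate the stored sequence
        self.table = [row[:] for row in table]

    def step(self, x):               # fixed logic, independent of the sequence
        self.state = self.table[self.state][x]
        return self.state

fsm = SequenceInvariantFSM(n_states=3, n_inputs=2)
fsm.program([[1, 2], [2, 0], [0, 1]])   # one flow table...
print([fsm.step(x) for x in [0, 1, 0]])
fsm.program([[2, 1], [0, 2], [1, 0]])   # ...dynamically replaced by another
```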
NASA Astrophysics Data System (ADS)
McKenzie, Neil
1989-12-01
We present a design for a low-cost, functional VLSI chip tester. It is based on the Apple Macintosh II personal computer. It tests chips that have up to 128 pins. All pin drivers of the tester are bidirectional; each pin is programmed independently as an input or an output. The tester can test both static and dynamic chips. Rudimentary speed testing is provided. Chips are tested by executing C programs written by the user. A software library is provided for program development. Tests run under both the Mac Operating System and A/UX. The design is implemented using Xilinx Logic Cell Arrays. Price/performance tradeoffs are discussed.
Huang, Kuan-Ju; Shih, Wei-Yeh; Chang, Jui Chung; Feng, Chih Wei; Fang, Wai-Chi
2013-01-01
This paper presents a pipelined VLSI design of a fast singular value decomposition (SVD) processor for a real-time electroencephalography (EEG) system based on on-line recursive independent component analysis (ORICA). Since SVD is used frequently in computations of the real-time EEG system, a low-latency and high-accuracy SVD processor is essential. During EEG system processing, the proposed SVD processor solves the diagonal, inverse and inverse-square-root matrices of the target matrices in real time. Generally, SVD requires a huge amount of computation in hardware implementation. Therefore, this work proposes a novel design concept for data-flow updating to assist the pipelined VLSI implementation. The SVD processor can greatly improve the feasibility of real-time EEG system applications such as brain-computer interfaces (BCIs). The proposed architecture is implemented using TSMC 90 nm CMOS technology. The sample rate of the EEG raw data is 128 Hz. The core size of the SVD processor is 580 × 580 μm², and the operating frequency is 20 MHz. It consumes 0.774 mW per execution of the 8-channel EEG processing.
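The three matrix results mentioned (diagonal, inverse, inverse square root) can be modeled with a library SVD as follows; the symmetric positive-definite covariance input is our assumption, chosen because ORICA-style whitening operates on such matrices.

```python
import numpy as np

# Software model of the three SVD-derived results the processor computes:
# the diagonal (singular values), the inverse, and the inverse square root.
# The inverse-square-root formula assumes a symmetric positive-definite
# matrix (e.g. a channel covariance used for whitening), where U equals V.
rng = np.random.default_rng(0)
X = rng.standard_normal((128, 8))        # 128 samples, 8 EEG channels
C = X.T @ X / len(X)                     # symmetric positive-definite covariance

U, s, Vt = np.linalg.svd(C)
C_inv = Vt.T @ np.diag(1.0 / s) @ U.T           # inverse
C_inv_sqrt = U @ np.diag(s ** -0.5) @ U.T       # inverse square root (U == V here)

assert np.allclose(C @ C_inv, np.eye(8), atol=1e-8)
assert np.allclose(C_inv_sqrt @ C @ C_inv_sqrt, np.eye(8), atol=1e-8)
```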
Electronic shift register memory based on molecular electron-transfer reactions
NASA Technical Reports Server (NTRS)
Hopfield, J. J.; Onuchic, Jose Nelson; Beratan, David N.
1989-01-01
The design of a shift register memory at the molecular level is described in detail. The memory elements are based on a chain of electron-transfer molecules incorporated on a very large scale integrated (VLSI) substrate, and the information is shifted by photoinduced electron-transfer reactions. The design requirements for such a system are discussed, and several realistic strategies for synthesizing these systems are presented. The immediate advantage of such a hybrid molecular/VLSI device would arise from the possible information storage density. The prospect of considerable savings of energy per bit processed also exists. This molecular shift register memory element design solves the conceptual problems associated with integrating molecular size components with larger (micron) size features on a chip.
Overlay Tolerances For VLSI Using Wafer Steppers
NASA Astrophysics Data System (ADS)
Levinson, Harry J.; Rice, Rory
1988-01-01
In order for VLSI circuits to function properly, the masking layers used in the fabrication of those devices must overlay each other to within the manufacturing tolerance incorporated in the circuit design. The capabilities of the alignment tools used in the masking process determine the overlay tolerances to which circuits can be designed. It is therefore of considerable importance that these capabilities be well characterized. Underestimation of the overlay accuracy results in unnecessarily large devices, resulting in poor utilization of wafer area and possible degradation of device performance. Overestimation will result in significant yield loss because of the failure to conform to the tolerances of the design rules. The proper methodology for determining the overlay capabilities of wafer steppers, the most commonly used alignment tool for the production of VLSI circuits, is the subject of this paper. Because cost-effective manufacturing process technology has been the driving force of VLSI, the impact on productivity is a primary consideration in all discussions. Manufacturers of alignment tools advertise the capabilities of their equipment. It is notable that no manufacturer currently characterizes his aligners in a manner consistent with the requirements of producing very large integrated circuits, as will be discussed. This has resulted in the situation in which the evaluation and comparison of the capabilities of alignment tools require the attention of a lithography specialist. Unfortunately, lithographic capabilities must be known by many other people, particularly the circuit designers and the managers responsible for the financial consequences of the high prices of modern alignment tools. All too frequently, the designer or manager is confronted with contradictory data, one set coming from his lithography specialist, and the other coming from a sales representative of an equipment manufacturer. Since the latter generally attempts to make his merchandise appear as attractive as possible, the lithographer is frequently placed in the position of having to explain subtle issues in order to justify his decisions. It is the purpose of this paper to provide that explanation.
Research News: Are VLSI Microcircuits Too Hard to Design?
ERIC Educational Resources Information Center
Robinson, Arthur L.
1980-01-01
This research news article on microelectronics discusses the scientific challenge the integrated circuit industry will have in the next decade, for designing the complicated microcircuits made possible by advancing miniaturization technology. (HM)
Embeddable Reconfigurable Neuroprocessors
NASA Technical Reports Server (NTRS)
Daud, Taher; Duong, Tuan; Langenbacher, Harry; Tran, Mua; Thakoor, Anil
1993-01-01
Reconfigurable and cascadable building block neural network chips, fabricated using analog VLSI design tools, are interfaced to a PC. The building block chip designs, the cascadability and the hardware-in-the-loop supervised learning aspects of these chips are described.
NASA Technical Reports Server (NTRS)
Truong, T. K.; Hsu, I. S.; Chang, J. J.; Shyu, H. C.; Reed, I. S.
1986-01-01
A quadratic-polynomial Fermat residue number system (QFNS) has been used to compute complex integer multiplications. The advantage of such a QFNS is that a complex integer multiplication requires only two integer multiplications. In this article, a new type of Fermat number multiplier is developed which eliminates the initialization condition of the previous method. It is shown that the new complex multiplier can be implemented on a single VLSI chip. Such a chip is designed and fabricated in CMOS-pw technology.
NASA Technical Reports Server (NTRS)
Shyu, H. C.; Reed, I. S.; Truong, T. K.; Hsu, I. S.; Chang, J. J.
1987-01-01
A quadratic-polynomial Fermat residue number system (QFNS) has been used to compute complex integer multiplications. The advantage of such a QFNS is that a complex integer multiplication requires only two integer multiplications. In this article, a new type of Fermat number multiplier is developed which eliminates the initialization condition of the previous method. It is shown that the new complex multiplier can be implemented on a single VLSI chip. Such a chip is designed and fabricated in CMOS-Pw technology.
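The two-multiplication property can be seen concretely with the small Fermat number F3 = 257, where 16² ≡ -1 (mod 257): mapping a + bi to the residue pair (a + 16b, a - 16b) turns a complex product into two independent modular multiplications. The sketch below is a toy model of this mapping, not the chip's architecture.

```python
# Sketch of the QFNS idea with the small Fermat number F3 = 257, where
# s = 16 satisfies s^2 = 256 = -1 (mod 257). A Gaussian integer a+bi maps to
# the residue pair (a+sb, a-sb); a complex product then costs exactly two
# modular multiplications (the forward/inverse maps use only adds/shifts).
F, s = 257, 16

def to_qfns(a, b):
    return ((a + s * b) % F, (a - s * b) % F)

def from_qfns(u, v):
    re = (u + v) * pow(2, -1, F) % F             # (u + v) / 2
    im = (u - v) * pow(2 * s, -1, F) % F         # (u - v) / (2s)
    return re, im

def cmul(x, y):                                  # two multiplications total
    (u1, v1), (u2, v2) = to_qfns(*x), to_qfns(*y)
    return from_qfns(u1 * u2 % F, v1 * v2 % F)

# (3+4i)(5+6i) = -9 + 38i; -9 mod 257 = 248
assert cmul((3, 4), (5, 6)) == (248, 38)
```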
VLSI Design of Trusted Virtual Sensors.
Martínez-Rodríguez, Macarena C; Prada-Delgado, Miguel A; Brox, Piedad; Baturone, Iluminada
2018-01-25
This work presents a Very Large Scale Integration (VLSI) design of trusted virtual sensors providing a minimum unitary cost and very good figures of size, speed and power consumption. The sensed variable is estimated by a virtual sensor based on a configurable and programmable PieceWise-Affine hyper-Rectangular (PWAR) model. An algorithm is presented to find the best values of the programmable parameters given a set of (empirical or simulated) input-output data. The VLSI design of the trusted virtual sensor uses the fast authenticated encryption algorithm AEGIS to ensure the integrity of the provided virtual measurement and to encrypt it, and a Physical Unclonable Function (PUF) based on a Static Random Access Memory (SRAM) to ensure the integrity of the sensor itself. Implementation results of a prototype designed in a 90-nm Complementary Metal Oxide Semiconductor (CMOS) technology show that the active silicon area of the trusted virtual sensor is 0.86 mm² and its power consumption when trusted sensing at 50 MHz is 7.12 mW. The maximum operation frequency is 85 MHz, which allows response times lower than 0.25 μs. As an application example, the designed prototype was programmed to estimate the yaw rate in a vehicle, obtaining root mean square errors lower than 1.1%. Experimental results of the employed PUF show the robustness of the trusted sensing against aging and variations of the operation conditions, namely, temperature and power supply voltage (final value as well as ramp-up time).
VLSI Design of Trusted Virtual Sensors
2018-01-01
This work presents a Very Large Scale Integration (VLSI) design of trusted virtual sensors providing a minimum unitary cost and very good figures of size, speed and power consumption. The sensed variable is estimated by a virtual sensor based on a configurable and programmable PieceWise-Affine hyper-Rectangular (PWAR) model. An algorithm is presented to find the best values of the programmable parameters given a set of (empirical or simulated) input-output data. The VLSI design of the trusted virtual sensor uses the fast authenticated encryption algorithm AEGIS to ensure the integrity of the provided virtual measurement and to encrypt it, and a Physical Unclonable Function (PUF) based on a Static Random Access Memory (SRAM) to ensure the integrity of the sensor itself. Implementation results of a prototype designed in a 90-nm Complementary Metal Oxide Semiconductor (CMOS) technology show that the active silicon area of the trusted virtual sensor is 0.86 mm² and its power consumption when trusted sensing at 50 MHz is 7.12 mW. The maximum operation frequency is 85 MHz, which allows response times lower than 0.25 μs. As an application example, the designed prototype was programmed to estimate the yaw rate in a vehicle, obtaining root mean square errors lower than 1.1%. Experimental results of the employed PUF show the robustness of the trusted sensing against aging and variations of the operation conditions, namely, temperature and power supply voltage (final value as well as ramp-up time). PMID:29370141
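A minimal software sketch of a PWAR model evaluation follows, assuming a uniform grid of hyper-rectangles each holding one affine map; the grid size and parameter values are placeholders, and the AEGIS and PUF layers are omitted.

```python
import numpy as np

# Minimal PWAR (PieceWise-Affine hyper-Rectangular) virtual-sensor model:
# a uniform grid splits the input domain into hyper-rectangles, and each
# cell stores one affine map y = w.x + b. Grid size and parameters are
# placeholders; the AEGIS encryption and SRAM-PUF layers are omitted.
class PWARModel:
    def __init__(self, lo, hi, cells_per_dim, W, b):
        self.lo, self.hi = np.asarray(lo), np.asarray(hi)
        self.n = cells_per_dim
        self.W, self.b = W, b          # one (w, b) pair per grid cell

    def cell_index(self, x):
        # locate the hyper-rectangle containing x
        frac = (np.asarray(x) - self.lo) / (self.hi - self.lo)
        idx = np.clip((frac * self.n).astype(int), 0, self.n - 1)
        return tuple(idx)

    def estimate(self, x):
        i = self.cell_index(x)
        return float(self.W[i] @ np.asarray(x) + self.b[i])

# 2-D input (e.g. steering angle and speed), 4x4 grid of affine cells
W = np.ones((4, 4, 2)) * 0.5
b = np.zeros((4, 4))
sensor = PWARModel(lo=[-1, 0], hi=[1, 50], cells_per_dim=4, W=W, b=b)
print(sensor.estimate([0.2, 30.0]))    # virtual yaw-rate-style estimate
```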
A VLSI implementation of DCT using pass transistor technology
NASA Technical Reports Server (NTRS)
Kamath, S.; Lynn, Douglas; Whitaker, Sterling
1992-01-01
A VLSI design for performing the Discrete Cosine Transform (DCT) operation on image blocks of size 16 x 16 in a real-time fashion, operating at 34 MHz (worst case), is presented. The process used was Hewlett-Packard's CMOS26, a 3-metal CMOS process with a minimum feature size of 0.75 micron. The design is based on Multiply-Accumulate (MAC) cells which make use of a modified Booth recoding algorithm for performing multiplication. The design of these cells is straightforward, and the layouts are regular with no complex routing. Two versions of these MAC cells were designed and their layouts completed. Both versions were simulated using SPICE to estimate their performance. One version is slightly faster at the cost of larger silicon area and higher power consumption. An improvement in speed of almost 20 percent was achieved after several iterations of simulation and re-sizing.
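A behavioral model of radix-4 modified Booth recoding, the technique such MAC cells typically use, is shown below; the 16-bit width is an assumption for illustration. Recoding halves the number of partial products, which is one reason the layouts stay regular.

```python
# Software model of radix-4 modified Booth recoding: the multiplier is
# recoded into digits in {-2,-1,0,1,2}, so an n-bit multiply needs only
# n/2 partial products (each a shift, negation, or add of the multiplicand).
def booth_digits(y, bits=16):
    """Recode a signed multiplier into radix-4 Booth digits (LSB first)."""
    y &= (1 << bits) - 1
    padded = y << 1                      # implicit y[-1] = 0
    digits = []
    for i in range(0, bits, 2):
        triple = (padded >> i) & 0b111   # overlapping bit triples
        digits.append([0, 1, 1, 2, -2, -1, -1, 0][triple])
    return digits

def booth_multiply(x, y, bits=16):
    acc = 0
    for i, d in enumerate(booth_digits(y, bits)):
        acc += d * x << (2 * i)          # one partial product per digit pair
    return acc

for x, y in [(1234, 5678), (300, -7), (-25, -25)]:
    assert booth_multiply(x, y) == x * y   # y within signed 16-bit range
```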
Layout pattern analysis using the Voronoi diagram of line segments
NASA Astrophysics Data System (ADS)
Dey, Sandeep Kumar; Cheilaris, Panagiotis; Gabrani, Maria; Papadopoulou, Evanthia
2016-01-01
Early identification of problematic patterns in very large scale integration (VLSI) designs is of great value as the lithographic simulation tools face significant timing challenges. To reduce the processing time, such a tool selects only a fraction of possible patterns which have a probable area of failure, with the risk of missing some problematic patterns. We introduce a fast method to automatically extract patterns based on their structure and context, using the Voronoi diagram of line-segments as derived from the edges of VLSI design shapes. Designers put line segments around the problematic locations in patterns called "gauges," along which the critical distance is measured. The gauge center is the midpoint of a gauge. We first use the Voronoi diagram of VLSI shapes to identify possible problematic locations, represented as gauge centers. Then we use the derived locations to extract windows containing the problematic patterns from the design layout. The problematic locations are prioritized by the shape and proximity information of the design polygons. We perform experiments for pattern selection in a portion of a 22-nm random logic design layout. The design layout had 38,584 design polygons (consisting of 199,946 line segments) on layer Mx, and 7079 markers generated by an optical rule checker (ORC) tool. The optical rules specify requirements for printing circuits with minimum dimension. Markers are the locations of some optical rule violations in the layout. We verify our approach by comparing the coverage of our extracted patterns to the ORC-generated markers. We further derive a similarity measure between patterns and between layouts. The similarity measure helps to identify a set of representative gauges that reduces the number of patterns for analysis.
Techniques for Computing the DFT Using the Residue Fermat Number Systems and VLSI
NASA Technical Reports Server (NTRS)
Truong, T. K.; Chang, J. J.; Hsu, I. S.; Pei, D. Y.; Reed, I. S.
1985-01-01
The integer complex multiplier and adder over the direct sum of two copies of a finite field is specialized to the direct sum of the rings of integers modulo Fermat numbers. Such multiplications and additions can be used in the implementation of a discrete Fourier transform (DFT) of a sequence of complex numbers. The advantage of the present approach is that the number of multiplications needed for the DFT can be reduced substantially over the previous approach. The architectural designs using this approach are regular, simple, expandable and, therefore, naturally suitable for VLSI implementation.
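The attraction of Fermat rings for DFT hardware is that the root of unity can be 2, so twiddle multiplications reduce to shifts. A toy number-theoretic transform in Z/257 (F3), written in naive O(n²) form for clarity, is sketched below; the length-16 transform follows from the order of 2 in this ring, and the hardware's architecture is not modeled.

```python
# Number-theoretic DFT in the Fermat ring Z/257 (F3 = 2^8 + 1), where
# w = 2 is a primitive 16th root of unity: 2^8 = 256 = -1 (mod 257).
# Multiplying by powers of 2 is just a shift in hardware, which is the
# attraction of Fermat rings. Naive O(n^2) form, for clarity only.
F, w, n = 257, 2, 16

def fnt(x, root=w):
    return [sum(x[j] * pow(root, i * j, F) for j in range(n)) % F
            for i in range(n)]

def ifnt(X):
    inv_root, inv_n = pow(w, -1, F), pow(n, -1, F)
    return [v * inv_n % F for v in fnt(X, inv_root)]

x = [3, 1, 4, 1, 5, 9, 2, 6] + [0] * 8
assert ifnt(fnt(x)) == x       # exact transform: no rounding error at all
```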
NASA Astrophysics Data System (ADS)
Baviere, Ph.
Tests which have proven effective for evaluating VLSI circuits for space applications are described. It is recommended that circuits be examined after each manufacturing step to gain fast feedback on inadequacies in the production system. Data from failure modes which occur during operational lifetimes of circuits also permit redefinition of the manufacturing and quality control process to eliminate the defects identified. Other tests include determination of the operational envelope of the circuits, examination of the circuit response to controlled inputs, and the performance and functional speeds of ROM and RAM memories. Finally, it is desirable that all new circuits be designed with testing in mind.
Compact VLSI neural computer integrated with active pixel sensor for real-time ATR applications
NASA Astrophysics Data System (ADS)
Fang, Wai-Chi; Udomkesmalee, Gabriel; Alkalai, Leon
1997-04-01
A compact VLSI neural computer integrated with an active pixel sensor has been under development to mimic what is inherent in biological vision systems. This electronic eye-brain computer is targeted for real-time machine vision applications which require both high-bandwidth communication and high-performance computing for data sensing, synergy of multiple types of sensory information, feature extraction, target detection, target recognition, and control functions. The neural computer is based on a composite structure which combines the Annealing Cellular Neural Network (ACNN) and the Hierarchical Self-Organization Neural Network (HSONN). The ACNN architecture is a programmable and scalable multi-dimensional array of annealing neurons which are locally connected with their local neurons. Meanwhile, the HSONN adopts a hierarchical structure with nonlinear basis functions. The ACNN+HSONN neural computer is effectively designed to perform programmable functions for machine vision processing at all levels with its embedded host processor. It provides a two order-of-magnitude increase in computation power over the state-of-the-art microcomputer and DSP microelectronics. A compact current-mode VLSI design feasibility of the ACNN+HSONN neural computer is demonstrated by a 3D 16X8X9-cube neural processor chip design in a 2-micrometer CMOS technology. Integration of this neural computer as one slice of a 4'X4' multichip module into the 3D MCM based avionics architecture for NASA's New Millennium Program is also described.
Ultra high speed image processing techniques. [electronic packaging techniques
NASA Technical Reports Server (NTRS)
Anthony, T.; Hoeschele, D. F.; Connery, R.; Ehland, J.; Billings, J.
1981-01-01
Packaging techniques for ultra high speed image processing were developed. These techniques involve the development of a signal feedthrough technique through LSI/VLSI sapphire substrates. This allows the stacking of LSI/VLSI circuit substrates in a 3-dimensional package with greatly reduced length of interconnecting lines between the LSI/VLSI circuits. The reduced parasitic capacitances result in higher LSI/VLSI computational speeds at significantly reduced power consumption levels.
WARP: Weight Associative Rule Processor. A dedicated VLSI fuzzy logic megacell
NASA Technical Reports Server (NTRS)
Pagni, A.; Poluzzi, R.; Rizzotto, G. G.
1992-01-01
During the last five years Fuzzy Logic has gained enormous popularity in the academic and industrial worlds. The success of this new methodology has led the microelectronics industry to create a new class of machines, called Fuzzy Machines, to overcome the limitations of traditional computing systems when utilized as Fuzzy Systems. This paper gives an overview of the methods by which Fuzzy Logic data structures are represented in the machines (each with its own advantages and inefficiencies). Next, the paper introduces WARP (Weight Associative Rule Processor), a dedicated VLSI megacell allowing the realization of a fuzzy controller suitable for a wide range of applications. WARP represents an innovative approach to VLSI Fuzzy controllers by utilizing different types of data structures for characterizing the membership functions during the various stages of the Fuzzy processing. WARP's dedicated architecture has been designed to achieve high performance by exploiting the computational advantages offered by the different data representations.
A Single Chip VLSI Implementation of a QPSK/SQPSK Demodulator for a VSAT Receiver Station
NASA Technical Reports Server (NTRS)
Kwatra, S. C.; King, Brent
1995-01-01
This thesis presents a VLSI implementation of a QPSK/SQPSK demodulator. It is designed to be employed in a VSAT earth station that utilizes the FDMA/TDM link. A single chip architecture is used to enable this chip to be easily employed in the VSAT system. This demodulator contains lowpass filters, integrate and dump units, unique word detectors, a timing recovery unit, a phase recovery unit and a down conversion unit. The design stages start with a functional representation of the system by using the C programming language. Then it progresses into a register based representation using the VHDL language. The layout components are designed based on these VHDL models and simulated. Component generators are developed for the adder, multiplier, read-only memory and serial access memory in order to shorten the design time. These sub-components are then block routed to form the main components of the system. The main components are block routed to form the final demodulator.
Design and Implementation of a New Real-Time Frequency Sensor Used as Hardware Countermeasure
Jiménez-Naharro, Raúl; Gómez-Galán, Juan Antonio; Sánchez-Raya, Manuel; Gómez-Bravo, Fernando; Pedro-Carrasco, Manuel
2013-01-01
A new digital countermeasure against attacks related to the clock frequency is presented. This countermeasure, known as a frequency sensor, consists of a local oscillator, a transition detector, a measurement element and an output block. The countermeasure has been designed using a full-custom technique implemented in an Application-Specific Integrated Circuit (ASIC), and the implementation has been verified and characterized with an integrated design using a 0.35 μm standard Complementary Metal Oxide Semiconductor (CMOS) technology (a Very Large Scale Integration, VLSI, implementation). The proposed solution is configurable in resolution time and allowed period range, achieving a minimum resolution time of only 1.91 ns and an initialization time of 5.84 ns. The proposed VLSI implementation shows better results than other solutions, such as digital ones based on semi-custom techniques and analog ones based on band-pass filters, all design parameters considered. Finally, a counter has been used to verify the good performance of the countermeasure in preventing a successful attack. PMID:24008285
A VLSI pipeline design of a fast prime factor DFT on a finite field
NASA Technical Reports Server (NTRS)
Truong, T. K.; Hsu, I. S.; Shao, H. M.; Reed, I. S.; Shyu, H. C.
1986-01-01
A conventional prime factor discrete Fourier transform (DFT) algorithm is used to realize a discrete Fourier-like transform on the finite field GF(qn). A pipeline structure is used to implement this prime factor DFT over GF(qn). This algorithm is developed to compute cyclic convolutions of complex numbers and to decode Reed-Solomon codes. Such a pipeline fast prime factor DFT algorithm over GF(qn) is regular, simple, expandable, and naturally suitable for VLSI implementation. An example illustrating the pipeline aspect of a 30-point transform over GF(qn) is presented.
A new eddy current model for magnetic bearing control system design
NASA Technical Reports Server (NTRS)
Feeley, Joseph J.; Ahlstrom, Daniel J.
1992-01-01
This paper describes a new VLSI-based controller for the implementation of a Linear-Quadratic-Gaussian (LQG) theory-based control system. Use of the controller is demonstrated by design of a controller for a magnetic bearing and its performance is evaluated by computer simulation.
ProperCAD: A portable object-oriented parallel environment for VLSI CAD
NASA Technical Reports Server (NTRS)
Ramkumar, Balkrishna; Banerjee, Prithviraj
1993-01-01
Most parallel algorithms for VLSI CAD proposed to date have one important drawback: they work efficiently only on machines that they were designed for. As a result, algorithms designed to date are dependent on the architecture for which they are developed and do not port easily to other parallel architectures. A new project under way to address this problem is described. A Portable object-oriented parallel environment for CAD algorithms (ProperCAD) is being developed. The objectives of this research are (1) to develop new parallel algorithms that run in a portable object-oriented environment (CAD algorithms using a general purpose platform for portable parallel programming called CARM is being developed and a C++ environment that is truly object-oriented and specialized for CAD applications is also being developed); and (2) to design the parallel algorithms around a good sequential algorithm with a well-defined parallel-sequential interface (permitting the parallel algorithm to benefit from future developments in sequential algorithms). One CAD application that has been implemented as part of the ProperCAD project, flat VLSI circuit extraction, is described. The algorithm, its implementation, and its performance on a range of parallel machines are discussed in detail. It currently runs on an Encore Multimax, a Sequent Symmetry, Intel iPSC/2 and i860 hypercubes, a NCUBE 2 hypercube, and a network of Sun Sparc workstations. Performance data for other applications that were developed are provided: namely test pattern generation for sequential circuits, parallel logic synthesis, and standard cell placement.
Very Large Scale Integration (VLSI).
ERIC Educational Resources Information Center
Yeaman, Andrew R. J.
Very Large Scale Integration (VLSI), the state-of-the-art production techniques for computer chips, promises such powerful, inexpensive computing that, in the future, people will be able to communicate with computer devices in natural language or even speech. However, before full-scale VLSI implementation can occur, certain salient factors must be…
The design plan of a VLSI single chip (255, 223) Reed-Solomon decoder
NASA Technical Reports Server (NTRS)
Hsu, I. S.; Shao, H. M.; Deutsch, L. J.
1987-01-01
The very large-scale integration (VLSI) architecture of a single chip (255, 223) Reed-Solomon decoder for decoding both errors and erasures is described. A decoding failure detection capability is also included in this system so that the decoder will recognize a failure to decode instead of introducing additional errors. This could happen whenever the received word contains too many errors and erasures for the code to correct. The number of transistors needed to implement this decoder is estimated at about 75,000 if the delay for the received message is not included. This is in contrast to the older transform decoding algorithm, which needs about 100,000 transistors. However, the transform decoder is simpler in architecture than the time decoder. It is therefore possible to implement a single chip (255, 223) Reed-Solomon decoder with today's VLSI technology. An implementation strategy for the decoder system is presented. This represents the first step in a plan to take advantage of advanced coding techniques to realize a 2.0 dB coding gain for future space missions.
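The failure-detection condition being checked is the classical errors-and-erasures bound: a code of minimum distance d = n - k + 1 corrects e errors and f erasures only if 2e + f <= d - 1, so for (255, 223) the budget is 32. A one-function sketch:

```python
# Decodability check behind the failure-detection logic: an RS(n, k) code
# has minimum distance d = n - k + 1 and can correct e errors plus f
# erasures whenever 2e + f <= d - 1. For (255, 223), d = 33.
def rs_can_correct(n, k, errors, erasures):
    d = n - k + 1
    return 2 * errors + erasures <= d - 1

assert rs_can_correct(255, 223, 16, 0)        # 16 errors: correctable
assert rs_can_correct(255, 223, 10, 12)       # mixed errors and erasures
assert not rs_can_correct(255, 223, 17, 0)    # too many: decoder must flag failure
```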
GaAs VLSI for aerospace electronics
NASA Technical Reports Server (NTRS)
Larue, G.; Chan, P.
1990-01-01
Advanced aerospace electronics systems require high-speed, low-power, radiation-hard, digital components for signal processing, control, and communication applications. GaAs VLSI devices provide a number of advantages over silicon devices including higher carrier velocities, ability to integrate with high performance optical devices, and high-resistivity substrates that provide very short gate delays, good isolation, and tolerance to many forms of radiation. However, III-V technologies also have disadvantages, such as lower yield compared to silicon MOS technology. Achieving very large scale integration (VLSI) is particularly important for fast complex systems. At very short gate delays (less than 100 ps), chip-to-chip interconnects severely degrade circuit clock rates. Complex systems, therefore, benefit greatly when as many gates as possible are placed on a single chip. To fully exploit the advantages of GaAs circuits, attention must be focused on achieving high integration levels by reducing power dissipation, reducing the number of devices per logic function, and providing circuit designs that are more tolerant to process and environmental variations. In addition, adequate noise margin must be maintained to ensure a practical yield.
GaAs VLSI technology and circuit elements for DSP
NASA Astrophysics Data System (ADS)
Mikkelson, James M.
1990-10-01
Recent progress in digital GaAs circuit performance and complexity is presented to demonstrate the current capabilities of GaAs components. High-density GaAs process technology and circuit design techniques are described, and critical issues for achieving favorable complexity, speed, power and cost tradeoffs are reviewed. Some DSP building blocks are described to provide examples of what types of DSP systems could be implemented with present GaAs technology. In the past few years the capabilities of digital GaAs circuits have dramatically increased to the VLSI level. Major gains in circuit complexity and power-delay products have been achieved by the use of silicon-like process technologies and simple circuit topologies. The very high speed and low power consumption of digital GaAs VLSI circuits have made GaAs a desirable alternative to high-performance silicon in hardware-intensive, high-speed system applications. An example of the performance and integration complexity available with GaAs VLSI circuits is the 64x64 crosspoint switch shown in figure 1. This switch, the most complex GaAs circuit currently available, is designed on a 30 gate GaAs gate array. It operates at 200 MHz and dissipates only 8 watts of power. The reasons for increasing the level of integration of GaAs circuits are similar to the reasons for the continued increase of silicon circuit complexity. The market factors driving GaAs VLSI are system design methodology, system cost, power and reliability. System designers are hesitant or unwilling to go backwards to previous design techniques and lower levels of integration. A more highly integrated system in a lower-performance technology can often approach the performance of a system in a higher-performance technology at a lower level of integration. Higher levels of integration also lower the system component count, which reduces the system cost, size and power consumption while improving the system reliability. For large gate count circuits, the power per gate must be minimized to prevent reliability and cooling problems. The technical factors which favor increasing GaAs circuit complexity are primarily related to reducing the speed and power penalties incurred when crossing chip boundaries. Because the internal GaAs chip logic levels are not compatible with standard silicon I/O levels, input receivers and output drivers are needed to convert levels. These I/O circuits add significant delay to logic paths, consume large amounts of power and use an appreciable portion of the die area. The effects of these I/O penalties can be reduced by increasing the ratio of core logic to I/O on a chip. DSP operations, which have a large number of logic stages between the input and the output, are ideal candidates to take advantage of the performance of GaAs digital circuits. Figure 2 is a schematic representation of the I/O penalties encountered when converting from ECL levels to GaAs…
Summary of workshop on the application of VLSI for robotic sensing
NASA Technical Reports Server (NTRS)
Brooks, T.; Wilcox, B.
1984-01-01
One objective of the workshop was to identify near-, mid-, and far-term applications of VLSI for robotic sensing and sensor data preprocessing. The workshop was also to indicate areas in which VLSI technology can provide immediate and future payoffs. A third objective was to promote dialog and collaborative efforts between research communities, industry, and government. The workshop was held on March 24-25, 1983. Conclusions and recommendations are discussed. Attention is given to the need for a pixel-correction chip, an image sensor with a dynamic range of 10,000, VLSI-enhanced architectures, the need for a high-density serpentine memory, an LSI tactile-sensing program, an analog-signal preprocessor chip, a smart strain gage, a protective proximity envelope, a VLSI proximity-sensor program, a robot-net chip, and aspects of silicon micromechanics.
PLA realizations for VLSI state machines
NASA Technical Reports Server (NTRS)
Gopalakrishnan, S.; Whitaker, S.; Maki, G.; Liu, K.
1990-01-01
A major problem associated with state assignment procedures for VLSI controllers is obtaining an assignment that produces minimal or near minimal logic. The key item in Programmable Logic Array (PLA) area minimization is the number of unique product terms required by the design equations. This paper presents a state assignment algorithm for minimizing the number of product terms required to implement a finite state machine using a PLA. Partition algebra with predecessor state information is used to derive a near optimal state assignment. A maximum bound on the number of product terms required can be obtained by inspecting the predecessor state information. The state assignment algorithm presented is much simpler than existing procedures and leads to the same number of product terms or less. An area-efficient PLA structure implemented in a 1.0 micron CMOS process is presented along with a summary of the performance for a controller implemented using this design procedure.
Single board system for fuzzy inference
NASA Technical Reports Server (NTRS)
Symon, James R.; Watanabe, Hiroyuki
1991-01-01
The very large scale integration (VLSI) implementation of a fuzzy logic inference mechanism allows the use of rule-based control and decision making in demanding real-time applications. Researchers designed a full custom VLSI inference engine. The chip was fabricated using CMOS technology. The chip consists of 688,000 transistors of which 476,000 are used for RAM memory. The fuzzy logic inference engine board system incorporates the custom designed integrated circuit into a standard VMEbus environment. The Fuzzy Logic system uses Transistor-Transistor Logic (TTL) parts to provide the interface between the Fuzzy chip and a standard, double height VMEbus backplane, allowing the chip to perform application process control through the VMEbus host. High level C language functions hide details of the hardware system interface from the applications level programmer. The first version of the board was installed on a robot at Oak Ridge National Laboratory in January of 1990.
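As a software model of the kind of computation such an inference engine performs (not the chip's actual rule format or the board's C API), a minimal min-max inference with triangular membership functions and centroid defuzzification looks like this; the temperature/fan rule base is hypothetical.

```python
# Minimal min-max fuzzy inference with centroid defuzzification -- a
# software model of the computation such an engine performs, not the
# chip's actual rule format or memory layout.
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# rules: IF temp IS <antecedent> THEN fan IS <consequent>  (hypothetical)
rules = [
    {"if": (0, 20, 40),   "then": (0, 25, 50)},    # cool -> slow fan
    {"if": (30, 50, 70),  "then": (25, 50, 75)},   # warm -> medium fan
    {"if": (60, 80, 100), "then": (50, 75, 100)},  # hot  -> fast fan
]

def infer(temp, steps=101):
    num = den = 0.0
    for i in range(steps):                  # discretized output universe
        y = 100.0 * i / (steps - 1)
        # aggregate with max over rules; each rule output clipped by min
        mu = max(min(tri(temp, *r["if"]), tri(y, *r["then"])) for r in rules)
        num += mu * y
        den += mu
    return num / den if den else 0.0

print(infer(65.0))   # crisp fan-speed command
```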
VLSI processors for signal detection in SETI
NASA Technical Reports Server (NTRS)
Duluk, J. F.; Linscott, I. R.; Peterson, A. M.; Burr, J.; Ekroot, B.; Twicken, J.
1989-01-01
The objective of the Search for Extraterrestrial Intelligence (SETI) is to locate an artificially created signal coming from a distant star. This is done in two steps: (1) spectral analysis of an incoming radio frequency band, and (2) pattern detection for narrow-band signals. Both steps are computationally expensive and require the development of specially designed computer architectures. To reduce the size and cost of the SETI signal detection machine, two custom VLSI chips are under development. The first chip, the SETI DSP Engine, is used in the spectrum analyzer and is specially designed to compute Discrete Fourier Transforms (DFTs). It is a high-speed arithmetic processor that has two adders, one multiplier-accumulator, and three four-port memories. The second chip is a new type of Content-Addressable Memory. It is the heart of an associative processor that is used for pattern detection. Both chips incorporate many innovative circuits and architectural features.
VLSI processors for signal detection in SETI.
Duluk, J F; Linscott, I R; Peterson, A M; Burr, J; Ekroot, B; Twicken, J
1989-01-01
The objective of the Search for Extraterrestrial Intelligence (SETI) is to locate an artificially created signal coming from a distant star. This is done in two steps: (1) spectral analysis of an incoming radio frequency band, and (2) pattern detection for narrow-band signals. Both steps are computationally expensive and require the development of specially designed computer architectures. To reduce the size and cost of the SETI signal detection machine, two custom VLSI chips are under development. The first chip, the SETI DSP Engine, is used in the spectrum analyzer and is specially designed to compute Discrete Fourier Transforms (DFTs). It is a high-speed arithmetic processor that has two adders, one multiplier-accumulator, and three four-port memories. The second chip is a new type of Content-Addressable Memory. It is the heart of an associative processor that is used for pattern detection. Both chips incorporate many innovative circuits and architectural features.
Microprocessor Design Using Hardware Description Language
ERIC Educational Resources Information Center
Mita, Rosario; Palumbo, Gaetano
2008-01-01
The following paper has been conceived to deal with the contents of some lectures aimed at enhancing courses on digital electronic, microelectronic or VLSI systems. Those lectures show how to use a hardware description language (HDL), such as the VHDL, to specify, design and verify a custom microprocessor. The general goal of this work is to teach…
VLSI design of an RSA encryption/decryption chip using systolic array based architecture
NASA Astrophysics Data System (ADS)
Sun, Chi-Chia; Lin, Bor-Shing; Jan, Gene Eu; Lin, Jheng-Yi
2016-09-01
This article presents the VLSI design of a configurable RSA public-key cryptosystem supporting 512-bit, 1024-bit and 2048-bit keys, based on the Montgomery algorithm, achieving clock-cycle counts comparable to current relevant works but with a smaller die size. We use the binary method for the modular exponentiation and adopt the Montgomery algorithm for the modular multiplication to simplify computational complexity, which, together with the systolic array concept for electronic circuit design, effectively lowers the die size. The main architecture of the chip consists of four functional blocks, namely input/output modules, a register module, an arithmetic module and a control module. We applied the concept of systolic arrays to design the RSA encryption/decryption chip using the VHDL hardware language and verified it using the TSMC/CIC 0.35 μm 1P4M technology. The die area of the 2048-bit RSA chip without the DFT is 3.9 × 3.9 mm² (4.58 × 4.58 mm² with DFT). Its average baud rate can reach 10.84 kbps under a 100 MHz clock.
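A word-level software model of the two ingredients named, Montgomery multiplication and binary-method exponentiation, is given below with a toy RSA modulus; the 64-bit Montgomery radix and all parameters are illustrative, and the systolic pipelining is not modeled.

```python
# Word-level model of Montgomery multiplication with binary (square-and-
# multiply) exponentiation, the two ingredients named above. Toy modulus;
# the systolic-array pipelining and 512/1024/2048-bit datapath are omitted.
def mont_mul(a, b, N, R_bits):
    """Return a*b*R^-1 mod N, with R = 2^R_bits (REDC form)."""
    N_inv = pow(-N, -1, 1 << R_bits)        # -N^-1 mod R (hardware would precompute)
    t = a * b
    m = (t * N_inv) & ((1 << R_bits) - 1)   # m = t * (-N^-1) mod R
    u = (t + m * N) >> R_bits               # exact division by R
    return u - N if u >= N else u

def mont_pow(base, exp, N, R_bits=64):
    R2 = (1 << (2 * R_bits)) % N
    x = mont_mul(base, R2, N, R_bits)        # into the Montgomery domain
    acc = (1 << R_bits) % N                  # Montgomery form of 1
    for bit in bin(exp)[2:]:                 # binary method, MSB first
        acc = mont_mul(acc, acc, N, R_bits)
        if bit == "1":
            acc = mont_mul(acc, x, N, R_bits)
    return mont_mul(acc, 1, N, R_bits)       # back out of the domain

N, e, d = 3233, 17, 413                      # toy RSA: p = 61, q = 53
cipher = mont_pow(42, e, N)
assert mont_pow(cipher, d, N) == 42
```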
Chen, Chiung-An; Chen, Shih-Lun; Huang, Hong-Yi; Luo, Ching-Hsing
2012-11-22
In this paper, a low-cost, low-power and high-performance micro control unit (MCU) core is proposed for wireless body sensor networks (WBSNs). It consists of an asynchronous interface, a register bank, a reconfigurable filter, a slop-feature forecast, a lossless data encoder, an error correction coding (ECC) encoder, a UART interface, a power management (PWM) unit, and a multi-sensor controller. To improve the system performance and expansion abilities, the asynchronous interface is added for handling signal exchanges between different clock domains. To eliminate the noise of various bio-signals, the reconfigurable filter is created to provide the functions of average, binomial and sharpening filters. The slop-feature forecast and the lossless data encoder are proposed to reduce the data volume of various biomedical signals for transmission. Furthermore, the ECC encoder is added to improve the reliability of the wireless transmission, and the UART interface is employed so that the proposed design is compatible with wireless devices. For long-term healthcare monitoring applications, a power management technique is developed to reduce the power consumption of the WBSN system. In addition, the proposed design can operate with four different bio-sensors simultaneously. The proposed design was successfully tested with an FPGA verification board. The VLSI architecture of this work contains 7.67 K gate counts and consumes 5.8 mW at a 100 MHz processing rate in a TSMC 0.18 μm CMOS process, or 1.9 mW at 133 MHz in a 0.13 μm process. Compared with previous techniques, this design achieves higher performance, more functions, more flexibility and higher compatibility than other micro controller designs.
VLSI technology for smaller, cheaper, faster return link systems
NASA Technical Reports Server (NTRS)
Nanzetta, Kathy; Ghuman, Parminder; Bennett, Toby; Solomon, Jeff; Dowling, Jason; Welling, John
1994-01-01
Very Large Scale Integration (VLSI) Application-specific Integrated Circuit (ASIC) technology has enabled substantially smaller, cheaper, and more capable telemetry data systems. However, the rapid growth in available ASIC fabrication densities has far outpaced the application of this technology to telemetry systems. Available densities have grown by well over an order of magnitude since NASA's Goddard Space Flight Center (GSFC) first began developing ASIC's for ground telemetry systems in 1985. To take advantage of these higher integration levels, a new generation of ASIC's for return link telemetry processing is under development. These new submicron devices are designed to further reduce the cost and size of NASA return link processing systems while improving performance. This paper describes these highly integrated processing components.
Design and implementation of highly parallel pipelined VLSI systems
NASA Astrophysics Data System (ADS)
Delange, Alphonsus Anthonius Jozef
A methodology and its realization as a prototype CAD (Computer Aided Design) system for the design and analysis of complex multiprocessor systems are presented. The design is an iterative process in which the behavioral specifications of the system components are refined into structural descriptions consisting of interconnections and lower-level components. A model for the representation and analysis of multiprocessor systems at several levels of abstraction, and an implementation of a CAD system based on this model, are described. A high-level design language, an object-oriented development kit for tool design, a design data management system, and design and analysis tools, such as a high-level simulator and a graphics design interface, which are integrated into the prototype system, are described. Procedures are described for the synthesis of semiregular processor arrays; for computing the switching of input/output signals, memory management and control of the processor array; and for the sequencing and segmentation of input/output data streams due to partitioning and clustering of the processor array during the subsequent synthesis steps. The architecture and control of a parallel system are designed, and each component is mapped to a module or module generator in a symbolic layout library, compacted for the design rules of a VLSI (Very Large Scale Integration) technology. An example is given of the design of a processor that is a useful building block for highly parallel pipelined systems in the signal/image processing domains.
Parallel VLSI architecture emulation and the organization of APSA/MPP
NASA Technical Reports Server (NTRS)
Odonnell, John T.
1987-01-01
The Applicative Programming System Architecture (APSA) combines an applicative language interpreter with a novel parallel computer architecture that is well suited for Very Large Scale Integration (VLSI) implementation. The Massively Parallel Processor (MPP) can simulate VLSI circuits by allocating one processing element in its square array to an area on a square VLSI chip. As long as there are not too many long data paths, the MPP can simulate a VLSI clock cycle very rapidly. The APSA circuit contains a binary tree with a few long paths and many short ones. A skewed H-tree layout allows every processing element to simulate a leaf cell and up to four tree nodes, with no loss in parallelism. Emulation of a key APSA algorithm on the MPP resulted in performance 16,000 times faster than a VAX. This speed will make it possible for the APSA language interpreter to run fast enough to support research in parallel list processing algorithms.
VLSI (Very Large Scale Integrated) Design of a 16 Bit Very Fast Pipelined Carry Look Ahead Adder.
1983-09-01
the ability for systems engineers to custom design digital integrated circuits. Until recently, the design of integrated circuits has been carried out by a select group of logic designers working in semiconductor laboratories. Systems engineers had to "make do" or "fit in" the products of these labs to realize their designs; they had little participation in the actual design of the chip. The Mead and Conway design…
NASA Astrophysics Data System (ADS)
An, Fengwei; Akazawa, Toshinobu; Yamasaki, Shogo; Chen, Lei; Jürgen Mattausch, Hans
2015-04-01
This paper reports a VLSI realization of learning vector quantization (LVQ) with high flexibility for different applications. It is based on a hardware/software (HW/SW) co-design concept for on-chip learning and recognition and is designed as a SoC in 180 nm CMOS. The time-consuming nearest Euclidean distance search in the LVQ algorithm's competition layer is efficiently implemented as a pipeline with parallel p-word input. Since the number of neurons in the competition layer, the weight values, and the numbers of inputs and outputs are scalable, the requirements of many different applications can be satisfied without hardware changes. Classification of a d-dimensional input vector is completed in n × ⌈d/p⌉ + R clock cycles, where R is the pipeline depth and n is the number of reference feature vectors (FVs). Adjustment of stored reference FVs during learning is done by the embedded 32-bit RISC CPU, because this operation is not time critical. The high flexibility is verified by the application of human detection with different numbers for the dimensionality of the FVs.
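As a worked illustration of the cycle-count formula quoted above, the following Python sketch evaluates n × ⌈d/p⌉ + R; the example numbers are assumptions chosen for illustration, not figures from the paper.

    from math import ceil

    def lvq_latency_cycles(n, d, p, R):
        """Clock cycles to classify one d-dimensional input vector:
        n reference vectors, p-word parallel input, pipeline depth R."""
        return n * ceil(d / p) + R

    # Assumed example: 256 reference vectors, 64-dimensional features,
    # 8-word parallel input, pipeline depth 10.
    print(lvq_latency_cycles(n=256, d=64, p=8, R=10))  # -> 2058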
Packaging of ferroelectric liquid crystal-on-silicon spatial light modulators
NASA Astrophysics Data System (ADS)
Lin, W.; Morozova, Nina D.; Ju, TehHua; Zhang, Weidong; Lee, Yung-Cheng; McKnight, Douglas J.; Johnson, Kristina M.
1996-11-01
A self-pulling soldering technology has been demonstrated for assembling liquid crystal on silicon (LCOS) spatial light modulators (SLMs). One of the major challenges in manufacturing the LCOS modules is to reproducibly control the thickness of the gap between the very large scale integrated circuit (VLSI) chip and the cover glass. The liquid crystal material is sandwiched between the VLSI chip and the cover glass, which is coated with a transparent conductor. Solder joints with different profiles and sizes have been designed to provide surface tension forces that control the gap accommodating the ferroelectric liquid crystal layer at the micron level with sub-micron uniformity. The optimum solder joint design is defined as a joint that results in the maximum pulling force. This technology provides an automatic, batch assembly process for an LCOS SLM through one reflow process. Fluxless soldering technology is used to assemble the module. This approach avoids residues from flux chemicals and oxides, and eliminates potential contamination to the device. Two different LCOS SLM designs and the process optimization are described.
Defense Acquisitions Acronyms and Terms
2012-12-01
Computer-Aided Design CADD Computer-Aided Design and Drafting CAE Component Acquisition Executive; Computer-Aided Engineering CAIV Cost As an...Radiation to Ordnance HFE Human Factors Engineering HHA Health Hazard Assessment HNA Host-Nation Approval HNS Host-Nation Support HOL High-Order...Engineering Change Proposal VHSIC Very High Speed Integrated Circuit VLSI Very Large Scale Integration VOC Volatile Organic Compound W WAN Wide
Spike Neuromorphic VLSI-Based Bat Echolocation for Micro-Aerial Vehicle Guidance
2007-03-31
Final report, 03/01/04 - 02/28/07: Neuromorphic VLSI-based Bat Echolocation for Micro-aerial Vehicle Guidance. ...uncovered interesting new issues in our choice for representing the intensity of signals. We have just finished testing the first chip version of an echo...timing-based algorithm ('openspace') for sonar-guided navigation amidst multiple obstacles. Subject terms: Neuromorphic VLSI, bat echolocation
Image processing via VLSI: A concept paper
NASA Technical Reports Server (NTRS)
Nathan, R.
1982-01-01
Implementing specific image processing algorithms via very large scale integrated systems offers a potent solution to the problem of handling high data rates. Two algorithms stand out as being particularly critical: geometric map transformation and filtering or correlation. These two functions form the basis for data calibration, registration and mosaicking. VLSI presents itself as an inexpensive ancillary function to be added to almost any general purpose computer, and if the geometry and filter algorithms are implemented in VLSI, the processing rate bottleneck would be significantly relieved. A development effort is described that identifies the image processing functions that limit present systems in meeting future throughput needs, translates these functions to algorithms, implements them via VLSI technology, and interfaces the hardware to a general purpose digital computer.
Hardware Algorithm Implementation for Mission Specific Processing
2008-03-01
knowledge about the VLSI technology and understands VHDL, scripting, and integrating the script in the Cadence software program or ModelSim. The main...possible to have a trade-off between parallel and serial logic design for the circuit. Power can be saved by using parallelization, pipelining, or a
The Xpress Transfer Protocol (XTP): A tutorial (expanded version)
NASA Technical Reports Server (NTRS)
Sanders, Robert M.; Weaver, Alfred C.
1990-01-01
The Xpress Transfer Protocol (XTP) is a reliable, real-time, lightweight transfer layer protocol. Current transport layer protocols such as DoD's Transmission Control Protocol (TCP) and ISO's Transport Protocol (TP) were not designed for the next generation of high speed, interconnected reliable networks such as fiber distributed data interface (FDDI) and the gigabit/second wide area networks. Unlike all previous transport layer protocols, XTP is being designed to be implemented in hardware as a VLSI chip set. By streamlining the protocol, combining the transport and network layers, and utilizing the increased speed and parallelization possible with a VLSI implementation, XTP will be able to provide the end-to-end data transmission rates demanded in high speed networks without compromising reliability and functionality. This paper describes the operation of the XTP protocol and, in particular, its error, flow and rate control; inter-networking addressing mechanisms; and multicast support features, as defined in the XTP Protocol Definition Revision 3.4.
Bio-Inspired Microsystem for Robust Genetic Assay Recognition
Lue, Jaw-Chyng; Fang, Wai-Chi
2008-01-01
A compact integrated system-on-chip (SoC) architecture solution for robust, real-time, and on-site genetic analysis has been proposed. This microsystem solution is noise-tolerant and suitable for analyzing the weak fluorescence patterns from a PCR prepared dual-labeled DNA microchip assay. In the architecture, a preceding VLSI differential logarithm microchip is designed for effectively computing the logarithm of the normalized input fluorescence signals. A downstream VLSI artificial neural network (ANN) processor chip is used for analyzing the processed signals from the differential logarithm stage. A single-channel logarithmic circuit was fabricated and characterized. A prototype ANN chip with unsupervised winner-take-all (WTA) function was designed, fabricated, and tested. An ANN learning algorithm using a novel sigmoid-logarithmic transfer function based on the supervised backpropagation (BP) algorithm is proposed for robustly recognizing low-intensity patterns. Our results show that the trained new ANN can recognize low-fluorescence patterns better than an ANN using the conventional sigmoid function. PMID:18566679
NASA Astrophysics Data System (ADS)
Makhijani, Vinod B.; Przekwas, Andrzej J.
2002-10-01
This report presents results of a DARPA/MTO Composite CAD Project aimed to develop a comprehensive microsystem CAD environment, CFD-ACE+ Multiphysics, for bio and microfluidic devices and complete microsystems. The project began in July 1998, and was a three-year team effort between CFD Research Corporation, California Institute of Technology (CalTech), University of California, Berkeley (UCB), and Tanner Research, with Mr. Don Verlee from Abbott Labs participating as a consultant on the project. The overall objective of this project was to develop, validate and demonstrate several applications of a user-configurable VLSI-type mixed-dimensionality software tool for the design of biomicrofluidic devices and integrated systems. The developed tool would provide high fidelity 3-D multiphysics modeling capability, 1-D fluidic circuit modeling, a SPICE interface for system-level simulations, and mixed-dimensionality design. It would combine tools for layouts and process fabrication, geometric modeling, and automated grid generation, and interfaces to EDA tools (e.g. Cadence) and MCAD tools (e.g. ProE).
An Efficient VLSI Architecture of the Enhanced Three Step Search Algorithm
NASA Astrophysics Data System (ADS)
Biswas, Baishik; Mukherjee, Rohan; Saha, Priyabrata; Chakrabarti, Indrajit
2016-09-01
The intense computational complexity of any video codec is largely due to the motion estimation unit. The Enhanced Three Step Search is a popular technique that can be adopted for fast motion estimation. This paper proposes a novel VLSI architecture for the implementation of the Enhanced Three Step Search technique. A new addressing mechanism has been introduced which enhances the speed of operation and reduces the area requirements. The proposed architecture, when implemented in Verilog HDL on Virtex-5 technology and synthesized using Xilinx ISE Design Suite 14.1, achieves a critical path delay of 4.8 ns with an area of 2.9K gate equivalents. It can be incorporated in commercial devices like smart-phones, camcorders, and video conferencing systems.
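For readers unfamiliar with this algorithm family, the Python sketch below shows the classic Three Step Search that the Enhanced variant builds on; the block size, SAD cost, and test pattern are illustrative assumptions, not details taken from the paper.

    import numpy as np

    def sad(cur, ref, bx, by, mvx, mvy, n):
        """Sum of absolute differences between the current block and a
        candidate block in the reference frame shifted by (mvx, mvy)."""
        a = cur[by:by+n, bx:bx+n].astype(int)
        b = ref[by+mvy:by+mvy+n, bx+mvx:bx+mvx+n].astype(int)
        return int(np.abs(a - b).sum())

    def three_step_search(cur, ref, bx, by, n=16):
        """Classic TSS: probe a 3x3 grid of candidates, keep the best,
        halve the step (4 -> 2 -> 1), and return the motion vector."""
        h, w = ref.shape
        mvx = mvy = 0
        for step in (4, 2, 1):
            best_cost, best = sad(cur, ref, bx, by, mvx, mvy, n), (mvx, mvy)
            for dy in (-step, 0, step):
                for dx in (-step, 0, step):
                    cx, cy = mvx + dx, mvy + dy
                    if 0 <= bx + cx <= w - n and 0 <= by + cy <= h - n:
                        cost = sad(cur, ref, bx, by, cx, cy, n)
                        if cost < best_cost:
                            best_cost, best = cost, (cx, cy)
            mvx, mvy = best
        return mvx, mvy

    # Toy demo on a smooth test pattern shifted by a known amount.
    y, x = np.mgrid[0:64, 0:64]
    ref = np.clip((np.sin(x / 5.0) + np.cos(y / 7.0)) * 100 + 128, 0, 255)
    cur = np.roll(ref, shift=(2, -3), axis=(0, 1))  # cur[y, x] = ref[y-2, x+3]
    print(three_step_search(cur, ref, bx=24, by=24))  # typically (3, -2)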
VLSI chips for vision-based vehicle guidance
NASA Astrophysics Data System (ADS)
Masaki, Ichiro
1994-02-01
Sensor-based vehicle guidance systems are attracting rapidly increasing interest because of their potential for increasing safety, convenience, environmental friendliness, and traffic efficiency. Examples of applications include intelligent cruise control, lane following, collision warning, and collision avoidance. This paper reviews the research trends in vision-based vehicle guidance with an emphasis on VLSI chip implementations of the vision systems. As an example of VLSI chips for vision-based vehicle guidance, a stereo vision system is described in detail.
VLSI Implementation of Neuromorphic Learning Networks
1993-03-31
Final report, 01 Aug 90 to 31 Mar 93: VLSI Implementation of Neuromorphic Learning Networks (U). Sponsored by the Defense Advanced Research Projects Agency, DARPA Order No. 7013; monitored by AFOSR under Contract No. F49620-90-C...
Electro-optic techniques for VLSI interconnect
NASA Astrophysics Data System (ADS)
Neff, J. A.
1985-03-01
A major limitation to achieving significant speed increases in very large scale integration (VLSI) lies in the metallic interconnects. They are costly not only from the charge transport standpoint but also from capacitive loading effects. The Defense Advanced Research Projects Agency, in pursuit of the fifth generation supercomputer, is investigating alternatives to the VLSI metallic interconnects, especially the use of optical techniques to transport the information either interchip or intrachip. As the on-chip performance of VLSI continues to improve via the scaling down of the logic elements, the problems associated with transferring data off and onto the chip become more severe. The use of optical carriers to transfer the information within the computer is very appealing from several viewpoints. Besides the potential for gigabit propagation rates, the conversion from electronics to optics conveniently provides a decoupling of the various circuits from one another. Significant gains will also be realized in reducing cross talk between the metallic routings, and the interconnects need no longer be constrained to the plane of a thin film on the VLSI chip. In addition, optics can offer an increased programming flexibility for restructuring the interconnect network.
1982-11-01
to occur). When a rectangle is inserted, all currently selected items are de-selected, and the newly inserted rectangle is selected. This makes it...Items are de-selected before the selection takes place. A selected symbol instance is displayed with a bold outline, and a selected rectangle edge...symbol instance or set of rectangle edges, everything previously selected is first de-selected. If the selected object is a reference point the old
Recursive computer architecture for VLSI
DOE Office of Scientific and Technical Information (OSTI.GOV)
Treleaven, P.C.; Hopkins, R.P.
1982-01-01
A general-purpose computer architecture based on the concept of recursion and suitable for VLSI computer systems built from replicated (lego-like) computing elements is presented. The recursive computer architecture is defined by presenting a program organisation, a machine organisation and an experimental machine implementation oriented to VLSI. The experimental implementation is being restricted to simple, identical microcomputers, each containing a memory, a processor and a communications capability. This future generation of lego-like computer systems is termed fifth generation computers by the Japanese. 30 references.
Laser Microchemistry : A Powerful Tool For VLSI
NASA Astrophysics Data System (ADS)
Tonneau, Didier; Guern, Yves; Pelous, Gerard
1989-01-01
Interconnection direct writing on ICs is possible by localized laser-assisted Chemical Vapor Deposition. Recently we have developed and marketed a new laser microchemistry tool particularly designed for VLSI prototype rewiring. By dissociating Ni(CO)4 molecules, Ni lines can be written at speeds higher than 5 μm/s at laser-induced temperatures as low as 400°C. At the same temperature, tungsten stripes can be deposited from decomposition of WF6-H2 mixtures. However, the tungsten deposition rate is about two orders of magnitude lower than the nickel growth rate in the same temperature conditions. The resistivities of the deposits are in both cases around 10 μΩ·cm. Silicon dioxide layers can be grown from dissociation of a Si2H6-N2O mixture at surface temperatures around 500°C. These metal and insulator deposition basic steps have been integrated in a complete metal bridging process suitable for the last interconnection level of a VLSI circuit. This process has first been evaluated from a functional point of view, by electrical characterizations performed on test patterns entirely drawn by laser chemistry. Finally, by measuring the time necessary to perform a metal bridge, the process has been evaluated from an economic point of view.
A multichip aVLSI system emulating orientation selectivity of primary visual cortical cells.
Shimonomura, Kazuhiro; Yagi, Tetsuya
2005-07-01
In this paper, we designed and fabricated a multichip neuromorphic analog very large scale integrated (aVLSI) system, which emulates the orientation selective response of the simple cell in the primary visual cortex. The system consists of a silicon retina and an orientation chip. An image, which is filtered by a concentric center-surround (CS) antagonistic receptive field of the silicon retina, is transferred to the orientation chip. The image transfer from the silicon retina to the orientation chip is carried out with analog signals. The orientation chip selectively aggregates multiple pixels of the silicon retina, mimicking the feedforward model proposed by Hubel and Wiesel. The chip provides orientation-selective (OS) outputs tuned to 0 degrees, 60 degrees, and 120 degrees. The feedforward aggregation reduces the fixed pattern noise that is due to the mismatch of the transistors in the orientation chip. The spatial properties of the orientation selective response were examined in terms of the adjustable parameters of the chip, i.e., the number of aggregated pixels and the size of the receptive field of the silicon retina. The multichip aVLSI architecture used in the present study can be applied to implement higher order cells such as the complex cell of the primary visual cortex.
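As a rough software analogy of the feedforward aggregation described above, the sketch below sums center-surround filtered pixels along a line at the preferred angle; the pixel count and test pattern are illustrative assumptions, not the chip's actual parameters.

    import numpy as np

    def orientation_response(cs_img, x, y, theta_deg, n_pixels=5):
        """Aggregate center-surround (CS) outputs along the preferred
        orientation, mimicking the feedforward Hubel-Wiesel model."""
        t = np.deg2rad(theta_deg)
        idx = np.arange(n_pixels) - n_pixels // 2
        xs = np.round(x + idx * np.cos(t)).astype(int)
        ys = np.round(y + idx * np.sin(t)).astype(int)
        return float(cs_img[ys, xs].sum())

    # A vertical bright bar in the CS-filtered image drives the
    # 90-degree unit most strongly.
    cs = np.zeros((32, 32)); cs[:, 16] = 1.0
    print([orientation_response(cs, 16, 16, a) for a in (0, 60, 120, 90)])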
A unified approach to VLSI layout automation and algorithm mapping on processor arrays
NASA Technical Reports Server (NTRS)
Venkateswaran, N.; Pattabiraman, S.; Srinivasan, Vinoo N.
1993-01-01
Development of software tools for designing supercomputing systems is highly complex and cost-ineffective. To tackle this, a special-purpose PAcube silicon compiler which integrates different design levels from cells to processor arrays has been proposed. As part of this, we present in this paper a novel methodology which unifies the problems of Layout Automation and Algorithm Mapping.
The Hermod Behavioral Synthesis System
1988-06-08
[Figure residue: block diagram of the Hermod synthesis flow (parser, technology-independent transformation and optimization, hardware generator, datapath and hardware libraries).] Proc. 22nd Design Automation Conference, ACM/IEEE, June 1985, pp. 475-481. [7] G. De Micheli, "Synthesis of Control Systems", in Design Systems for VLSI Circuits: Logic Synthesis and Silicon Compilation, G. De Micheli, A. Sangiovanni-Vincentelli, and P. Antognetti (editors), Martinus Nijhoff
VLSI Design Techniques for Floating-Point Computation
1988-11-18
J. C. Gibson, The Gibson Mix, IBM Systems Development Division Tech. Report (June 1970). [Heni83] A. Heninger, The Zilog Z8070 Floating-Point...[Figure 7.2 residue: clock distribution between modules (broadcast clock generator, divide-by-N module, clock communication bus).]
A Coherent VLSI Design Environment.
1985-09-30
deviation were only a few percent. If the number of paths with a delay close to 9 ns were large, even more statistical accuracy would be required to...Zippel, "Capsules", SIGPLAN Bulletin, vol. 18, no. 6, pp. 164-169, 1983. [Figure residue: waveforms; in the bottom window, the currents into the depletion transistors are shown.]
A Fast Turn-Around Facility for Very Large Scale Integration (VLSI)
1982-06-01
statistics determination, the first test mask set will use the MATRIX chip design which was recently developed here at Stanford. This chip provides...reached when the basewidth is reduced to zero. Such devices, variously known as depleted-base transistors or bipolar static-induction transistors, have been
Computer Aided Design of Integrated Circuit Fabrication Processes for VLSI Devices
1980-01-01
diffusion coefficient and surface concentration of the chlorine as well as any field present; X is related to the ratio of the diffusion coefficient to...with polysilicon gate... contacts, the interaction of oxidation, segregation and diffusion in all regions of the simulation space is a critical
VLSI-based video event triggering for image data compression
NASA Astrophysics Data System (ADS)
Williams, Glenn L.
1994-02-01
Long-duration, on-orbit microgravity experiments require a combination of high resolution and high frame rate video data acquisition. The digitized high-rate video stream presents a difficult data storage problem. Data produced at rates of several hundred million bytes per second may require a total mission video data storage requirement exceeding one terabyte. A NASA-designed, VLSI-based, highly parallel digital state machine generates a digital trigger signal at the onset of a video event. High capacity random access memory storage coupled with newly available fuzzy logic devices permits the monitoring of a video image stream for long term (DC-like) or short term (AC-like) changes caused by spatial translation, dilation, appearance, disappearance, or color change in a video object. Pre-trigger and post-trigger storage techniques are then adaptable to archiving only the significant video images.
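A hedged software analogy of the trigger logic described above; the thresholds, the running-average window, and the toy frames are assumptions for illustration, not the NASA state machine's actual design.

    import numpy as np

    def video_trigger(frame, prev_frame, long_avg,
                      ac_thresh=8.0, dc_thresh=4.0, alpha=0.01):
        """Flag short-term (AC-like) and long-term (DC-like) scene changes."""
        ac = np.abs(frame - prev_frame).mean()   # frame-to-frame change
        dc = np.abs(frame - long_avg).mean()     # drift from running average
        long_avg = (1 - alpha) * long_avg + alpha * frame
        return (ac > ac_thresh) or (dc > dc_thresh), long_avg

    # Toy demo: a flat scene, then a bright object appears in the next frame.
    f0 = np.zeros((8, 8)); f1 = np.zeros((8, 8)); f1[2:5, 2:5] = 200.0
    trig, avg = video_trigger(f1, f0, long_avg=np.zeros((8, 8)))
    print(trig)  # -> True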
Asynchronous transfer mode distribution network by use of an optoelectronic VLSI switching chip.
Lentine, A L; Reiley, D J; Novotny, R A; Morrison, R L; Sasian, J M; Beckman, M G; Buchholz, D B; Hinterlong, S J; Cloonan, T J; Richards, G W; McCormick, F B
1997-03-10
We describe a new optoelectronic switching system demonstration that implements part of the distribution fabric for a large asynchronous transfer mode (ATM) switch. The system uses a single optoelectronic VLSI modulator-based switching chip with more than 4000 optical input-outputs. The optical system images the input fibers from a two-dimensional fiber bundle onto this chip. A new optomechanical design allows the system to be mounted in a standard electronic equipment frame. A large section of the switch was operated as a 208-Mbits/s time-multiplexed space switch, which can serve as part of an ATM switch by use of an appropriate out-of-band controller. A larger section with 896 input light beams and 256 output beams was operated at 160 Mbits/s as a slowly reconfigurable space switch.
An Engineering Methodology for Implementing and Testing VLSI (Very Large Scale Integrated) Circuits
1989-03-01
the pad frame and associated routing, conducted additional testing, and submitted the finished design effort to MOSIS for manufacturing. Throughout...register bank; TSTCON allows the XNOR circuitry to enter the TEST register bank; PADIN is a test signal to check operation of the input pad; VCC is the power connection...MOSSIM II simulation program, but the design offered little observability within the circuit. The initial design used 35 pins of a 40 pin pad frame
A parallel VLSI architecture for a digital filter using a number theoretic transform
NASA Technical Reports Server (NTRS)
Truong, T. K.; Reed, I. S.; Yeh, C. S.; Shao, H. M.
1983-01-01
The advantages of a very large scale integration (VLSI) architecture for implementing a digital filter using Fermat number transforms (FNTs) are the following: It requires no multiplication; only additions and bit rotations are needed. It alleviates the usual dynamic range limitation for long-sequence FNTs. It utilizes the FNT and inverse FNT circuits 100% of the time. The lengths of the input data and filter sequences can be arbitrary and different. It is regular, simple, and expandable, and as a consequence suitable for VLSI implementation.
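The multiplication-free property comes from working modulo a Fermat number F = 2^b + 1, where 2^b ≡ -1 (mod F): every transform twiddle factor is a power of two, so multiplying by it reduces to shifts plus a wrap-with-negation. A minimal Python sketch of that step, with parameters chosen for illustration rather than taken from the paper:

    def mul_pow2_mod_fermat(x, k, b):
        """Compute (x * 2**k) mod (2**b + 1) using only shifts and adds.
        Bits shifted past position b wrap around with a sign flip,
        because 2**b is congruent to -1 modulo the Fermat number."""
        F = (1 << b) + 1
        k %= 2 * b                       # 2**(2b) is congruent to 1 mod F
        if k >= b:
            return (F - mul_pow2_mod_fermat(x, k - b, b)) % F
        shifted = x << k
        lo = shifted & ((1 << b) - 1)    # bits that stay below 2**b
        hi = shifted >> b                # overflow bits, each worth -1
        return (lo - hi) % F

    # Check against ordinary modular arithmetic, e.g. b = 4 (F = 17):
    print(mul_pow2_mod_fermat(9, 2, 4), (9 * 4) % 17)   # -> 2 2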
Simplified microprocessor design for VLSI control applications
NASA Technical Reports Server (NTRS)
Cameron, K.
1991-01-01
A design technique for microprocessors combining the simplicity of reduced instruction set computers (RISC's) with the richer instruction sets of complex instruction set computers (CISC's) is presented. They utilize the pipelined instruction decode and datapaths common to RISC's. Instruction invariant data processing sequences which transparently support complex addressing modes permit the formulation of simple control circuitry. Compact implementations are possible since neither complicated controllers nor large register sets are required.
A second generation 50 Mbps VLSI level zero processing system prototype
NASA Technical Reports Server (NTRS)
Harris, Jonathan C.; Shi, Jeff; Speciale, Nick; Bennett, Toby
1994-01-01
Level Zero Processing (LZP) generally refers to telemetry data processing functions performed at ground facilities to remove all communication artifacts from instrument data. These functions typically include frame synchronization, error detection and correction, packet reassembly and sorting, playback reversal, merging, time-ordering, overlap deletion, and production of annotated data sets. The Data Systems Technologies Division (DSTD) at Goddard Space Flight Center (GSFC) has been developing high-performance Very Large Scale Integration Level Zero Processing Systems (VLSI LZPS) since 1989. The first VLSI LZPS prototype demonstrated 20 Megabits per second (Mbps) capability in 1992. With a new generation of high-density Application-specific Integrated Circuits (ASIC) and a Mass Storage System (MSS) based on the High-performance Parallel Peripheral Interface (HiPPI), a second prototype has been built that achieves full 50 Mbps performance. This paper describes the second generation LZPS prototype based upon VLSI technologies.
A CMOS VLSI IC for Real-Time Opto-Electronic Two-Dimensional Histogram Generation
1993-12-01
Subject terms: VLSI (very large scale integration) design; MAGIC; CMOS; optics; image processing. [Front-matter residue: table of contents listing Sun SPARCstation, Magic, Peg; Appendix B, Magic cell layouts; Appendix C, simulation data; finite state machine.]
Exploration and Evaluation of Nanometer Low-power Multi-core VLSI Computer Architectures
2015-03-01
ICC, the Milkyway database was created using the command: milkyway -galaxy -nogui -tcl -log memory.log one.tcl. As stated previously, it is...EDA tools. Typically, Synopsys tools use Milkyway databases, whereas Cadence Design Systems tools use Library Exchange Format (LEF) formats. To help
Optical Interconnections for VLSI Computational Systems Using Computer-Generated Holography.
NASA Astrophysics Data System (ADS)
Feldman, Michael Robert
Optical interconnects for VLSI computational systems using computer generated holograms are evaluated in theory and experiment. It is shown that by replacing particular electronic connections with free-space optical communication paths, connection of devices on a single chip or wafer and between chips or modules can be improved. Optical and electrical interconnects are compared in terms of power dissipation, communication bandwidth, and connection density. Conditions are determined for which optical interconnects are advantageous. Based on this analysis, it is shown that by applying computer generated holographic optical interconnects to wafer scale fine grain parallel processing systems, dramatic increases in system performance can be expected. Some new interconnection networks, designed to take full advantage of optical interconnect technology, have been developed. Experimental Computer Generated Holograms (CGH's) have been designed, fabricated and subsequently tested in prototype optical interconnected computational systems. Several new CGH encoding methods have been developed to provide efficient high performance CGH's. One CGH was used to decrease the access time of a 1 kilobit CMOS RAM chip. Another was produced to implement the inter-processor communication paths in a shared memory SIMD parallel processor array.
Universal programmable logic gate and routing method
NASA Technical Reports Server (NTRS)
Vatan, Farrokh (Inventor); Akarvardar, Kerem (Inventor); Mojarradi, Mohammad M. (Inventor); Fijany, Amir (Inventor); Cristoloveanu, Sorin (Inventor); Kolawa, Elzbieta (Inventor); Blalock, Benjamin (Inventor); Chen, Suheng (Inventor); Toomarian, Nikzad (Inventor)
2009-01-01
A universal and programmable logic gate based on G4-FET technology is disclosed, leading to the design of more efficient logic circuits. A new full adder design based on the G4-FET is also presented. The G4-FET can also function as a unique router device offering coplanar crossing of signal paths that are isolated and perpendicular to one another. This has the potential of overcoming major limitations in VLSI design, where complex interconnection schemes have become increasingly problematic.
NASA Astrophysics Data System (ADS)
Aikawa, Satoru; Nakamura, Yasuhisa; Takanashi, Hitoshi
1994-02-01
This paper describes the performance of an outage-free SDH (Synchronous Digital Hierarchy) interface 256 QAM modem. An outage-free DMR (Digital Microwave Radio) is achieved by a high coding gain trellis-coded SPORT QAM and Super Multicarrier modem. A new frame format and its associated circuits connect the outage-free modem to the SDH interface. The newly designed VLSI's are key devices for developing the modem. As an overall modem performance, BER (bit error rate) characteristics and equipment signatures are presented. A coding gain of 4.7 dB (at a BER of 10(exp -4)) is obtained using SPORT 256 QAM and Viterbi decoding. This coding gain is realized by trellis coding as well as by an increase of the transmission rate. The roll-off factor is decreased to maintain the same frequency occupation and modulation level as an ordinary SDH 256 QAM modem.
Assimilation of Biophysical Neuronal Dynamics in Neuromorphic VLSI.
Wang, Jun; Breen, Daniel; Akinin, Abraham; Broccard, Frederic; Abarbanel, Henry D I; Cauwenberghs, Gert
2017-12-01
Representing the biophysics of neuronal dynamics and behavior offers a principled analysis-by-synthesis approach toward understanding mechanisms of nervous system functions. We report on a set of procedures assimilating and emulating neurobiological data on a neuromorphic very large scale integrated (VLSI) circuit. The analog VLSI chip, NeuroDyn, features 384 digitally programmable parameters specifying 4 generalized Hodgkin-Huxley neurons coupled through 12 conductance-based chemical synapses. The parameters also describe reversal potentials, maximal conductances, and spline regressed kinetic functions for ion channel gating variables. In one set of experiments, we assimilated membrane potential recorded from one of the neurons on the chip to the model structure upon which NeuroDyn was designed using the known current input sequence. We arrived at the programmed parameters except for model errors due to analog imperfections in the chip fabrication. In a related set of experiments, we replicated songbird individual neuron dynamics on NeuroDyn by estimating and configuring parameters extracted using data assimilation from intracellular neural recordings. Faithful emulation of detailed biophysical neural dynamics will enable the use of NeuroDyn as a tool to probe electrical and molecular properties of functional neural circuits. Neuroscience applications include studying the relationship between molecular properties of neurons and the emergence of different spike patterns or different brain behaviors. Clinical applications include studying and predicting effects of neuromodulators or neurodegenerative diseases on ion channel kinetics.
Built-in self-repair of VLSI memories employing neural nets
NASA Astrophysics Data System (ADS)
Mazumder, Pinaki
1998-10-01
The decades of the Eighties and the Nineties witnessed the spectacular growth of VLSI technology, as chip size increased from a few hundred devices to a staggering multi-million transistors. This trend is expected to continue as the CMOS feature size progresses towards the nanometric dimension of 100 nm and less. The SIA roadmap projects that, whereas DRAM chips will integrate over 20 billion devices in the next millennium, future microprocessors may incorporate over 100 million transistors on a single chip. As VLSI chip sizes increase, the limited accessibility of circuit components poses great difficulty for external diagnosis and replacement in the presence of faulty components. For this reason, extensive work has been done on built-in self-test techniques, but little research is known concerning built-in self-repair. Moreover, the extra hardware introduced by conventional fault-tolerance techniques is also likely to become faulty, therefore causing the circuit to be useless. This research demonstrates the feasibility of implementing electronic neural networks as intelligent hardware for memory array repair. Most importantly, we show that the neural network control possesses a robust and degradable computing capability under various fault conditions. Overall, a yield analysis performed on 64K DRAM's shows that the yield can be improved from as low as 20 percent to near 99 percent due to the self-repair design, with overhead of no more than 7 percent.
A Coherent VLSI Design Environment.
1986-03-31
Schema were a CMOS sorter and a TTL PC board for gathering statistics from a Multibus. Neither design was completed using Schema, but at least in the...technique for automatically adjusting signal delays in an MOS system has been developed. The Dynamic Delay Adjustment (DDA) technique provides..."Synchronization Reliability in CMOS Technology," IEEE J. of Solid-State Circuits, Vol. SC-20, No. 4, pp. 880-883, 1985. [8] J. Hohl, W. Larsen and L. Schooley
Fault Model Development for Fault Tolerant VLSI Design
1988-05-01
BRIDGING FAULTS: A bridging fault in a digital circuit connects two or more conducting paths of the circuit. The resistance...Melvin Breuer and Arthur Friedman, "Diagnosis and Reliable Design of Digital Systems", Computer Science Press, Inc., 1976. [Chandramouli, 1983] R...
Fault-Tolerant VLSI Design Assessments for Advanced Avionics Department. Literature Review. Phase 1
1982-02-05
negative sense. Another facet of the literature review is to acquaint the researchers with the immense literature base for electronic technology applicable...Riley, "Special Report: Semiconductor Memories are Tested Over Data-Storage Application", Electronics, vol. 46, August 19...G. Luecke, J. P. Mize and W..."Design and Evaluation of Self-Checking Systems", Report Submitted to the Mathematical and Information Science Division of the Office of Naval
Rapid Assemblers for Voxel-Based VLSI Robotics
2014-02-12
relied on coin-cell batteries with high energy density, but low power density. Each of the actuators presented requires relatively high power...The device consists of a low power DC-DC low-to-high voltage converter operated by 4A cell batteries and an assembler, which is a grid of electrodes...design, simulate and fabricate complex 3D machines, as well as to repair, adapt and recycle existing machines, and to perform rigorous design
LSI/VLSI design for testability analysis and general approach
NASA Technical Reports Server (NTRS)
Lam, A. Y.
1982-01-01
The incorporation of testability characteristics into large scale digital designs is not only necessary for effective device testing but also pertinent to enhancing device reliability. There are at least three major DFT techniques, namely the self-checking, the LSSD, and the partitioning techniques, each of which can be incorporated into a logic design to achieve a specific set of testability and reliability requirements. A detailed analysis of the design theory, implementation, fault coverage, hardware requirements, application limitations, etc., of each of these techniques is also presented.
NASA Technical Reports Server (NTRS)
Kemeny, Sabrina E.
1994-01-01
Electronic and optoelectronic hardware implementations of highly parallel computing architectures address several ill-defined and/or computation-intensive problems not easily solved by conventional computing techniques. The concurrent processing architectures developed are derived from a variety of advanced computing paradigms including neural network models, fuzzy logic, and cellular automata. Hardware implementation technologies range from state-of-the-art digital/analog custom-VLSI to advanced optoelectronic devices such as computer-generated holograms and e-beam fabricated Dammann gratings. JPL's concurrent processing devices group has developed a broad technology base in hardware implementable parallel algorithms, low-power and high-speed VLSI designs and building block VLSI chips, leading to application-specific high-performance embeddable processors. Application areas include high throughput map-data classification using feedforward neural networks, a terrain-based tactical movement planner using cellular automata, resource optimization (weapon-target assignment) using a multidimensional feedback network with lateral inhibition, and classification of rocks using an inner-product scheme on thematic mapper data. In addition to addressing specific functional needs of DOD and NASA, the JPL-developed concurrent processing device technology is also being customized for a variety of commercial applications (in collaboration with industrial partners), and is being transferred to U.S. industries. This viewgraph presentation focuses on two application-specific processors which solve the computation-intensive tasks of resource allocation (weapon-target assignment) and terrain-based tactical movement planning using two extremely different topologies. Resource allocation is implemented as an asynchronous analog competitive assignment architecture inspired by the Hopfield network. Hardware realization leads to a two to four order of magnitude speed-up over conventional techniques and enables multiple assignments (many to many) not achievable with standard statistical approaches. Tactical movement planning (finding the best path from A to B) is accomplished with a digital two-dimensional concurrent processor array. By exploiting the natural parallel decomposition of the problem in silicon, a four order of magnitude speed-up over optimized software approaches has been demonstrated.
Artificial Cochlea Design Using Micro-Electro-Mechanical Systems
1996-12-17
[List-of-figures residue: block diagram of the Kates model; cochlear tuning curves for the Kates model; tuning curve of a cat's cochlea; frequency response curves of the VLSI implementations of the analog cochlea.]
Multiplier Architecture for Coding Circuits
NASA Technical Reports Server (NTRS)
Wang, C. C.; Truong, T. K.; Shao, H. M.; Deutsch, L. J.
1986-01-01
Multipliers based on a new algorithm for Galois-field (GF) arithmetic are regular and expandable. Pipeline structures are used for computing both multiplications and inverses. The designs are suitable for implementation in very-large-scale integrated (VLSI) circuits. This general type of inverter and multiplier architecture is especially useful in performing finite-field arithmetic of Reed-Solomon error-correcting codes and of some cryptographic algorithms.
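As background on the kind of arithmetic such multipliers implement, here is a shift-and-add GF(2^8) multiply in Python; the field polynomial 0x11D is the one commonly used for Reed-Solomon codes and is an assumption for illustration (the paper's own construction is not reproduced here).

    def gf256_mul(a, b, poly=0x11D):
        """Shift-and-add multiplication in GF(2^8); 0x11D encodes the
        irreducible polynomial x^8 + x^4 + x^3 + x^2 + 1."""
        r = 0
        while b:
            if b & 1:
                r ^= a            # add (XOR) the current multiple of a
            b >>= 1
            a <<= 1               # multiply a by x
            if a & 0x100:
                a ^= poly         # reduce modulo the field polynomial
        return r

    def gf256_inv(a):
        """Multiplicative inverse via a^(2^8 - 2) = a^254."""
        r = 1
        for _ in range(254):
            r = gf256_mul(r, a)
        return r

    print(gf256_mul(2, 128))              # -> 29 (x^8 reduced mod poly)
    print(gf256_mul(29, gf256_inv(29)))   # -> 1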
A Circuit Extraction System and Graphical Display for VLSI (Very Large Scale Integrated) Design.
1989-12-01
understandable as a net-list. The file contains information on the different physical layers of a polysilicon chip, not how these layers combine to form... [Garbled C source listing residue omitted.]
ERIC Educational Resources Information Center
Alexander, George
1984-01-01
Discusses small-scale integrated (SSI), medium-scale integrated (MSI), large-scale integrated (LSI), very large-scale integrated (VLSI), and ultra large-scale integrated (ULSI) chips. The development and properties of these chips, uses of gallium arsenide, Josephson devices (two superconducting strips sandwiching a thin insulator), and future…
A VLSI implementation for synthetic aperture radar image processing
NASA Technical Reports Server (NTRS)
Premkumar, A.; Purviance, J.
1990-01-01
A simple physical model for the Synthetic Aperture Radar (SAR) is presented. This model explains the one dimensional and two dimensional nature of the received SAR signal in the range and azimuth directions. A time domain correlator, its algorithm, and features are explained. The correlator is ideally suited for VLSI implementation. A real time SAR architecture using these correlators is proposed. In the proposed architecture, the received SAR data is processed using one dimensional correlators for determining the range while two dimensional correlators are used to determine the azimuth of a target. The architecture uses only three different types of custom VLSI chips and a small amount of memory.
Juswardy, Budi; Xiao, Feng; Alameh, Kamal
2009-03-16
This paper proposes a novel Opto-VLSI-based tunable true-time delay generation unit for adaptively steering the nulls of microwave phased array antennas. Arbitrary single or multiple true-time delays can simultaneously be synthesized for each antenna element by slicing an RF-modulated broadband optical source and routing specific sliced wavebands through an Opto-VLSI processor to a high-dispersion fiber. Experimental results are presented, which demonstrate the principle of the true-time delay unit through the generation of 5 arbitrary true-time delays of up to 2.5 ns each. (c) 2009 Optical Society of America
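The synthesized delay scales with the fiber dispersion and the wavelength offset of the routed waveband. A back-of-the-envelope sketch follows; all numbers are assumptions for illustration, not the experiment's parameters.

    def true_time_delay_ps(D_ps_per_nm_km, L_km, dlambda_nm):
        """Relative delay between two sliced wavebands after traversing
        a dispersive fiber: tau = D * L * delta_lambda."""
        return D_ps_per_nm_km * L_km * dlambda_nm

    # Assumed example: 100 ps/(nm km) of dispersion, 1 km of fiber, and
    # wavebands sliced 25 nm apart give the 2.5 ns scale quoted above.
    print(true_time_delay_ps(100, 1.0, 25))   # -> 2500.0 ps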
Associative Pattern Recognition In Analog VLSI Circuits
NASA Technical Reports Server (NTRS)
Tawel, Raoul
1995-01-01
Winner-take-all circuit selects best-match stored pattern. Prototype cascadable very-large-scale integrated (VLSI) circuit chips built and tested to demonstrate concept of electronic associative pattern recognition. Based on low-power, sub-threshold analog complementary metal oxide semiconductor (CMOS) VLSI circuitry, each chip can store 128 sets (vectors) of 16 analog values (vector components), vectors representing known patterns as diverse as spectra, histograms, graphs, or brightnesses of pixels in images. Chips exploit parallel nature of vector quantization architecture to implement highly parallel processing in relatively simple computational cells. Through collective action, cells classify input pattern in fraction of microsecond while consuming power of few microwatts.
Radiation Tolerant Electronics
NASA Technical Reports Server (NTRS)
1996-01-01
Research work on providing radiation tolerant electronics to NASA and the commercial sector is reported herein. There are four major sections to this report: (1) the special purpose VLSI technology section discusses the status of the VLSI projects as well as the new background technologies that have been developed; (2) the lossless data compression section provides the background and direction of new data compression work pursued under this grant; (3) the commercial technology transfer section presents an itemization of the commercial technology transfer; and (4) the delivery of VLSI to the Government section is a solution and progress report that shows how the Government and Government contractors are gaining access to the technology that has been developed by the MRC.
Cost optimization in low volume VLSI circuits
NASA Technical Reports Server (NTRS)
Cook, K. B., Jr.; Kerns, D. V., Jr.
1982-01-01
The relationship of integrated circuit (IC) cost to electronic system cost is developed using models for integrated circuit cost which are based on the design/fabrication approach. Emphasis is on understanding the relationship between cost and volume for custom circuits suitable for NASA applications. In this report, reliability is a major consideration in the models developed. Results are given for several typical IC designs using off-the-shelf, full custom, and semicustom IC's with single and double level metallization.
1981-03-31
is included in this design. These data lines, which are bi-directional, serve a multipurpose role for control and testing. When used as input data... [Table-of-contents residue: Group the Edges of a Picture Using a Local Criterion, Gerard G. Medioni; Edge Detection in Aerial Images Using ∇²G(x,y), A. Huertas and R...] ...the nature of VLSI systems, in which interconnections are difficult to implement. More recently, the convolution problem has led to a detailed design
46 CFR 61.40-3 - Design verification testing.
Code of Federal Regulations, 2011 CFR
2011-10-01
46 CFR 61.40-3 - Design verification testing (Title 46, Shipping; Design Verification and Periodic Testing of Vital System Automation): (a) Tests must verify that automated vital systems are designed, constructed, and operate in...
An efficient interpolation filter VLSI architecture for HEVC standard
NASA Astrophysics Data System (ADS)
Zhou, Wei; Zhou, Xin; Lian, Xiaocong; Liu, Zhenyu; Liu, Xiaoxiang
2015-12-01
The next-generation video coding standard of High-Efficiency Video Coding (HEVC) is especially efficient for coding high-resolution video such as 8K ultra-high-definition (UHD) video. Fractional motion estimation in HEVC presents a significant challenge in clock latency and area cost, as it consumes more than 40% of the total encoding time and thus results in high computational complexity. With the aim of supporting 8K-UHD video applications, an efficient interpolation filter VLSI architecture for HEVC is proposed in this paper. First, a new interpolation filter algorithm based on an 8-pixel interpolation unit is proposed. It can save 19.7% processing time on average with acceptable coding quality degradation. Based on the proposed algorithm, an efficient interpolation filter VLSI architecture, composed of a reused interpolation data path, an efficient memory organization, and a reconfigurable pipeline interpolation filter engine, is presented to reduce the hardware implementation area and achieve high throughput. The final VLSI implementation requires only 37.2k gates in a standard 90-nm CMOS technology at an operating frequency of 240 MHz. The proposed architecture can be reused for either half-pixel interpolation or quarter-pixel interpolation, which reduces the area cost by about 131,040 bits of RAM. The processing latency of the proposed VLSI architecture can support the real-time processing of 4:2:0 format 7680 × 4320@78fps video sequences.
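For context, the kind of filtering such an engine implements is a fixed-tap FIR over neighboring integer pixels. The sketch below uses what I understand to be HEVC's 8-tap half-pel luma coefficients, applied with a simple final rounding; the standard's multi-stage bit-depth handling is omitted, so treat this as an illustration rather than a conformant implementation.

    def hevc_half_pel(row, i):
        """Half-pixel luma sample between row[i] and row[i+1] using the
        HEVC 8-tap filter; the taps sum to 64, hence the shift by 6."""
        taps = (-1, 4, -11, 40, 40, -11, 4, -1)
        acc = sum(t * row[i - 3 + k] for k, t in enumerate(taps))
        return (acc + 32) >> 6   # round and normalize

    # On a flat signal the interpolated value reproduces the input.
    print(hevc_half_pel([100] * 16, 8))   # -> 100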
2010-03-01
Report period: October 2008 - October 2009. Title: Performance and Power Optimization for Cognitive Processor Design Using... [Front-matter residue: Cognitive Models and Algorithms for Intelligent Text Recognition; Brain-State-in-a-Box Neural Network Model; the ASIC-style design and synthesis flow for the FPU; screen shots of the final layouts; projected performance and power roadmap.]
Alpha particle-induced soft errors in microelectronic devices. I
NASA Astrophysics Data System (ADS)
Redman, D. J.; Sega, R. M.; Joseph, R.
1980-03-01
The article provides a tutorial review and trend assessment of the problem of alpha particle-induced soft errors in VLSI memories. Attention is given to an analysis of the design evolution of modern ICs, and the characteristics of alpha particles and their origin in IC packaging are reviewed. Finally, the process of an alpha particle penetrating an IC is examined.
Convolving optically addressed VLSI liquid crystal SLM
NASA Astrophysics Data System (ADS)
Jared, David A.; Stirk, Charles W.
1994-03-01
We designed, fabricated, and tested an optically addressed spatial light modulator (SLM) that performs a 3 X 3 kernel image convolution using ferroelectric liquid crystal on VLSI technology. The chip contains a 16 X 16 array of current-mirror-based convolvers with a fixed kernel for finding edges. The pixels are located on 75 micron centers, and the modulators are 20 microns on a side. The array successfully enhanced edges in illumination patterns. We developed a high-level simulation tool (CON) for analyzing the performance of convolving SLM designs. CON has a graphical interface and simulates SLM functions using SPICE-like device models. The user specifies the pixel function along with the device parameters and nonuniformities. We discovered through analysis, simulation and experiment that the operation of current-mirror-based convolver pixels is degraded at low light levels by the variation of transistor threshold voltages inherent to CMOS chips. To function acceptably, the test SLM required the input image to have a minimum irradiance of 10 μW/cm2. The minimum required irradiance can be further reduced by adding a photodarlington near the photodetector or by increasing the size of the transistors used to calculate the convolution.
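The fixed-kernel operation itself is ordinary 3 x 3 convolution. A minimal sketch follows, with a Laplacian-style edge kernel assumed for illustration; the chip's actual kernel coefficients are not given here.

    import numpy as np

    EDGE_KERNEL = np.array([[-1, -1, -1],
                            [-1,  8, -1],
                            [-1, -1, -1]])

    def convolve3x3(img, kernel=EDGE_KERNEL):
        """Valid-region 3x3 convolution, one output per interior pixel."""
        h, w = img.shape
        out = np.zeros((h - 2, w - 2))
        for y in range(h - 2):
            for x in range(w - 2):
                out[y, x] = (img[y:y+3, x:x+3] * kernel).sum()
        return out

    # A uniform region yields zero response; edges produce nonzero output.
    img = np.zeros((8, 8)); img[:, 4:] = 1.0
    print(convolve3x3(img))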
High performance genetic algorithm for VLSI circuit partitioning
NASA Astrophysics Data System (ADS)
Dinu, Simona
2016-12-01
Partitioning is one of the biggest challenges in computer-aided design for VLSI circuits (very large-scale integrated circuits). This work addresses the min-cut balanced circuit partitioning problem: dividing the graph that models the circuit into k almost equal-sized sub-graphs while minimizing the number of edges cut, i.e., the number of edges connecting the sub-graphs. The problem may be formulated as a combinatorial optimization problem. Studies in the literature have shown the problem to be NP-hard, and thus it is important to design an efficient heuristic algorithm to solve it. The approach proposed in this study is a parallel implementation of a genetic algorithm, namely an island model. The information exchange between the evolving subpopulations is modeled using a fuzzy controller, which determines an optimal balance between exploration and exploitation of the solution space. The results of simulations show that the proposed algorithm outperforms the standard sequential genetic algorithm both in terms of solution quality and convergence speed. As a direction for future study, this research can be further extended to incorporate local search operators which should include problem-specific knowledge. In addition, the adaptive configuration of mutation and crossover rates is another guidance for future research.
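A minimal single-population sketch of a genetic algorithm for balanced min-cut bipartitioning (k = 2); the representation, operators, and penalty weight are illustrative assumptions and do not reproduce the paper's island model or fuzzy controller.

    import random

    def cut_size(assign, edges):
        """Number of edges whose endpoints land in different partitions."""
        return sum(1 for u, v in edges if assign[u] != assign[v])

    def fitness(assign, edges, alpha=2.0):
        """Cut size plus a penalty for imbalance between the two sides."""
        imbalance = abs(sum(assign) * 2 - len(assign))
        return cut_size(assign, edges) + alpha * imbalance

    def ga_bipartition(n, edges, pop=40, gens=200, pmut=0.05):
        popn = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop)]
        for _ in range(gens):
            popn.sort(key=lambda a: fitness(a, edges))
            parents = popn[:pop // 2]                 # truncation selection
            children = []
            while len(children) < pop - len(parents):
                a, b = random.sample(parents, 2)
                cx = random.randrange(1, n)           # one-point crossover
                child = a[:cx] + b[cx:]
                child = [g ^ (random.random() < pmut) for g in child]
                children.append(child)
            popn = parents + children
        return min(popn, key=lambda a: fitness(a, edges))

    # Toy netlist: two 4-node cliques joined by a single edge.
    edges = [(0,1),(0,2),(0,3),(1,2),(1,3),(2,3),
             (4,5),(4,6),(4,7),(5,6),(5,7),(6,7),(3,4)]
    best = ga_bipartition(8, edges)
    print(best, cut_size(best, edges))   # ideally a 4/4 split with cut 1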
Kirk, Andrew G; Plant, David V; Szymanski, Ted H; Vranesic, Zvonko G; Tooley, Frank A P; Rolston, David R; Ayliffe, Michael H; Lacroix, Frederic K; Robertson, Brian; Bernier, Eric; Brosseau, Daniel F
2003-05-10
Design and implementation of a free-space optical backplane for multiprocessor applications is presented. The system is designed to interconnect four multiprocessor nodes that communicate by using multiplexed 32-bit packets. Each multiprocessor node is electrically connected to an optoelectronic VLSI chip which implements the hyperplane interconnection architecture. The chips each contain 256 optical transmitters (implemented as dual-rail multiple quantum-well modulators) and 256 optical receivers. A rigid free-space microoptical interconnection system that interconnects the transceiver chips in a 512-channel unidirectional ring is implemented. Full design, implementation, and operational details are provided.
Emerging Applications for High K Materials in VLSI Technology
Clark, Robert D.
2014-01-01
The current status of High K dielectrics in Very Large Scale Integrated circuit (VLSI) manufacturing for leading edge Dynamic Random Access Memory (DRAM) and Complementary Metal Oxide Semiconductor (CMOS) applications is summarized along with the deposition methods and general equipment types employed. Emerging applications for High K dielectrics in future CMOS are described as well for implementations in 10 nm and beyond nodes. Additional emerging applications for High K dielectrics include Resistive RAM memories, Metal-Insulator-Metal (MIM) diodes, Ferroelectric logic and memory devices, and as mask layers for patterning. Atomic Layer Deposition (ALD) is a common and proven deposition method for all of the applications discussed for use in future VLSI manufacturing. PMID:28788599
VLSI for High-Speed Digital Signal Processing
1994-09-30
particular, the design, layout and fabrication of integrated circuits. The primary project for this grant has been the design and implementation of a...targeted at 33.36 dB, and the FRSBC algorithm, targeted at 0.5 bits/pixel, respectively. [Table residue: PSNR (dB) and rate (bpp) for the FDSBC and FRSBC filters.] The filter...as shown in Fig. 6, is used, yielding a total of 16 subbands. The rates, in bits per pixel (bpp), and the peak signal
Design and Evaluation of Fault-Tolerant VLSI/WSI Processor Arrays.
1987-12-31
studies reported in this paper. In Section 3, the reliability characteristics of single-level FTPA's are discussed. Four different types of FTPA's are...for processor arrays are proposed and studied. Studies on algorithmic and software aspects relevant to systems are reported in items 4, 5, 8, 12 and...O'Keefe M., and Fortes, J. A. B., "A Comparative Study of Two Systematic Design Methodologies for Systolic Arrays," (Long Version) International Workshop on
Information storage at the molecular level - The design of a molecular shift register memory
NASA Technical Reports Server (NTRS)
Beratan, David N.; Onuchic, Jose Nelson; Hopfield, J. J.
1989-01-01
The control of electron transfer rates is discussed and the design of a molecular shift register memory is described. The memory elements are made up of molecules which can exist in either an oxidized or a reduced state, and the bits can be shifted between the cells with photoinduced electron transfer reactions. The device integrates designed molecules onto a VLSI substrate. A control structure to modify the flow of information along a shift register is indicated schematically.
VLSI chip-set for data compression using the Rice algorithm
NASA Technical Reports Server (NTRS)
Venbrux, J.; Liu, N.
1990-01-01
A full custom VLSI implementation of a data compression encoder and decoder which implement the lossless Rice data compression algorithm is discussed in this paper. The encoder and decoder each reside on a single chip. The target data rates are 5 and 10 mega-samples per second for the decoder and encoder, respectively.
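As background on the algorithm family (this shows the core Golomb-Rice split only, not the chip set's full adaptive scheme), a value n is encoded with parameter k as a unary quotient plus a k-bit binary remainder:

    def rice_encode(n, k):
        """Golomb-Rice code: unary quotient, '0' terminator, k-bit remainder."""
        q, r = n >> k, n & ((1 << k) - 1)
        return '1' * q + '0' + format(r, '0{}b'.format(k))

    def rice_decode(bits, k):
        """Invert rice_encode for a single codeword."""
        q = bits.index('0')
        r = int(bits[q + 1:q + 1 + k], 2)
        return (q << k) | r

    print(rice_encode(19, 3))         # -> '110011' (q = 2, r = 3)
    print(rice_decode('110011', 3))   # -> 19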
An Interactive Multimedia Learning Environment for VLSI Built with COSMOS
ERIC Educational Resources Information Center
Angelides, Marios C.; Agius, Harry W.
2002-01-01
This paper presents Bigger Bits, an interactive multimedia learning environment that teaches students about VLSI within the context of computer electronics. The system was built with COSMOS (Content Oriented semantic Modelling Overlay Scheme), which is a modelling scheme that we developed for enabling the semantic content of multimedia to be used…
Self-checking self-repairing computer nodes using the mirror processor
NASA Technical Reports Server (NTRS)
Tamir, Yuval
1992-01-01
Circuitry added to fault-tolerant systems for concurrent error detection usually reduces performance. Using a technique called micro rollback, it is possible to eliminate most of the performance penalty of concurrent error detection. Error detection is performed in parallel with intermodule communication, and erroneous state changes are later undone. The author reports on the design and implementation of a VLSI RISC microprocessor, called the Mirror Processor (MP), which is capable of micro rollback. In order to achieve concurrent error detection, two MP chips operate in lockstep, comparing external signals and a signature of internal signals every clock cycle. If a mismatch is detected, both processors roll back to the beginning of the cycle in which the error occurred. In some cases the erroneous state is corrected by copying a value from the fault-free processor to the faulty processor. The architecture, microarchitecture, and VLSI implementation of the MP, emphasizing its error-detection, error-recovery, and self-diagnosis capabilities, are described.
Chip level modeling of LSI devices
NASA Technical Reports Server (NTRS)
Armstrong, J. R.
1984-01-01
The advent of Very Large Scale Integration (VLSI) technology has rendered the gate level model impractical for many simulation activities critical to the design automation process. As an alternative, an approach to the modeling of VLSI devices at the chip level is described, including the specification of modeling language constructs important to the modeling process. A model structure is presented in which models of the LSI devices are constructed as single entities. The modeling structure is two layered. The functional layer in this structure is used to model the input/output response of the LSI chip. A second layer, the fault mapping layer, is added, if fault simulations are required, in order to map the effects of hardware faults onto the functional layer. Modeling examples for each layer are presented. Fault modeling at the chip level is described. Approaches to realistic functional fault selection and defining fault coverage for functional faults are given. Application of the modeling techniques to single chip and bit slice microprocessors is discussed.
Routing channels in VLSI layout
NASA Astrophysics Data System (ADS)
Cai, Hong
A number of algorithms for the automatic routing of interconnections in Very Large Scale Integration (VLSI) building-block layouts are presented. Algorithms for the topological definition of channels, the global routing and the geometrical definition of channels are presented. In contrast to traditional approaches the definition and ordering of the channels is done after the global routing. This approach has the advantage that global routing information can be taken into account to select the optimal channel structure. A polynomial algorithm for the channel definition and ordering problem is presented. The existence of a conflict-free channel structure is guaranteed by enforcing a sliceable placement. Algorithms for finding the shortest connection path are described. A separate algorithm is developed for the power net routing, because the two power nets must be planarly routed with variable wire width. An integrated placement and routing system for generating building-block layout is briefly described. Some experimental results and design experiences in using the system are also presented. Very good results are obtained.
Design for Verification: Enabling Verification of High Dependability Software-Intensive Systems
NASA Technical Reports Server (NTRS)
Mehlitz, Peter C.; Penix, John; Markosian, Lawrence Z.; Koga, Dennis (Technical Monitor)
2003-01-01
Strategies to achieve confidence that high-dependability applications are correctly implemented include testing and automated verification. Testing deals mainly with a limited number of expected execution paths. Verification usually attempts to deal with a larger number of possible execution paths. While the impact of architecture design on testing is well known, its impact on most verification methods is not as well understood. The Design for Verification approach considers verification from the application development perspective, in which system architecture is designed explicitly according to the application's key properties. The D4V-hypothesis is that the same general architecture and design principles that lead to good modularity, extensibility and complexity/functionality ratio can be adapted to overcome some of the constraints on verification tools, such as the production of hand-crafted models and the limits on dynamic and static analysis caused by state space explosion.
Low-Power Differential SRAM design for SOC Based on the 25-um Technology
NASA Astrophysics Data System (ADS)
Godugunuri, Sivaprasad; Dara, Naveen; Sambasiva Nayak, R.; Nayeemuddin, Md; Singh, Yadu, Dr.; Veda, R. N. S. Sunil
2017-08-01
SOC designs are now among the most complex designs in VLSI, and they raise significant low-power operation issues; to address this, we implemented a low-power SRAM. The SRAM architecture critically affects the total power and the competing area of an SOC. To overcome the above drawbacks, this paper proposes a low-power differential SRAM design. The differential SRAM design stores multiple bits within the same cell and operates at minimum operating voltage and minimum area per bit. The differential SRAM was designed in the 25-um technology using the Tanner EDA tool.
Adaptive Optoelectronic Eyes: Hybrid Sensor/Processor Architectures
2006-11-13
corresponding calculated data. The width of the mirror stopband is proportional to the refractive index difference between the high and low index materials... Silicon VLSI Neuron Unit Arrays; Development of a Single-Sided Flip-Chip Bonding Process; Development of High Refractive Index Diffractive Optical Elements (DOEs); Development of High-Performance Antireflection Coatings for High Refractive Index DOEs; Design and Fabrication of Low Threshold
A Modular Mixed Signal VLSI Design Approach for Digital Radar Applications
2007-03-01
convenience, denote $e^{-j2\pi/N}$ by $W_N$, so equation (2.2) becomes $X(k) = \sum_{n=0}^{N-1} x(n)\,W_N^{kn}$, $k = 0, 1, 2, \ldots, N-1$ (2.3), which can be expanded into... Speech, and Signal Processing (ICASSP-94), 1994 IEEE International Conference on, vol. 3, 1994. 18. Soliman, Samir S. and Mandyam D. Srinath
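Since only fragments of the derivation survive above, a direct transcription of equation (2.3) may help. The brute-force O(N²) sketch below evaluates the DFT exactly as written (illustrative only, not the thesis's hardware datapath):

```python
import cmath

def dft(x):
    """X(k) = sum_{n=0}^{N-1} x(n) * W_N^{kn}, with W_N = exp(-j*2*pi/N)."""
    N = len(x)
    W = cmath.exp(-2j * cmath.pi / N)
    return [sum(x[n] * W ** (k * n) for n in range(N)) for k in range(N)]

X = dft([1, 0, 0, 0])   # unit impulse -> flat spectrum [1, 1, 1, 1]
```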
ERIC Educational Resources Information Center
Bayoumi, Magdy
As part of a 3-year study to identify emerging issues and trends in technology for special education, this paper addresses the implications of very large scale integrated (VLSI) technology. The first section reviews the development of educational technology, particularly microelectronics technology, from the 1950s to the present. The implications…
A Knowledge Based Approach to VLSI CAD
1983-09-01
A Knowledge Based Approach to VLSI CAD, Louis I. Steinberg and... major issues lies in building up and managing the knowledge base of design expertise. We expect that, as with many recent expert systems, in order to
An optimal adder-based hardware architecture for the DCT/SA-DCT
NASA Astrophysics Data System (ADS)
Kinane, Andrew; Muresan, Valentin; O'Connor, Noel
2005-07-01
The explosive growth of the mobile multimedia industry has accentuated the need for efficient VLSI implementations of the associated computationally demanding signal processing algorithms. This need becomes greater as end-users demand increasingly enhanced features and more advanced underpinning video analysis. One such feature is object-based video processing as supported by the MPEG-4 core profile, which allows content-based interactivity. MPEG-4 has many computationally demanding underlying algorithms, an example of which is the Shape Adaptive Discrete Cosine Transform (SA-DCT). The dynamic nature of the SA-DCT processing steps poses significant VLSI implementation challenges, and many of the previously proposed approaches use area- and power-consumptive multipliers. Most also ignore the subtleties of the packing steps and the manipulation of the shape information. We propose a new multiplier-less serial datapath based solely on adders and multiplexers to improve area and power. The adder cost is minimised by employing resource re-use methods. The number of (physical) adders used has been derived using a common sub-expression elimination algorithm. Additional energy efficiency is factored into the design by employing guarded evaluation and local clock gating. Our design implements the SA-DCT packing with minimal switching, using efficient addressing logic with a transpose memory RAM. The entire design has been synthesized using TSMC 0.09µm TCBN90LP technology, yielding a gate count of 12028 for the datapath and its control logic.
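The multiplier-less principle is easy to illustrate: each constant multiplication is decomposed into shifts and adds, and common sub-expression elimination shares intermediate sums between coefficients. The constant below is purely illustrative, not an actual SA-DCT coefficient:

```python
def times_45(x: int) -> int:
    """45*x using shifts and adds only: 45 = 0b101101.

    Sharing the sub-expression t = 5x (= x + 4x) turns the naive
    three-adder sum x + 4x + 8x + 32x into two adders: t + 8t."""
    t = x + (x << 2)      # t = 5x  (common sub-expression)
    return t + (t << 3)   # 5x + 40x = 45x

assert times_45(7) == 315
```

In hardware each `+` is a physical adder and each shift is free wiring, which is where the area and power savings come from.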
Grain-size considerations for optoelectronic multistage interconnection networks.
Krishnamoorthy, A V; Marchand, P J; Kiamilev, F E; Esener, S C
1992-09-10
This paper investigates, at the system level, the performance-cost trade-off between optical and electronic interconnects in an optoelectronic interconnection network. The specific system considered is a packet-switched, free-space optoelectronic shuffle-exchange multistage interconnection network (MIN). System bandwidth is used as the performance measure, while system area, system power, and system volume constitute the cost measures. A detailed design and analysis of a two-dimensional (2-D) optoelectronic shuffle-exchange routing network with variable grain size K is presented. The architecture permits the conventional 2 x 2 switches or grains to be generalized to larger K x K grain sizes by replacing optical interconnects with electronic wires without affecting the functionality of the system. Thus the system consists of log_K N optoelectronic stages interconnected with free-space K-shuffles. When K = N, the MIN consists of a single electronic stage with optical input-output. The system design uses an efficient 2-D VLSI layout and a single diffractive optical element between stages to provide the 2-D K-shuffle interconnection. Results indicate that there is an optimum range of grain sizes that provides the best performance per cost. For the specific VLSI/GaAs multiple quantum well technology and system architecture considered, grain sizes larger than 256 x 256 result in a reduced performance, while grain sizes smaller than 16 x 16 have a high cost. For a network with 4096 channels, the useful range of grain sizes corresponds to approximately 250-400 electronic transistors per optical input-output channel. The effect of varying certain technology parameters such as the number of hologram phase levels, the modulator driving voltage, the minimum detectable power, and VLSI minimum feature size on the optimum grain-size system is studied. For instance, results show that using four phase levels for the interconnection hologram is a good compromise for the cost functions mentioned above. As VLSI minimum feature sizes decrease, the optimum grain size increases, whereas, if optical interconnect performance in terms of the detector power or modulator driving voltage requirements improves, the optimum grain size may be reduced. Finally, several architectural modifications to the system, such as K x K contention-free switches and sorting networks, are investigated and optimized for grain size. Results indicate that system bandwidth can be increased, but at the price of reduced performance/cost. The optoelectronic MIN architectures considered thus provide a broad range of performance/cost alternatives and offer a superior performance over purely electronic MIN's.
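The stage count quoted above is just log_K N, so the configurations discussed can be checked with a few lines (an illustrative helper, not part of the paper):

```python
import math

def stages(N: int, K: int) -> int:
    """Number of optoelectronic stages, log_K(N), for an N-channel MIN of KxK grains."""
    s = round(math.log(N, K))
    assert K ** s == N, "K must tile the network into whole stages"
    return s

for K in (2, 16, 64, 4096):
    print(K, stages(4096, K))   # 2 -> 12, 16 -> 3, 64 -> 2, 4096 -> 1 (single electronic stage)
```

The K = N case reproduces the paper's limiting configuration of one all-electronic stage with optical input-output.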
NASA Technical Reports Server (NTRS)
Landano, M. R.; Easter, R. W.
1984-01-01
Aspects of Space Station automated systems testing and verification are discussed, taking into account several program requirements. It is found that these requirements lead to a number of issues and uncertainties which require study and resolution during the Space Station definition phase. Most, if not all, of the considered uncertainties have implications for the overall testing and verification strategy adopted by the Space Station Program. A description is given of the Galileo Orbiter fault protection design/verification approach. Attention is given to a mission description, an Orbiter description, the design approach and process, the fault protection design verification approach/process, and problems of 'stress' testing.
NASA Technical Reports Server (NTRS)
1989-01-01
The design and verification requirements are defined which are appropriate to hardware at the detail, subassembly, component, and engine levels, and these requirements are correlated to the development demonstrations which provide verification that design objectives are achieved. The high pressure fuel turbopump requirements verification matrix provides correlation between design requirements and the tests required to verify that the requirements have been met.
A novel visual hardware behavioral language
NASA Technical Reports Server (NTRS)
Li, Xueqin; Cheng, H. D.
1992-01-01
Most hardware behavioral languages use only text to describe the behavior of the desired hardware design. This is inconvenient for VLSI designers who prefer the schematic approach. The proposed visual hardware behavioral language has the ability to graphically express design information using visual parallel models (blocks), visual sequential models (processes) and visual data flow graphs (which consist of primitive operational icons, control icons, and Data and Synchro links). Thus, the proposed visual hardware behavioral language can not only specify hardware concurrent and sequential functionality, but can also visually expose parallelism, sequentiality, and disjointness (mutually exclusive operations) for the hardware designers. This allows hardware designers to capture design ideas easily and explicitly.
Module generation for self-testing integrated systems
NASA Astrophysics Data System (ADS)
Vanriessen, Ronald Pieter
Hardware used for self test in VLSI (Very Large Scale Integrated) systems is reviewed, and an architecture to control the test hardware in an integrated system is presented. Because of the increase in test times, the use of self test techniques has become practically and economically viable for VLSI systems. Besides the reduction in test times and costs, self test also provides testing at operational speeds. Therefore, a suitable combination of scan path and macro-specific (self) tests is required to reduce test times and costs. An expert system that can be used in a silicon compilation environment is presented. The approach requires a minimum of testability knowledge from a system designer. A user-friendly interface is described for specifying and modifying testability requirements by a testability expert. A reason-directed backtracking mechanism is used to solve selection failures. Both the hierarchical testable architecture and the design-for-testability expert system are used in a self test compiler. A self test compiler is a software tool that selects an appropriate test method for every macro in a design; the hardware to control a macro test is included in the design automatically. As an example, the integration of the self test compiler in the silicon compilation system PIRAMID is described, along with the design of a demonstrator circuit by the self test compiler. This circuit consists of two self-testable macros. Control of the self test hardware is carried out via the test access port of the boundary scan standard.
Systolic VLSI Reed-Solomon Decoder
NASA Technical Reports Server (NTRS)
Shao, H. M.; Truong, T. K.; Deutsch, L. J.; Yuen, J. H.
1986-01-01
Decoder for digital communications provides high-speed, pipelined Reed-Solomon (RS) error-correction decoding of data streams. Principal new feature of proposed decoder is modification of Euclid greatest-common-divisor algorithm to avoid need for time-consuming computations of inverses of certain Galois-field quantities. Decoder architecture suitable for implementation on very-large-scale integrated (VLSI) chips with negative-channel metal-oxide/silicon circuitry.
Fault Tolerance for VLSI Multicomputers
1985-08-01
that consists of hundreds or thousands of VLSI computation nodes interconnected by dedicated links. Some important applications of high-end computers... technology, and intended applications. A proposed fault tolerance scheme combines hardware that performs error detection and system-level protocols for... order to recover from the error and resume correct operation, a valid system state must be restored. A low-overhead, application-transparent error
VLSI Microsystem for Rapid Bioinformatic Pattern Recognition
NASA Technical Reports Server (NTRS)
Fang, Wai-Chi; Lue, Jaw-Chyng
2009-01-01
A system comprising very-large-scale integrated (VLSI) circuits is being developed as a means of bioinformatics-oriented analysis and recognition of patterns of fluorescence generated in a microarray in an advanced, highly miniaturized, portable genetic-expression-assay instrument. Such an instrument implements an on-chip combination of polymerase chain reactions and electrochemical transduction for amplification and detection of deoxyribonucleic acid (DNA).
A novel configurable VLSI architecture design of window-based image processing method
NASA Astrophysics Data System (ADS)
Zhao, Hui; Sang, Hongshi; Shen, Xubang
2018-03-01
Most window-based image processing architectures can only implement a specific kind of algorithm, such as 2D convolution, and therefore lack flexibility and breadth of application. In addition, improper handling of the image boundary can cause loss of accuracy or consume more logic resources. To address these problems, this paper proposes a new VLSI architecture for window-based image processing operations which is configurable and handles the image boundary properly. An efficient technique is explored to manage the image borders by overlapping and flushing phases at the end of each row and the end of each frame, which introduces no new delay and reduces overhead in real-time applications. Reuse of on-chip memory data is maximized in order to reduce hardware complexity and external bandwidth requirements. Different scalar-function and reduction-function operations can be performed in the pipeline, supporting a variety of window-based image processing applications. The performance of the new structure is similar to some reported structures and superior to others; in particular, compared with the systolic array processor CWP, this structure achieves a speed increase of approximately 12.9% at the same frequency. The proposed parallel VLSI architecture was implemented in SMIC 0.18-μm CMOS technology; the maximum clock frequency, power consumption, and area are 125 MHz, 57 mW, and 104.8K gates, respectively, and the processing time is independent of the different window-based algorithms mapped to the structure.
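As background, a window-based operator computes each output pixel as a scalar or reduction function of a small input neighborhood, and the border policy decides what happens where the window overhangs the frame. A generic software sketch follows (edge replication is one common border policy, not necessarily the paper's overlap-and-flush scheme):

```python
import numpy as np

def window_op(img, reduce_fn, size=3):
    """Apply a window-based operator: out[y, x] = reduce_fn of the size x size
    neighborhood of img around (y, x), with edge-replicated borders."""
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")   # border handling: replicate edge pixels
    out = np.empty_like(img, dtype=float)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            out[y, x] = reduce_fn(padded[y : y + size, x : x + size])
    return out

blur = window_op(np.arange(25.0).reshape(5, 5), np.mean)   # 3x3 mean filter
edge = window_op(np.arange(25.0).reshape(5, 5), np.ptp)    # 3x3 max-min (reduction)
```

Swapping `reduce_fn` is the software analogue of the configurability the architecture provides in hardware.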
Study of techniques for redundancy verification without disrupting systems, phases 1-3
NASA Technical Reports Server (NTRS)
1970-01-01
The problem of verifying the operational integrity of redundant equipment and the impact of a requirement for verification on such equipment are considered. Redundant circuits are examined and the characteristics which determine adaptability to verification are identified. Mutually exclusive and exhaustive categories for verification approaches are established. The range of applicability of these techniques is defined in terms of signal characteristics and redundancy features. Verification approaches are discussed and a methodology for the design of redundancy verification is developed. A case study is presented which involves the design of a verification system for a hypothetical communications system. Design criteria for redundant equipment are presented. Recommendations for the development of technological areas pertinent to the goal of increased verification capabilities are given.
Property-driven functional verification technique for high-speed vision system-on-chip processor
NASA Astrophysics Data System (ADS)
Nshunguyimfura, Victor; Yang, Jie; Liu, Liyuan; Wu, Nanjian
2017-04-01
The implementation of functional verification in a fast, reliable, and effective manner is a challenging task in a vision chip verification process. The main reason for this challenge is the stepwise nature of existing functional verification techniques. The complexity of vision chip verification is also related to the fact that in most vision chip design cycles, extensive effort is focused on optimizing chip metrics such as performance, power, and area, while design functional verification is not explicitly considered at the earlier stages, where the soundest decisions are made. In this paper, we propose a semi-automatic property-driven verification technique in which the implementation of all verification components is based on design properties. We introduce a low-dimension property space between the specification space and the implementation space. The aim of this technique is to speed up the verification process for high-performance parallel processing vision chips. Our experimental results show that the proposed technique can reduce the verification effort by up to 20% for a complex vision chip design while also reducing the simulation and debugging overheads.
Fault-Tolerant Sequencer Using FPGA-Based Logic Designs for Space Applications
2013-12-01
Prototype Board; SBU, single bit upset; SDK, software development kit; SDRAM, synchronous dynamic random-access memory; SEB, single-event burnout... VHDL, VHSIC hardware description language; VHSIC, very-high-speed integrated circuits; VLSI, very-large-scale integration; VQFP, very... transient pulse, called a single-event transient (SET), or even cause permanent damage to the device in the form of a burnout or gate rupture. The SEE
A Design Methodology for Optoelectronic VLSI
2007-01-01
current gets converted to a CMOS voltage level through a transimpedance amplifier circuit called a receiver. The output of the receiver is then... change the current flowing from the diode to a voltage that the logic inputs can use. That circuit is called a receiver; it is a transimpedance amplifier... incorporate random access memory circuits, SRAM or dynamic RAM (DRAM). These circuits use weak internal analog signals that are amplified by sense
1984-01-01
software, or using a local area network of personal computers. Since the hardware is not designed around the algorithms, improvements in the sequential... Hall, 1982. [4] Robinson, R.W., and N.C. Wormald, Numbers of Cubic Graphs, Journal... [5] Lane, T., Carnegie-Mellon University, personal communication, 1/31... "Amdahl's constant." It is not entirely clear why commercial machines have stayed close to this value, but market forces appear to have played an
A Coherent VLSI Design Environment
1987-03-31
experimentally on realistic problems. In the area of parallel algorithms and architectures, Prof. Leighton and Bruce Maggs are developing efficient... performance penalty. The flexibility is particularly important in an experimental machine. For example, we can redefine system messages such as 'SEND' or... Theorem -- What It Says, Why It's True, and Some of the Things It Predicts," Department of Computer Science, California Institute of Technology
NASA Astrophysics Data System (ADS)
Brodersen, R. W.
1984-04-01
A scaled version of the RISC II chip has been fabricated and tested, and these new chips have a cycle time that would outperform a VAX 11/780 by about a factor of two on compiled integer C programs. The architectural work on a RISC chip designed for a Smalltalk implementation has been completed. This chip, called SOAR (Smalltalk On a RISC), should run programs 4-15 times faster than the Xerox 1100 (Dolphin), a TTL minicomputer, and about as fast as the Xerox 1132 (Dorado), a $100,000 ECL minicomputer. The 1983 VLSI tools tape has been converted for use under the latest UNIX release (4.2). The Magic (formerly called Caddy) layout system will be a unified set of highly automated tools that cover all aspects of the layout process, including stretching, compaction, tiling and routing. A multiple window package and design rule checker for this system have just been completed, and compaction and stretching are partially implemented. New slope-based timing models for the Crystal timing analyzer are now fully implemented and in regular use. In an accuracy test using a dozen critical paths from the RISC II processor and cache chips, it was found that Crystal's estimates were within 5-10% of SPICE's estimates, while being a factor of 10,000 times faster.
NOVA: A new multi-level logic simulator
NASA Technical Reports Server (NTRS)
Miles, L.; Prins, P.; Cameron, K.; Shovic, J.
1990-01-01
A new logic simulator developed at the NASA Space Engineering Research Center for VLSI Design is described. The simulator is multi-level, being able to simulate from the switch level through the functional model level. NOVA is currently in the Beta test phase and has been used to simulate chips designed for the NASA Space Station and the Explorer missions. A new algorithm was devised to simulate bi-directional pass transistors, and a preliminary version of the algorithm is presented. The usage of functional models in NOVA is also described and performance figures are presented.
46 CFR 61.40-3 - Design verification testing.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE ENGINEERING PERIODIC TESTS AND INSPECTIONS Design Verification and Periodic Testing of Vital System Automation § 61.40-3 Design verification testing. (a) Tests must verify that automated vital systems are designed, constructed, and operate in...
Implementation of a VLSI Level Zero Processing system utilizing the functional component approach
NASA Technical Reports Server (NTRS)
Shi, Jianfei; Horner, Ward P.; Grebowsky, Gerald J.; Chesney, James R.
1991-01-01
A high rate Level Zero Processing system is currently being prototyped at NASA/Goddard Space Flight Center (GSFC). Based on state-of-the-art VLSI technology and the functional component approach, the new system promises capabilities of handling multiple Virtual Channels and Applications with a combined data rate of up to 20 Megabits per second (Mbps) at low cost.
A new VLSI architecture for a single-chip-type Reed-Solomon decoder
NASA Technical Reports Server (NTRS)
Hsu, I. S.; Truong, T. K.
1989-01-01
A new very large scale integration (VLSI) architecture for implementing Reed-Solomon (RS) decoders that can correct both errors and erasures is described. This new architecture implements a Reed-Solomon decoder by using replication of a single VLSI chip. It is anticipated that this single chip type RS decoder approach will save substantial development and production costs. It is estimated that reduction in cost by a factor of four is possible with this new architecture. Furthermore, this Reed-Solomon decoder is programmable between 8 bit and 10 bit symbol sizes. Therefore, both an 8 bit Consultative Committee for Space Data Systems (CCSDS) RS decoder and a 10 bit decoder are obtained at the same time, and when concatenated with a (15,1/6) Viterbi decoder, provide an additional 2.1-dB coding gain.
Compilation of Abstracts of Theses Submitted by Candidates for Degrees.
1986-09-30
Musitano, J.R., Fin-line Horn Antennas, LCDR, USNR; Muth, L.R., VLSI Tutorials Through the Video-Computer Courseware Implementation System, LT, USN; ... Engineer Allocation Model, CPT, USA; Kiziltan, M., Cognitive Performance Degradation on Sonar Operator and Torpedo Data..., LTJG, Turkish Navy; ... Electrical and Computer Engineering: VLSI Tutorials Through the Video-Computer Courseware Implementation System, Liesel R. Muth, Lieutenant, United States Navy
VLSI Based Multiprocessor Communications Networks.
1982-09-01
year of the contract. Research plans for year three are also presented. The need for a research effort in the area of VLSI based communication networks... plans for year three of the contract. Section 4 concludes with a summary discussion of the research thus far. A number of appendices follow the main... pin constraints. We plan to investigate some of these issues during the coming year in addition to developing similar models and bandwidth
Dante, V; Del Giudice, P; Mattia, M
2001-01-01
We review a series of implementations of electronic devices aiming at imitating to some extent structure and function of simple neural systems, with particular emphasis on communication issues. We first provide a short overview of general features of such "neuromorphic" devices and the implications of setting up "tests" for them. We then review the developments directly related to our work at the Istituto Superiore di Sanità (ISS): a pilot electronic neural network implementing a simple classifier, autonomously developing internal representations of incoming stimuli; an output network, collecting information from the previous classifier and extracting the relevant part to be forwarded to the observer; an analog, VLSI (very large scale integration) neural chip implementing a recurrent network of spiking neurons and plastic synapses, and the test setup for it; a board designed to interface the standard PCI (peripheral component interconnect) bus of a PC with a special purpose, asynchronous bus for communication among neuromorphic chips; a short and preliminary account of an application-oriented device, taking advantage of the above communication infrastructure.
A VLSI Neural Monitoring System With Ultra-Wideband Telemetry for Awake Behaving Subjects.
Greenwald, E; Mollazadeh, M; Hu, C; Wei Tang; Culurciello, E; Thakor, V
2011-04-01
Long-term monitoring of neuronal activity in awake behaving subjects can provide fundamental information about brain dynamics for neuroscience and neuroengineering applications. Here, we present a miniature, lightweight, and low-power recording system for monitoring neural activity in awake behaving animals. The system integrates two custom designed very-large-scale integrated chips, a neural interface module fabricated in 0.5 μm complementary metal-oxide semiconductor technology and an ultra-wideband transmitter module fabricated in a 0.5 μm silicon-on-sapphire (SOS) technology. The system amplifies, filters, digitizes, and transmits 16 channels of neural data at a rate of 1 Mb/s. The entire system, which includes the VLSI circuits, a digital interface board, a battery, and a custom housing, is small and lightweight (24 g) and, thus, can be chronically mounted on small animals. The system consumes 4.8 mA and records continuously for up to 40 h powered by a 3.7-V, 200-mAh rechargeable lithium-ion battery. Experimental benchtop characterizations as well as in vivo multichannel neural recordings from awake behaving rats are presented here.
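As a quick consistency check on the figures quoted above, the endurance follows directly from the battery capacity and the measured current draw:

$$t \approx \frac{200\ \text{mAh}}{4.8\ \text{mA}} \approx 41.7\ \text{h},$$

which comfortably covers the stated 40 h of continuous recording once ordinary battery derating is allowed for.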
1987-11-01
developed that can be used by circuit engineers to extract the maximum performance from the devices on various board technologies including multilayer ceramic... Design guidelines have been developed that can be used by circuit engineers to extract the maximum performance from the devices on various board... Attenuation and Dispersion Effects; Skin Effect
Trusted Fabrication through 3D Integration
2017-03-01
contiguous and thus identifiable. The concept of a "smart partitioner" is introduced for a second experiment. Keywords: Trusted Fab; VLSI; 3DIC... to the fabrication facility. One solution is the split-fab concept, in which the design is split into two separate fabs early in the metal stack, and... possible solution is proposed herein whereby a three-chip stack is formed, two chips built in normal semiconductor fabs and one in an interposer fab. This
Modeling and Simulation of a Signal Processor Implementing the Winograd Fourier Transform.
1985-12-01
advisor, Captain Richard Linderman, for the guidance and timely remotivation needed to ensure successful completion of this research. The members... of the WFT research group, Captains Paul Coutee, Paul Rossbach, and Kent Taylor, provided a much needed source of answers to the many questions I had... research is directed toward analysis of VHDL as a tool useful in VLSI design. This analysis covered learning the language syntax, development of a
A New Interface Specification Methodology and its Application to Transducer Synthesis
1988-05-01
structural, and physical. Within each domain, descriptive methods are distinguished by the level of abstraction they emphasize. The Gajski-Kuhn Y-chart's three axes correspond to three different domains for describing designs: behavioral, structural, and physical. The... [Gajski83] D. Gajski, R. Kuhn, Guest Editors' Introduction: New VLSI Tools, IEEE Computer, Vol. 16, No. 12, December 1983. [Girczyc85] E. Girczyc, R
High performance MPEG-audio decoder IC
NASA Technical Reports Server (NTRS)
Thorn, M.; Benbassat, G.; Cyr, K.; Li, S.; Gill, M.; Kam, D.; Walker, K.; Look, P.; Eldridge, C.; Ng, P.
1993-01-01
The emerging digital audio and video compression technology brings both an opportunity and a new challenge to IC design. The pervasive application of compression technology to consumer electronics will require high volume, low cost IC's and fast time to market for the prototypes and production units. At the same time, the algorithms used in the compression technology result in complex VLSI IC's. The conflicting challenges of algorithm complexity, low cost, and fast time to market have an impact on device architecture and design methodology. The work presented in this paper concerns the design of a dedicated, high precision Moving Picture Experts Group (MPEG) audio decoder.
NASA Technical Reports Server (NTRS)
Smith, Edwyn D.
1991-01-01
Two silicon CMOS application specific integrated circuits (ASICs), a data generation chip and a data checker chip, were designed. The conversion of the data generator circuitry into a pair of CMOS ASIC chips using the 1.2 micron standard cell library is documented. The logic design of the data checker is discussed. The functions of the control circuitry are described. An accurate estimate of timing relationships is essential to make sure that the logic design performs correctly under practical conditions. Timing and delay information are examined.
An evaluation of the directed flow graph methodology
NASA Technical Reports Server (NTRS)
Snyder, W. E.; Rajala, S. A.
1984-01-01
The applicability of the Directed Graph Methodology (DGM) to the design and analysis of special purpose image and signal processing hardware was evaluated. A special purpose image processing system was designed and described using DGM. The design, suitable for very large scale integration (VLSI), implements a region labeling technique. Two computer chips were designed, both using metal-nitride-oxide-silicon (MNOS) technology, as well as a functional system utilizing those chips to perform real time region labeling. The system is described in terms of DGM primitives. As it is currently implemented, DGM is inappropriate for describing synchronous, tightly coupled, special purpose systems. The nature of the DGM formalism lends itself more readily to modeling networks of general purpose processors.
Exact Algorithms for Output Encoding, State Assignment and Four-Level Boolean Minimization
1989-10-01
APPROVED FOR PUBLIC DISTRIBUTION. MASSACHUSETTS INSTITUTE OF TECHNOLOGY VLSI PUBLICATIONS, VLSI Memo No. 89-569, October 1989... minimize large functions exactly within reasonable amounts of CPU time... targeting two-level logic implementations involves finding... O(N!) minimizations... The input encoding problem can be exactly solved using multiple-valued Boolean minimization. We present an exact
Leak detection utilizing analog binaural (VLSI) techniques
NASA Technical Reports Server (NTRS)
Hartley, Frank T. (Inventor)
1995-01-01
A detection method and system utilizing silicon models of the traveling wave structure of the human cochlea to spatially and temporally locate a specific sound source in the presence of high noise pandemonium. The detection system combines two-dimensional stereausis representations, which are output by at least three VLSI binaural hearing chips, to generate a three-dimensional stereausis representation including both binaural and spectral information which is then used to locate the sound source.
Devices and Systems for Nonlinear Optical Information Processing
1988-11-01
in the VLSI literature [7, 8, 9], in which basic physical principles have been invoked both to understand current VLSI performance and to project... for the first time, that in fact accounts for a very wide range of observed but previously unexplained phenomena [Appendix 4; AFOSR Jour. Publ. 7, AFOSR... the variable grating mode liquid crystal device, A. R. Tanguay, Jr. Abstract: The physical principles of operation of the variable grating mode... C. S. Wu
Tunable multi-wavelength fiber lasers based on an Opto-VLSI processor and optical amplifiers.
Xiao, Feng; Alameh, Kamal; Lee, Yong Tak
2009-12-07
A multi-wavelength tunable fiber laser based on the use of an Opto-VLSI processor in conjunction with different optical amplifiers is proposed and experimentally demonstrated. The Opto-VLSI processor can simultaneously select any part of the gain spectrum from each optical amplifier into its associated fiber ring, leading to a multiport tunable fiber laser source. We experimentally demonstrate a 3-port tunable fiber laser source, where each output wavelength of each port can independently be tuned within the C-band with a wavelength step of about 0.05 nm. Experimental results demonstrate a laser linewidth as narrow as 0.05 nm and an optical side-mode-suppression-ratio (SMSR) of about 35 dB. The demonstrated three fiber lasers have excellent stability at room temperature and output power uniformity less than 0.5 dB over the whole C-band.
Learning and optimization with cascaded VLSI neural network building-block chips
NASA Technical Reports Server (NTRS)
Duong, T.; Eberhardt, S. P.; Tran, M.; Daud, T.; Thakoor, A. P.
1992-01-01
To demonstrate the versatility of the building-block approach, two neural network applications were implemented on cascaded analog VLSI chips. Weights were implemented using 7-b multiplying digital-to-analog converter (MDAC) synapse circuits, with 31 x 32 and 32 x 32 synapses per chip. A novel learning algorithm compatible with analog VLSI was applied to the two-input parity problem. The algorithm combines dynamically evolving architecture with limited gradient-descent backpropagation for efficient and versatile supervised learning. To implement the learning algorithm in hardware, synapse circuits were paralleled for additional quantization levels. The hardware-in-the-loop learning system allocated 2-5 hidden neurons for parity problems. Also, a 7 x 7 assignment problem was mapped onto a cascaded 64-neuron fully connected feedback network. In 100 randomly selected problems, the network found optimal or good solutions in most cases, with settling times in the range of 7-100 microseconds.
A VLSI chip set for real time vector quantization of image sequences
NASA Technical Reports Server (NTRS)
Baker, Richard L.
1989-01-01
The architecture and implementation of a VLSI chip set that vector quantizes (VQ) image sequences in real time is described. The chip set forms a programmable Single-Instruction, Multiple-Data (SIMD) machine which can implement various vector quantization encoding structures. Its VQ codebook may contain an unlimited number of code vectors, N, having dimension up to K = 64. Under a weighted least squared error criterion, the engine locates at video rates the best code vector in full-searched or large tree-searched VQ codebooks. The ability to manipulate tree structured codebooks, coupled with parallelism and pipelining, permits searches in as little as O(log N) cycles. A full codebook search results in O(N) performance, compared to O(KN) for a Single-Instruction, Single-Data (SISD) machine. With this VLSI chip set, an entire video coder can be built on a single board that permits real-time experimentation with very large codebooks.
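The complexity contrast in the abstract comes from how the codebook is organized. A minimal sketch follows (hypothetical data layout: internal nodes are (left_centroid, right_centroid, left_subtree, right_subtree) tuples and leaves are code-vector indices):

```python
import numpy as np

def full_search(codebook, v):
    """O(N) full search: index of the code vector nearest to v."""
    return int(np.argmin(((codebook - v) ** 2).sum(axis=1)))

def tree_search(node, v):
    """O(log N) search of a binary tree-structured codebook: at each internal
    node, descend toward the closer of its two test centroids."""
    while isinstance(node, tuple):
        left_c, right_c, left, right = node
        node = left if ((v - left_c) ** 2).sum() <= ((v - right_c) ** 2).sum() else right
    return node   # leaf holds the chosen code-vector index

cb = np.array([[0., 0.], [1., 1.], [0., 1.], [1., 0.]])
tree = (np.array([0.25, 0.25]), np.array([0.75, 0.75]),   # root test centroids
        (np.array([0., 0.]), np.array([0., 1.]), 0, 2),   # left subtree
        (np.array([1., 0.]), np.array([1., 1.]), 3, 1))   # right subtree
v = np.array([0.9, 0.8])
assert full_search(cb, v) == tree_search(tree, v) == 1
```

Tree search may return a slightly worse match than full search on some inputs; it trades a little distortion for the logarithmic search depth the chip set exploits.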
30 CFR 250.913 - When must I resubmit Platform Verification Program plans?
Code of Federal Regulations, 2010 CFR
2010-07-01
... Structures Platform Verification Program § 250.913 When must I resubmit Platform Verification Program plans? (a) You must resubmit any design verification, fabrication verification, or installation verification...
Development of real-time software environments for NASA's modern telemetry systems
NASA Technical Reports Server (NTRS)
Horner, Ward; Sabia, Steve
1989-01-01
An effort has been made to maintain maximum performance and flexibility for NASA-Goddard's VLSI telemetry system elements through the development of two real-time systems: (1) the Base System Environment, which supports generic system integration and furnishes the basic porting of various manufacturers' cards, and (2) the Modular Environment for Data Systems, which supports application-specific developments and furnishes designers with a set of tested generic library functions that can be employed to speed up the development of such application-specific real-time codes. The performance goals and design rationale for these two systems are discussed.
On the impact of communication complexity in the design of parallel numerical algorithms
NASA Technical Reports Server (NTRS)
Gannon, D.; Vanrosendale, J.
1984-01-01
This paper describes two models of the cost of data movement in parallel numerical algorithms. One model is a generalization of an approach due to Hockney, and is suitable for shared memory multiprocessors where each processor has vector capabilities. The other model is applicable to highly parallel nonshared memory MIMD systems. In the second model, algorithm performance is characterized in terms of the communication network design. Techniques used in VLSI complexity theory are also brought in, and algorithm independent upper bounds on system performance are derived for several problems that are important to scientific computation.
The VLSI design of the sub-band filterbank in MP3 decoding
NASA Astrophysics Data System (ADS)
Liu, Jia-Xin; Luo, Li
2018-03-01
The sub-band filterbank is one of the most important modules in MP3 decoding and has the largest amount of calculation. In order to save CPU resources and integrate the sub-band filterbank into an MP3 IP core, the hardware circuit of the sub-band filterbank module is designed in this paper. A fast algorithm suited to hardware implementation is proposed and realized on an FPGA development board. The results show that the sub-band filterbank functions correctly while using very few registers, and that the amount of calculation and the ROM resources are greatly reduced.
1990-09-01
logic for which it was designed. Finally, the circuit should retain correct functionality over time by having stable operating characteristics. If
Modular Matrix Multiplication on a Linear Array.
1983-11-01
is Ω(n²). Case |r| ... (see Figure 5.2) ... At t = x_i, either all G_k such that I_k ∈ A have n... and Image Processing, IEEE Transactions on Computers, Vol. C-31, No. 10 (October, 1982). [4] H.T. Kung, Let's Design Algorithms for VLSI Systems, Proc. Caltech Conf. on Very Large Scale Integration: Architecture, Design, Fabrication (January, 1979), pp. 65-90. [5] H.T. Kung, and
Towards the formal verification of the requirements and design of a processor interface unit
NASA Technical Reports Server (NTRS)
Fura, David A.; Windley, Phillip J.; Cohen, Gerald C.
1993-01-01
The formal verification of the design and partial requirements for a Processor Interface Unit (PIU) using the Higher Order Logic (HOL) theorem-proving system is described. The processor interface unit is a single-chip subsystem within a fault-tolerant embedded system under development within the Boeing Defense and Space Group. It provides the opportunity to investigate the specification and verification of a real-world subsystem within a commercially-developed fault-tolerant computer. An overview of the PIU verification effort is given. The actual HOL listings from the verification effort are documented in a companion NASA contractor report entitled 'Towards the Formal Verification of the Requirements and Design of a Processor Interface Unit - HOL Listings', including the general-purpose HOL theories and definitions that support the PIU verification as well as tactics used in the proofs.
Block QCA Fault-Tolerant Logic Gates
NASA Technical Reports Server (NTRS)
Firjany, Amir; Toomarian, Nikzad; Modarres, Katayoon
2003-01-01
Suitably patterned arrays (blocks) of quantum-dot cellular automata (QCA) have been proposed as fault-tolerant universal logic gates. These block QCA gates could be used to realize the potential of QCA for further miniaturization, reduction of power consumption, increase in switching speed, and increased degree of integration of very-large-scale integrated (VLSI) electronic circuits. The limitations of conventional VLSI circuitry, the basic principle of operation of QCA, and the potential advantages of QCA-based VLSI circuitry were described in several NASA Tech Briefs articles, namely Implementing Permutation Matrices by Use of Quantum Dots (NPO-20801), Vol. 25, No. 10 (October 2001), page 42; Compact Interconnection Networks Based on Quantum Dots (NPO-20855), Vol. 27, No. 1 (January 2003), page 32; Bit-Serial Adder Based on Quantum Dots (NPO-20869), Vol. 27, No. 1 (January 2003), page 35; and Hybrid VLSI/QCA Architecture for Computing FFTs (NPO-20923), which follows this article. To recapitulate the principle of operation (greatly oversimplified because of the limitation on space available for this article): A quantum-dot cellular automaton contains four quantum dots positioned at or between the corners of a square cell. The cell contains two extra mobile electrons that can tunnel (in the quantum-mechanical sense) between neighboring dots within the cell. The Coulomb repulsion between the two electrons tends to make them occupy antipodal dots in the cell. For an isolated cell, there are two energetically equivalent arrangements (denoted polarization states) of the extra electrons. The cell polarization is used to encode binary information. Because the polarization of a nonisolated cell depends on Coulomb-repulsion interactions with neighboring cells, universal logic gates and binary wires could be constructed, in principle, by arraying QCA of suitable design in suitable patterns. Heretofore, researchers have recognized two major obstacles to realization of QCA-based logic gates: One is the need for (and the difficulty of attaining) operation of QCA circuitry at room temperature or, for that matter, at any temperature above a few kelvins. It has been theorized that room-temperature operation could be made possible by constructing QCA as molecular-scale devices. However, in approaching the lower limit of miniaturization at the molecular level, it becomes increasingly imperative to overcome the second major obstacle, which is the need for (and the difficulty of attaining) high precision in the alignments of adjacent QCA in order to ensure the correct interactions among the quantum dots.
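At the logic level, the canonical gate built from such interacting cells is the three-input majority gate, and fixing one input specializes it to AND or OR, which is why it serves as a universal building block. A truth-level sketch of that behavior (the logical abstraction only, not the fault-tolerant block layouts proposed here):

```python
def majority(a: int, b: int, c: int) -> int:
    """Three-input QCA majority gate at the logic level."""
    return (a & b) | (b & c) | (a & c)

AND = lambda a, b: majority(a, b, 0)   # one input pinned to 0 -> AND
OR  = lambda a, b: majority(a, b, 1)   # one input pinned to 1 -> OR

assert AND(1, 1) == 1 and AND(1, 0) == 0
assert OR(0, 0) == 0 and OR(1, 0) == 1
```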
Product assurance technology for custom LSI/VLSI electronics
NASA Technical Reports Server (NTRS)
Buehler, M. G.; Blaes, B. R.; Jennings, G. A.; Moore, B. T.; Nixon, R. H.; Pina, C. A.; Sayah, H. R.; Sievers, M. W.; Stahlberg, N. F.
1985-01-01
The technology for obtaining custom integrated circuits from CMOS-bulk silicon foundries using a universal set of layout rules is presented. The technical efforts were guided by the requirement to develop a 3 micron CMOS test chip for the Combined Release and Radiation Effects Satellite (CRRES). This chip contains both analog and digital circuits. The development employed all the elements required to obtain custom circuits from silicon foundries, including circuit design, foundry interfacing, circuit test, and circuit qualification.
Analog hardware implementation of neocognitron networks
NASA Astrophysics Data System (ADS)
Inigo, Rafael M.; Bonde, Allen, Jr.; Holcombe, Bradford
1990-08-01
This paper deals with the analog implementation of neocognitron based neural networks. All of Fukushima's and related work on the neocognitron is based on digital computer simulations. To fully take advantage of the power of this network paradigm, an analog electronic approach is proposed. We first implemented a 6-by-6 sensor network with discrete analog components and fixed weights. The network was given weight values to recognize the characters U, L and F. These characters are recognized regardless of their location on the sensor and with various levels of distortion and noise. The network performance has also shown an excellent correlation with software simulation results. Next we implemented a variable weight network which can be trained to recognize simple patterns by means of self-organization. The adaptable weights were implemented with FETs configured as voltage-controlled resistors. To implement a variable weight there must be some type of "memory" to store the weight value and hold it while the value is reinforced or incremented. Two methods were evaluated: an analog sample-hold circuit and a digital storage scheme using binary counters. The latter is preferable for VLSI implementation because it uses standard components and does not require the use of capacitors. The analog design and implementation of these small-scale networks demonstrates the feasibility of implementing more complicated ANNs in electronic hardware. The circuits developed can also be designed for VLSI implementation.
A VLSI decomposition of the deBruijn graph
NASA Technical Reports Server (NTRS)
Collins, O.; Dolinar, S.; Mceliece, R.; Pollara, F.
1990-01-01
A new Viterbi decoder for convolutional codes with constraint lengths up to 15, called the Big Viterbi Decoder, is under development for the Deep Space Network. It will be demonstrated by decoding data from the Galileo spacecraft, which has a rate 1/4, constraint-length 15 convolutional encoder on board. Here, the mathematical theory underlying the design of the very-large-scale-integrated (VLSI) chips that are being used to build this decoder is explained. The deBruijn graph B sub n describes the topology of a fully parallel, rate 1/v, constraint length n+2 Viterbi decoder, and it is shown that B sub n can be built by appropriately wiring together (i.e., connecting together with extra edges) many isomorphic copies of a fixed graph called a B sub n building block. The efficiency of such a building block is defined as the fraction of the edges in B sub n that are present in the copies of the building block. It is shown, among other things, that for any alpha less than 1, there exists a graph G which is a B sub n building block of efficiency greater than alpha for all sufficiently large n. These results are illustrated by describing a special hierarchical family of deBruijn building blocks, which has led to the design of the gate-array chips being used in the Big Viterbi Decoder.
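The graph itself is easy to state: the vertices of B_n are the 2^n binary strings of length n, and each vertex has an edge to the two strings obtained by shifting in a 0 or a 1, which is exactly the state-transition structure of a shift-register (convolutional) encoder. A short sketch of the adjacency (illustrative; the building blocks described here partition this graph in silicon):

```python
def debruijn(n: int):
    """Adjacency of the binary deBruijn graph B_n on 2**n vertices:
    v -> (2v mod 2**n) and (2v + 1 mod 2**n), i.e. shift in a 0 or a 1."""
    size = 1 << n
    return {v: ((2 * v) % size, (2 * v + 1) % size) for v in range(size)}

print(debruijn(2))   # {0: (0, 1), 1: (2, 3), 2: (0, 1), 3: (2, 3)}
```

A building block's efficiency, in the paper's sense, is the fraction of these edges realized inside the copies of the block rather than by the extra wiring between them.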
A study of applications scribe frame data verifications using design rule check
NASA Astrophysics Data System (ADS)
Saito, Shoko; Miyazaki, Masaru; Sakurai, Mitsuo; Itoh, Takahisa; Doi, Kazumasa; Sakurai, Norioko; Okada, Tomoyuki
2013-06-01
In semiconductor manufacturing, scribe frame data is generally generated for each LSI product according to its specific process design. Scribe frame data is designed based on definition tables for scanner alignment, wafer inspection and customer-specified marks. At the end, we check that the scribe frame design conforms to the specifications of the alignment and inspection marks. Recently, in the COT (customer owned tooling) business and in new technology development, there has been no effective verification method for scribe frame data, and verification takes a lot of time. Therefore, we tried to establish a new verification method for scribe frame data by applying pattern matching and DRC (Design Rule Check), which is used in device verification. We show the scheme of scribe frame data verification using DRC. First, verification rules are created based on the specifications of the scanner, inspection and others, and a mark library is also created for pattern matching. Next, DRC verification, which includes pattern matching against the mark library, is performed on the scribe frame data. As a result, our experiments demonstrated that by use of pattern matching and DRC verification our new method can yield speed improvements of more than 12 percent compared to conventional mark checks by visual inspection, and the inspection time can be reduced to less than 5 percent if multi-CPU processing is used. Our method delivers both short processing time and excellent accuracy when checking many marks. It is easy to maintain and provides an easy way for COT customers to use original marks. We believe that our new DRC verification method for scribe frame data is indispensable and mutually beneficial.
Application of a VLSI vector quantization processor to real-time speech coding
NASA Technical Reports Server (NTRS)
Davidson, G.; Gersho, A.
1986-01-01
Attention is given to a working vector quantization processor for speech coding that is based on a first-generation VLSI chip which efficiently performs the pattern-matching operation needed for the codebook search process (CPS). Using this chip, the CPS architecture has been successfully incorporated into a compact, single-board Vector PCM implementation operating at 7-18 kbits/sec. A real time Adaptive Vector Predictive Coder system using the CPS has also been implemented.
Critical Problems in Very Large Scale Computer Systems
1989-03-31
VLSI Memo No. 88-477, October 1988. S. Devadas, "General Decomposition of Sequential Machines: Relationships to State Assignment," to appear in... Perspective, C. Hewitt and G. Agha, editors, MIT Press, 1989; also MIT VLSI Memo No. 88-491, December 1988. T. Leighton, B. Maggs, and S. Rao, "Universal
New dynamic FET logic and serial memory circuits for VLSI GaAs technology
NASA Technical Reports Server (NTRS)
Eldin, A. G.
1991-01-01
The complexity of GaAs field effect transistor (FET) very large scale integration (VLSI) circuits is limited by the maximum power dissipation, while the uniformity of the device parameters determines the functional yield. In this work, digital GaAs FET circuits are presented that eliminate the DC power dissipation and reduce the area to 50% of that of the conventional static circuits. Their larger tolerance to device parameter variations results in higher functional yield.
CD volume design and verification
NASA Technical Reports Server (NTRS)
Li, Y. P.; Hughes, J. S.
1993-01-01
In this paper, we describe a prototype for CD-ROM volume design and verification. This prototype allows users to create their own model of CD volumes by modifying a prototypical model. Rule-based verification of the test volumes can then be performed later against the volume definition. This working prototype has proven the concept of model-driven, rule-based design and verification for large quantities of data. The model defined for the CD-ROM volumes becomes a data model as well as an executable specification.
Dynamic testing for shuttle design verification
NASA Technical Reports Server (NTRS)
Green, C. E.; Leadbetter, S. A.; Rheinfurth, M. H.
1972-01-01
Space shuttle design verification requires dynamic data from full scale structural component and assembly tests. Wind tunnel and other scaled model tests are also required early in the development program to support the analytical models used in design verification. Presented is a design philosophy based on mathematical modeling of the structural system strongly supported by a comprehensive test program; some of the types of required tests are outlined.
Design and Realization of Controllable Ultrasonic Fault Detector Automatic Verification System
NASA Astrophysics Data System (ADS)
Sun, Jing-Feng; Liu, Hui-Ying; Guo, Hui-Juan; Shu, Rong; Wei, Kai-Li
An ultrasonic flaw detector with a remote-control interface is studied, and an automatic verification system for it is developed. By using extensible markup language to build the protocol instruction set and the data-analysis method database in the system software, the design is made configurable and the diversity of undisclosed device interfaces and protocols is accommodated. By cascading a signal generator with a fixed attenuator, a dynamic error compensation method is proposed that performs the role the fixed attenuator plays in traditional verification and improves the accuracy of the verification results. Operating results from the automatic verification system confirm the feasibility of the hardware and software architecture design and the correctness of the analysis method, while eliminating the cumbersome operations of the traditional verification process and reducing the labor required of test personnel.
Dynamically-allocated multi-queue buffers for VLSI communication switches
NASA Technical Reports Server (NTRS)
Tamir, Yuval; Frazier, Gregory L.
1992-01-01
Several buffer structures are discussed and compared in terms of implementation complexity, interswitch handshaking requirements, and their ability to deal with variations in traffic patterns and message lengths. A new buffer design is presented that provides non-FIFO message handling and efficient storage allocation for variable-size packets, using linked lists managed by a simple on-chip controller. The new buffer design is evaluated by comparing it to several alternative designs in the context of a multistage interconnection network. The present modeling and simulations show that the new buffer outperforms the alternatives and can thus be used to improve the performance of a wide variety of systems currently using less efficient buffers.
Distributed genetic algorithms for the floorplan design problem
NASA Technical Reports Server (NTRS)
Cohoon, James P.; Hegde, Shailesh U.; Martin, Worthy N.; Richards, Dana S.
1991-01-01
Designing a VLSI floorplan calls for arranging a given set of modules in the plane to minimize the weighted sum of area and wire-length measures. A method of solving the floorplan design problem using distributed genetic algorithms is presented. Distributed genetic algorithms, based on the paleontological theory of punctuated equilibria, offer a conceptual modification to traditional genetic algorithms. Experimental results on several problem instances demonstrate the efficacy of this method and indicate its advantages over other methods such as simulated annealing: it performed better than the simulated annealing approach, in terms of both the average cost of the solutions found and the best solution found, in almost all the problem instances tried.
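A minimal island-model sketch of the distributed approach, under toy assumptions: each island evolves a permutation of module indices (a stand-in for a real floorplan encoding), and the cost function below is a placeholder for the weighted area-plus-wirelength measure. The periodic migration step plays the role of the punctuation events that re-inject diversity between equilibria:

```python
import random

def cost(perm):
    """Placeholder for the weighted area + wire-length measure:
    a toy objective over a module ordering (lower is better)."""
    return sum(abs(perm[i] - i) for i in range(len(perm)))

def mutate(perm):
    """Pairwise-swap mutation on a module permutation."""
    a, b = random.sample(range(len(perm)), 2)
    child = list(perm)
    child[a], child[b] = child[b], child[a]
    return child

def evolve(pop, generations):
    """One island: truncation selection plus mutation."""
    for _ in range(generations):
        pop.sort(key=cost)
        keep = pop[:len(pop) // 2]
        pop = keep + [mutate(random.choice(keep)) for _ in keep]
    return pop

def distributed_ga(n_islands=4, pop_size=20, n_modules=12,
                   epochs=10, gens_per_epoch=25):
    islands = [[random.sample(range(n_modules), n_modules)
                for _ in range(pop_size)] for _ in range(n_islands)]
    for _ in range(epochs):
        islands = [evolve(pop, gens_per_epoch) for pop in islands]
        # Migration: each island's best displaces a random member of the
        # next island, the "punctuation" that re-injects diversity.
        for i, pop in enumerate(islands):
            best = min(pop, key=cost)
            nxt = islands[(i + 1) % n_islands]
            nxt[random.randrange(len(nxt))] = list(best)
    return min((min(pop, key=cost) for pop in islands), key=cost)

print(cost(distributed_ga()))    # approaches 0, the identity ordering
```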
Cluster man/system design requirements and verification. [for Skylab program
NASA Technical Reports Server (NTRS)
Watters, H. H.
1974-01-01
Discussion of the procedures employed for determining the man/system requirements that guided Skylab design, and review of the techniques used for implementing man/system design verification. The foremost lesson learned from this design-need anticipation and design verification experience is the necessity of allowing for human capabilities of in-flight maintenance and repair. It is now known that the entire program was salvaged by a series of unplanned maintenance and repair events which were carried out in spite of poor design provisions for maintenance.
Image and Video Compression with VLSI Neural Networks
NASA Technical Reports Server (NTRS)
Fang, W.; Sheu, B.
1993-01-01
An advanced motion-compensated predictive video compression system based on artificial neural networks has been developed to effectively eliminate the temporal and spatial redundancy of video image sequences and thus reduce the bandwidth and storage required for the transmission and recording of the video signal. The VLSI neuroprocessor for high-speed high-ratio image compression based upon a self-organization network and the conventional algorithm for vector quantization are compared. The proposed method is quite efficient and can achieve near-optimal results.
High density circuit technology, part 3
NASA Technical Reports Server (NTRS)
Wade, T. E.
1982-01-01
Dry processing - both etching and deposition - and present/future trends in semiconductor technology are discussed. In addition to a description of the basic apparatus, terminology, advantages, glow discharge phenomena, gas-surface chemistries, and key operational parameters for both dry etching and plasma deposition processes, a comprehensive survey of dry processing equipment (via vendor listing) is also included. The following topics are also discussed: fine-line photolithography, low-temperature processing, packaging for dense VLSI die, the role of integrated optics, and VLSI and technology innovations.
NASA Astrophysics Data System (ADS)
Kim, Cheol-kyun; Kim, Jungchan; Choi, Jaeseung; Yang, Hyunjo; Yim, Donggyu; Kim, Jinwoong
2007-03-01
As the minimum transistor length gets smaller, the variation and uniformity of transistor length seriously affect device performance, so the importance of optical proximity correction (OPC) and resolution enhancement technology (RET) cannot be overemphasized. The OPC process, however, is regarded by some as a necessary evil for device performance. In fact, every group involved, including process and design, is interested in the whole-chip CD variation trend and CD uniformity, which represent the real wafer. Recently, design-based metrology systems have become capable of detecting differences between the design database and wafer SEM images, and can extract whole-chip CD variation information. Based on these results, OPC abnormalities were identified and design feedback items were also disclosed. Other approaches have been developed by EDA companies, such as model-based OPC verification, in which a well-calibrated model is used to verify the full chip area. The object of model-based verification is to predict potential weak points on the wafer and to feed them back quickly to OPC and design before reticle fabrication. In order to achieve a robust design and sufficient device margin, an appropriate combination of a design-based metrology system and model-based verification tools is very important. We therefore evaluated a design-based metrology system and matched it with a model-based verification system to find the optimum combination of the two. In our study, a large amount of wafer-result data was classified and analyzed by statistical methods into OPC feedback and design feedback items. Additionally, a novel DFM flow is proposed that combines design-based metrology and model-based verification tools.
1984-06-01
[Extraction residue: fragments describing a Franz Lisp programming environment that was loaded and then dumped as described in the Franz Lisp manual (Ref. 13), a synopsis of the functional elements making up the LISP system, the reclamation of previously used storage locations (garbage collection) and the average system usage rate, and library files containing bonding pad layouts in CIF.]
VLSI (Very Large Scale Integrated Circuits) Design with the MacPitts Silicon Compiler.
1985-09-01
[Extraction residue: fragments of MacPitts usage notes, including invoking "macpitts basename herald" so that MacPitts diagnostics and Liszt diagnostics both appear, a sample Liszt (Franz LISP compiler) diagnostic "Error: Non-number to minus nil," and the suggestion that the more instructive solution is to write Franz LISP code to decide whether a jumper wire is needed.]
VLSI Design Tools, Reference Manual, Release 2.0.
1984-08-01
[Extraction residue: garbled table-of-contents and tool-description fragments from the manual, including sections on a library of C routines for assembling a layout by tiling, the QUILT and TIL utilities, a "patterns" package added so that complex and repetitive digital waveforms could be generated far more easily, the recently written program MTP (Multiple...), and a circuit model that estimates timing delays through digital circuits and can also be used as a switch (gate) level simulator.]
On the Design of VLSI Circuits for the Winograd Fourier Transform Algorithm
1991-12-01
[Extraction residue: table-of-contents fragments listing Tables 8-11 (twiddle factors in TF1, real and imaginary sides, and in TF2, real side), and a fragment noting that each butterfly reads its twiddle factor from memory, with the possibilities TF0, TF1, TF2, and TF3 corresponding to the fixed values necessary for the computation.]
1989-01-20
[Extraction residue: fragments on commercially available DSP chips, noting that addressable memory can be loaded or off-loaded as the number crunching continues, that modern VLSI processors can often process data faster than today's memories can feed them, and that Texas Instruments was one of the first serious manufacturers of DSP chips; the TMS32010, TI's first-generation fixed-point DSP, found its way into modem and voice applications and can handle double-precision data types.]
Fault Tolerant VLSI Design Assessments for Advanced Avionics Department
1982-02-06
[Extraction residue: fragments of a literature review intended to acquaint the researchers with the immense literature base for applicable electronic technology, with citation fragments including "Special Report: Semiconductor Memories are Tested Over Data-Storage Application," Electronics, vol. 46; G. Luecke, J. P. Mize and W. N. Carr, Semiconductor Memories: Design and Application, New York, McGraw-Hill, 1973; and P. A. Lee, N. Ghani and K. Heron, "A Recovery Cache for the PDP-11," Digest...]
VLSI Design, Parallel Computation and Distributed Computing
1991-09-30
[Extraction residue: a distribution list naming Daniel Kleitman, Tom Leighton, David Shmoys, Michael Sipser, and Eva Tardos, and citation fragments including a Leighton-Plaxton construction of a simple c log n-depth circuit (with c < 7.5) that sorts a random permutation with very high probability, and a paper by Gopal, Kaplan, Kutten, and coauthors, "Distributed Control for...," in a 1992 proceedings in Vancouver, British Columbia (to appear).]
Using Software Technology to Specify Abstract Interfaces in VLSI Design.
1985-01-01
[Extraction residue: fragments arguing that a specification should provide the information required "and nothing more," that a chief source of specification failure is the mixing of two potentially separable kinds of information (functional/electrical and scheduling semantics), followed by a garbled pin-description table listing names such as ScanIn, ScanOut, DataOut, Shift, and Hold.]
37 CFR 262.7 - Verification of royalty payments.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Designated Agent have agreed as to proper verification methods. (b) Frequency of verification. A Copyright Owner or a Performer may conduct a single audit of the Designated Agent upon reasonable notice and... COPYRIGHT ARBITRATION ROYALTY PANEL RULES AND PROCEDURES RATES AND TERMS FOR CERTAIN ELIGIBLE...
Recent patents on Cu/low-k dielectrics interconnects in integrated circuits.
Jiang, Qing; Zhu, Yong F; Zhao, Ming
2007-01-01
In past decades, the development of microelectronics has advanced through constant scaling to maximize transistor density, driven by the need for electrical and functional performance. For further development, the propagation velocity of electromagnetic waves becomes increasingly important because of the unyielding constraint it places on interconnect delay. To minimize this delay, the industry was forced to introduce Cu/low-k dielectric interconnects into very large scale integrated circuits (VLSI), where k denotes the dielectric constant. In addition, reliable barrier structures, which are the thinnest parts of the device so as to maximize the space available for the actual Cu interconnect wires, are required to prevent penetration of the different materials into one another. In light of the above, this review focuses on recent patents and studies on Cu interconnects, including Cu interconnect wires, low-k dielectrics, and related barrier materials, as well as manufacturing techniques in VLSI; these are among the most essential concerns of the microelectronics industry and will decide the further development of VLSI. Possible future developments in this field are also considered.
NASA Technical Reports Server (NTRS)
Srivas, Mandayam; Bickford, Mark
1991-01-01
The design and formal verification of a hardware system for a task that is an important component of a fault tolerant computer architecture for flight control systems is presented. The hardware system implements an algorithm for obtaining interactive consistency (Byzantine agreement) among four microprocessors as a special instruction on the processors. The property verified ensures that an execution of the special instruction by the processors correctly accomplishes interactive consistency, provided certain preconditions hold. An assumption is made that the processors execute synchronously. For verification, the authors used a computer-aided hardware design verification tool, Spectool, and the theorem prover Clio. A major contribution of the work is the demonstration of a significant fault tolerant hardware design that is mechanically verified by a theorem prover.
The scientific data acquisition system of the GAMMA-400 space project
NASA Astrophysics Data System (ADS)
Bobkov, S. G.; Serdin, O. V.; Gorbunov, M. S.; Arkhangelskiy, A. I.; Topchiev, N. P.
2016-02-01
The scientific data acquisition system (SDAS) designed by SRISA for the GAMMA-400 space project is described. We consider the problem of unifying electronics at different levels: a set of reliable, fault-tolerant integrated circuits fabricated in a 0.25 μm Silicon-on-Insulator CMOS technology, and the high-speed interfaces and reliable modules used in the space instruments. The characteristics of the reliable, fault-tolerant very large scale integration (VLSI) technology designed by SRISA for developing computation systems for space applications are considered. The scalable network structure of the SDAS, based on the Serial RapidIO interface and including the real-time operating system BAGET, is also described.
Fault-tolerant computer study. [logic designs for building block circuits
NASA Technical Reports Server (NTRS)
Rennels, D. A.; Avizienis, A. A.; Ercegovac, M. D.
1981-01-01
A set of building block circuits is described which can be used with commercially available microprocessors and memories to implement fault tolerant distributed computer systems. Each building block circuit is intended for VLSI implementation as a single chip. Several building blocks and associated processor and memory chips form a self-checking computer module with self-contained input/output and interfaces to redundant communications buses. Fault tolerance is achieved by connecting self-checking computer modules into a redundant network in which backup buses and computer modules are provided to circumvent failures. The requirements and design methodology which led to the definition of the building block circuits are discussed.
Robust Bioinformatics Recognition with VLSI Biochip Microsystem
NASA Technical Reports Server (NTRS)
Lue, Jaw-Chyng L.; Fang, Wai-Chi
2006-01-01
A microsystem architecture for real-time, on-site, robust bioinformatic pattern recognition and analysis has been proposed. This system is compatible with on-chip DNA analysis means such as polymerase chain reaction (PCR) amplification. A corresponding novel artificial neural network (ANN) learning algorithm, using a new sigmoid-logarithmic transfer function based on the error backpropagation (EBP) algorithm, is invented. Our results show that the trained new ANN can recognize low-fluorescence patterns better than the conventional sigmoidal ANN does. A differential logarithmic imaging chip is designed for calculating the logarithm of the relative intensities of fluorescence signals. The single-rail logarithmic circuit and a prototype ANN chip have been designed, fabricated, and characterized.
Simulation environment based on the Universal Verification Methodology
NASA Astrophysics Data System (ADS)
Fiergolski, A.
2017-01-01
Universal Verification Methodology (UVM) is a standardized approach to verifying integrated circuit designs, targeting Coverage-Driven Verification (CDV). It combines automatic test generation, self-checking testbenches, and coverage metrics to indicate progress in the design verification. The CDV flow differs from the traditional directed-testing approach. With CDV, a testbench developer starts with a structured plan that sets the verification goals. Those goals are then targeted by the developed testbench, which generates legal stimuli and sends them to a device under test (DUT). Progress is measured by coverage monitors added to the simulation environment, so that non-exercised functionality can be identified; additional scoreboards indicate undesired DUT behaviour. Such verification environments were developed for three recent ASIC and FPGA projects which have successfully implemented the new work-flow: (1) the CLICpix2 65 nm CMOS hybrid pixel readout ASIC design; (2) the C3PD 180 nm HV-CMOS active sensor ASIC design; (3) the FPGA-based DAQ system of the CLICpix chip. This paper, based on the experience from the above projects, introduces UVM briefly and presents a set of tips and advice applicable at different stages of the verification cycle.
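UVM itself is a SystemVerilog methodology, but the coverage-driven pattern it standardizes (constrained-random stimulus, a self-checking scoreboard against a reference model, and coverage bins that measure progress) can be sketched in a few lines of Python. Everything below, including the buggy 4-bit adder standing in for the DUT, is a hypothetical illustration, not a UVM testbench:

```python
import random

def dut_adder(a, b):
    """Stand-in DUT: a 4-bit adder with an injected corner-case bug."""
    return 0 if (a, b) == (15, 15) else (a + b) & 0x1F

def reference(a, b):
    """Golden reference model used by the scoreboard."""
    return (a + b) & 0x1F

coverage = set()      # functional coverage bins hit so far
failures = []         # scoreboard mismatches

for _ in range(2000):                              # constrained-random stimulus
    a, b = random.randrange(16), random.randrange(16)
    coverage.add(("carry" if a + b > 15 else "no_carry",
                  "corner" if 15 in (a, b) else "typical"))
    if dut_adder(a, b) != reference(a, b):         # self-checking scoreboard
        failures.append((a, b))

print(f"coverage bins hit: {len(coverage)}/4, failing stimuli: {set(failures)}")
```

Random stimulus eventually hits the (15, 15) corner case that a short directed test might miss, and the coverage set records which scenario bins have actually been exercised, which is the essence of the CDV flow.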
On verifying a high-level design. [cost and error analysis
NASA Technical Reports Server (NTRS)
Mathew, Ben; Wehbeh, Jalal A.; Saab, Daniel G.
1993-01-01
An overview of design verification techniques is presented, and some of the current research in high-level design verification is described. Formal hardware description languages that are capable of adequately expressing the design specifications have been developed, but some time will be required before they can have the expressive power needed to be used in real applications. Simulation-based approaches are more useful in finding errors in designs than they are in proving the correctness of a certain design. Hybrid approaches that combine simulation with other formal design verification techniques are argued to be the most promising over the short term.
30 CFR 250.913 - When must I resubmit Platform Verification Program plans?
Code of Federal Regulations, 2011 CFR
2011-07-01
... CONTINENTAL SHELF Platforms and Structures Platform Verification Program § 250.913 When must I resubmit Platform Verification Program plans? (a) You must resubmit any design verification, fabrication... 30 Mineral Resources 2 2011-07-01 2011-07-01 false When must I resubmit Platform Verification...
Imbalance aware lithography hotspot detection: a deep learning approach
NASA Astrophysics Data System (ADS)
Yang, Haoyu; Luo, Luyang; Su, Jing; Lin, Chenxi; Yu, Bei
2017-03-01
With the advancement of VLSI technology nodes, lithographic hotspots caused by light diffraction have become a serious problem affecting manufacturing yield. Lithography hotspot detection at the post-OPC stage is imperative to check potential circuit failures when transferring designed patterns onto silicon wafers. Although conventional lithography hotspot detection methods, such as machine learning, have achieved satisfactory performance, with the extreme scaling of transistor feature size and increasingly complicated layout patterns, conventional methodologies may suffer from performance degradation. For example, manual or ad hoc feature extraction in a machine learning framework may lose important information when predicting potential errors in ultra-large-scale integrated circuit masks. In this paper, we present a deep convolutional neural network (CNN) targeting representative feature learning in lithography hotspot detection. We carefully analyze the impact and effectiveness of different CNN hyper-parameters, through which a hotspot-detection-oriented neural network model is established. Because hotspot patterns are always minorities in VLSI mask design, the training data set is highly imbalanced. In this situation, a neural network is no longer reliable, because a trained model with high classification accuracy may still suffer from high false negative results (missing hotspots), which is fatal in hotspot detection problems. To address the imbalance problem, we further apply minority upsampling and random-mirror flipping before training the network. Experimental results show that our proposed neural network model achieves highly comparable or better performance on the ICCAD 2012 contest benchmark compared to state-of-the-art hotspot detectors based on deep or representative machine learning.
Content-addressable read/write memories for image analysis
NASA Technical Reports Server (NTRS)
Snyder, W. E.; Savage, C. D.
1982-01-01
The commonly encountered image analysis problems of region labeling and clustering are found to be cases of the search-and-rename problem, which can be solved in parallel by a system architecture that is inherently suitable for VLSI implementation. This architecture is a novel form of content-addressable memory (CAM) which provides parallel search and update functions, reducing processing time to constant time per operation. It has been proposed in related investigations by Hall (1981) that, with VLSI, CAM-based structures with enhanced instruction sets for general purpose processing will be feasible.
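The search-and-rename primitive is easy to model in software: a CAM compares the key against every word at once, so a rename touches all matching words in one operation. A vectorized sketch (the CAMSim class and the region-labeling example are illustrative, not the paper's design):

```python
import numpy as np

class CAMSim:
    """Software model of a content-addressable memory with parallel
    search and update; each hardware operation touches every word
    simultaneously, which the vectorized comparisons imitate."""
    def __init__(self, words):
        self.words = np.asarray(words)

    def search(self, key):
        """One CAM cycle: indices of all words equal to key."""
        return np.flatnonzero(self.words == key)

    def rename(self, old, new):
        """One CAM cycle: rewrite every word matching old."""
        self.words[self.words == old] = new

# Region labeling: merging two touching regions is a single
# search-and-rename, independent of how many pixels carry the label.
labels = CAMSim([1, 1, 2, 2, 3, 2, 1])
labels.rename(2, 1)       # merge region 2 into region 1
print(labels.words)       # [1 1 1 1 3 1 1]
```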
Debris control design achievements of the booster separation motors
NASA Technical Reports Server (NTRS)
Smith, G. W.; Chase, C. A.
1985-01-01
The stringent debris control requirements imposed on the design of the Space Shuttle booster separation motor are described, along with the verification program implemented to ensure compliance with debris control objectives. The principal areas emphasized in the design and development of the Booster Separation Motor (BSM) relative to debris control were the propellant formulation and the nozzle closures which protect the motors from aerodynamic heating and moisture. A description of the motor design requirements, the propellant formulation and verification program, and the nozzle closure design and verification is presented.
NASA Astrophysics Data System (ADS)
Papers are presented on local area networks; formal methods for communication protocols; computer simulation of communication systems; spread spectrum and coded communications; tropical radio propagation; VLSI for communications; strategies for increasing software productivity; multiple access communications; advanced communication satellite technologies; and spread spectrum systems. Topics discussed include Space Station communication and tracking development and design; transmission networks; modulation; data communications; computer network protocols and performance; and coding and synchronization. Consideration is given to free space optical communications systems; VSAT communication networks; network topology design; advances in adaptive filtering echo cancellation and adaptive equalization; advanced signal processing for satellite communications; the elements, design, and analysis of fiber-optic networks; and advances in digital microwave systems.
30 CFR 285.705 - When must I use a Certified Verification Agent (CVA)?
Code of Federal Regulations, 2010 CFR
2010-07-01
... INTERIOR OFFSHORE RENEWABLE ENERGY ALTERNATE USES OF EXISTING FACILITIES ON THE OUTER CONTINENTAL SHELF Facility Design, Fabrication, and Installation Certified Verification Agent § 285.705 When must I use a Certified Verification Agent (CVA)? You must use a CVA to review and certify the Facility Design Report, the...
30 CFR 250.909 - What is the Platform Verification Program?
Code of Federal Regulations, 2010 CFR
2010-07-01
... 30 Mineral Resources 2 2010-07-01 2010-07-01 false What is the Platform Verification Program? 250... Verification Program § 250.909 What is the Platform Verification Program? The Platform Verification Program is the MMS approval process for ensuring that floating platforms; platforms of a new or unique design...
NASA Technical Reports Server (NTRS)
1986-01-01
Activities that will be conducted in support of the development and verification of the Block 2 Solid Rocket Motor (SRM) are described. Development includes design, fabrication, processing, and testing activities in which the results are fed back into the project. Verification includes analytical and test activities which demonstrate SRM component/subassembly/assembly capability to perform its intended function. The management organization responsible for formulating and implementing the verification program is introduced. It also identifies the controls which will monitor and track the verification program. Integral with the design and certification of the SRM are other pieces of equipment used in transportation, handling, and testing which influence the reliability and maintainability of the SRM configuration. The certification of this equipment is also discussed.
NASA Astrophysics Data System (ADS)
Broccard, Frédéric D.; Joshi, Siddharth; Wang, Jun; Cauwenberghs, Gert
2017-08-01
Objective. Computation in nervous systems operates with different computational primitives, and on different hardware, than traditional digital computation and is thus subjected to different constraints from its digital counterpart regarding the use of physical resources such as time, space and energy. In an effort to better understand neural computation on a physical medium with similar spatiotemporal and energetic constraints, the field of neuromorphic engineering aims to design and implement electronic systems that emulate in very large-scale integration (VLSI) hardware the organization and functions of neural systems at multiple levels of biological organization, from individual neurons up to large circuits and networks. Mixed analog/digital neuromorphic VLSI systems are compact, consume little power and operate in real time independently of the size and complexity of the model. Approach. This article highlights the current efforts to interface neuromorphic systems with neural systems at multiple levels of biological organization, from the synaptic to the system level, and discusses the prospects for future biohybrid systems with neuromorphic circuits of greater complexity. Main results. Single silicon neurons have been interfaced successfully with invertebrate and vertebrate neural networks. This approach allowed the investigation of neural properties that are inaccessible with traditional techniques while providing a realistic biological context not achievable with traditional numerical modeling methods. At the network level, populations of neurons are envisioned to communicate bidirectionally with neuromorphic processors of hundreds or thousands of silicon neurons. Recent work on brain-machine interfaces suggests that this is feasible with current neuromorphic technology. Significance. Biohybrid interfaces between biological neurons and VLSI neuromorphic systems of varying complexity have started to emerge in the literature. Primarily intended as a computational tool for investigating fundamental questions related to neural dynamics, the sophistication of current neuromorphic systems now allows direct interfaces with large neuronal networks and circuits, resulting in potentially interesting clinical applications for neuroengineering systems, neuroprosthetics and neurorehabilitation.
Survey of Verification and Validation Techniques for Small Satellite Software Development
NASA Technical Reports Server (NTRS)
Jacklin, Stephen A.
2015-01-01
The purpose of this paper is to provide an overview of the current trends and practices in small-satellite software verification and validation. This document is not intended to promote a specific software assurance method. Rather, it seeks to present an unbiased survey of software assurance methods used to verify and validate small satellite software and to make mention of the benefits and value of each approach. These methods include simulation and testing, verification and validation with model-based design, formal methods, and fault-tolerant software design with run-time monitoring. Although the literature reveals that simulation and testing has by far the longest legacy, model-based design methods are proving to be useful for software verification and validation. Some work in formal methods, though not widely used for any satellites, may offer new ways to improve small satellite software verification and validation. These methods need to be further advanced to deal with the state explosion problem and to make them more usable by small-satellite software engineers to be regularly applied to software verification. Last, it is explained how run-time monitoring, combined with fault-tolerant software design methods, provides an important means to detect and correct software errors that escape the verification process or those errors that are produced after launch through the effects of ionizing radiation.
Investigation, Development, and Evaluation of Performance Proving for Fault-tolerant Computers
NASA Technical Reports Server (NTRS)
Levitt, K. N.; Schwartz, R.; Hare, D.; Moore, J. S.; Melliar-Smith, P. M.; Shostak, R. E.; Boyer, R. S.; Green, M. W.; Elliott, W. D.
1983-01-01
A number of methodologies for verifying systems and computer based tools that assist users in verifying their systems were developed. These tools were applied to verify in part the SIFT ultrareliable aircraft computer. Topics covered included: STP theorem prover; design verification of SIFT; high level language code verification; assembly language level verification; numerical algorithm verification; verification of flight control programs; and verification of hardware logic.
NASA Technical Reports Server (NTRS)
Fura, David A.; Windley, Phillip J.; Cohen, Gerald C.
1993-01-01
This technical report contains the Higher-Order Logic (HOL) listings of the partial verification of the requirements and design for a commercially developed processor interface unit (PIU). The PIU is an interface chip performing memory interface, bus interface, and additional support services for a commercial microprocessor within a fault tolerant computer system. This system, the Fault Tolerant Embedded Processor (FTEP), is targeted towards applications in avionics and space requiring extremely high levels of mission reliability, extended maintenance-free operation, or both. This report contains the actual HOL listings of the PIU verification as it currently exists. Section 2 of this report contains general-purpose HOL theories and definitions that support the PIU verification. These include arithmetic theories dealing with inequalities and associativity, and a collection of tactics used in the PIU proofs. Section 3 contains the HOL listings for the completed PIU design verification. Section 4 contains the HOL listings for the partial requirements verification of the P-Port.
Testing interconnected VLSI circuits in the Big Viterbi Decoder
NASA Technical Reports Server (NTRS)
Onyszchuk, I. M.
1991-01-01
The Big Viterbi Decoder (BVD) is a powerful error-correcting hardware device for the Deep Space Network (DSN), in support of the Galileo and Comet Rendezvous Asteroid Flyby (CRAF)/Cassini Missions. Recently, a prototype was completed and run successfully at 400,000 or more decoded bits per second. This prototype is a complex digital system whose core arithmetic unit consists of 256 identical very large scale integration (VLSI) gate-array chips, 16 on each of 16 identical boards which are connected through a 28-layer, printed-circuit backplane using 4416 wires. Special techniques were developed for debugging, testing, and locating faults inside individual chips, on boards, and within the entire decoder. The methods are based upon hierarchical structure in the decoder, and require that chips or boards be wired themselves as Viterbi decoders. The basic procedure consists of sending a small set of known, very noisy channel symbols through a decoder, and matching observables against values computed by a software simulation. Also, tests were devised for finding open and short-circuited wires which connect VLSI chips on the boards and through the backplane.
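The core of the test procedure is a golden-model comparison. A minimal harness sketch, assuming hypothetical device and golden_model callables standing in for the DUT observables and the bit-true software simulation described in the text:

```python
def run_test_vectors(device, golden_model, vectors):
    """Drive the same known, very noisy channel symbols through the
    device under test and a software simulation, and report the first
    mismatch. Both arguments are hypothetical callables returning the
    observables (e.g. decoded bits and metrics) for one symbol block."""
    for i, symbols in enumerate(vectors):
        if device(symbols) != golden_model(symbols):
            return f"mismatch on vector {i}"
    return "all vectors match"

# The same harness applies at every level of the hierarchy: a single
# chip, a board wired as its own Viterbi decoder, or the full decoder.
```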
Orientation-selective aVLSI spiking neurons.
Liu, S C; Kramer, J; Indiveri, G; Delbrück, T; Burg, T; Douglas, R
2001-01-01
We describe a programmable multi-chip VLSI neuronal system that can be used for exploring spike-based information processing models. The system consists of a silicon retina, a PIC microcontroller, and a transceiver chip whose integrate-and-fire neurons are connected in a soft winner-take-all architecture. The circuit on this multi-neuron chip approximates a cortical microcircuit. The neurons can be configured for different computational properties by the virtual connections of a selected set of pixels on the silicon retina. The virtual wiring between the different chips is effected by an event-driven communication protocol that uses asynchronous digital pulses, similar to spikes in a neuronal system. We used the multi-chip spike-based system to synthesize orientation-tuned neurons using both a feedforward model and a feedback model. The performance of our analog hardware spiking model matched the experimental observations and digital simulations of continuous-valued neurons. The multi-chip VLSI system has advantages over computer neuronal models in that it is real-time, and the computational time does not scale with the size of the neuronal network.
Bidirectional Neural Interfaces
Greenwald, Elliot; Masters, Matthew R.; Thakor, Nitish V.
2016-01-01
A bidirectional neural interface is a device that transfers information into and out of the nervous system. This class of devices has potential to improve treatment and therapy in several patient populations. Progress in very-large-scale integration (VLSI) has advanced the design of complex integrated circuits. System-on-chip (SoC) devices are capable of recording neural electrical activity and altering natural activity with electrical stimulation. Often, these devices include wireless powering and telemetry functions. This review presents the state of the art of bidirectional circuits as applied to neuroprosthetic, neurorepair, and neurotherapeutic systems. PMID:26753776
Computer Aided Design of Integrated Circuit Fabrication Processes for VLSI Devices
1981-07-01
[Extraction residue: fragments on oxidation growth kinetics observed on N-type and P-type samples, whether (100)- or (111)-oriented; the figures referenced include oxidation rates for (100) and (111) samples (Figs. 5 and 6), flatband voltage Vfb versus oxide thickness xox for (100), (111), N-type and P-type samples (Figs. 9 and 10), the values of phi-MS obtained in the different cases (Fig. 11), and a plot of flatband voltage versus oxide thickness for (100) N-type wafers oxidized at 1000 °C and fast-pulled from the O2 ambient.]
A pipeline design of a fast prime factor DFT on a finite field
NASA Technical Reports Server (NTRS)
Truong, T. K.; Hsu, In-Shek; Shao, H. M.; Reed, Irving S.; Shyu, Hsuen-Chyun
1988-01-01
A conventional prime factor discrete Fourier transform (DFT) algorithm is used to realize a discrete Fourier-like transform on the finite field GF(q^n). This algorithm is developed to compute cyclic convolutions of complex numbers and to decode Reed-Solomon codes. Such a pipeline fast prime factor DFT algorithm over GF(q^n) is regular, simple, expandable, and naturally suitable for VLSI implementation. An example illustrating the pipeline aspect of a 30-point transform over GF(q^n) is presented.
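The underlying idea, a DFT over a finite field, can be sketched directly. The toy below works in the prime field GF(31), where 3 has multiplicative order 30, matching the 30-point example; the paper's GF(q^n) construction and the pipeline prime-factor structure are omitted:

```python
P = 31     # prime field GF(31)
W = 3      # element of multiplicative order 30 in GF(31)
N = 30     # transform length, as in the paper's example

def ntt(x):
    """Direct N-point discrete Fourier transform over GF(P)."""
    return [sum(x[n] * pow(W, k * n, P) for n in range(N)) % P
            for k in range(N)]

def intt(X):
    """Inverse transform: use the inverse root and scale by N^-1 mod P."""
    w_inv = pow(W, P - 2, P)            # Fermat inverse of the root
    n_inv = pow(N, P - 2, P)            # Fermat inverse of N
    return [(n_inv * sum(X[k] * pow(w_inv, k * n, P) for k in range(N))) % P
            for n in range(N)]

x = list(range(N))
assert intt(ntt(x)) == x                # the round trip recovers x exactly
```

Because all arithmetic is exact modular arithmetic, there is no round-off error, which is one reason such transforms suit cyclic convolution and Reed-Solomon decoding.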
Design of a high-speed digital processing element for parallel simulation
NASA Technical Reports Server (NTRS)
Milner, E. J.; Cwynar, D. S.
1983-01-01
A prototype of a custom designed computer to be used as a processing element in a multiprocessor based jet engine simulator is described. The purpose of the custom design was to give the computer the speed and versatility required to simulate a jet engine in real time. Real time simulations are needed for closed loop testing of digital electronic engine controls. The prototype computer has a microcycle time of 133 nanoseconds. This speed was achieved by prefetching the next instruction while the current one is executing, transporting data using high-speed data buses, and using state-of-the-art components such as a very large scale integration (VLSI) multiplier. Included are discussions of processing element requirements, design philosophy, the architecture of the custom designed processing element, the comprehensive instruction set, the diagnostic support software, and the development status of the custom design.
Handbook: Design of automated redundancy verification
NASA Technical Reports Server (NTRS)
Ford, F. A.; Hasslinger, T. W.; Moreno, F. J.
1971-01-01
The use of the handbook is discussed and the design progress is reviewed. A description of the problem is presented, and examples are given to illustrate the necessity for redundancy verification, along with the types of situations to which it is typically applied. Reusable space vehicles, such as the space shuttle, are recognized as being significant in the development of the automated redundancy verification problem.
22 CFR 123.14 - Import certificate/delivery verification procedure.
Code of Federal Regulations, 2010 CFR
2010-04-01
... REGULATIONS LICENSES FOR THE EXPORT OF DEFENSE ARTICLES § 123.14 Import certificate/delivery verification procedure. (a) The Import Certificate/Delivery Verification Procedure is designed to assure that a commodity... 22 Foreign Relations 1 2010-04-01 2010-04-01 false Import certificate/delivery verification...
22 CFR 123.14 - Import certificate/delivery verification procedure.
Code of Federal Regulations, 2011 CFR
2011-04-01
... REGULATIONS LICENSES FOR THE EXPORT OF DEFENSE ARTICLES § 123.14 Import certificate/delivery verification procedure. (a) The Import Certificate/Delivery Verification Procedure is designed to assure that a commodity... 22 Foreign Relations 1 2011-04-01 2011-04-01 false Import certificate/delivery verification...
VLSI implementation of a flexible architecture for decision tree classification in data mining
NASA Astrophysics Data System (ADS)
Sharma, K. Venkatesh; Shewandagn, Behailu; Bhukya, Shankar Nayak
2017-07-01
Data mining algorithms have become vital to researchers in science, engineering, medicine, business, search, and security domains. In recent years, there has been a tremendous rise in the size of the data being collected and analyzed. Classification is a central difficulty in data mining, and among the solutions developed for this problem the most widely accepted is Decision Tree Classification (DTC), which gives high precision while handling very large amounts of data. This paper presents a VLSI implementation of a flexible architecture for decision tree classification in data mining using the C4.5 algorithm.
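At the heart of C4.5 is the split criterion that a DTC datapath must evaluate for every candidate attribute: information gain normalized by the entropy of the split itself. A small sketch (the toy dataset is illustrative):

```python
from math import log2
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label multiset, in bits."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def gain_ratio(rows, labels, attr):
    """C4.5 split criterion for a categorical attribute index:
    information gain divided by the split information."""
    n = len(rows)
    groups = {}
    for row, y in zip(rows, labels):
        groups.setdefault(row[attr], []).append(y)
    remainder = sum(len(g) / n * entropy(g) for g in groups.values())
    gain = entropy(labels) - remainder
    split_info = entropy([row[attr] for row in rows])
    return gain / split_info if split_info else 0.0

rows   = [("sunny", "hot"), ("sunny", "mild"), ("rain", "mild"), ("rain", "hot")]
labels = ["no", "no", "yes", "yes"]
print(gain_ratio(rows, labels, 0))   # attribute 0 separates perfectly -> 1.0
```

The tree builder chooses the attribute with the highest gain ratio at each node, which is why a hardware implementation benefits from parallel evaluation of the candidate attributes.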
Software development for airborne radar
NASA Astrophysics Data System (ADS)
Sundstrom, Ingvar G.
Some aspects of software development for a modern multimode airborne nose radar are described. First, an overview of where software is used in the radar units is presented. The development phases (system design, functional design, detailed design, function verification, and system verification) are then used as the starting point for the discussion. Methods, tools, and the most important documents are described. The importance of video flight recording in the early stages and the use of digital signal generators for performance verification are emphasized. Some future trends are discussed.
Real-time qualitative reasoning for telerobotic systems
NASA Technical Reports Server (NTRS)
Pin, Francois G.
1993-01-01
This paper discusses the sensor-based telerobotic driving of a car in a priori unknown environments using 'human-like' reasoning schemes implemented on custom-designed VLSI fuzzy inferencing boards. These boards use the fuzzy set theoretic framework to allow very fast (30 kHz) processing of full sets of information that are expressed in qualitative form using membership functions. The sensor-based fuzzy inferencing system was incorporated on an outdoor test-bed platform to investigate two control modes for driving a car on the basis of very sparse and imprecise range data. In the first mode, the car navigates fully autonomously to a goal specified by the operator, while in the second mode, the system acts as a telerobotic driver's aid, providing the driver with linguistic (fuzzy) commands to turn left or right, speed up, slow down, stop, or back up depending on the obstacles perceived by the sensors. Indoor and outdoor experiments with both modes of control are described in which the system uses only three acoustic range (sonar) sensor channels to perceive the environment. Sample results are presented that illustrate the feasibility of developing autonomous navigation modules and robust, safety-enhancing driver's aids for telerobotic systems using the new fuzzy inferencing VLSI hardware and 'human-like' reasoning schemes.
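The flavor of the inferencing can be sketched with a toy rule base: a membership function grades each sonar range as 'near', rules combine antecedent degrees with min (fuzzy AND), and the strongest consequent becomes the linguistic command. The membership function and the four rules below are illustrative assumptions, not the rule base on the VLSI boards:

```python
def near(r):
    """Membership of a sonar range (meters) in the fuzzy set 'near':
    1.0 at contact, falling linearly to 0.0 at 3 m."""
    return max(0.0, min(1.0, (3.0 - r) / 3.0))

def drivers_aid(left, front, right):
    """Each rule fires with the min of its antecedent degrees;
    the strongest consequent becomes the linguistic command."""
    strength = {
        "slow down":  near(front),
        "turn left":  min(near(right), 1.0 - near(left)),
        "turn right": min(near(left), 1.0 - near(right)),
        "speed up":   1.0 - max(near(left), near(front), near(right)),
    }
    return max(strength, key=strength.get)

print(drivers_aid(left=0.8, front=5.0, right=6.0))   # turn right
```

Because every rule is evaluated over the full membership grades rather than crisp thresholds, sparse and imprecise range readings still produce graded, stable commands, which is the property the custom boards exploit at 30 kHz.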
ELIPS: Toward a Sensor Fusion Processor on a Chip
NASA Technical Reports Server (NTRS)
Daud, Taher; Stoica, Adrian; Tyson, Thomas; Li, Wei-te; Fabunmi, James
1998-01-01
The paper presents the concept and initial tests from the hardware implementation of a low-power, high-speed reconfigurable sensor fusion processor. The Extended Logic Intelligent Processing System (ELIPS) processor is developed to seamlessly combine rule-based systems, fuzzy logic, and neural networks to achieve parallel fusion of sensor data in compact low-power VLSI. The first demonstration of the ELIPS concept targets interceptor functionality; other applications, mainly in robotics and autonomous systems, are considered for the future. The main assumption behind ELIPS is that fuzzy, rule-based, and neural forms of computation can serve as the main primitives of an "intelligent" processor. Thus, in the same way classic processors are designed to optimize the hardware implementation of a set of fundamental operations, ELIPS is developed as an efficient implementation of computational intelligence primitives, and relies on a set of fuzzy set, fuzzy inference, and neural modules built in programmable analog hardware. The hardware programmability allows the processor to reconfigure into different machines, taking the most efficient hardware implementation during each phase of information processing. Following software demonstrations on several sets of interceptor data, three important ELIPS building blocks (a fuzzy set preprocessor, a rule-based fuzzy system, and a neural network) have been fabricated in analog VLSI hardware and demonstrated microsecond processing times.
Motion-sensor fusion-based gesture recognition and its VLSI architecture design for mobile devices
NASA Astrophysics Data System (ADS)
Zhu, Wenping; Liu, Leibo; Yin, Shouyi; Hu, Siqi; Tang, Eugene Y.; Wei, Shaojun
2014-05-01
With the rapid proliferation of smartphones and tablets, various embedded sensors are incorporated into these platforms to enable multimodal human-computer interfaces. Gesture recognition, as an intuitive interaction approach, has been extensively explored in the mobile computing community. However, most gesture recognition implementations to date are user-dependent and rely only on the accelerometer. In order to achieve competitive accuracy, users are required to hold the devices in a predefined manner during operation. In this paper, a high-accuracy human gesture recognition system is proposed based on multiple motion sensor fusion. Furthermore, to reduce the energy overhead resulting from frequent sensor sampling and data processing, a highly energy-efficient VLSI architecture implemented on a Xilinx Virtex-5 FPGA board is also proposed. Compared with the pure software implementation, approximately a 45-times speed-up is achieved while operating at 20 MHz. The experiments show that the average accuracy for 10 gestures achieves 93.98% for the user-independent case and 96.14% for the user-dependent case when subjects hold the device randomly while completing the specified gestures. Although a few percent lower than the best conventional results, this still provides competitive accuracy acceptable for practical usage. Most importantly, the proposed system allows users to hold the device randomly while performing the predefined gestures, which substantially enhances the user experience.
A Low Cost VLSI Architecture for Spike Sorting Based on Feature Extraction with Peak Search.
Chang, Yuan-Jyun; Hwang, Wen-Jyi; Chen, Chih-Chang
2016-12-07
The goal of this paper is to present a novel VLSI architecture for spike sorting with high classification accuracy, low area costs and low power consumption. A novel feature extraction algorithm with low computational complexities is proposed for the design of the architecture. In the feature extraction algorithm, a spike is separated into two portions based on its peak value. The area of each portion is then used as a feature. The algorithm is simple to implement and less susceptible to noise interference. Based on the algorithm, a novel architecture capable of identifying peak values and computing spike areas concurrently is proposed. To further accelerate the computation, a spike can be divided into a number of segments for the local feature computation. The local features are subsequently merged with the global ones by a simple hardware circuit. The architecture can also be easily operated in conjunction with the circuits for commonly-used spike detection algorithms, such as the Non-linear Energy Operator (NEO). The architecture has been implemented by an Application-Specific Integrated Circuit (ASIC) with 90-nm technology. Comparisons to the existing works show that the proposed architecture is well suited for real-time multi-channel spike detection and feature extraction requiring low hardware area costs, low power consumption and high classification accuracy.
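The feature extraction at the core of the architecture is simple enough to sketch directly: locate the peak, split the waveform there, and take the area of each side as a two-dimensional feature. Details such as rectification and windowing are assumptions here, not taken from the paper:

```python
import numpy as np

def peak_split_features(spike):
    """Split the waveform at its peak and use the rectified area of
    each portion as the feature pair (segmentation details assumed)."""
    k = int(np.argmax(spike))                      # peak location
    return (float(np.sum(np.abs(spike[:k + 1]))),  # area before the peak
            float(np.sum(np.abs(spike[k:]))))      # area after the peak

# Toy spike: fast rise, slow exponential decay.
t = np.arange(32)
spike = np.where(t < 4, t / 4.0, np.exp(-(t - 4) / 8.0))
print(peak_split_features(spike))   # small left area, larger right area
```

Because the two areas are running sums that can be accumulated while the peak is being searched, hardware can compute both features in a single pass, which is what the architecture's concurrent peak-search and area units exploit.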
Seismic design verification of LMFBR structures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1977-07-01
The report provides an assessment of the seismic design verification procedures currently used for nuclear power plant structures, a comparison of available dynamic test methods, and conclusions and recommendations for future LMFBR structures.
2017-08-01
[Extraction residue: fragments of a report on MARATHON verification, including a standard disclaimer that the findings are not to be construed as an official Department of the Army position, policy, or decision unless so designated; the fragments note that the MARATHON 2 verification cases were designed to ensure correct implementation of the new algorithms rather than output comparability with MARATHON 1, and that this study is a comparative verification of the functionality of MARATHON 4, the newest implementation of MARATHON.]
NASA Technical Reports Server (NTRS)
1975-01-01
The findings are presented of investigations on concepts and techniques in automated performance verification. The investigations were conducted to provide additional insight into the design methodology and to develop a consolidated technology base from which to analyze performance verification design approaches. Other topics discussed include data smoothing, function selection, flow diagrams, data storage, and shuttle hydraulic systems.
Mixed-mode VLSI optic flow sensors for micro air vehicles
NASA Astrophysics Data System (ADS)
Barrows, Geoffrey Louis
We develop practical, compact optic flow sensors. To achieve the desired weight of 1-2 grams, mixed-mode and mixed-signal VLSI techniques are used to develop compact circuits that directly perform the computations necessary to measure optic flow. We discuss several implementations, including a version fully integrated in VLSI, and several "hybrid sensors" in which the front end processing is performed with an analog chip and the back end processing is performed with a microcontroller. We extensively discuss one-dimensional optic flow sensors based on the linear competitive feature tracker (LCFT) algorithm. Hardware implementations of this algorithm are shown to be able to measure visual motion with contrast levels on the order of several percent. We argue that the development of one-dimensional optic flow sensors is therefore reduced to a problem of engineering. We also introduce two related two-dimensional optic flow algorithms that are amenable to implementation in VLSI: the planar competitive feature tracker (PCFT) algorithm and the trajectory method. These sensors are being developed to solve small-scale navigation problems in micro air vehicles, which are autonomous aircraft whose maximum dimension is on the order of 15 cm. We obtain a proof-of-principle of small-scale navigation by mounting a prototype sensor onto a toy glider and programming the sensor to control a rudder or an elevator to affect the glider's path during flight. We demonstrate the determination of altitude by measuring optic flow in the downward direction. We also demonstrate steering to avoid a collision with a wall, when the glider is tossed towards the wall at a shallow angle, by measuring the optic flow in the direction of the glider's left and right side.
High performance VLSI telemetry data systems
NASA Technical Reports Server (NTRS)
Chesney, J.; Speciale, N.; Horner, W.; Sabia, S.
1990-01-01
NASA's deployment of major space complexes such as Space Station Freedom (SSF) and the Earth Observing System (EOS) will demand increased functionality and performance from ground based telemetry acquisition systems well above current system capabilities. Adaptation of space telemetry data transport and processing standards such as those specified by the Consultative Committee for Space Data Systems (CCSDS) standards and those required for commercial ground distribution of telemetry data, will drive these functional and performance requirements. In addition, budget limitations will force the requirement for higher modularity, flexibility, and interchangeability at lower cost in new ground telemetry data system elements. At NASA's Goddard Space Flight Center (GSFC), the design and development of generic ground telemetry data system elements, over the last five years, has resulted in significant solutions to these problems. This solution, referred to as the functional components approach includes both hardware and software components ready for end user application. The hardware functional components consist of modern data flow architectures utilizing Application Specific Integrated Circuits (ASIC's) developed specifically to support NASA's telemetry data systems needs and designed to meet a range of data rate requirements up to 300 Mbps. Real-time operating system software components support both embedded local software intelligence, and overall system control, status, processing, and interface requirements. These components, hardware and software, form the superstructure upon which project specific elements are added to complete a telemetry ground data system installation. This paper describes the functional components approach, some specific component examples, and a project example of the evolution from VLSI component, to basic board level functional component, to integrated telemetry data system.
Imbalance aware lithography hotspot detection: a deep learning approach
NASA Astrophysics Data System (ADS)
Yang, Haoyu; Luo, Luyang; Su, Jing; Lin, Chenxi; Yu, Bei
2017-07-01
With the advancement of very large scale integrated circuit (VLSI) technology nodes, lithographic hotspots have become a serious problem affecting manufacturing yield. Lithography hotspot detection at the post-OPC stage is imperative to check potential circuit failures when transferring designed patterns onto silicon wafers. Although conventional lithography hotspot detection methods, such as machine learning, have achieved satisfactory performance, with the extreme scaling of transistor feature size and layout patterns growing in complexity, conventional methodologies may suffer from performance degradation. For example, manual or ad hoc feature extraction in a machine learning framework may lose important information when predicting potential errors in ultra-large-scale integrated circuit masks. We present a deep convolutional neural network (CNN) that targets representative feature learning in lithography hotspot detection. We carefully analyze the impact and effectiveness of different CNN hyperparameters, through which a hotspot-detection-oriented neural network model is established. Because hotspot patterns are always in the minority in VLSI mask design, the training dataset is highly imbalanced. In this situation, a neural network is no longer reliable, because a trained model with high classification accuracy may still suffer from a high number of false negative results (missing hotspots), which is fatal in hotspot detection problems. To address the imbalance problem, we further apply hotspot upsampling and random-mirror flipping before training the network. Experimental results show that our proposed neural network model achieves comparable or better performance on the ICCAD 2012 contest benchmark compared to state-of-the-art hotspot detectors based on deep or representative machine learning.
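The imbalance-handling step described above can be sketched as a pre-training data transform: replicate minority (hotspot) clips with random mirror flips until the classes are balanced. The sketch assumes mirror flips preserve the hotspot label, as the paper's augmentation implies; the toy clip shapes are illustrative:

```python
import numpy as np

def balance_with_mirrors(clips, labels, seed=0):
    """Equalize hotspot (1) and non-hotspot (0) counts by replicating
    minority clips with random horizontal/vertical mirror flips."""
    rng = np.random.default_rng(seed)
    clips, labels = np.asarray(clips, float), np.asarray(labels)
    minority = clips[labels == 1]
    deficit = int((labels == 0).sum() - (labels == 1).sum())
    extra = []
    for i in rng.integers(0, len(minority), deficit):
        clip = minority[i]
        if rng.random() < 0.5:
            clip = clip[:, ::-1]          # horizontal mirror
        if rng.random() < 0.5:
            clip = clip[::-1, :]          # vertical mirror
        extra.append(clip)
    return (np.concatenate([clips, np.stack(extra)]),
            np.concatenate([labels, np.ones(deficit, labels.dtype)]))

# Toy layout clips: 8 non-hotspots vs 2 hotspots -> 8 vs 8 after balancing.
X, y = balance_with_mirrors(np.zeros((10, 16, 16)), [0] * 8 + [1] * 2)
print((y == 0).sum(), (y == 1).sum())     # 8 8
```

Balancing the classes this way discourages the trained model from buying overall accuracy at the cost of the false negatives that the abstract identifies as fatal.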
The Environmental Technology Verification Program, established by the EPA, is designed to accelerate the development and commercialization of new or improved technologies through third-party verification and reporting of performance.
Design and Verification of Critical Pressurised Windows for Manned Spaceflight
NASA Astrophysics Data System (ADS)
Lamoure, Richard; Busto, Lara; Novo, Francisco; Sinnema, Gerben; Leal, Mendes M.
2014-06-01
The Window Design for Manned Spaceflight (WDMS) project was tasked with establishing the state of the art and exploring possible improvements to the current structural integrity verification and fracture control methodologies for manned spacecraft windows. A critical review of the state of the art in spacecraft window design, materials, and verification practice was conducted. Shortcomings of the methodology in terms of analysis, inspection, and testing were identified. Schemes for improving verification practices and reducing conservatism whilst maintaining the required safety levels were then proposed. An experimental materials characterisation programme was defined and carried out with the support of the 'Glass and Façade Technology Research Group' at the University of Cambridge. Results of the sample testing campaign were analysed, post-processed, and subsequently applied to the design of a breadboard window demonstrator. Two fused silica glass window panes were procured and subjected to dedicated analyses, inspection, and testing comprising both qualification and acceptance programmes specifically tailored to the objectives of the activity. Finally, the main outcomes have been compiled into a Structural Verification Guide for Pressurised Windows in manned spacecraft, incorporating best practices and lessons learned throughout this project.
ENVIRONMENTAL TECHNOLOGY VERIFICATION PROGRAM FOR MONITORING AND CHARACTERIZATION
The Environmental Technology Verification Program is a service of the Environmental Protection Agency designed to accelerate the development and commercialization of improved environmental technology through third party verification and reporting of performance. The goal of ETV i...
Off-line, built-in test techniques for VLSI circuits
NASA Technical Reports Server (NTRS)
Buehler, M. G.; Sievers, M. W.
1982-01-01
It is shown that the use of redundant on-chip circuitry improves the testability of an entire VLSI circuit. In the study described here, five techniques applied to a two-bit ripple carry adder are compared. The techniques considered are self-oscillation, self-comparison, partition, scan path, and built-in logic block observer. It is noted that both classical stuck-at faults and nonclassical faults, such as bridging faults (shorts), stuck-on x faults where x may be 0, 1, or vary between the two, and parasitic flip-flop faults occur in IC structures. To simplify the analysis of the testing techniques, however, a stuck-at fault model is assumed.
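A pin-level stuck-at fault of the kind assumed above is simple to emulate in software. The sketch below is a behavioural toy, not the paper's test circuitry: the signal names and the fault dictionary are hypothetical. It forces an internal carry node of a two-bit ripple-carry adder to a fixed value and counts which input vectors expose the fault.

```python
from itertools import product

def full_adder(a, b, cin):
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def adder2(a0, a1, b0, b1, faults=None):
    """2-bit ripple-carry adder with optional stuck-at faults.

    faults maps an internal signal name to a forced value (0 or 1),
    e.g. {'c0': 1} models the carry node c0 stuck-at-1.
    """
    faults = faults or {}
    f = lambda name, v: faults.get(name, v)
    s0, c0 = full_adder(f('a0', a0), f('b0', b0), 0)
    c0 = f('c0', c0)                       # fault injection point
    s1, c1 = full_adder(f('a1', a1), f('b1', b1), c0)
    return f('s0', s0), f('s1', s1), f('c1', c1)

# Exhaustive test: compare faulty against fault-free behaviour to see
# which of the 16 input vectors detect a given stuck-at fault.
detecting = [v for v in product((0, 1), repeat=4)
             if adder2(*v) != adder2(*v, faults={'c0': 1})]
print(f"{len(detecting)}/16 vectors detect c0 stuck-at-1")
```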
Modulation and coding for satellite and space communications
NASA Technical Reports Server (NTRS)
Yuen, Joseph H.; Simon, Marvin K.; Pollara, Fabrizio; Divsalar, Dariush; Miller, Warner H.; Morakis, James C.; Ryan, Carl R.
1990-01-01
Several modulation and coding advances supported by NASA are summarized. To support long-constraint-length convolutional codes, a VLSI maximum-likelihood decoder utilizing parallel processing techniques, which is being developed to decode convolutional codes of constraint length 15 and a code rate as low as 1/6, is discussed. A VLSI high-speed 8-bit Reed-Solomon decoder being developed for advanced tracking and data relay satellite (ATDRS) applications is discussed. A 300-Mb/s modem with continuous phase modulation (CPM) and coding, being developed for ATDRS, is also discussed. Trellis-coded modulation (TCM) techniques are discussed for satellite-based mobile communication applications.
A VLSI architecture for simplified arithmetic Fourier transform algorithm
NASA Technical Reports Server (NTRS)
Reed, Irving S.; Shih, Ming-Tang; Truong, T. K.; Hendon, E.; Tufts, D. W.
1992-01-01
The arithmetic Fourier transform (AFT) is a number-theoretic approach to Fourier analysis which has been shown to perform competitively with the classical FFT in terms of accuracy, complexity, and speed. Theorems developed in a previous paper for the AFT algorithm are used here to derive the original AFT algorithm which Bruns found in 1903. This is shown to yield an algorithm of lower complexity and improved performance over certain recent AFT algorithms. A VLSI architecture is suggested for this simplified AFT algorithm. This architecture uses a butterfly structure which reduces the number of additions by 25 percent relative to the direct method.
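The number-theoretic core of the AFT is Möbius inversion over scaled sample averages. The sketch below implements the basic zero-mean cosine-series form (the Tufts-Sadasiv style, not Bruns' alternating-average variant derived above): the averages S_n need only additions, and the Möbius weights lie in {-1, 0, +1}, so no true multiplications are required.

```python
import numpy as np

def mobius(n):
    """Möbius function mu(n) by trial factorisation (fine for small n)."""
    mu, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0          # squared prime factor
            mu = -mu
        p += 1
    return -mu if n > 1 else mu

def aft_cosine_coeffs(f, N):
    """Fourier cosine coefficients a_1..a_N of a zero-mean even periodic f
    on [0, 1) via the AFT: the n-point averages
    S_n = (1/n) * sum_{m<n} f(m/n) satisfy S_n = sum_{j>=1} a_{j*n},
    which Möbius inversion unwinds to a_n = sum_l mu(l) * S_{n*l}."""
    S = {n: np.mean([f(m / n) for m in range(n)]) for n in range(1, N + 1)}
    return [sum(mobius(l) * S[n * l] for l in range(1, N // n + 1))
            for n in range(1, N + 1)]

# Sanity check on f(t) = 2 cos(2 pi t) + 0.5 cos(6 pi t):
f = lambda t: 2 * np.cos(2 * np.pi * t) + 0.5 * np.cos(6 * np.pi * t)
print(np.round(aft_cosine_coeffs(f, 6), 6))  # -> [2, 0, 0.5, 0, 0, 0]
```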
A VLSI architecture for performing finite field arithmetic with reduced table look-up
NASA Technical Reports Server (NTRS)
Hsu, I. S.; Truong, T. K.; Reed, I. S.
1986-01-01
A new table look-up method for finding the log and antilog of finite field elements has been developed by N. Glover. In his method, the log and antilog of a field element are found through the use of several smaller tables. The method is based on the Chinese Remainder Theorem and often results in a significant reduction in the memory requirements of the problem. A VLSI architecture is developed for a special case of this new algorithm to perform finite field arithmetic including multiplication, division, and the finding of an inverse element in the finite field.
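The flavour of the method can be sketched in software. Below, a GF(2^8) multiplier works through log/antilog tables, with the exponent (mod 255 = 3 x 5 x 17) split into residues that are added component-wise and recombined by the Chinese Remainder Theorem. The primitive polynomial 0x11B and generator 0x03 are illustrative assumptions, and a real reduced-table design would index small per-residue tables rather than a single 255-entry antilog; this is a sketch of the principle only.

```python
# Log/antilog tables for GF(2^8) with primitive polynomial x^8+x^4+x^3+x+1
# (0x11B) and generator g = 0x03 -- both illustrative assumptions.
POLY, G = 0x11B, 0x03

def ref_mul(a, b):
    """Reference shift-and-add (carry-less) multiply modulo POLY."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= POLY
        b >>= 1
    return r

antilog = [1] * 255
for i in range(1, 255):
    antilog[i] = ref_mul(antilog[i - 1], G)
log = {antilog[i]: i for i in range(255)}

def crt(r3, r5, r17):
    # CRT weights w_q = (255/q) * ((255/q)^-1 mod q), computed inline.
    return (r3 * 85 * pow(85, -1, 3) + r5 * 51 * pow(51, -1, 5)
            + r17 * 15 * pow(15, -1, 17)) % 255

def gf_mul(a, b):
    """Multiply via logs; the exponent sum is carried as residues mod
    3, 5, 17 (three tiny mod-adders in hardware) and recombined by CRT."""
    if a == 0 or b == 0:
        return 0
    e = log[a] + log[b]
    return antilog[crt(e % 3, e % 5, e % 17)]

assert all(gf_mul(a, b) == ref_mul(a, b)
           for a in (0, 1, 2, 87, 255) for b in (0, 3, 119, 200))
```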
A Compact VLSI System for Bio-Inspired Visual Motion Estimation.
Shi, Cong; Luo, Gang
2018-04-01
This paper proposes a bio-inspired visual motion estimation algorithm based on motion energy, along with its compact very-large-scale integration (VLSI) architecture using low-cost embedded systems. The algorithm mimics motion perception functions of retina, V1, and MT neurons in a primate visual system. It involves operations of ternary edge extraction, spatiotemporal filtering, motion energy extraction, and velocity integration. Moreover, we propose the concept of confidence map to indicate the reliability of estimation results on each probing location. Our algorithm involves only additions and multiplications during runtime, which is suitable for low-cost hardware implementation. The proposed VLSI architecture employs multiple (frame, pixel, and operation) levels of pipeline and massively parallel processing arrays to boost the system performance. The array unit circuits are optimized to minimize hardware resource consumption. We have prototyped the proposed architecture on a low-cost field-programmable gate array platform (Zynq 7020) running at 53-MHz clock frequency. It achieved 30-frame/s real-time performance for velocity estimation on 160 × 120 probing locations. A comprehensive evaluation experiment showed that the estimated velocity by our prototype has relatively small errors (average endpoint error < 0.5 pixel and angular error < 10°) for most motion cases.
Electronic neural network for dynamic resource allocation
NASA Technical Reports Server (NTRS)
Thakoor, A. P.; Eberhardt, S. P.; Daud, T.
1991-01-01
A VLSI-implementable neural network architecture for dynamic assignment is presented. The resource allocation problems involve assigning members of one set (e.g. resources) to those of another (e.g. consumers) such that the global 'cost' of the associations is minimized. The network consists of a matrix of sigmoidal processing elements (neurons), where the rows of the matrix represent resources and columns represent consumers. Unlike previous neural implementations, however, association costs are applied directly to the neurons, reducing connectivity of the network to a VLSI-compatible O(number of neurons). Each row (and column) has an additional neuron associated with it to independently oversee activations of all the neurons in each row (and each column), providing a programmable 'k-winner-take-all' function. This function simultaneously enforces blocking (excitatory/inhibitory) constraints during convergence to control the number of active elements in each row and column within desired boundary conditions. Simulations show that the network, when implemented in fully parallel VLSI hardware, offers optimal (or near-optimal) solutions within only a fraction of a millisecond, for problems up to 128 resources and 128 consumers, orders of magnitude faster than conventional computing or heuristic search methods.
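The row/column constraint structure can be conveyed with a software relaxation. The sketch below is not the chip's analog neuron dynamics: it uses the related softassign/Sinkhorn iteration, in which alternating row and column normalisations of exp(-beta * cost) play the role of the extra row and column neurons enforcing one winner per row and column. All parameters are illustrative assumptions.

```python
import numpy as np

def soft_assign(cost, beta=20.0, iters=200):
    """Softassign/Sinkhorn sketch of the assignment relaxation: costs act
    directly on the activations, and repeated row/column normalisation
    stands in for the k-winner-take-all row and column neurons (k = 1)."""
    m = np.exp(-beta * (cost - cost.min()))
    for _ in range(iters):
        m /= m.sum(axis=1, keepdims=True)   # each resource serves ~1 consumer
        m /= m.sum(axis=0, keepdims=True)   # each consumer gets ~1 resource
    return m

rng = np.random.default_rng(1)
cost = rng.random((6, 6))
print(soft_assign(cost).round(2))  # near-permutation favouring low-cost pairs
```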
A technique for evaluating the application of the pin-level stuck-at fault model to VLSI circuits
NASA Technical Reports Server (NTRS)
Palumbo, Daniel L.; Finelli, George B.
1987-01-01
Accurate fault models are required to conduct the experiments defined in validation methodologies for highly reliable fault-tolerant computers (e.g., computers with a probability of failure of 10 to the -9 for a 10-hour mission). Described is a technique by which a researcher can evaluate the capability of the pin-level stuck-at fault model to simulate true error behavior symptoms in very large scale integrated (VLSI) digital circuits. The technique is based on a statistical comparison of the error behavior resulting from faults applied at the pin-level of and internal to a VLSI circuit. As an example of an application of the technique, the error behavior of a microprocessor simulation subjected to internal stuck-at faults is compared with the error behavior which results from pin-level stuck-at faults. The error behavior is characterized by the time between errors and the duration of errors. Based on this example data, the pin-level stuck-at fault model is found to deliver less than ideal performance. However, with respect to the class of faults which cause a system crash, the pin-level, stuck-at fault model is found to provide a good modeling capability.
Hardware acceleration and verification of systems designed with hardware description languages (HDL)
NASA Astrophysics Data System (ADS)
Wisniewski, Remigiusz; Wegrzyn, Marek
2005-02-01
Hardware description languages (HDLs) allow ever larger designs to be created; the size of prototyped systems now often exceeds a million gates. Consequently, the verification process for such designs takes several hours or even days. This problem can be addressed by hardware acceleration of the simulation.
A Design Rationale Capture Tool to Support Design Verification and Re-use
NASA Technical Reports Server (NTRS)
Hooey, Becky Lee; Da Silva, Jonny C.; Foyle, David C.
2012-01-01
A design rationale tool (DR tool) was developed to capture design knowledge to support design verification and design knowledge re-use. The design rationale tool captures design drivers and requirements, and documents the design solution including: intent (why it is included in the overall design); features (why it is designed the way it is); information about how the design components support design drivers and requirements; and, design alternatives considered but rejected. For design verification purposes, the tool identifies how specific design requirements were met and instantiated within the final design, and which requirements have not been met. To support design re-use, the tool identifies which design decisions are affected when design drivers and requirements are modified. To validate the design tool, the design knowledge from the Taxiway Navigation and Situation Awareness (T-NASA; Foyle et al., 1996) system was captured and the DR tool was exercised to demonstrate its utility for validation and re-use.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-22
.... SUPPLEMENTARY INFORMATION: RI 38-107, Verification of Who is Getting Payments, is designed for use by the... OFFICE OF PERSONNEL MANAGEMENT Submission for Review: Verification of Who Is Getting Payments, RI... currently approved information collection request (ICR) 3206-0197, Verification of Who is Getting Payments...
Crew Exploration Vehicle (CEV) Avionics Integration Laboratory (CAIL) Independent Analysis
NASA Technical Reports Server (NTRS)
Davis, Mitchell L.; Aguilar, Michael L.; Mora, Victor D.; Regenie, Victoria A.; Ritz, William F.
2009-01-01
Two approaches were compared to the Crew Exploration Vehicle (CEV) Avionics Integration Laboratory (CAIL) approach: the Flat-Sat and the Shuttle Avionics Integration Laboratory (SAIL). The Flat-Sat and CAIL/SAIL approaches are two different tools designed to mitigate different risks. The Flat-Sat approach is designed to develop a mission concept into a flight avionics system and associated ground controller. The SAIL approach is designed to aid in the flight readiness verification of the flight avionics system. The approaches are complementary, addressing both system development risks and mission verification risks. The following NESC team findings were identified: the CAIL assumption is that the flight subsystems will be matured for the system-level verification; the Flat-Sat and SAIL approaches are two different tools designed to mitigate different risks. The following NESC team recommendation was provided: define, document, and manage a detailed interface between the design and development activities (EDL and other integration labs) and the verification laboratory (CAIL).
Ada(R) Test and Verification System (ATVS)
NASA Technical Reports Server (NTRS)
Strelich, Tom
1986-01-01
The Ada Test and Verification System (ATVS) functional description and high level design are completed and summarized. The ATVS will provide a comprehensive set of test and verification capabilities specifically addressing the features of the Ada language, support for embedded system development, distributed environments, and advanced user interface capabilities. Its design emphasis was on effective software development environment integration and flexibility to ensure its long-term use in the Ada software development community.
WRAP-RIB antenna technology development
NASA Technical Reports Server (NTRS)
Freeland, R. E.; Garcia, N. F.; Iwamoto, H.
1985-01-01
The wrap-rib deployable antenna concept development is based on a combination of hardware development and testing along with extensive supporting analysis. The proof-of-concept hardware models are large in size so that they address the same basic problems associated with design, fabrication, assembly, and test as the full-scale systems, which were selected to be 100 meters at the beginning of the program. The hardware evaluation program consists of functional performance tests, design verification tests, and analytical model verification tests. Functional testing consists of kinematic deployment, mesh management, and verification of mechanical packaging efficiencies. Design verification consists of rib contour precision measurement, rib cross-section variation evaluation, rib materials characterizations, and manufacturing imperfections assessment. Analytical model verification and refinement include mesh stiffness measurement, rib static and dynamic testing, mass measurement, and rib cross-section characterization. This concept was considered for a number of potential applications that include mobile communications, VLBI, and aircraft surveillance. In fact, baseline system configurations were developed by JPL, using the appropriate wrap-rib antenna, for all three classes of applications.
NASA Astrophysics Data System (ADS)
McKellip, Rodney; Yuan, Ding; Graham, William; Holland, Donald E.; Stone, David; Walser, William E.; Mao, Chengye
1997-06-01
The number of available spaceborne and airborne systems will dramatically increase over the next few years. A common systematic approach toward verification of these systems will become important for comparing the systems' operational performance. The Commercial Remote Sensing Program at the John C. Stennis Space Center (SSC) in Mississippi has developed design requirements for a remote sensing verification target range to provide a means to evaluate spatial, spectral, and radiometric performance of optical digital remote sensing systems. The verification target range consists of spatial, spectral, and radiometric targets painted on a 150- by 150-meter concrete pad located at SSC. The design criteria for this target range are based upon work over a smaller, prototypical target range at SSC during 1996. This paper outlines the purpose and design of the verification target range based upon an understanding of the systems to be evaluated as well as data analysis results from the prototypical target range.
NASA Technical Reports Server (NTRS)
1984-01-01
Automation requirements were developed for two manufacturing concepts: (1) Gallium Arsenide Electroepitaxial Crystal Production and Wafer Manufacturing Facility, and (2) Gallium Arsenide VLSI Microelectronics Chip Processing Facility. A functional overview of the ultimate design concept incorporating the two manufacturing facilities on the space station is provided. The concepts were selected to facilitate an in-depth analysis of manufacturing automation requirements in the form of process mechanization, teleoperation and robotics, sensors, and artificial intelligence. While the cost-effectiveness of these facilities was not analyzed, both appear entirely feasible for the year 2000 timeframe.
Reconfigurable tree architectures using subtree oriented fault tolerance
NASA Technical Reports Server (NTRS)
Lowrie, Matthew B.
1987-01-01
An approach to the design of reconfigurable tree architectures is presented in which spare processors are allocated at the leaves. The approach is unique in that spares are associated with subtrees and sharing of spares between these subtrees can occur. The Subtree Oriented Fault Tolerance (SOFT) approach is more reliable than previous approaches capable of tolerating link and switch failures, for both single-chip and multichip tree implementations, while reducing redundancy in terms of both spare processors and links. VLSI layout is O(n) for binary trees and is directly extensible to N-ary trees and to fault tolerance through performance degradation.
Formal Models of Hardware and Their Applications to VLSI Design Automation.
1986-12-24
University of Southern California; Office of Naval Research. ...are classified as belonging to one of six different types. The dimensions of the routing channel are defined as functions of these random variables.
Techniques for video compression
NASA Technical Reports Server (NTRS)
Wu, Chwan-Hwa
1995-01-01
In this report, we present our study on multiprocessor implementation of an MPEG2 encoding algorithm. First, we compare two approaches to implementing video standards, VLSI technology and multiprocessor processing, in terms of design complexity, applications, and cost. Then we evaluate the functional modules of the MPEG2 encoding process in terms of their computation time. Two crucial modules are identified based on this evaluation. We then present our experimental study on the multiprocessor implementation of the two crucial modules. Data partitioning is used for job assignment. Experimental results show that a high speedup ratio and good scalability can be achieved by using this kind of job assignment strategy.
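The data-partitioning job assignment is straightforward to mock up. In the sketch below (Python multiprocessing; the FFT merely stands in for the real per-slice encoding work, and the horizontal-band split is an assumption about slice granularity), a frame is cut into bands and each worker encodes one band independently.

```python
from multiprocessing import Pool

import numpy as np

def encode_slice(band):
    """Stand-in for the per-slice work of an encoder stage (e.g. motion
    estimation + DCT on one horizontal band of the frame)."""
    return np.fft.fft2(band).astype(np.complex64)  # placeholder workload

def encode_frame_parallel(frame, n_workers=4):
    # Data partitioning: split the frame into horizontal bands and give one
    # band to each worker -- the job-assignment strategy described above.
    bands = np.array_split(frame, n_workers, axis=0)
    with Pool(n_workers) as pool:
        return pool.map(encode_slice, bands)

if __name__ == "__main__":
    frame = np.random.rand(480, 720).astype(np.float32)
    coded = encode_frame_parallel(frame)
    print(len(coded), coded[0].shape)
```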
NASA Astrophysics Data System (ADS)
Abraham, S. J.
While Avionics Intermediate Shops (AISs) have in the past been required for military aircraft, the emerging VLSI/VHSIC technology has given rise to the possibility of novel, well partitioned avionics system architectures that obviate the high spare parts costs that formerly prompted and justified the existence of an AIS. Future avionics may therefore be adequately and economically supported by a two-level maintenance system. Algebraic generalizations are presented for the analysis of the spares costs implications of alternative design partitioning schemes for future avionics.
Study of a programmable high speed processor for use on-board satellites
NASA Astrophysics Data System (ADS)
Degavre, J. Cl.; Okkes, R.; Gaillat, G.
The availability of VLSI programmable devices will significantly enhance satellite on-board data processing capabilities. A case study is presented which indicates that computation-intensive processing applications requiring the execution of 100 megainstructions/sec are within the DC power constraints of satellites. It is noted that the current progress in semicustom design technique development and in achievable gate array densities, together with the recent announcement of improved monochip processors, is encouraging the development of an on-board programmable processor architecture able to incorporate the devices that will appear in the communication and military markets.
Advanced flight computer. Special study
NASA Technical Reports Server (NTRS)
Coo, Dennis
1995-01-01
This report documents a special study to define a 32-bit radiation-hardened, SEU-tolerant flight computer architecture, and to investigate current or near-term technologies and development efforts that contribute to the Advanced Flight Computer (AFC) design and development. An AFC processing node architecture is defined; each node may consist of a multi-chip processor as needed. The modular, building-block approach uses VLSI technology and packaging methods that demonstrate a feasible AFC module in 1998 that meets the AFC goals. The defined architecture and approach demonstrate a clear low-risk, low-cost path to the 1998 production goal, with intermediate prototypes in 1996.
Integrating Model-Based Verification into Software Design Education
ERIC Educational Resources Information Center
Yilmaz, Levent; Wang, Shuo
2005-01-01
Proper design analysis is indispensable to assure quality and reduce emergent costs due to faulty software. Teaching proper design verification skills early during pedagogical development is crucial, as such analysis is the only tractable way of resolving software problems early when they are easy to fix. The premise of the presented strategy is…
Assume-Guarantee Verification of Source Code with Design-Level Assumptions
NASA Technical Reports Server (NTRS)
Giannakopoulou, Dimitra; Pasareanu, Corina S.; Cobleigh, Jamieson M.
2004-01-01
Model checking is an automated technique that can be used to determine whether a system satisfies certain required properties. To address the 'state explosion' problem associated with this technique, we propose to integrate assume-guarantee verification at different phases of system development. During design, developers build abstract behavioral models of the system components and use them to establish key properties of the system. To increase the scalability of model checking at this level, we have developed techniques that automatically decompose the verification task by generating component assumptions for the properties to hold. The design-level artifacts are subsequently used to guide the implementation of the system, but also to enable more efficient reasoning at the source-code level. In particular, we propose to use design-level assumptions to similarly decompose the verification of the actual system implementation. We demonstrate our approach on a significant NASA application, where design-level models were used to identify and correct a safety property violation, and design-level assumptions allowed us to check successfully that the property was preserved by the implementation.
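The decomposition rests on the classic asymmetric assume-guarantee rule, which in triple notation reads roughly as follows (a standard formulation: component M1 satisfies property P under assumption A, the environment component M2 discharges A, hence the composition satisfies P):

```latex
\frac{\langle A \rangle\, M_1\, \langle P \rangle
      \qquad
      \langle \mathit{true} \rangle\, M_2\, \langle A \rangle}
     {\langle \mathit{true} \rangle\, M_1 \parallel M_2\, \langle P \rangle}
```

The automation described above lies in generating the assumption A so that both premises can be checked on components far smaller than the composed state space.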
The language parallel Pascal and other aspects of the massively parallel processor
NASA Technical Reports Server (NTRS)
Reeves, A. P.; Bruner, J. D.
1982-01-01
A high level language for the Massively Parallel Processor (MPP) was designed. This language, called Parallel Pascal, is described in detail. A description of the language design, a description of the intermediate language, Parallel P-Code, and details for the MPP implementation are included. Formal descriptions of Parallel Pascal and Parallel P-Code are given. A compiler was developed which converts programs in Parallel Pascal into the intermediate Parallel P-Code language. The code generator to complete the compiler for the MPP is being developed independently. A Parallel Pascal to Pascal translator was also developed. The architecture design for a VLSI version of the MPP was completed with a description of fault tolerant interconnection networks. The memory arrangement aspects of the MPP are discussed and a survey of other high level languages is given.
NASA Technical Reports Server (NTRS)
1984-01-01
The electroepitaxial process and the Very Large Scale Integration (VLSI) circuits (chips) facilities were chosen because each requires a very high degree of automation, and therefore involves extensive use of teleoperators, robotics, process mechanization, and artificial intelligence. Both cover a raw materials process and a sophisticated multi-step process and are therefore highly representative of the kinds of difficult operation, maintenance, and repair challenges which can be expected for any type of space manufacturing facility. Generic areas were identified which will require significant further study. The initial design will be based on terrestrial state-of-the-art hard automation. One hundred candidate missions were evaluated on the basis of automation potential and availability of meaningful knowledge. The design requirements and unconstrained design concepts developed for the two missions are presented.
The Environmental Technology Verification (ETV) Program, established by the U.S. EPA, is designed to accelerate the developmentand commercialization of new or improved technologies through third-party verification and reporting of performance. The Air Pollution Control Technology...
NASA Technical Reports Server (NTRS)
Davis, Robert E.
2002-01-01
The presentation provides an overview of requirement and interpretation letters, mechanical systems safety interpretation letter, design and verification provisions, and mechanical systems verification plan.
Design Authority in the Test Programme Definition: The Alenia Spazio Experience
NASA Astrophysics Data System (ADS)
Messidoro, P.; Sacchi, E.; Beruto, E.; Fleming, P.; Marucchi Chierro, P.-P.
2004-08-01
In addition, since the Verification and Test Programme is a significant part of the spacecraft development life cycle in terms of cost and time, the subject of the mentioned discussion is very often the optimization of the verification campaign through the possible deletion or limitation of some testing activities. The increased market pressure to reduce project schedule and cost has originated a dialectic process inside project teams, involving programme management and design authorities, aimed at optimizing the verification and testing programme. The paper introduces the Alenia Spazio experience in this context, coming from real project life on different products and missions (science, TLC, EO, manned, transportation, military, commercial, recurrent and one-of-a-kind). Usually the applicable verification and testing standards (e.g. ECSS-E-10 part 2 "Verification" and ECSS-E-10 part 3 "Testing" [1]) are tailored to the specific project on the basis of its peculiar mission constraints. The model philosophy and the associated verification and test programme are defined following an iterative process which suitably combines several aspects (including, for example, test requirements and facilities), as shown in Fig. 1 (model philosophy, Verification and Test Programme definition, from ECSS-E-10). The considered cases are mainly oriented to thermal and mechanical verification, where the benefits of possible test programme optimizations are more significant. Considering thermal qualification and acceptance testing (i.e. Thermal Balance and Thermal Vacuum), the lessons learned from the development of several satellites are presented together with the corresponding recommended approaches. In particular, cases are indicated in which a proper Thermal Balance Test is mandatory, and others, in the presence of a more recurrent design, where qualification by analysis could be envisaged. The importance of a proper Thermal Vacuum exposure for workmanship verification is also highlighted. Similar considerations are summarized for mechanical testing, with particular emphasis on the importance of Modal Survey, Static and Sine Vibration Tests in the qualification stage, in combination with the effectiveness of the Vibro-Acoustic Test in acceptance. The apparent relative importance of the Sine Vibration Test for workmanship verification in specific circumstances is also highlighted. The verification of the project requirements is planned through a combination of suitable verification methods (in particular Analysis and Test) at the different verification levels (from System down to Equipment), in the proper verification stages (e.g. Qualification and Acceptance).
NASA Astrophysics Data System (ADS)
Marconi, S.; Conti, E.; Christiansen, J.; Placidi, P.
2018-05-01
The operating conditions of the High Luminosity upgrade of the Large Hadron Collider are very demanding for the design of next generation hybrid pixel readout chips in terms of particle rate, radiation level and data bandwidth. To this purpose, the RD53 Collaboration has developed for the ATLAS and CMS experiments a dedicated simulation and verification environment using industry-consolidated tools and methodologies, such as SystemVerilog and the Universal Verification Methodology (UVM). This paper presents how the so-called VEPIX53 environment has first guided the design of digital architectures, optimized for processing and buffering very high particle rates, and secondly how it has been reused for the functional verification of the first large scale demonstrator chip designed by the collaboration, which has recently been submitted.
Emergent Auditory Feature Tuning in a Real-Time Neuromorphic VLSI System.
Sheik, Sadique; Coath, Martin; Indiveri, Giacomo; Denham, Susan L; Wennekers, Thomas; Chicca, Elisabetta
2012-01-01
Many sounds of ecological importance, such as communication calls, are characterized by time-varying spectra. However, most neuromorphic auditory models to date have focused on distinguishing mainly static patterns, under the assumption that dynamic patterns can be learned as sequences of static ones. In contrast, the emergence of dynamic feature sensitivity through exposure to formative stimuli has been recently modeled in a network of spiking neurons based on the thalamo-cortical architecture. The proposed network models the effect of lateral and recurrent connections between cortical layers, distance-dependent axonal transmission delays, and learning in the form of Spike Timing Dependent Plasticity (STDP), which effects stimulus-driven changes in the pattern of network connectivity. In this paper we demonstrate how these principles can be efficiently implemented in neuromorphic hardware. In doing so we address two principal problems in the design of neuromorphic systems: real-time event-based asynchronous communication in multi-chip systems, and the realization in hybrid analog/digital VLSI technology of neural computational principles that we propose underlie plasticity in neural processing of dynamic stimuli. The result is a hardware neural network that learns in real-time and shows preferential responses, after exposure, to stimuli exhibiting particular spectro-temporal patterns. The availability of hardware on which the model can be implemented, makes this a significant step toward the development of adaptive, neurobiologically plausible, spike-based, artificial sensory systems.
A microarchitecture for resource-limited superscalar microprocessors
NASA Astrophysics Data System (ADS)
Basso, Todd David
1999-11-01
Microelectronic components in space and satellite systems must be resistant to total dose radiation, single-event upset, and latchup in order to accomplish their missions. The demand for inexpensive, high-volume, radiation-hardened (rad-hard) integrated circuits (ICs) is expected to increase dramatically as the communication market continues to expand. Motorola's Complementary Gallium Arsenide (CGaAs™) technology offers superior radiation tolerance compared to traditional CMOS processes, while being more economical than dedicated rad-hard CMOS processes. The goals of this dissertation are to optimize a superscalar microarchitecture suitable for CGaAs™ microprocessors, develop circuit techniques for such applications, and evaluate the potential of CGaAs™ for the development of digital VLSI circuits. Motorola's 0.5 μm CGaAs™ process is summarized and circuit techniques applicable to digital CGaAs™ are developed. Direct-coupled FET, complementary, and domino logic circuits are compared based on speed, power, area, and noise margins. These circuit techniques are employed in the design of a 600 MHz PowerPC™ arithmetic logic unit. The dissertation emphasizes CGaAs™-specific design considerations, specifically low integration level. A baseline superscalar microarchitecture is defined and SPEC95 integer benchmark simulations are used to evaluate the applicability of advanced architectural features to microprocessors having low integration levels. The performance simulations center around the optimization of a simple superscalar core, small-scale branch prediction, instruction prefetching, and an off-chip primary data cache. The simulation results are used to develop a superscalar microarchitecture capable of outperforming a comparable sequential pipeline, while using only 500,000 transistors. The architecture, running at 200 MHz, is capable of achieving an estimated 153 MIPS, translating to a 27% performance increase over a comparable traditional pipelined microprocessor. The proposed microarchitecture is process independent and can be applied to low-cost or transistor-limited applications. The proposed microarchitecture is implemented in the design of a 0.35 μm CMOS microprocessor and the design of a 0.5 μm CGaAs™ microprocessor. The two technologies and designs are compared to ascertain the state of CGaAs™ for digital VLSI applications.
The Environmental Technology Verification (ETV) Program, established by the U.S. EPA, is designed to accelerate the development and commercialization of new or improved technologies through third-party verification and reporting of performance. The Air Pollution Control Technolog...
Cascaded VLSI Chips Help Neural Network To Learn
NASA Technical Reports Server (NTRS)
Duong, Tuan A.; Daud, Taher; Thakoor, Anilkumar P.
1993-01-01
Cascading provides the 12-bit resolution needed for learning. Using conventional silicon VLSI chip fabrication technology, a fully connected architecture consisting of 32 wide-range, variable-gain, sigmoidal neurons along one diagonal and a 7-bit resolution, electrically programmable, synaptic 32 x 31 weight matrix is implemented on a neuron-synapse chip. To increase weight resolution nominally from 7 to 13 bits, synapses on the chip are individually cascaded with respective synapses on another 32 x 32 matrix chip containing 7-bit resolution synapses only (without neurons). A cascade correlation algorithm varies the number of layers effectively connected into the network, adding hidden layers one at a time during the learning process in such a way as to optimize the overall number of neurons and the complexity and configuration of the network.
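The cascading arithmetic itself is simple: if the second chip's synapse output is attenuated relative to the first, two 7-bit signed weights combine into one high-resolution weight. A numerical sketch follows; the 2^6 attenuation factor is an assumption chosen to reproduce the nominal 7-to-13-bit claim, not a measured chip parameter.

```python
import numpy as np

def cascaded_weight(coarse, fine, bits=7):
    """Arithmetic model of two cascaded 7-bit synapses.

    Each synapse holds a signed value in [-(2^(bits-1)-1), 2^(bits-1)-1].
    The fine synapse's contribution is scaled down by 2^(bits-1), so the
    combined weight is w = coarse + fine / 2^(bits-1).
    """
    scale = 2 ** (bits - 1)
    return coarse + fine / scale

# Effective resolution: smallest step versus total span.
step = cascaded_weight(0, 1) - cascaded_weight(0, 0)
span = cascaded_weight(63, 63) - cascaded_weight(-63, -63)
print(step, span, np.log2(span / step))  # ~13 effective bits
```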
NASA Astrophysics Data System (ADS)
Johnson, W. N.; Herrick, W. V.; Grundmann, W. J.
1984-10-01
For the first time, VLSI technology is used to compress the full functionality and comparable performance of the VAX 11/780 super-minicomputer into a 1.2 M transistor microprocessor chip set. There was no subsetting of the 304-instruction set and the 17 data types, nor any reduction in hardware support for the 4 Gbyte virtual memory management architecture. The chip set supports an integral 8 kbyte memory cache, a 13.3 Mbyte/s system bus, and sophisticated multiprocessing. High performance is achieved through microcode optimizations afforded by the large control store, tightly coupled address and data caches, the use of internal and external 32-bit datapaths, the extensive application of both microlevel and macrolevel pipelining, and the use of specialized hardware assists.
The SeaHorn Verification Framework
NASA Technical Reports Server (NTRS)
Gurfinkel, Arie; Kahsai, Temesghen; Komuravelli, Anvesh; Navas, Jorge A.
2015-01-01
In this paper, we present SeaHorn, a software verification framework. The key distinguishing feature of SeaHorn is its modular design that separates the concerns of the syntax of the programming language, its operational semantics, and the verification semantics. SeaHorn encompasses several novelties: it (a) encodes verification conditions using an efficient yet precise inter-procedural technique, (b) provides flexibility in the verification semantics to allow different levels of precision, (c) leverages the state-of-the-art in software model checking and abstract interpretation for verification, and (d) uses Horn-clauses as an intermediate language to represent verification conditions which simplifies interfacing with multiple verification tools based on Horn-clauses. SeaHorn provides users with a powerful verification tool and researchers with an extensible and customizable framework for experimenting with new software verification techniques. The effectiveness and scalability of SeaHorn are demonstrated by an extensive experimental evaluation using benchmarks from SV-COMP 2015 and real avionics code.
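Horn-clause verification conditions of the kind SeaHorn emits can be posed directly to a CHC solver. The sketch below is a hand-written toy encoding, not SeaHorn's own translation, using z3py's fixedpoint interface with the Spacer engine (an assumption about available tooling): it proves that a counter loop starting at 0 and incrementing while x < 10 never reaches a state with x > 10.

```python
# Program encoded as constrained Horn clauses:
#   x = 0; while (x < 10) x = x + 1;   safety: never x > 10
from z3 import And, BoolSort, Function, Int, IntSort, Fixedpoint, unsat

fp = Fixedpoint()
fp.set(engine='spacer')

inv = Function('inv', IntSort(), BoolSort())   # loop-invariant predicate
x, xp = Int('x'), Int('xp')
fp.register_relation(inv)
fp.declare_var(x, xp)

fp.rule(inv(0))                                    # entry:  x = 0
fp.rule(inv(xp), [inv(x), x < 10, xp == x + 1])    # body:   x' = x + 1
res = fp.query(And(inv(x), x > 10))                # bad states: x > 10
print('safe' if res == unsat else 'unsafe')        # expected: safe
```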
Carvajal, Gonzalo; Figueroa, Miguel
2014-07-01
Typical image recognition systems operate in two stages: feature extraction to reduce the dimensionality of the input space, and classification based on the extracted features. Analog Very Large Scale Integration (VLSI) is an attractive technology for achieving compact and low-power implementations of these computationally intensive tasks for portable embedded devices. However, device mismatch limits the resolution of the circuits fabricated with this technology. Traditional layout techniques to reduce the mismatch aim to increase the resolution at the transistor level, without considering the intended application. Relating mismatch parameters to specific effects at the application level would allow designers to apply focalized mismatch compensation techniques according to predefined performance/cost tradeoffs. This paper models, analyzes, and evaluates the effects of mismatched analog arithmetic in both feature extraction and classification circuits. For the feature extraction, we propose analog adaptive linear combiners with on-chip learning for both the Least Mean Square (LMS) and Generalized Hebbian Algorithm (GHA). Using mathematical abstractions of analog circuits, we identify mismatch parameters that are naturally compensated during the learning process, and propose cost-effective guidelines to reduce the effect of the rest. For the classification, we derive analog models for the circuits necessary to implement the Nearest Neighbor (NN) approach and Radial Basis Function (RBF) networks, and use them to emulate analog classifiers with standard databases of faces and handwritten digits. Formal analysis and experiments show how we can exploit adaptive structures and properties of the input space to compensate for the effects of device mismatch at the application level, thus reducing the design overhead of traditional layout techniques. The results are also directly extensible to multiple application domains using linear subspace methods.
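The "mismatch absorbed by learning" effect is easy to reproduce numerically. In the sketch below (a scalar LMS combiner with one fixed, unknown multiplicative gain error per analog multiplier; all parameters are illustrative and the gains are assumed positive), the adaptation drives the effective weights gain * w to the target, so per-lane gains never need to be trimmed at layout time.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, mu = 4000, 8, 0.02
w_true = rng.normal(size=d)

# Multiplicative gain mismatch: the physical forward path applies gain * w
# instead of w (one fixed, unknown, positive gain per multiplier lane).
gain = 1 + 0.2 * rng.normal(size=d)

w = np.zeros(d)
for _ in range(n):
    x = rng.normal(size=d)
    target = w_true @ x
    y = (gain * w) @ x            # mismatched forward path
    e = target - y
    w += mu * e * x               # LMS update sees only the mismatched error

# Learning compensates: the *effective* weights converge to the target,
# which is the application-level compensation the analysis above predicts.
print(np.max(np.abs(gain * w - w_true)))   # small residual error
```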
Systematic Model-in-the-Loop Test of Embedded Control Systems
NASA Astrophysics Data System (ADS)
Krupp, Alexander; Müller, Wolfgang
Current model-based development processes offer new opportunities for verification automation, e.g., in automotive development. The duty of functional verification is the detection of design flaws. Current functional verification approaches exhibit a major gap between requirement definition and formal property definition, especially when analog signals are involved. Besides the lack of methodical support for natural-language formalization, there does not exist a standardized and accepted means for formal property definition as a target for verification planning. This article addresses several shortcomings of embedded system verification. An Enhanced Classification Tree Method is developed based on the established Classification Tree Method for Embedded Systems (CTM/ES), which applies a hardware verification language to define a verification environment.
49 CFR 236.905 - Railroad Safety Program Plan (RSPP).
Code of Federal Regulations, 2012 CFR
2012-10-01
... to be used in the verification and validation process, consistent with appendix C to this part. The...; and (iv) The identification of the safety assessment process. (2) Design for verification and validation. The RSPP must require the identification of verification and validation methods for the...
49 CFR 236.905 - Railroad Safety Program Plan (RSPP).
Code of Federal Regulations, 2014 CFR
2014-10-01
... to be used in the verification and validation process, consistent with appendix C to this part. The...; and (iv) The identification of the safety assessment process. (2) Design for verification and validation. The RSPP must require the identification of verification and validation methods for the...
49 CFR 236.905 - Railroad Safety Program Plan (RSPP).
Code of Federal Regulations, 2013 CFR
2013-10-01
... to be used in the verification and validation process, consistent with appendix C to this part. The...; and (iv) The identification of the safety assessment process. (2) Design for verification and validation. The RSPP must require the identification of verification and validation methods for the...
49 CFR 236.905 - Railroad Safety Program Plan (RSPP).
Code of Federal Regulations, 2011 CFR
2011-10-01
... to be used in the verification and validation process, consistent with appendix C to this part. The...; and (iv) The identification of the safety assessment process. (2) Design for verification and validation. The RSPP must require the identification of verification and validation methods for the...
Mahdiani, Hamid Reza; Fakhraie, Sied Mehdi; Lucas, Caro
2012-08-01
Reliability should be identified as the most important challenge in future nano-scale very large scale integration (VLSI) implementation technologies for the development of complex integrated systems. Normally, fault tolerance (FT) in a conventional system is achieved by increasing its redundancy, which also implies higher implementation costs and lower performance that sometimes makes it even infeasible. In contrast to custom approaches, a new class of applications is categorized in this paper, which is inherently capable of absorbing some degrees of vulnerability and providing FT based on their natural properties. Neural networks are good indicators of imprecision-tolerant applications. We have also proposed a new class of FT techniques called relaxed fault-tolerant (RFT) techniques which are developed for VLSI implementation of imprecision-tolerant applications. The main advantage of RFT techniques with respect to traditional FT solutions is that they exploit inherent FT of different applications to reduce their implementation costs while improving their performance. To show the applicability as well as the efficiency of the RFT method, the experimental results for implementation of a face-recognition computationally intensive neural network and its corresponding RFT realization are presented in this paper. The results demonstrate promising higher performance of artificial neural network VLSI solutions for complex applications in faulty nano-scale implementation environments.
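The premise that neural networks absorb some vulnerability can be illustrated with a small fault-injection experiment. In the sketch below, random weights stand in for a trained network and faults are modeled as stuck-at-zero synapses; both are assumptions for illustration. Output error grows gradually with the faulty fraction rather than failing outright, which is the graceful degradation that RFT techniques exploit.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_forward(x, W1, W2):
    return np.tanh(np.tanh(x @ W1) @ W2)

# A tiny network; random weights stand in for a trained model here.
W1, W2 = rng.normal(size=(16, 32)), rng.normal(size=(32, 4))
X = rng.normal(size=(200, 16))
ref = mlp_forward(X, W1, W2)

# Inject stuck-at-zero faults into a growing fraction of synapses and
# measure output degradation relative to the fault-free network.
for frac in (0.01, 0.05, 0.10):
    F1 = W1 * (rng.random(W1.shape) > frac)   # faulty synapses forced to 0
    F2 = W2 * (rng.random(W2.shape) > frac)
    err = np.mean(np.abs(mlp_forward(X, F1, F2) - ref))
    print(f"{frac:.0%} faulty synapses -> mean output error {err:.3f}")
```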
Adaptive neuro fuzzy inference system-based power estimation method for CMOS VLSI circuits
NASA Astrophysics Data System (ADS)
Vellingiri, Govindaraj; Jayabalan, Ramesh
2018-03-01
Recent advancements in very large scale integration (VLSI) technologies have made it feasible to integrate millions of transistors on a single chip. This greatly increases circuit complexity, and hence there is a growing need for less tedious and low-cost power estimation techniques. The proposed work employs a Back-Propagation Neural Network (BPNN) and an Adaptive Neuro Fuzzy Inference System (ANFIS), which are capable of estimating the power precisely for complementary metal oxide semiconductor (CMOS) VLSI circuits, without requiring any knowledge of circuit structure and interconnections. The application of ANFIS to power estimation is relatively new. Power estimation using ANFIS is carried out by creating initial FIS models using hybrid optimisation and back-propagation (BP) techniques employing constant and linear methods. It is inferred that ANFIS with the hybrid optimisation technique employing the linear method produces better results in terms of testing error, which varies from 0% to 0.86%, when compared to BPNN, as it takes the initial fuzzy model and tunes it by means of a hybrid technique combining gradient-descent BP and mean least-squares optimisation algorithms. ANFIS is best suited for the power estimation application, with a low RMSE of 0.0002075 and a high coefficient of determination (R) of 0.99961.
A neural network device for on-line particle identification in cosmic ray experiments
NASA Astrophysics Data System (ADS)
Scrimaglio, R.; Finetti, N.; D'Altorio, L.; Rantucci, E.; Raso, M.; Segreto, E.; Tassoni, A.; Cardarilli, G. C.
2004-05-01
On-line particle identification is one of the main goals of many experiments in space, both for rare event studies and for optimizing measurements along the orbital trajectory. Neural networks can be a useful tool for signal processing and real-time data analysis in such experiments. In this document we report on the performance of a programmable neural device which was developed in VLSI analog/digital technology. Neurons and synapses were accomplished by making use of Operational Transconductance Amplifier (OTA) structures. We report on the results of measurements performed in order to verify the agreement of the characteristic curves of each elementary cell with simulations, and on the device performance obtained by implementing simple neural structures on the VLSI chip. A feed-forward neural network (Multi-Layer Perceptron, MLP) was implemented on the VLSI chip and trained to identify particles by processing the signals of two-dimensional position-sensitive Si detectors. The radiation monitoring device consisted of three double-sided silicon strip detectors. From the analysis of a set of simulated data, it was found that the MLP implemented on the neural device gave results comparable with those obtained with the standard method of analysis, confirming that the implemented neural network could be employed for real-time particle identification.
What is the Final Verification of Engineering Requirements?
NASA Technical Reports Server (NTRS)
Poole, Eric
2010-01-01
This slide presentation reviews the process of development through the final verification of engineering requirements. The definition of the requirements is driven by basic needs, and should be reviewed by both the supplier and the customer. All involved need to agree upon formal requirements, including changes to the original requirements document. After the requirements have been developed, the engineering team begins to design the system. The final design is reviewed by other organizations. The final operational system must satisfy the original requirements, though many verifications should be performed during the process. The verification methods that are used are test, inspection, analysis, and demonstration. The plan for verification should be created once the system requirements are documented. The plan should include assurances that every requirement is formally verified, that the methods and the responsible organizations are specified, and that the plan is reviewed by all parties. The options of having the engineering team involved in all phases of development, as opposed to having some other organization continue the process once the design is complete, are discussed.
Software Tools for Formal Specification and Verification of Distributed Real-Time Systems
1994-07-29
The goals of Phase 1 are to design in detail a toolkit environment based on formal methods for the specification and verification of distributed real-time systems, and to evaluate the design. The evaluation of the design includes investigation of both the capability and potential usefulness of the toolkit environment and the feasibility of its implementation.
Verified compilation of Concurrent Managed Languages
2017-11-01
...designs for compiler intermediate representations that facilitate mechanized proofs and verification; and (d) a realistic case study that combines these ideas to prove the correctness of a state-of-the-art concurrent garbage collector. Even though concurrency is a pervasive part of modern software and hardware systems, it has often been ignored in safety-critical system designs.
Software Tools for Formal Specification and Verification of Distributed Real-Time Systems.
1997-09-30
The task of this SBIR Phase II effort... set of software tools for specification and verification of distributed real-time systems using formal methods, to be used by designers of real-time systems for early detection of errors. The mathematical complexity of formal specification and verification has...
FORMED: Bringing Formal Methods to the Engineering Desktop
2016-02-01
...integrates formal verification into software design and development by precisely defining semantics for a restricted subset of the Unified Modeling... input-output contract satisfaction and absence of null-pointer dereferences. Domain-specific languages (DSLs) drive both implementation and formal verification.
Joint ETV/NOWATECH test plan for the Sorbisense GSW40 passive sampler
The joint test plan is the implementation of a test design developed for verification of the performance of an environmental technology following the NOWATECH ETV method. The verification is a joint verification with the US EPA ETV scheme and the Advanced Monitoring Systems Cent...
JPL control/structure interaction test bed real-time control computer architecture
NASA Technical Reports Server (NTRS)
Briggs, Hugh C.
1989-01-01
The Control/Structure Interaction Program is a technology development program for spacecraft that exhibit interactions between the control system and structural dynamics. The program objectives include development and verification of new design concepts, such as active structures, and new tools, such as a combined structure and control optimization algorithm, and their verification in ground and possibly flight tests. A focus mission spacecraft was designed based upon a space interferometer and is the basis for the design of the ground test article. The ground test bed objectives include verification of the spacecraft design concepts, the active structure elements, and certain design tools such as the new combined structures and controls optimization tool. In anticipation of CSI technology flight experiments, the test bed control electronics must emulate the computation capacity and control architectures of space-qualifiable systems as well as the command and control networks that will be used to connect investigators with the flight experiment hardware. The test bed facility electronics were functionally partitioned into three units: a laboratory data acquisition system for structural parameter identification and performance verification; an experiment supervisory computer to oversee the experiment, monitor the environmental parameters, and perform data logging; and a multilevel real-time control computing system. The design of the test bed electronics is presented along with hardware and software component descriptions. The system should break new ground in experimental control electronics and is of interest to anyone working in the verification of control concepts for large structures.
Neuromorphic Silicon Neuron Circuits
Indiveri, Giacomo; Linares-Barranco, Bernabé; Hamilton, Tara Julia; van Schaik, André; Etienne-Cummings, Ralph; Delbruck, Tobi; Liu, Shih-Chii; Dudek, Piotr; Häfliger, Philipp; Renaud, Sylvie; Schemmel, Johannes; Cauwenberghs, Gert; Arthur, John; Hynna, Kai; Folowosele, Fopefolu; Saighi, Sylvain; Serrano-Gotarredona, Teresa; Wijekoon, Jayawan; Wang, Yingxue; Boahen, Kwabena
2011-01-01
Hardware implementations of spiking neurons can be extremely useful for a large variety of applications, ranging from high-speed modeling of large-scale neural systems to real-time behaving systems, to bidirectional brain–machine interfaces. The specific circuit solutions used to implement silicon neurons depend on the application requirements. In this paper we describe the most common building blocks and techniques used to implement these circuits, and present an overview of a wide range of neuromorphic silicon neurons, which implement different computational models, ranging from biophysically realistic and conductance-based Hodgkin–Huxley models to bi-dimensional generalized adaptive integrate and fire models. We compare the different design methodologies used for each silicon neuron design described, and demonstrate their features with experimental results, measured from a wide range of fabricated VLSI chips. PMID:21747754
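Among the surveyed models, the adaptive integrate-and-fire neuron is compact enough to sketch in a few lines. Below is a minimal discrete-time simulation; the time constants, threshold, and adaptation increment are illustrative assumptions, not parameters of any particular chip. It shows spike-frequency adaptation under constant input: inter-spike intervals lengthen as the adaptation variable builds up.

```python
import numpy as np

def adaptive_lif(i_in, dt=1e-4, tau_m=20e-3, tau_a=100e-3,
                 v_th=1.0, dv_a=0.3):
    """Adaptive leaky integrate-and-fire: the membrane v integrates input
    and leaks; each spike increments an adaptation current a that opposes
    v, producing spike-frequency adaptation (cf. the generalized adaptive
    I&F models surveyed above)."""
    v, a, spikes = 0.0, 0.0, []
    for t, i in enumerate(i_in):
        v += dt * (-v - a + i) / tau_m   # leaky integration minus adaptation
        a += dt * (-a / tau_a)           # adaptation current decays
        if v >= v_th:
            spikes.append(t * dt)
            v = 0.0                      # reset after spike
            a += dv_a                    # adaptation kick per spike
    return spikes

spikes = adaptive_lif(np.full(5000, 2.0))   # constant suprathreshold input
isis = np.diff(spikes)
print(isis[:3], isis[-3:])   # early vs late ISIs: adaptation slows firing
```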
NASA Astrophysics Data System (ADS)
Zhafirah Muhammad, Nurul; Harun, A.; Hambali, N. A. M. A.; Murad, S. A. Z.; Mohyar, S. N.; Isa, M. N.; Jambek, AB
2017-11-01
Increased demand for Internet of Things (IoT) applications has forced the move towards integrated circuits of higher complexity supporting SoC designs. This increase in complexity poses severe validation challenges and has led researchers to develop a variety of methodologies to overcome it, notably dynamic verification, formal verification, and hybrid techniques. It is also very important to discover bugs early in the SoC verification process in order to reduce time consumption and achieve a fast time to market. In this paper we therefore focus on verification methodology at the Register Transfer Level of an SoC based on the AMBA bus design. In addition, the Open Verification Methodology (OVM) offers an easier approach to RTL validation, not as a replacement for the traditional method but as an effort towards fast time to market. OVM is thus proposed in this paper as the verification method for larger designs, to avoid bottlenecks in the validation platform.
Modular hardware synthesis using an HDL. [Hardware Description Language
NASA Technical Reports Server (NTRS)
Covington, J. A.; Shiva, S. G.
1981-01-01
Although hardware description languages (HDLs) are becoming more and more necessary to automated design systems, their application is complicated by the difficulty of translating the HDL description into an implementable format, the nonfamiliarity of hardware designers with high-level language programming, nonuniform design methodologies, and the time and costs involved in transferring HDL design software. Digital design language (DDL) suffers from all of the above problems and, in addition, can only be synthesized as a complete system and not on its subparts, making it unsuitable for synthesis using standard modules or prefabricated chips such as those required in LSI or VLSI circuits. The present paper presents a method by which the DDL translator can be made to generate modular equations that allow the system to be synthesized as an interconnection of lower-level modules. The method involves the introduction of a new language construct called a Module, which provides for the separate translation of all equations bounded by it.
Requirement Assurance: A Verification Process
NASA Technical Reports Server (NTRS)
Alexander, Michael G.
2011-01-01
Requirement Assurance is an act of requirement verification which assures the stakeholder or customer that a product requirement has produced its "as realized product" and has been verified with conclusive evidence. Product requirement verification answers the question, "did the product meet the stated specification, performance, or design documentation?". In order to ensure the system was built correctly, the practicing system engineer must verify each product requirement using verification methods of inspection, analysis, demonstration, or test. The products of these methods are the "verification artifacts" or "closure artifacts" which are the objective evidence needed to prove the product requirements meet the verification success criteria. Institutional direction is given to the System Engineer in NPR 7123.1A NASA Systems Engineering Processes and Requirements with regards to the requirement verification process. In response, the verification methodology offered in this report meets both the institutional process and requirement verification best practices.
Experimental verification of a model of a two-link flexible, lightweight manipulator. M.S. Thesis
NASA Technical Reports Server (NTRS)
Huggins, James David
1988-01-01
Experimental verification is presented for an assumed-modes model of a large, two-link, flexible manipulator designed and constructed in the School of Mechanical Engineering at the Georgia Institute of Technology. The structure was designed to have the typical characteristics of a lightweight manipulator.
The paper discusses the test design for environmental technology verification (ETV) of add-on nitrogen oxides (NOx) control utilizing ozone injection. (NOTE: ETV is an EPA-established program to enhance domestic and international market acceptance of new or improved commercially...
Requirement Specifications for a Design and Verification Unit.
ERIC Educational Resources Information Center
Pelton, Warren G.; And Others
A research and development activity to introduce new and improved education and training technology into Bureau of Medicine and Surgery training is recommended. The activity, called a design and verification unit, would be administered by the Education and Training Sciences Department. Initial research and development are centered on the…
NASA Astrophysics Data System (ADS)
Passas, Georgios; Freear, Steven; Fawcett, Darren
2010-08-01
Orthogonal frequency division multiplexing (OFDM)-based feed-forward space-time trellis code (FFSTTC) encoders can be synthesised as very high speed integrated circuit hardware description language (VHDL) designs. Evaluation of their FPGA implementation can lead to conclusions that help a designer decide on the optimum implementation, given the encoder structural parameters. VLSI architectures based on 1-bit multipliers and look-up tables (LUTs) are compared in terms of FPGA slices and block RAMs (area), as well as in terms of minimum clock period (speed). Area and speed graphs versus encoder memory order are provided for quadrature phase shift keying (QPSK) and 8 phase shift keying (8-PSK) modulation and two transmit antennas, revealing the best implementation under these conditions. The effect of the number of modulation bits and transmit antennas on encoder implementation complexity is also investigated.
A long constraint length VLSI Viterbi decoder for the DSN
NASA Technical Reports Server (NTRS)
Statman, J. I.; Zimmerman, G.; Pollara, F.; Collins, O.
1988-01-01
A Viterbi decoder, capable of decoding convolutional codes with constraint lengths up to 15, is under development for the Deep Space Network (DSN). The objective is to complete a prototype of this decoder by late 1990, and demonstrate its performance using the (15, 1/4) encoder in Galileo. The decoder is expected to provide 1 to 2 dB improvement in bit SNR, compared to the present (7, 1/2) code and existing Maximum Likelihood Convolutional Decoder (MCD). The decoder will be fully programmable for any code up to constraint length 15, and code rate 1/2 to 1/6. The decoder architecture and top-level design are described.
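As a rough illustration of the trellis search such a decoder performs, the following minimal Python sketch decodes a toy K=3, rate-1/2 code; it is not the DSN design, which scales the same add-compare-select structure to constraint length 15 with programmable generators.

```python
# Hypothetical sketch (not the DSN implementation): hard-decision Viterbi
# decoding for a K=3, rate-1/2 convolutional code with generators (7, 5).
def encode(bits, K=3, gens=(0b111, 0b101)):
    state, out = 0, []
    for b in bits:
        reg = (b << (K - 1)) | state           # newest bit in the MSB
        out += [bin(reg & g).count("1") & 1 for g in gens]
        state = reg >> 1
    return out

def viterbi(rx, nbits, K=3, gens=(0b111, 0b101)):
    ns, INF = 1 << (K - 1), float("inf")
    metric = [0.0] + [INF] * (ns - 1)          # start in the all-zero state
    paths = [[] for _ in range(ns)]
    for t in range(nbits):
        r = rx[t * len(gens):(t + 1) * len(gens)]
        nm, npth = [INF] * ns, [None] * ns
        for s in range(ns):
            if metric[s] == INF:
                continue
            for b in (0, 1):
                reg = (b << (K - 1)) | s
                branch = [bin(reg & g).count("1") & 1 for g in gens]
                m = metric[s] + sum(o != x for o, x in zip(branch, r))
                if m < nm[reg >> 1]:           # keep the survivor path
                    nm[reg >> 1], npth[reg >> 1] = m, paths[s] + [b]
        metric, paths = nm, npth
    return paths[min(range(ns), key=metric.__getitem__)]

msg = [1, 0, 1, 1, 0, 0, 1, 0]
rx = encode(msg)
rx[3] ^= 1                                     # inject one channel error
assert viterbi(rx, len(msg)) == msg            # the error is corrected
```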
Learning in Stochastic Bit Stream Neural Networks.
van Daalen, Max; Shawe-Taylor, John; Zhao, Jieyu
1996-08-01
This paper presents learning techniques for a novel feedforward stochastic neural network. The model uses stochastic weights and the "bit stream" data representation. It has a clean, analysable functionality and is very attractive for its potential to be implemented in hardware using standard digital VLSI technology. The design allows simulation at three different levels, and learning techniques are described for each level; the lowest level corresponds to on-chip learning. Simulation results on the three benchmark MONK's problems and on handwritten digit recognition with a clean set of 500 16 x 16 pixel digits demonstrate that the new model is powerful enough for real-world applications.
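The hardware appeal of the bit-stream representation is that arithmetic collapses to single gates; the sketch below (details assumed, not taken from the paper) shows the standard example in which one AND gate multiplies two values encoded as independent Bernoulli bit streams.

```python
# Illustrative sketch of stochastic bit-stream arithmetic: a value p in [0,1]
# is encoded as the probability that each stream bit is 1, so an AND gate
# applied bitwise estimates the product of two independent streams.
import random

def stream(p, n):
    return [random.random() < p for _ in range(n)]

def multiply(pa, pb, n=100_000):
    a, b = stream(pa, n), stream(pb, n)
    ones = sum(x & y for x, y in zip(a, b))    # one AND gate per bit pair
    return ones / n                            # estimate of pa * pb

print(multiply(0.6, 0.5))                      # ~0.30, up to sampling noise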
Intelligent tracking techniques
NASA Astrophysics Data System (ADS)
Willett, T. J.; Abruzzo, J.; Zagardo, V.; Shipley, J.; Kossa, L.
1980-10-01
This is the fifth quarterly report under a contract to investigate the design, test, and implementation of a set of algorithms to perform intelligent tracking and intelligent homing on FLIR and TV imagery. The system concept was described. The problem of target aspect determination in support of aimpoint selection was analyzed. Sequences of 875 line FLIR data were extracted from the data base and an example of aspect determination for a maneuvering target in the presence of obscurations was presented. An example was also presented for close in homing (less than 500 meters) and the emergence of interior features, target movement, and scale changes. Hardware implementation in terms of VLSI/VHSIC chips was analyzed.
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Divito, Ben L.
1992-01-01
The design and formal verification of the Reliable Computing Platform (RCP), a fault-tolerant computing system for digital flight control applications, is presented. The RCP uses N-modular redundant (NMR) style replication to mask faults and internal majority voting to flush out the effects of transient faults. The system is formally specified and verified using the Ehdm verification system. A major goal of this work is to provide the system with significant capability to withstand the effects of High Intensity Radiated Fields (HIRF).
Collective behavior of networks with linear (VLSI) integrate-and-fire neurons.
Fusi, S; Mattia, M
1999-04-01
We analyze in detail the statistical properties of the spike emission process of a canonical integrate-and-fire neuron, with a linear integrator and a lower bound for the depolarization, as often used in VLSI implementations (Mead, 1989). The spike statistics of such neurons appear to be qualitatively similar to conventional (exponential) integrate-and-fire neurons, which exhibit a wide variety of characteristics observed in cortical recordings. We also show that, contrary to current opinion, the dynamics of a network composed of such neurons has two stable fixed points, even in the purely excitatory network, corresponding to two different states of reverberating activity. The analytical results are compared with numerical simulations and are found to be in good agreement.
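The linear-integrator dynamics are simple to state: a constant drift replaces the exponential leak, and the depolarization is bounded below at zero. The sketch below illustrates this variant; all parameter values are assumed for illustration only.

```python
# Minimal sketch of a linear (constant-leak) integrate-and-fire neuron with
# a reflecting lower bound at zero, the VLSI-oriented variant analyzed here.
import random

def linear_if(steps=100_000, theta=1.0, beta=-0.002, jump=0.05, rate=0.05):
    v, spikes = 0.0, 0
    for _ in range(steps):
        v += beta                      # constant drift, not proportional to v
        if random.random() < rate:     # Poisson-like excitatory input
            v += jump
        v = max(v, 0.0)                # lower bound on the depolarization
        if v >= theta:                 # threshold crossing emits a spike
            spikes += 1
            v = 0.0                    # reset after the spike
    return spikes / steps              # mean firing rate per time step

print(linear_if())
```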
Periodic binary sequence generators: VLSI circuits considerations
NASA Technical Reports Server (NTRS)
Perlman, M.
1984-01-01
Feedback shift registers are efficient periodic binary sequence generators. Polynomials of degree r over a Galois field of characteristic 2 (GF(2)) characterize the behavior of shift registers with linear logic feedback. The algorithmic determination of the trinomial of lowest degree, when it exists, that contains a given irreducible polynomial over GF(2) as a factor is presented. This corresponds to embedding the behavior of an r-stage shift register with linear logic feedback into that of an n-stage shift register with a single two-input modulo-2 summer (i.e., Exclusive-OR gate) in its feedback. This leads to a Very Large Scale Integrated (VLSI) circuit architecture of maximal regularity (i.e., identical cells) with intercell communications serialized to a maximal degree.
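A brute-force version of the search the abstract describes is easy to sketch with polynomials over GF(2) encoded as bit masks (interface and search bound assumed; the paper's algorithm is more refined than this exhaustive scan).

```python
# Rough sketch: find the lowest-degree trinomial x^n + x^k + 1 that contains
# a given irreducible polynomial over GF(2) as a factor.
def gf2_mod(a, m):
    dm = m.bit_length() - 1
    while a and a.bit_length() - 1 >= dm:      # cancel the leading term
        a ^= m << (a.bit_length() - 1 - dm)
    return a

def lowest_trinomial(p, max_degree=512):
    for n in range(p.bit_length() - 1, max_degree + 1):
        for k in range(1, n):
            t = (1 << n) | (1 << k) | 1        # trinomial x^n + x^k + 1
            if gf2_mod(t, p) == 0:
                return n, k
    return None                                # no trinomial multiple found

# x^4 + x + 1 is itself a trinomial, so the search returns (4, 1)
print(lowest_trinomial(0b10011))
```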
10 K gate I(2)L and 1 K component analog compatible bipolar VLSI technology - HIT-2
NASA Astrophysics Data System (ADS)
Washio, K.; Watanabe, T.; Okabe, T.; Horie, N.
1985-02-01
An advanced analog/digital bipolar VLSI technology that combines on the same chip 2-ns 10 K I(2)L gates with 1 K analog devices is proposed. The new technology, called high-density integration technology-2, is based on a new structural concept that consists of three major techniques: shallow grooved isolation, I(2)L active-layer etching, and I(2)L current-gain enhancement. I(2)L circuits with an 80-MHz maximum toggle frequency have been developed, compatible with n-p-n transistors having a BV(CEO) of more than 10 V and an f(T) of 5 GHz, and lateral p-n-p transistors having an f(T) of 150 MHz.
NASA Technical Reports Server (NTRS)
Aanstoos, J. V.; Snyder, W. E.
1981-01-01
Anticipated major advances in integrated circuit technology in the near future are described, as well as their impact on satellite onboard signal processing systems. Dramatic improvements in chip density, speed, power consumption, and system reliability are expected from very large scale integration. These improvements will enable more intelligence to be placed on remote sensing platforms in space, meeting the goals of NASA's information adaptive system concept, a major component of the NASA End-to-End Data System program. A forecast of VLSI technological advances is presented, including a description of the Defense Department's very high speed integrated circuit program, a seven-year research and development effort.
78 FR 6849 - Agency Information Collection (Verification of VA Benefits) Activity Under OMB Review
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-31
... (Verification of VA Benefits) Activity Under OMB Review AGENCY: Veterans Benefits Administration, Department of... ``OMB Control No. 2900-0406.'' SUPPLEMENTARY INFORMATION: Title: Verification of VA Benefits, VA Form 26... eliminate unlimited versions of lender- designed forms. The form also informs the lender whether or not the...
Design and Verification Guidelines for Vibroacoustic and Transient Environments
NASA Technical Reports Server (NTRS)
1986-01-01
Design and verification guidelines for vibroacoustic and transient environments contain many basic methods that are common throughout the aerospace industry. However, there are some significant differences in methodology between NASA/MSFC and others - both government agencies and contractors. The purpose of this document is to provide the general guidelines used by the Component Analysis Branch, ED23, at MSFC, for the application of vibroacoustic and transient technology to all launch vehicle and payload components and experiments managed by NASA/MSFC. This document is intended as a tool to be utilized by MSFC program management and their contractors as a guide for the design and verification of flight hardware.
Integrated testing and verification system for research flight software design document
NASA Technical Reports Server (NTRS)
Taylor, R. N.; Merilatt, R. L.; Osterweil, L. J.
1979-01-01
The NASA Langley Research Center is developing the MUST (Multipurpose User-oriented Software Technology) program to cut the cost of producing research flight software through a system of software support tools. The HAL/S language is the primary subject of the design. Boeing Computer Services Company (BCS) has designed an integrated verification and testing capability as part of MUST. Documentation, verification, and test options are provided, with special attention to real-time, multiprocessing issues. The needs of the entire software production cycle have been considered, with effective management and reduced life-cycle costs as foremost goals. Capabilities have been included in the design for static detection of data-flow anomalies involving communicating concurrent processes. Some types of ill-formed process synchronization and deadlock are also detected statically.
Automated biowaste sampling system urine subsystem operating model, part 1
NASA Technical Reports Server (NTRS)
Fogal, G. L.; Mangialardi, J. K.; Rosen, F.
1973-01-01
The urine subsystem automatically provides for the collection, volume sensing, and sampling of urine from six subjects during space flight. Verification of the subsystem design was a primary objective of the current effort, which was accomplished through the detail design, fabrication, and verification testing of an operating model of the subsystem.
2013-04-01
The project provided the Royal Canadian Navy (RCN) with a set of guidelines on the analysis, design, and verification processes that should be used in the development of effective room layouts for RCN ships. The primary application is the design of the Canadian Surface Combatant (CSC); however, the guidelines could be applied to the design of any multiple-operator space in any RCN vessel.
Design, Implementation, and Verification of the Reliable Multicast Protocol. Thesis
NASA Technical Reports Server (NTRS)
Montgomery, Todd L.
1995-01-01
This document describes the Reliable Multicast Protocol (RMP) design, first implementation, and formal verification. RMP provides a totally ordered, reliable, atomic multicast service on top of an unreliable multicast datagram service. RMP is fully and symmetrically distributed so that no site bears an undue portion of the communications load. RMP provides a wide range of guarantees, from unreliable delivery to totally ordered delivery, to K-resilient, majority-resilient, and totally resilient atomic delivery. These guarantees are selectable on a per-message basis. RMP provides many communication options, including virtual synchrony, a publisher/subscriber model of message delivery, a client/server model of delivery, mutually exclusive handlers for messages, and mutually exclusive locks. It has been commonly believed that total ordering of messages can only be achieved at great performance expense. RMP discounts this belief. The first implementation of RMP has been shown to provide high throughput performance on Local Area Networks (LANs). For two or more destinations on a single LAN, RMP provides higher throughput than any other protocol that does not use multicast or broadcast technology. The design, implementation, and verification activities of RMP have occurred concurrently, which has allowed the verification to maintain high fidelity between the design model, the implementation model, and the verification model. The restrictions of implementation have influenced the design earlier than in normal sequential approaches, and the protocol as a whole has matured more smoothly through the inclusion of several different perspectives into the product development.
Fourth NASA Langley Formal Methods Workshop
NASA Technical Reports Server (NTRS)
Holloway, C. Michael (Compiler); Hayhurst, Kelly J. (Compiler)
1997-01-01
This publication consists of papers presented at NASA Langley Research Center's fourth workshop on the application of formal methods to the design and verification of life-critical systems. Topics considered include: proving properties of accidents; modeling and validating SAFER in VDM-SL; requirements analysis of real-time control systems using PVS; a tabular language for system design; and automated deductive verification of parallel systems. Also included is a fundamental hardware design in PVS.
Verification testing of the Hydro-Kleen(TM) Filtration System, a catch-basin filter designed to reduce hydrocarbon, sediment, and metals contamination from surface water flows, was conducted at NSF International in Ann Arbor, Michigan. A Hydro-Kleen(TM) system was fitted into a ...
A Framework for Evidence-Based Licensure of Adaptive Autonomous Systems
2016-03-01
The autonomy community has identified significant challenges associated with test, evaluation, verification, and validation (TEVV) of adaptive autonomous systems, and this report examines evidence-based licensure as a TEVV framework that can address these challenges, passing the insights gleaned to DoD. Needs identified include translating natural-language requirements into testable (preferably machine-testable) specifications and designing architectures that treat development and verification together.
Verification of an on line in vivo semiconductor dosimetry system for TBI with two TLD procedures.
Sánchez-Doblado, F; Terrón, J A; Sánchez-Nieto, B; Arráns, R; Errazquin, L; Biggs, D; Lee, C; Núñez, L; Delgado, A; Muñiz, J L
1995-01-01
This work presents the verification of an on-line in vivo dosimetry system based on semiconductors. Software and hardware have been designed to convert the diode signal into absorbed dose. Final verification was made in the form of an intercomparison with two independent thermoluminescent dosimetry (TLD) systems under TBI conditions.
Power efficient, clock gated multiplexer based full adder cell using 28 nm technology
NASA Astrophysics Data System (ADS)
Gupta, Ashutosh; Murgai, Shruti; Gulati, Anmol; Kumar, Pradeep
2016-03-01
Clock gating is a leading technique for power saving. The full adder is one of the basic circuits found in most VLSI designs. In this paper a clock-gated, multiplexer-based full adder cell is implemented in 28 nm technology. We have designed a full adder cell using a multiplexer with a gated clock without degrading the performance of the cell. A negative latch circuit generates the gated clock, which is used to control the multiplexer-based full adder cell. The circuit has been synthesized on a Kintex FPGA through Xilinx ISE Design Suite 14.7 using 28 nm technology in Verilog HDL and simulated in ModelSim 10.3c. The design is verified using SystemVerilog on QuestaSim in a UVM environment. The total power of the circuit has been reduced by 7.41% without degrading the performance of the original circuit. The power has been calculated using the XPower Analyzer tool of Xilinx ISE Design Suite.
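The role of the negative latch is to make the gated clock glitch-free: the enable is sampled only while the clock is low, so a mid-cycle enable change cannot chop the high phase. A behavioral Python sketch of this standard structure (details assumed, not taken from the paper) is:

```python
# Behavioral model of a latch-based clock gate: gated_clk = clk AND the
# enable captured by a negative latch (transparent while clk is low).
class ClockGate:
    def __init__(self):
        self.latched_en = 0
    def gated_clk(self, clk, enable):
        if clk == 0:                   # latch is transparent while clk is low
            self.latched_en = enable
        return clk & self.latched_en   # clock reaches the adder only if enabled

cg = ClockGate()
trace = [(0, 1), (1, 1), (0, 0), (1, 1)]        # (clk, enable) pairs
print([cg.gated_clk(c, e) for c, e in trace])   # -> [0, 1, 0, 0]
# the enable raised during the last high phase is deferred, so no glitch
```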
DOE Office of Scientific and Technical Information (OSTI.GOV)
King, David A.
2012-08-16
Oak Ridge Associated Universities (ORAU) conducted in-process inspections and independent verification (IV) surveys in support of DOE's remedial efforts in Zone 1 of East Tennessee Technology Park (ETTP) in Oak Ridge, Tennessee. Inspections concluded that the remediation contractor's soil removal and survey objectives were satisfied and the dynamic verification strategy (DVS) was implemented as designed. Independent verification (IV) activities included gamma walkover surveys and soil sample collection/analysis over multiple exposure units (EUs).
2007-03-01
Characterisation. In Nanotechnology Aerospace Applications - 2006 (pp. 4-1 to 4-8). Educational Notes RTO-EN-AVT-129bis, Paper 4. Neuilly-sur-Seine, France: RTO. The notes outline the commercialisation process from concept and proof-of-principle through trial samples, engineering verification samples, and design verification samples, and the systems engineering for commercialisation (SEIC) relationships among design houses, engineering and R&D users, integrators, and wafer-processing fabs.
Space Station Furnace Facility. Volume 2: Appendix 1: Contract End Item specification (CEI), part 1
NASA Technical Reports Server (NTRS)
Seabrook, Craig
1992-01-01
This specification establishes the performance, design, development, and verification requirements for the Space Station Furnace Facility (SSFF) Core. The SSFF Core and its interfaces are defined, requirements for SSFF Core performance, design, and construction are specified, and the verification requirements are established.
Integrated Formal Analysis of Time-Triggered Ethernet
NASA Technical Reports Server (NTRS)
Dutertre, Bruno; Shankar, Natarajan; Owre, Sam
2012-01-01
We present new results related to the verification of the Time-Triggered Ethernet (TTE) clock synchronization protocol. This work extends previous verification of TTE based on model checking. We identify a suboptimal design choice in a compression function used in clock synchronization and propose an improvement. We compare the original design and the improved definition using the SAL model checker.
NASA Astrophysics Data System (ADS)
Rieben, James C., Jr.
This study focuses on the effects of relevance and lab design on student learning within the chemistry laboratory environment. A general chemistry conductivity of solutions experiment and an upper level organic chemistry cellulose regeneration experiment were employed. In the conductivity experiment, the two main variables studied were the effect of relevant (or "real world") samples on student learning and a verification-based lab design versus a discovery-based lab design. With the cellulose regeneration experiment, the effect of a discovery-based lab design vs. a verification-based lab design was the sole focus. Evaluation surveys consisting of six questions were used at three different times to assess student knowledge of experimental concepts. In the general chemistry laboratory portion of this study, four experimental variants were employed to investigate the effect of relevance and lab design on student learning. These variants consisted of a traditional (or verification) lab design, a traditional lab design using "real world" samples, a new lab design employing real world samples/situations using unknown samples, and the new lab design using real world samples/situations that were known to the student. Data used in this analysis were collected during the Fall 08, Winter 09, and Fall 09 terms. For the second part of this study a cellulose regeneration experiment was employed to investigate the effects of lab design. A demonstration creating regenerated cellulose "rayon" was modified and converted to an efficient and low-waste experiment. In the first variant students tested their products and verified a list of physical properties. In the second variant, students filled in a blank physical property chart with their own experimental results for the physical properties. Results from the conductivity experiment show significant student learning of the effects of concentration on conductivity and how to use conductivity to differentiate solution types with the use of real world samples. In the organic chemistry experiment, results suggest that the discovery-based design improved student retention of the chain length differentiation by physical properties relative to the verification-based design.
A software engineering approach to expert system design and verification
NASA Technical Reports Server (NTRS)
Bochsler, Daniel C.; Goodwin, Mary Ann
1988-01-01
Software engineering design and verification methods for developing expert systems are not yet well defined. Integration of expert system technology into software production environments will require effective software engineering methodologies to support the entire life cycle of expert systems. The software engineering methods used to design and verify an expert system, RENEX, are discussed. RENEX demonstrates autonomous rendezvous and proximity operations, including replanning of trajectory events and subsystem fault detection, onboard a space vehicle during flight. The RENEX designers utilized a number of software engineering methodologies to deal with the complex problems inherent in this system. An overview of the methods utilized is presented, with special emphasis on the verification process. The benefits and weaknesses of the methods for supporting the development life cycle of expert systems are evaluated, and recommendations are made based on the overall experience with the methods.
NASA Technical Reports Server (NTRS)
Bickford, Mark; Srivas, Mandayam
1991-01-01
Presented here is a formal specification and verification of a property of a quadruplicately redundant fault-tolerant microprocessor system design. A complete listing of the formal specification of the system and the correctness theorems that are proved is given. The system performs the task of obtaining interactive consistency among the processors using a special instruction on the processors. The design is based on an algorithm proposed by Pease, Shostak, and Lamport. The property verified ensures that, provided certain preconditions hold, an execution of the special instruction by the processors correctly accomplishes interactive consistency. The verification was performed using a computer-aided design verification tool, Spectool, and the theorem prover Clio. A major contribution of the work is the demonstration of a significant fault-tolerant hardware design that is mechanically verified by a theorem prover.
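For intuition, the round structure of the Pease-Shostak-Lamport oral-messages algorithm with one fault, OM(1), can be sketched as below; the bit-flipping fault model and four-node configuration are assumed for illustration and are not taken from the verified design.

```python
# Hedged sketch of OM(1) for n >= 4 nodes tolerating one traitor; node 0 is
# the sender, and a faulty node is modeled as flipping every value it relays.
def om1(value, faulty=frozenset(), n=4):
    def send(src, dst, v):
        return v ^ 1 if src in faulty else v
    # round 1: the sender broadcasts its value to every lieutenant
    got = {i: send(0, i, value) for i in range(1, n)}
    # round 2: lieutenants exchange what they received and majority-vote
    out = {}
    for i in range(1, n):
        votes = [got[i]] + [send(j, i, got[j]) for j in range(1, n) if j != i]
        out[i] = max(set(votes), key=votes.count)
    return out

print(om1(1, faulty={2}))   # loyal lieutenants 1 and 3 still agree on 1
print(om1(1, faulty={0}))   # even with a faulty sender, all lieutenants agree
```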
Research on key technology of the verification system of steel rule based on vision measurement
NASA Astrophysics Data System (ADS)
Jia, Siyuan; Wang, Zhong; Liu, Changjie; Fu, Luhua; Li, Yiming; Lu, Ruijun
2018-01-01
The steel rule plays an important role in quantity transmission. However, the traditional verification method for steel rules, based on manual operation and reading, suffers from low precision and low efficiency. A machine vision based verification system for steel rules is designed with reference to JJG 1-1999, Verification Regulation of Steel Rule [1]. What differentiates this system is that it uses a new calibration method for the pixel equivalent and decontaminates the surface of the steel rule. Experiments show that these two methods fully meet the requirements of the verification system. Measuring results strongly prove that these methods not only meet the precision requirements of the verification regulation, but also improve the reliability and efficiency of the verification system.
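The pixel equivalent is simply the physical length represented by one pixel, calibrated from a reference of known length before rule graduations are measured. A minimal sketch of that step follows; the function names and the numbers are assumed for illustration, not taken from the paper.

```python
# Minimal sketch of pixel-equivalent calibration for vision-based measurement.
def pixel_equivalent(gauge_mm, edge_a_px, edge_b_px):
    # mm per pixel, from a reference gauge of known length seen in the image
    return gauge_mm / abs(edge_b_px - edge_a_px)

def graduation_error(nominal_mm, span_px, mm_per_px):
    # deviation of a measured graduation span from its nominal length
    return span_px * mm_per_px - nominal_mm

scale = pixel_equivalent(10.0, 153.2, 2139.7)   # 10 mm gauge in the image
print(graduation_error(100.0, 19864.1, scale))  # error of a 100 mm span
```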
Space transportation system payload interface verification
NASA Technical Reports Server (NTRS)
Everline, R. T.
1977-01-01
The paper considers STS payload-interface verification requirements and the capability provided by STS to support verification. The intent is to standardize as many interfaces as possible, not only through the design, development, test and evaluation (DDT and E) phase of the major payload carriers but also into the operational phase. The verification process is discussed in terms of its various elements, such as the Space Shuttle DDT and E (including the orbital flight test program) and the major payload carriers DDT and E (including the first flights). Five tools derived from the Space Shuttle DDT and E are available to support the verification process: mathematical (structural and thermal) models, the Shuttle Avionics Integration Laboratory, the Shuttle Manipulator Development Facility, and interface-verification equipment (cargo-integration test equipment).
Integrated Vertical Bloch Line (VBL) memory
NASA Technical Reports Server (NTRS)
Katti, R. R.; Wu, J. C.; Stadler, H. L.
1991-01-01
Vertical Bloch Line (VBL) memory is a recently conceived, integrated, solid state, block access, VLSI memory which offers the potential of 1 Gbit/sq cm areal storage density, data rates of hundreds of megabits/sec, and submillisecond average access time, simultaneously, at relatively low mass, volume, and power when compared to alternative technologies. VBLs are micromagnetic structures within magnetic domain walls which can be manipulated using magnetic fields from integrated conductors. The presence or absence of VBL pairs is used to store binary information. At present, efforts are directed at developing a single-chip memory using 25 Mbit/sq cm technology in magnetic garnet material which integrates, at a single operating point, the writing, storage, reading, and amplification functions needed in a memory. The current design architecture, functional elements, and supercomputer simulation results used to assist the design process are described.
A comparison of VLSI architectures for time and transform domain decoding of Reed-Solomon codes
NASA Technical Reports Server (NTRS)
Hsu, I. S.; Truong, T. K.; Deutsch, L. J.; Satorius, E. H.; Reed, I. S.
1988-01-01
It is well known that the Euclidean algorithm or its equivalent, continued fractions, can be used to find the error locator polynomial needed to decode a Reed-Solomon (RS) code. It is shown that this algorithm can be used for both time and transform domain decoding by replacing its initial conditions with the Forney syndromes and the erasure locator polynomial. By this means both the errata locator polynomial and the errata evaluator polynomial can be obtained with the Euclidean algorithm. With these ideas, both time and transform domain Reed-Solomon decoders for correcting errors and erasures are simplified and compared. As a consequence, the architectures of Reed-Solomon decoders for correcting both errors and erasures can be made more modular, regular, simple, and naturally suitable for VLSI implementation.
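The iteration structure behind this is the Euclidean (Sugiyama) solution of the key equation: divide x^(2t) by the syndrome polynomial and stop once the remainder degree drops below t. The sketch below runs over a prime field purely for brevity; a real RS decoder uses the same loop over GF(2^m) with the Forney syndromes as initial conditions, as the abstract describes.

```python
# Illustrative only: the Euclidean iteration for the key equation, over the
# prime field GF(929) instead of GF(2^m) to keep the sketch short.
P = 929

def trim(f):                                    # drop leading zero coefficients
    while len(f) > 1 and f[0] == 0:
        f = f[1:]
    return f

def polydiv(a, b):                              # quotient and remainder mod P
    a, b = trim(list(a)), trim(list(b))
    q = [0] * max(1, len(a) - len(b) + 1)
    inv = pow(b[0], P - 2, P)                   # inverse of the leading coeff
    while len(a) >= len(b) and a != [0]:
        shift, c = len(a) - len(b), a[0] * inv % P
        q[-(shift + 1)] = c
        a = trim([(x - c * y) % P for x, y in zip(a, b + [0] * shift)])
    return trim(q), a

def polymul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] = (out[i + j] + x * y) % P
    return trim(out)

def polysub(a, b):
    n = max(len(a), len(b))
    a, b = [0] * (n - len(a)) + a, [0] * (n - len(b)) + b
    return trim([(x - y) % P for x, y in zip(a, b)])

def key_equation(syndrome, t):
    # divide x^(2t) by S(x); stop when deg(remainder) < t, at which point
    # u1 is the (errata) locator and r1 the evaluator polynomial
    r0, r1 = [1] + [0] * (2 * t), trim(syndrome)
    u0, u1 = [0], [1]
    while len(r1) - 1 >= t and r1 != [0]:
        qt, r = polydiv(r0, r1)
        u = polysub(u0, polymul(qt, u1))
        r0, r1, u0, u1 = r1, r, u1, u
    return u1, r1
```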
Local bipolar-transistor gain measurement for VLSI devices
NASA Astrophysics Data System (ADS)
Bonnaud, O.; Chante, J. P.
1981-08-01
A method is proposed for measuring the gain of a bipolar transistor over as small a region as possible. The measurement then allows evaluation of, in particular, the effect of the emitter-base junction edge and the influence of the process technology in VLSI devices. The technique consists in generating charge carriers in the transistor base layer with a focused laser beam in order to bias the device in as small a region as possible. To reduce the size of the conducting area, a transverse reverse base current is forced through the base layer resistance in order to pinch the emitter current into the illuminated region. Transistor gain is deduced from small-signal measurements. A model associated with this technique is developed and is in agreement with the first experimental results.
Post-OPC verification using a full-chip pattern-based simulation verification method
NASA Astrophysics Data System (ADS)
Hung, Chi-Yuan; Wang, Ching-Heng; Ma, Cliff; Zhang, Gary
2005-11-01
In this paper, we evaluated and investigated techniques for performing fast full-chip post-OPC verification using a commercial product platform. A number of databases from several technology nodes, i.e., 0.13 um, 0.11 um, and 90 nm, are used in the investigation. Although it has been proven that our OPC technology is robust in general, the variety of tape-outs with complicated design styles and technologies makes it difficult to develop a "complete" or "bullet-proof" OPC algorithm that covers every possible layout pattern. In the evaluation, among dozens of databases, some OPC databases were found by model-based post-OPC checking to contain errors, which could cost significantly in manufacturing - reticle, wafer process, and, more importantly, production delay. From such full-chip OPC database verification, we have learned that optimizing OPC models and recipes on a limited set of test chip designs may not provide sufficient coverage across the range of designs to be produced in the process, and fatal errors (such as pinching or bridging), poor CD distribution, and process-sensitive patterns may still occur. As a result, more than one reticle tape-out cycle is not uncommon to prove models and recipes that approach the center of process for a range of designs. We therefore describe a full-chip pattern-based simulation verification flow that serves both OPC model and recipe development and post-OPC verification after production release of the OPC. Lastly, we discuss the differences between the new pattern-based and conventional edge-based verification tools and summarize the advantages of our new tool and methodology: (1) accuracy - superior inspection algorithms, down to 1 nm accuracy with the new pattern-based approach; (2) high-speed performance - pattern-centric algorithms give the best full-chip inspection efficiency; (3) powerful analysis capability - flexible error distribution, grouping, interactive viewing, and hierarchical pattern extraction to narrow down to unique patterns/cells.
Loads and Structural Dynamics Requirements for Spaceflight Hardware
NASA Technical Reports Server (NTRS)
Schultz, Kenneth P.
2011-01-01
The purpose of this document is to establish requirements relating to the loads and structural dynamics technical discipline for NASA and commercial spaceflight launch vehicle and spacecraft hardware. Requirements are defined for the development of structural design loads, and recommendations regarding methodologies and practices for the conduct of load analyses are provided. As such, this document represents an implementation of NASA STD-5002. Requirements are also defined for structural mathematical model development and verification to ensure sufficient accuracy of predicted responses. Finally, requirements for model/data delivery and exchange are specified to facilitate interactions between Launch Vehicle Providers (LVPs), Spacecraft Providers (SCPs), and the NASA Technical Authority (TA) providing insight/oversight and serving in the Independent Verification and Validation role. In addition to the analysis-related requirements described above, a set of requirements is established concerning coupling phenomena or other interactions between structural dynamics and aerodynamic environments or control or propulsion system elements. Such requirements may reasonably be considered structure or control system design criteria, since good engineering practice dictates consideration of and/or elimination of the identified conditions in the development of those subsystems. The requirements are included here, however, to ensure that such considerations are captured in the design space for launch vehicles (LV), spacecraft (SC), and the Launch Abort Vehicle (LAV). The requirements in this document are focused on analyses to be performed to develop data needed to support structural verification. As described in JSC 65828, Structural Design Requirements and Factors of Safety for Spaceflight Hardware, implementation of the structural verification requirements is expected to be described in a Structural Verification Plan (SVP), which should describe the verification of each structural item for the applicable requirements. The requirement for and expected contents of the SVP are defined in JSC 65828. The SVP may also document unique verifications that meet or exceed these requirements with Technical Authority approval.
Design of verification platform for wireless vision sensor networks
NASA Astrophysics Data System (ADS)
Ye, Juanjuan; Shang, Fei; Yu, Chuang
2017-08-01
At present, the majority of research on wireless vision sensor networks (WVSNs) remains in the software simulation stage, and few WVSN verification platforms are available for use. This situation seriously restricts the transformation of WVSN theory research into practical application. Therefore, it is necessary to study the construction of a WVSN verification platform. This paper combines a wireless transceiver module, a visual information acquisition module, and a power acquisition module to design a high-performance wireless vision sensor node built around an ARM11 microprocessor, and selects AODV as the routing protocol, to set up a verification platform called AdvanWorks for WVSNs. Experiments show that AdvanWorks can successfully achieve image acquisition, coding, and wireless transmission, and can obtain effective inter-node distance parameters, which lays a good foundation for follow-up applications of WVSNs.
2013-10-01
...its Verification in the Design and Testing of W-band Dual-Aspheric Lenses - A. Altintas and V. Yurchenko, EEE Department, Bilkent University, Ankara.
ERIC Educational Resources Information Center
Wu, Peter Y.; Manohar, Priyadarshan A.; Acharya, Sushil
2016-01-01
It is well known that interesting questions can stimulate thinking and invite participation. Class exercises are designed to make use of questions to engage students in active learning. In a project toward building a community skilled in software verification and validation (SV&V), we critically review and further develop course materials in…
A sparse matrix algorithm on the Boolean vector machine
NASA Technical Reports Server (NTRS)
Wagner, Robert A.; Patrick, Merrell L.
1988-01-01
VLSI technology is being used to implement a prototype Boolean Vector Machine (BVM), a large network of very small processors with equally small memories that operate in SIMD mode, use bit-serial arithmetic, and communicate via a cube-connected cycles network. The BVM's bit-serial arithmetic and the small memories of individual processors compromise the system's effectiveness in large numerical problem applications. Attention is presently given to the implementation of a basic matrix-vector iteration algorithm for sparse matrices on the BVM, in order to generate over 1 billion useful floating-point operations/sec for this iteration algorithm. The algorithm is expressed in a novel language designated 'BVM'.
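For reference, the kernel at the heart of such an iteration is an ordinary sparse matrix-vector product; a plain-Python rendering (storage format assumed, not from the paper) makes the per-row structure that the BVM parallelizes bit-serially easy to see.

```python
# Sparse matrix-vector product with the matrix in compressed sparse row form.
def csr_matvec(vals, cols, row_ptr, x):
    y = [0.0] * (len(row_ptr) - 1)
    for i in range(len(y)):                    # one row per (virtual) processor
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += vals[k] * x[cols[k]]
    return y

# 3x3 example: [[4,0,1],[0,3,0],[2,0,5]] in compressed sparse row form
vals, cols, row_ptr = [4, 1, 3, 2, 5], [0, 2, 1, 0, 2], [0, 2, 3, 5]
x = [1.0, 1.0, 1.0]
for _ in range(3):                             # repeated iteration x <- A x
    x = csr_matvec(vals, cols, row_ptr, x)
print(x)
```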
Integrated optical circuits for numerical computation
NASA Technical Reports Server (NTRS)
Verber, C. M.; Kenan, R. P.
1983-01-01
The development of integrated optical circuits (IOC) for numerical-computation applications is reviewed, with a focus on the use of systolic architectures. The basic architecture criteria for optical processors are shown to be the same as those proposed by Kung (1982) for VLSI design, and the advantages of IOCs over bulk techniques are indicated. The operation and fabrication of electrooptic grating structures are outlined, and the application of IOCs of this type to an existing 32-bit, 32-Mbit/sec digital correlator, a proposed matrix multiplier, and a proposed pipeline processor for polynomial evaluation is discussed. The problems arising from the inherent nonlinearity of electrooptic gratings are considered. Diagrams and drawings of the application concepts are provided.
Real-time FPGA architectures for computer vision
NASA Astrophysics Data System (ADS)
Arias-Estrada, Miguel; Torres-Huitzil, Cesar
2000-03-01
This paper presents an architecture for real-time generic convolution of a mask and an image, intended for fast low-level image processing. The FPGA-based architecture takes advantage of the availability of registers in FPGAs to implement an efficient and compact module to process the convolutions. The architecture is designed to minimize the number of accesses to the image memory and is based on parallel modules with internal pipeline operation to improve performance. The architecture is prototyped on an FPGA, but it can be implemented on a dedicated VLSI to reach higher clock frequencies. Complexity issues, FPGA resource utilization, FPGA limitations, and real-time performance are discussed, and some results are presented.
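Functionally, the module computes ordinary sliding-window sums; the sketch below (sizes assumed) gives the reference behavior, while the FPGA version obtains the same result by buffering K-1 image lines in registers so each pixel is fetched from external memory only once.

```python
# Behavioral model of generic KxK mask convolution over a 2D image.
def convolve(image, mask):
    H, W, K = len(image), len(image[0]), len(mask)
    out = [[0] * (W - K + 1) for _ in range(H - K + 1)]
    for r in range(H - K + 1):
        for c in range(W - K + 1):             # slide the KxK window
            out[r][c] = sum(image[r + i][c + j] * mask[i][j]
                            for i in range(K) for j in range(K))
    return out

img = [[(r * 7 + c * 3) % 16 for c in range(6)] for r in range(5)]
sobel_x = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]  # a common edge-detection mask
print(convolve(img, sobel_x))
```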
VLSI (Very Large Scale Integration) Design Tools Reference Manual - Release 1.0.
1983-10-01
Independent source transient specifications:
Pulse: PULSE(V1 V2 TD TR TF PW PER). Example: VIN 3 0 PULSE(-1 1 2NS 2NS 2NS 50NS 100NS). Parameters: V1 (initial value, Volts or Amps); V2 (pulsed value, Volts or Amps).
Sinusoidal: SIN(VO VA FREQ TD THETA). Example: VIN 3 0 SIN(0 1 100MEG 1NS 1E10). Parameters: VO (offset, Volts or Amps); VA (amplitude, Volts or Amps). From TD to TSTOP the value is VO + VA e^(-(t-TD) THETA) sin(2 pi FREQ (t-TD)).
Exponential: EXP(V1 V2 TD1 TAU1 TD2 TAU2). Example: VIN 3 0 EXP(-4 -1 2NS 30NS 60NS 40NS).
A Low Cost Matching Motion Estimation Sensor Based on the NIOS II Microprocessor
González, Diego; Botella, Guillermo; Meyer-Baese, Uwe; García, Carlos; Sanz, Concepción; Prieto-Matías, Manuel; Tirado, Francisco
2012-01-01
This work presents the implementation of a matching-based motion estimation sensor on a Field Programmable Gate Array (FPGA) and NIOS II microprocessor applying a C to Hardware (C2H) acceleration paradigm. The design, which involves several matching algorithms, is mapped using Very Large Scale Integration (VLSI) technology. These algorithms, as well as the hardware implementation, are presented here together with an extensive analysis of the resources needed and the throughput obtained. The developed low-cost system achieves real-time throughput with reduced power consumption and is useful in robotic applications, such as tracking, navigation using an unmanned vehicle, or as part of a more complex system. PMID:23201989
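A representative matching kernel is full-search block matching with a sum-of-absolute-differences (SAD) cost; the sketch below (block and window sizes assumed, not taken from the paper) shows the computation that such accelerators map to hardware.

```python
# Sketch of full-search block matching with a SAD cost function.
def sad(a, b):
    return sum(abs(x - y) for ra, rb in zip(a, b) for x, y in zip(ra, rb))

def full_search(ref, cur, bx, by, B=8, R=4):
    block = [row[bx:bx + B] for row in cur[by:by + B]]
    best, mv = float("inf"), (0, 0)
    for dy in range(-R, R + 1):                # exhaustive +/-R search window
        for dx in range(-R, R + 1):
            x, y = bx + dx, by + dy
            if 0 <= x <= len(ref[0]) - B and 0 <= y <= len(ref) - B:
                cost = sad(block, [row[x:x + B] for row in ref[y:y + B]])
                if cost < best:
                    best, mv = cost, (dx, dy)
    return mv, best

ref = [[(r * 5 + c) % 9 for c in range(24)] for r in range(24)]
cur = [row[2:] + row[:2] for row in ref]       # scene shifted left by 2 pixels
print(full_search(ref, cur, 8, 8))             # -> ((2, 0), 0)
```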
The Maximal Oxygen Uptake Verification Phase: a Light at the End of the Tunnel?
Schaun, Gustavo Z
2017-12-08
Commonly performed during an incremental test to exhaustion, maximal oxygen uptake (V̇O2max) assessment has become a recurring practice in clinical and experimental settings. To validate the test, several criteria have been proposed. In this context, the plateau in oxygen uptake (V̇O2) is inconsistent in its frequency, reducing its usefulness as a robust method to determine "true" V̇O2max. Moreover, previously suggested secondary criteria, such as expiratory exchange ratios or percentages of maximal heart rate, are highly dependent on protocol design and often are achieved at V̇O2 percentages well below V̇O2max. Thus, an alternative method termed the verification phase was proposed. Currently, it is clear that the verification phase can be a practical and sensitive method to confirm V̇O2max; however, procedures to conduct it are not standardized across the literature, and no previous research has tried to summarize how it has been employed. Therefore, in this review the knowledge on the verification phase is updated, and suggestions on how it can be performed (e.g., intensity, duration, recovery) are provided according to population and protocol design. Future studies should focus on identifying a verification protocol feasible for different populations and on comparing square-wave and multistage verification phases. Additionally, studies assessing verification phases in different patient populations are still warranted.
Multi-canister overpack project -- verification and validation, MCNP 4A
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goldmann, L.H.
This supporting document contains the software verification and validation (V and V) package used for Phase 2 design of the Spent Nuclear Fuel Multi-Canister Overpack. V and V packages for both ANSYS and MCNP are included. Description of verification run(s): This software requires that it be compiled specifically for the machine it is to be used on. Therefore, to facilitate the verification process, the software automatically runs 25 sample problems to ensure proper installation and compilation. Once the runs are completed, the software checks for verification by performing a file comparison on the new output file and the old output file. Any differences between any of the files will cause a verification error. Due to the manner in which the verification is completed, a verification error does not necessarily indicate a problem; it indicates that a closer look at the output files is needed to determine the cause of the error.
A DVE Time Management Simulation and Verification Platform Based on Causality Consistency Middleware
NASA Astrophysics Data System (ADS)
Zhou, Hangjun; Zhang, Wei; Peng, Yuxing; Li, Sikun
During the course of designing a time management algorithm for DVEs, researchers are often made inefficient by the distraction of implementing the trivial but fundamental details of simulation and verification. A platform that has already realized these details is therefore desirable; however, to our knowledge this has not been achieved in any published work. In this paper, we are the first to design and realize a DVE time management simulation and verification platform providing exactly the same interfaces as those defined by the HLA Interface Specification. Moreover, our platform is based on a newly designed causality consistency middleware and offers the comparison of three kinds of time management services: CO, RO, and TSO. The experimental results show that the implementation of the platform incurs only a small overhead, and that its efficient performance lets researchers focus solely on the improvement of their algorithms.
Optimized Temporal Monitors for SystemC
NASA Technical Reports Server (NTRS)
Tabakov, Deian; Rozier, Kristin Y.; Vardi, Moshe Y.
2012-01-01
SystemC is a modeling language built as an extension of C++. Its growing popularity and the increasing complexity of designs have motivated research efforts aimed at the verification of SystemC models using assertion-based verification (ABV), where the designer asserts properties that capture the design intent in a formal language such as PSL or SVA. The model then can be verified against the properties using runtime or formal verification techniques. In this paper we focus on automated generation of runtime monitors from temporal properties. Our focus is on minimizing runtime overhead, rather than monitor size or monitor-generation time. We identify four issues in monitor generation: state minimization, alphabet representation, alphabet minimization, and monitor encoding. We conduct extensive experimentation and identify a combination of settings that offers the best performance in terms of runtime overhead.
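To make the monitor idea concrete, here is a toy runtime monitor in the spirit of ABV for a request/grant liveness property; the property, encoding, and finite-trace semantics are assumed for illustration and are far simpler than the optimized monitors the paper generates.

```python
# Toy runtime monitor for G(req -> F grant) over a finite trace: it tracks
# whether a request obligation is still pending when the trace ends.
class ReqGrantMonitor:
    def __init__(self):
        self.pending = False
    def step(self, req, grant):
        if grant:
            self.pending = False       # any grant discharges the obligation
        elif req:
            self.pending = True        # an outstanding, undischarged request
    def verdict(self):
        return not self.pending        # False = potential violation at end

mon = ReqGrantMonitor()
for req, grant in [(1, 0), (0, 0), (0, 1), (1, 0)]:
    mon.step(req, grant)
print(mon.verdict())                   # False: the last request was never granted
```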
Leveraging pattern matching to solve SRAM verification challenges at advanced nodes
NASA Astrophysics Data System (ADS)
Kan, Huan; Huang, Lucas; Yang, Legender; Zou, Elaine; Wan, Qijian; Du, Chunshan; Hu, Xinyi; Liu, Zhengfang; Zhu, Yu; Zhang, Recoo; Huang, Elven; Muirhead, Jonathan
2018-03-01
Memory is a critical component in today's system-on-chip (SoC) designs. Static random-access memory (SRAM) blocks are assembled by combining intellectual property (IP) blocks that come from SRAM libraries developed and certified by the foundries for both functionality and a specific process node. Customers place these SRAM IP in their designs, adjusting as necessary to achieve DRC-clean results. However, any changes a customer makes to these SRAM IP during implementation, whether intentionally or in error, can impact yield and functionality. Physical verification of SRAM has always been a challenge, because these blocks usually contain smaller feature sizes and spacing constraints compared to traditional logic or other layout structures. At advanced nodes, critical dimension becomes smaller and smaller, until there is almost no opportunity to use optical proximity correction (OPC) and lithography to adjust the manufacturing process to mitigate the effects of any changes. The smaller process geometries, reduced supply voltages, increasing process variation, and manufacturing uncertainty mean accurate SRAM physical verification results are not only reaching new levels of difficulty, but also new levels of criticality for design success. In this paper, we explore the use of pattern matching to create an SRAM verification flow that provides both accurate, comprehensive coverage of the required checks and visual output to enable faster, more accurate error debugging. Our results indicate that pattern matching can enable foundries to improve SRAM manufacturing yield, while allowing designers to benefit from SRAM verification kits that can shorten the time to market.
LH2 on-orbit storage tank support trunnion design and verification
NASA Technical Reports Server (NTRS)
Bailey, W. J.; Fester, D. A.; Toth, J. M., Jr.
1985-01-01
A detailed fatigue analysis was conducted to provide verification of the trunnion design in the reusable Cryogenic Fluid Management Facility for Shuttle flights and to assess the performance capability of the trunnion E-glass/S-glass epoxy composite material. Basic material property data at ambient and liquid hydrogen temperatures support the adequacy of the epoxy composite for the seven-mission requirement. Testing of trunnions fabricated to the flight design has verified that the strength and fatigue properties of the design are adequate to meet the requirements of seven Shuttle flights.
The Role of Integrated Modeling in the Design and Verification of the James Webb Space Telescope
NASA Technical Reports Server (NTRS)
Mosier, Gary E.; Howard, Joseph M.; Johnston, John D.; Parrish, Keith A.; Hyde, T. Tupper; McGinnis, Mark A.; Bluth, Marcel; Kim, Kevin; Ha, Kong Q.
2004-01-01
The James Webb Space Telescope (JWST) is a large, infrared-optimized space telescope scheduled for launch in 2011. System-level verification of critical optical performance requirements will rely on integrated modeling to a considerable degree; in turn, requirements for the accuracy of the models are significant. The size of the lightweight observatory structure, coupled with the need to test at cryogenic temperatures, effectively precludes validation of the models and verification of optical performance with a single test in 1-g. Rather, a complex series of steps is planned by which the components of the end-to-end models are validated at various levels of subassembly, and the ultimate verification of optical performance is by analysis using the assembled models. This paper describes the critical optical performance requirements driving the integrated modeling activity, shows how the error budget is used to allocate and track contributions to total performance, and presents examples of integrated modeling methods and results that support the preliminary observatory design. Finally, the concepts for model validation and the role of integrated modeling in the ultimate verification of the observatory are described.
NASA Technical Reports Server (NTRS)
Bernstein, Karen S.; Kujala, Rod; Fogt, Vince; Romine, Paul
2011-01-01
This document establishes the structural requirements for human-rated spaceflight hardware including launch vehicles, spacecraft and payloads. These requirements are applicable to Government Furnished Equipment activities as well as all related contractor, subcontractor and commercial efforts. These requirements are not imposed on systems other than human-rated spacecraft, such as ground test articles, but may be tailored for use in specific cases where it is prudent to do so such as for personnel safety or when assets are at risk. The requirements in this document are focused on design rather than verification. Implementation of the requirements is expected to be described in a Structural Verification Plan (SVP), which should describe the verification of each structural item for the applicable requirements. The SVP may also document unique verifications that meet or exceed these requirements with NASA Technical Authority approval.
Engineering of the LISA Pathfinder mission—making the experiment a practical reality
NASA Astrophysics Data System (ADS)
Warren, Carl; Dunbar, Neil; Backler, Mike
2009-05-01
LISA Pathfinder represents a unique challenge in the development of scientific spacecraft—not only is the LISA Test Package (LTP) payload a complex integrated development, placing stringent requirements on its developers and the spacecraft, but the payload also acts as the core sensor and actuator for the spacecraft, making the tasks of control design, software development and system verification unusually difficult. The micro-propulsion system which provides the remaining actuation also presents substantial development and verification challenges. As the mission approaches the system critical design review, flight hardware is completing verification and the process of verification using software and hardware simulators and test benches is underway. Preparation for operations has started, but critical milestones for LTP and field effect electric propulsion (FEEP) lie ahead. This paper summarizes the status of the present development and outlines the key challenges that must be overcome on the way to launch.
Large-scale Rectangular Ruler Automated Verification Device
NASA Astrophysics Data System (ADS)
Chen, Hao; Chang, Luping; Xing, Minjian; Xie, Xie
2018-03-01
This paper introduces a large-scale rectangular ruler automated verification device, which consists of a photoelectric autocollimator, a self-designed mechanical drive car, and an automatic data acquisition system. The mechanical design covers the optical axis, the drive, the fixture, and the wheels. The control system comprises hardware built around a single-chip microcomputer and software covering the photoelectric autocollimator and the automatic data acquisition process. The device can acquire verticality measurement data automatically. Its reliability is verified by experimental comparison, and the results meet the requirements of the right angle test procedure.
Space station prototype Sabatier reactor design verification testing
NASA Technical Reports Server (NTRS)
Cusick, R. J.
1974-01-01
A six-man, flight prototype carbon dioxide reduction subsystem for the SSP ETC/LSS (Space Station Prototype Environmental/Thermal Control and Life Support System) was developed and fabricated for the NASA-Johnson Space Center between February 1971 and October 1973. Component design verification testing was conducted on the Sabatier reactor covering design and off-design conditions as part of this development program. The reactor was designed to convert a minimum of 98 per cent hydrogen to water and methane for both six-man and two-man reactant flow conditions. Important design features of the reactor and test conditions are described. Reactor test results are presented that show design goals were achieved and off-design performance was stable.
Vertically integrated photonic multichip module architecture for vision applications
NASA Astrophysics Data System (ADS)
Tanguay, Armand R., Jr.; Jenkins, B. Keith; von der Malsburg, Christoph; Mel, Bartlett; Holt, Gary; O'Brien, John D.; Biederman, Irving; Madhukar, Anupam; Nasiatka, Patrick; Huang, Yunsong
2000-05-01
The development of a truly smart camera, with inherent capability for low latency semi-autonomous object recognition, tracking, and optimal image capture, has remained an elusive goal notwithstanding tremendous advances in the processing power afforded by VLSI technologies. These features are essential for a number of emerging multimedia- based applications, including enhanced augmented reality systems. Recent advances in understanding of the mechanisms of biological vision systems, together with similar advances in hybrid electronic/photonic packaging technology, offer the possibility of artificial biologically-inspired vision systems with significantly different, yet complementary, strengths and weaknesses. We describe herein several system implementation architectures based on spatial and temporal integration techniques within a multilayered structure, as well as the corresponding hardware implementation of these architectures based on the hybrid vertical integration of multiple silicon VLSI vision chips by means of dense 3D photonic interconnections.
Analog VLSI system for active drag reduction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gupta, B.; Goodman, R.; Jiang, F.
1996-10-01
In today's cost-conscious air transportation industry, fuel costs are a substantial economic concern. Drag reduction is an important way to reduce costs: even a 5% reduction in drag translates into estimated savings of millions of dollars in fuel costs. Drawing inspiration from the structure of shark skin, the authors are building a system to reduce drag along a surface. Our analog VLSI system interfaces with microfabricated, constant-temperature shear stress sensors. It detects regions of high shear stress and outputs a control signal to activate a microactuator. We are in the process of verifying the actual drag reduction by controlling microactuators in wind tunnel experiments. We are encouraged that an approach similar to one that biology employs provides a very useful contribution to the problem of drag reduction. 9 refs., 21 figs.
NASA Astrophysics Data System (ADS)
Nomaguchi, Yutaka; Fujita, Kikuo
This paper proposes a design support framework, named DRIFT (Design Rationale Integration Framework of Three layers), which dynamically captures and manages hypothesis and verification in the design process. The core of DRIFT is a three-layered design process model of action, model operation, and argumentation. This model integrates various design support tools and captures the design operations performed on them. The action level captures the sequence of design operations. The model operation level captures the transition of design states, recording a design snapshot across design tools. The argumentation level captures the process of setting problems and alternatives. Linking the three levels makes it possible to automatically and efficiently capture and manage iterative hypothesis-and-verification processes through design operations across design tools. In DRIFT, such linkage is extracted through templates of design operations, which are drawn from patterns embedded in design tools such as Design-For-X (DFX) approaches, and design tools are integrated through an ontology-based representation of design concepts. An argumentation model, gIBIS (graphical Issue-Based Information System), is used for representing dependencies among problems and alternatives, and a truth maintenance system (TMS) mechanism is used for managing multiple hypothetical design stages. This paper also demonstrates a prototype implementation of DRIFT and its application to a simple design problem, and concludes with a discussion of future issues.
TeleOperator/telePresence System (TOPS) Concept Verification Model (CVM) development
NASA Technical Reports Server (NTRS)
Shimamoto, Mike S.
1993-01-01
The development of an anthropomorphic, undersea manipulator system, the TeleOperator/telePresence System (TOPS) Concept Verification Model (CVM), is described. The TOPS design philosophy, which draws on NRaD's experience in the development and operation of undersea vehicles and manipulator systems, is presented, along with the TOPS design approach, the task teams, the development and results of the manipulator and vision systems, and conclusions and recommendations.
A Low-Power High-Speed Smart Sensor Design for Space Exploration Missions
NASA Technical Reports Server (NTRS)
Fang, Wai-Chi
1997-01-01
A low-power high-speed smart sensor system based on a large-format active pixel sensor (APS) integrated with a programmable neural processor for space exploration missions is presented. The concept of building an advanced smart sensing system is demonstrated by a system-level microchip design composed of an APS sensor, a programmable neural processor, and an embedded microprocessor in an SOI CMOS technology. This ultra-fast smart sensor system-on-a-chip design mimics what is inherent in biological vision systems. Moreover, it is programmable and capable of performing ultra-fast machine vision processing at all levels, including image acquisition, image fusion, image analysis, scene interpretation, and control functions. The system provides about one tera-operation-per-second of computing power, a two-order-of-magnitude increase over state-of-the-art microcomputers. Its high performance is due to massively parallel computing structures, high data throughput rates, fast learning capabilities, and an advanced VLSI system-on-a-chip implementation.
A VLSI Implementation of Four-Phase Lift Controller Using Verilog HDL
NASA Astrophysics Data System (ADS)
Kumar, Manish; Singh, Priyanka; Singh, Shesha
2017-08-01
With the advent of a staggering range of new technologies providing ease of mobility and transportation, elevators have become an essential component of all high-rise buildings. An elevator is a form of vertical transportation that moves people between the floors of a high-rise building. A four-phase lift controller modeled in Verilog HDL using a finite state machine (FSM) is presented in this paper. Verilog HDL supports automated analysis and simulation of the lift controller circuit. The design is synchronous and operates at a fixed clock frequency. The lift motion is controlled by accepting the destination floor level as input and generating control signals as output. In the proposed design, Verilog RTL code is developed and verified. XILINX Project Navigator was used as the code-writing platform, and results were simulated using the Modelsim 5.4a simulator. This paper discusses the overall evolution of the design as well as the simulated results.
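To make the FSM structure concrete, the following minimal Python sketch mirrors the four-phase control idea described above; the state names, inputs, and transition rules are illustrative assumptions, not the paper's Verilog RTL.

    # Hypothetical four-state lift controller (states and transitions are
    # illustrative; the paper's actual Verilog design is not reproduced here).
    IDLE, MOVING_UP, MOVING_DOWN, DOOR_OPEN = range(4)

    def next_state(state, current_floor, dest_floor, door_timer_done):
        """Return the next FSM state, evaluated once per clock tick."""
        if state == IDLE:
            if dest_floor > current_floor:
                return MOVING_UP
            if dest_floor < current_floor:
                return MOVING_DOWN
            return IDLE
        if state in (MOVING_UP, MOVING_DOWN):
            return DOOR_OPEN if current_floor == dest_floor else state
        if state == DOOR_OPEN:
            return IDLE if door_timer_done else DOOR_OPEN
        return IDLE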
Driving a car with custom-designed fuzzy inferencing VLSI chips and boards
NASA Technical Reports Server (NTRS)
Pin, Francois G.; Watanabe, Yutaka
1993-01-01
Vehicle control in a-priori unknown, unpredictable, and dynamic environments requires many calculational and reasoning schemes to operate on the basis of very imprecise, incomplete, or unreliable data. For such systems, in which all the uncertainties cannot be engineered away, approximate reasoning may provide an alternative to the complexity and computational requirements of conventional uncertainty analysis and propagation techniques. Two types of computer boards incorporating custom-designed VLSI chips were developed to add a fuzzy inferencing capability to real-time control systems. All inferencing rules on a chip are processed in parallel, allowing execution of the entire rule base in about 30 microseconds and therefore making control of 'reflex-type' motions feasible. We first discuss the use of these boards and an approach based on the superposition of elemental sensor-based behaviors for developing qualitative reasoning schemes that emulate human-like navigation in a-priori unknown environments. We then describe how the human-like navigation scheme, implemented on one of the qualitative inferencing boards, was installed on a test-bed platform to investigate two control modes for driving a car in a-priori unknown environments on the basis of sparse and imprecise sensor data. In the first mode, the car navigates fully autonomously, while in the second mode, the system acts as a driver's aid, providing the driver with linguistic (fuzzy) commands to turn left or right and speed up or slow down depending on the obstacles perceived by the sensors. Experiments with both modes of control are described in which the system uses only three acoustic range (sonar) sensor channels to perceive the environment. Simulation results as well as indoor and outdoor experiments are presented and discussed to illustrate the feasibility and robustness of autonomous navigation and/or a safety-enhancing driver's aid using the new fuzzy inferencing hardware system and human-like reasoning schemes which may include as little as six elemental behaviors embodied in fourteen qualitative rules.
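For readers unfamiliar with the mechanics, the sketch below shows min-max fuzzy inference with weighted-average defuzzification on three range inputs; the membership functions and three rules are invented for illustration and are far simpler than the fourteen qualitative rules mentioned above.

    # Toy fuzzy inference: three sonar ranges (metres) -> steering command in [-1, 1].
    def near(d):
        return max(0.0, min(1.0, (2.0 - d) / 2.0))   # degree to which an obstacle is near

    def far(d):
        return 1.0 - near(d)

    def steer(left, front, right):
        # Rule strength = min of its antecedents; crisp output = weighted average.
        rules = [
            (min(near(front), far(left)),  -1.0),  # blocked ahead, left clear -> turn left
            (min(near(front), far(right)), +1.0),  # blocked ahead, right clear -> turn right
            (far(front),                    0.0),  # path clear -> go straight
        ]
        total = sum(s for s, _ in rules)
        return sum(s * out for s, out in rules) / total if total else 0.0

    print(steer(left=3.0, front=0.5, right=0.4))   # right side blocked -> negative (left turn)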
An efficient current-based logic cell model for crosstalk delay analysis
NASA Astrophysics Data System (ADS)
Nazarian, Shahin; Das, Debasish
2013-04-01
Logic cell modelling is an important component in the analysis and design of CMOS integrated circuits, mostly due to the nonlinear behaviour of CMOS cells with respect to the voltage signals at their input and output pins. A current-based model for CMOS logic cells is presented, which can be used for effective crosstalk noise and delta-delay analysis in CMOS VLSI circuits. Existing current source models are expensive and need a new set of Spice-based characterisations, which is not compatible with typical EDA tools. In this article we present Imodel, a simple nonlinear logic cell model that can be derived from typical cell libraries such as NLDM, with accuracy much higher than that of NLDM-based cell delay models. In fact, our experiments show an average error of 3% compared to Spice. This level of accuracy comes with a maximum runtime penalty of 19% compared to NLDM-based cell delay models on medium-sized industrial designs.
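As a rough illustration of how a table-based current-source model is evaluated during analysis, the sketch below bilinearly interpolates a tabulated I(Vin, Vout) surface; the grid and current values are placeholders, not the NLDM-derived data that Imodel actually uses.

    import numpy as np

    # Placeholder 5x5 table of driver output current versus (Vin, Vout).
    vin_grid = np.linspace(0.0, 1.0, 5)
    vout_grid = np.linspace(0.0, 1.0, 5)
    i_table = np.linspace(0.0, 1e-3, 25).reshape(5, 5)   # currents in amperes

    def output_current(vin, vout):
        """Bilinear interpolation of the tabulated I(Vin, Vout) surface."""
        i = int(np.clip(np.searchsorted(vin_grid, vin) - 1, 0, 3))
        j = int(np.clip(np.searchsorted(vout_grid, vout) - 1, 0, 3))
        tx = (vin - vin_grid[i]) / (vin_grid[i + 1] - vin_grid[i])
        ty = (vout - vout_grid[j]) / (vout_grid[j + 1] - vout_grid[j])
        return ((1 - tx) * (1 - ty) * i_table[i, j] + tx * (1 - ty) * i_table[i + 1, j]
                + (1 - tx) * ty * i_table[i, j + 1] + tx * ty * i_table[i + 1, j + 1])

    print(output_current(0.4, 0.6))   # interpolated current at an off-grid point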
Reed Solomon codes for error control in byte organized computer memory systems
NASA Technical Reports Server (NTRS)
Lin, S.; Costello, D. J., Jr.
1984-01-01
A problem in designing semiconductor memories is to provide some measure of error control without requiring excessive coding overhead or decoding time. In LSI and VLSI technology, memories are often organized on a multiple-bit (or byte) per chip basis. For example, some 256K-bit DRAMs are organized as 32K x 8-bit bytes. Byte-oriented codes such as Reed-Solomon (RS) codes can provide efficient low-overhead error control for such memories. However, the standard iterative algorithm for decoding RS codes is too slow for these applications. Some special decoding techniques for extended single- and double-error-correcting RS codes which are capable of high-speed operation are presented. These techniques are designed to find the error locations and the error values directly from the syndrome, without having to use the iterative algorithm to find the error locator polynomial.
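To illustrate the direct, syndrome-only decoding idea, here is a hedged Python sketch for the simplest case: a single-byte-error-correcting RS code over GF(2^8) with code roots alpha and alpha^2. The paper's extended single- and double-error-correcting constructions are more elaborate, and the primitive polynomial chosen here (0x11d) is an assumption.

    # GF(2^8) log/antilog tables for primitive polynomial x^8+x^4+x^3+x^2+1 (0x11d).
    EXP, LOG = [0] * 510, [0] * 256
    x = 1
    for k in range(255):
        EXP[k] = x
        LOG[x] = k
        x <<= 1
        if x & 0x100:
            x ^= 0x11d
    for k in range(255, 510):
        EXP[k] = EXP[k - 255]

    def gmul(a, b):
        return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

    def syndrome(r, power):
        """Evaluate the received word r (highest-degree byte first) at alpha^power."""
        s = 0
        for c in r:
            s = gmul(s, EXP[power]) ^ c
        return s

    def correct_single(r):
        """Fix at most one byte error directly from syndromes S1, S2 (no iteration)."""
        s1, s2 = syndrome(r, 1), syndrome(r, 2)
        if s1 == 0 and s2 == 0:
            return r                                   # no error detected
        loc = (LOG[s2] - LOG[s1]) % 255                # error position exponent: alpha^loc
        val = EXP[(2 * LOG[s1] - LOG[s2]) % 255]       # error value: S1^2 / S2
        r = list(r)
        r[len(r) - 1 - loc] ^= val                     # flip the faulty byte
        return r

With no iterative error-locator step, this kind of direct computation maps naturally onto fast parallel hardware, which is the point of the techniques the paper presents.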
Decoding of DBEC-TBED Reed-Solomon codes. [Double-Byte-Error-Correcting, Triple-Byte-Error-Detecting
NASA Technical Reports Server (NTRS)
Deng, Robert H.; Costello, Daniel J., Jr.
1987-01-01
A problem in designing semiconductor memories is to provide some measure of error control without requiring excessive coding overhead or decoding time. In LSI and VLSI technology, memories are often organized on a multiple-bit (or byte) per chip basis. For example, some 256K-bit DRAMs are organized as 32K x 8-bit bytes. Byte-oriented codes such as Reed-Solomon (RS) codes can provide efficient low-overhead error control for such memories. However, the standard iterative algorithm for decoding RS codes is too slow for these applications. The paper presents a special decoding technique for double-byte-error-correcting, triple-byte-error-detecting RS codes which is capable of high-speed operation. This technique is designed to find the error locations and the error values directly from the syndrome, without having to use the iterative algorithm to find the error locator polynomial.
Control/structure interaction design methodology
NASA Technical Reports Server (NTRS)
Briggs, Hugh C.; Layman, William E.
1989-01-01
The Control Structure Interaction Program is a technology development program for spacecraft that exhibit interactions between the control system and structural dynamics. The program objectives include the development of new design concepts (such as active structures) and new tools (such as a combined structure and control optimization algorithm), and their verification in ground and possibly flight tests. The new CSI design methodology is centered around interdisciplinary engineers using new tools that closely integrate structures and controls. Verification is an important CSI theme, and analysts will be closely integrated with the CSI Test Bed laboratory. Components, concepts, tools, and algorithms will be developed and tested in the lab and in future Shuttle-based flight experiments. The design methodology is summarized in block diagrams depicting the evolution of a spacecraft design, together with descriptions of the analytical capabilities used in the process. The multiyear JPL CSI implementation plan is described along with the essentials of several new tools. A distributed network of computation servers and workstations was designed to provide a state-of-the-art development base for the CSI technologies.
Design for Verification: Using Design Patterns to Build Reliable Systems
NASA Technical Reports Server (NTRS)
Mehlitz, Peter C.; Penix, John; Koga, Dennis (Technical Monitor)
2003-01-01
Components have so far been used in commercial software development mainly to reduce time to market. While some effort has been spent on the formal aspects of components, most of this was done in the context of programming language or operating system framework integration. As a consequence, increased reliability of composed systems is mainly regarded as a side effect of more rigid testing of pre-fabricated components. In contrast, Design for Verification (D4V) puts the focus on component-specific property guarantees, which are used to design systems with high reliability requirements. D4V components are domain-specific design pattern instances with well-defined property guarantees and usage rules, which are suitable for automatic verification. The guaranteed properties are explicitly used to select components according to key system requirements. The D4V hypothesis is that the same general architecture and design principles that lead to good modularity, extensibility, and complexity/functionality ratios can be adapted to overcome some of the limitations of conventional reliability assurance measures, such as too large a state space or too many execution paths.
Formal specification and verification of Ada software
NASA Technical Reports Server (NTRS)
Hird, Geoffrey R.
1991-01-01
The use of formal methods in software development achieves levels of quality assurance unobtainable by other means. The Larch approach to specification is described, and the specification of avionics software designed to implement the logic of a flight control system is given as an example. Penelope, an Ada verification environment, is described. The Penelope user inputs mathematical definitions, Larch-style specifications, and Ada code, and performs machine-assisted proofs that the code obeys its specifications. As an example, the verification of a binary search function is considered. Emphasis is given to techniques for reusing a verification effort on modified code.
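The binary search example translates naturally into a contract style. The sketch below states the pre- and postconditions as runtime assertions in Python; it is only an executable analogue of the Larch/Penelope machine-checked proof, not the verified Ada code itself.

    def binary_search(a, key):
        """Return an index of key in sorted list a, or -1 if key is absent."""
        assert all(a[i] <= a[i + 1] for i in range(len(a) - 1)), "pre: a is sorted"
        lo, hi = 0, len(a) - 1
        while lo <= hi:                    # invariant: key can only lie within a[lo..hi]
            mid = (lo + hi) // 2
            if a[mid] < key:
                lo = mid + 1
            elif a[mid] > key:
                hi = mid - 1
            else:
                return mid                 # post: a[mid] == key
        assert key not in a                # post: key is absent when -1 is returned
        return -1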
Verification bias an underrecognized source of error in assessing the efficacy of medical imaging.
Petscavage, Jonelle M; Richardson, Michael L; Carr, Robert B
2011-03-01
Diagnostic tests are validated by comparison against a "gold standard" reference test. When the reference test is invasive or expensive, it may not be applied to all patients. This can result in biased estimates of the sensitivity and specificity of the diagnostic test. This type of bias is called "verification bias," and is a common problem in imaging research. The purpose of our study is to estimate the prevalence of verification bias in the recent radiology literature. All issues of the American Journal of Roentgenology (AJR), Academic Radiology, Radiology, and the European Journal of Radiology (EJR) between November 2006 and October 2009 were reviewed for original research articles mentioning sensitivity or specificity as endpoints. Articles were read to determine whether verification bias was present and searched for author recognition of verification bias in the design. Over these 3 years, the journals published 2969 original research articles. A total of 776 articles used sensitivity or specificity as an outcome. Of these, 211 articles demonstrated potential verification bias. The fraction of articles with potential bias was 36.4%, 23.4%, 29.5%, and 13.4% for AJR, Academic Radiology, Radiology, and EJR, respectively. The fraction of papers with potential bias in which the authors acknowledged the bias was 17.1%. Verification bias is a common and frequently unacknowledged source of error in efficacy studies of diagnostic imaging. The bias can often be eliminated by proper study design; when it cannot be eliminated, it should be estimated and acknowledged.
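A small Monte Carlo sketch makes the mechanism concrete: if test-positive patients are verified by the gold standard far more often than test-negative patients, the sensitivity computed on verified cases alone is inflated. All rates below are invented for illustration.

    import random

    random.seed(1)
    tp = fn = 0
    for _ in range(100_000):
        diseased = random.random() < 0.10                          # 10% prevalence
        test_pos = random.random() < (0.80 if diseased else 0.10)  # true Se = 0.80
        verified = random.random() < (0.95 if test_pos else 0.20)  # differential verification
        if diseased and verified:                                  # only verified cases counted
            tp += test_pos
            fn += not test_pos
    print(f"true sensitivity 0.80; verified-only estimate {tp / (tp + fn):.2f}")  # ~0.95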
NASA Astrophysics Data System (ADS)
Zamani, K.; Bombardelli, F. A.
2014-12-01
Verification of geophysics codes is imperative to avoid serious academic as well as practical consequences. When access to a given source code is not possible, the Method of Manufactured Solutions (MMS) cannot be employed in code verification. In contrast, employing the Method of Exact Solutions (MES) has several practical advantages. In this research, we first provide four new one-dimensional analytical solutions designed for code verification; these solutions are able to uncover particular imperfections of advection-diffusion-reaction (ADR) solvers, such as those involving nonlinear advection, diffusion or source terms, as well as non-constant-coefficient equations. After that, we provide a solution of Burgers' equation in a novel setup. The proposed solutions satisfy continuity of mass for the ambient flow, which is a crucial factor for coupled hydrodynamics-transport solvers. We then use the derived analytical solutions for code verification. To clarify gray-literature issues in the verification of transport codes, we designed a comprehensive test suite to uncover any imperfection in transport solvers via a hierarchical increase in the level of test complexity. The test suite includes hundreds of unit tests and system tests that check individual portions of the code. The tests start from a simple case of unidirectional advection, proceed to bidirectional advection and tidal flow, and build up to nonlinear cases. We design tests to check nonlinearity in velocity, dispersivity and reactions. The concealing effect of scales (Peclet and Damkohler numbers) on mesh-convergence studies, and appropriate remedies, are also discussed. For cases in which appropriate benchmarks for a mesh-convergence study are not available, we utilize symmetry. Auxiliary subroutines for automation of the test suite and report generation are provided. All in all, the test package is not only a robust tool for code verification but also provides comprehensive insight into the capabilities of ADR solvers. Such information is essential for any rigorous computational modeling of the ADR equation for surface/subsurface pollution transport. We also convey our experiences in finding several errors that were not detectable with routine verification techniques.
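As an example of the MES-style checks described above, the sketch below runs a mesh-convergence (order-of-accuracy) test of a first-order upwind scheme for pure advection against its exact translated-Gaussian solution; the scheme, domain, and parameters are illustrative stand-ins for the paper's ADR test cases.

    import numpy as np

    def upwind_error(nx, a=1.0, t_end=0.25):
        """L2 error of first-order upwind for u_t + a u_x = 0 on a periodic domain."""
        x = np.linspace(0.0, 1.0, nx, endpoint=False)
        dx = 1.0 / nx
        dt = 0.4 * dx / a                              # fixed CFL number of 0.4
        u = np.exp(-200.0 * (x - 0.25) ** 2)           # initial Gaussian pulse
        t = 0.0
        while t < t_end - 1e-12:
            step = min(dt, t_end - t)
            u -= a * step / dx * (u - np.roll(u, 1))   # periodic upwind difference
            t += step
        exact = np.exp(-200.0 * (x - 0.25 - a * t_end) ** 2)   # translated pulse
        return np.sqrt(np.mean((u - exact) ** 2))

    e_coarse, e_fine = upwind_error(200), upwind_error(400)
    print("observed order:", np.log(e_coarse / e_fine) / np.log(2.0))   # expect ~1

A persistent mismatch between the observed and formal orders on a smooth exact solution is exactly the kind of signal such a test suite uses to flag a coding error.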
Projected Impact of Compositional Verification on Current and Future Aviation Safety Risk
NASA Technical Reports Server (NTRS)
Reveley, Mary S.; Withrow, Colleen A.; Leone, Karen M.; Jones, Sharon M.
2014-01-01
The projected impact of compositional verification research conducted under the National Aeronautics and Space Administration's System-Wide Safety and Assurance Technologies project on aviation safety risk was assessed. Software verification and compositional verification are described. Traditional verification techniques have two major problems: testing occurs at the prototype stage, where error discovery can be quite costly, and the inability to test all potential interactions leaves some errors undetected until the system is used by the end user. Increasingly complex and nondeterministic aviation systems are becoming too large for these tools to check and verify. Compositional verification is a "divide and conquer" solution for addressing increasingly larger and more complex systems. A review of compositional verification research being conducted by academia, industry, and Government agencies is provided. Forty-four aviation safety risks in the Biennial NextGen Safety Issues Survey were identified that could be impacted by compositional verification; these were grouped into five categories: automation design; system complexity; software, flight control, or equipment failure or malfunction; new technology or operations; and verification and validation. One capability, one research action, five operational improvements, and 13 enablers within the Federal Aviation Administration Joint Planning and Development Office Integrated Work Plan that could be addressed by compositional verification were identified.
Integrated testing and verification system for research flight software
NASA Technical Reports Server (NTRS)
Taylor, R. N.
1979-01-01
The MUST (Multipurpose User-oriented Software Technology) program is being developed to cut the cost of producing research flight software through a system of software support tools. An integrated verification and testing capability was designed as part of MUST. Documentation, verification, and test options are provided, with special attention to real-time, multiprocessing issues. The needs of the entire software production cycle were considered, with effective management and reduced lifecycle costs as foremost goals.
Verification of Triple Modular Redundancy Insertion for Reliable and Trusted Systems
NASA Technical Reports Server (NTRS)
Berg, Melanie; LaBel, Kenneth
2016-01-01
If a system is required to be protected using triple modular redundancy (TMR), improper insertion can jeopardize the reliability and security of the system. Due to the complexity of the verification process and the complexity of digital designs, there are currently no available techniques that can provide complete and reliable confirmation of TMR insertion. We propose a method for TMR insertion verification that satisfies the process for reliable and trusted systems.
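The core property that TMR insertion must establish is easy to state as an exhaustive check. The Python sketch below verifies that a bitwise majority voter masks any single faulty replica; this is the behavioural obligation (though not the structural netlist analysis) that a TMR verification method must confirm.

    def vote(a, b, c):
        """Majority of three replicated signals."""
        return (a & b) | (b & c) | (a & c)

    for value in (0, 1):
        for faulty in range(3):              # corrupt one replica at a time
            replicas = [value] * 3
            replicas[faulty] ^= 1
            assert vote(*replicas) == value  # the single fault is masked
    print("single-fault masking holds for every case")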
Crewed Space Vehicle Battery Safety Requirements
NASA Technical Reports Server (NTRS)
Jeevarajan, Judith A.; Darcy, Eric C.
2014-01-01
This requirements document is applicable to all batteries on crewed spacecraft, including vehicle, payload, and crew equipment batteries. It defines the specific provisions required to design a battery that is safe for ground personnel and crew members to handle and/or operate during all applicable phases of crewed missions, safe for use in the enclosed environment of a crewed space vehicle, and safe for use in launch vehicles, as well as in unpressurized spaces adjacent to the habitable portion of a space vehicle. The required provisions encompass hazard controls, design evaluation, and verification. The extent of the hazard controls and verification required depends on the applicability and credibility of the hazard to the specific battery design and the applicable missions under review. Evaluation of the design and verification program results shall be completed prior to certification for flight and ground operations. This document is geared toward the designers of battery systems to be used in crewed vehicles, crew equipment, crew suits, or crewed-vehicle systems and payloads (or experiments), and it also applies to ground handling and testing of flight batteries. Specific design and verification requirements for a battery depend upon the battery chemistry, capacity, complexity, charging, environment, and application. The variety of battery chemistries available, combined with the variety of battery-powered applications, results in each application having its own specific, unique requirements. However, there are basic requirements common to all battery designs and applications, which are listed in section 4; section 5 describes hazards and controls and the associated requirements.
NASA Astrophysics Data System (ADS)
Magazzù, G.; Borgese, G.; Costantino, N.; Fanucci, L.; Incandela, J.; Saponara, S.
2013-02-01
In many research fields, such as high energy physics (HEP), astrophysics, nuclear medicine or space engineering with harsh operating conditions, the use of fast and flexible digital communication protocols is becoming more and more important. The availability of a smart, tested, top-down design flow for the design of a new protocol for the control/readout of front-end electronics is very useful. To this aim, and to reduce development time, costs and risks, this paper describes an innovative design/verification flow applied, as an example case study, to a new communication protocol called FF-LYNX. After a description of the main FF-LYNX features, the paper presents: the definition of a parametric SystemC-based Integrated Simulation Environment (ISE) for high-level protocol definition and validation; the definition of figures of merit to drive the design space exploration; the use of the ISE for early analysis of the achievable performance when adopting the new communication protocol and its interfaces in a new (or upgraded) physics experiment; the design of VHDL IP cores for the TX and RX protocol interfaces; their implementation on an FPGA-based emulator for functional verification; and, finally, the modification of the FPGA-based emulator for testing the ASIC chipset which implements the rad-tolerant protocol interfaces. For every step, significant results are shown to underline the usefulness of this design and verification approach, which can be applied to any new digital protocol development for smart detectors in physics experiments.
VLSI design of lossless frame recompression using multi-orientation prediction
NASA Astrophysics Data System (ADS)
Lee, Yu-Hsuan; You, Yi-Lun; Chen, Yi-Guo
2016-01-01
The pursuit of a high-end visual experience drives demand for higher display resolutions and higher frame rates. Hence, many powerful coding tools are aggregated in emerging video coding standards to improve coding efficiency. This also makes video coding standards suffer from two design challenges: heavy computation and tremendous memory bandwidth. The first issue can be properly solved by careful hardware architecture design with advanced semiconductor processes. Nevertheless, the second has become a critical design bottleneck for modern video coding systems. In this article, a lossless frame recompression technique using multi-orientation prediction is proposed to overcome this bottleneck. This work is realised as a silicon chip in a TSMC 0.18 µm CMOS process. Its encoding capability can reach full-HD (1920 × 1080)@48 fps. The chip power consumption is 17.31 mW@100 MHz. Core area and chip area are 0.83 × 0.83 mm2 and 1.20 × 1.20 mm2, respectively. Experimental results demonstrate that this work exhibits outstanding lossless compression ratios with competitive hardware performance.
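To convey the flavour of multi-orientation prediction, the sketch below predicts each pixel from its left, top, or top-left neighbour, selecting the orientation that best predicted the already-coded left neighbour so that a decoder can repeat the choice; this selection rule is a guess at the general idea, not the paper's actual predictor.

    import numpy as np

    def predict_residuals(img):
        """Causal multi-orientation prediction; returns residuals for entropy coding."""
        img = img.astype(np.int32)
        res = img.copy()
        for y in range(1, img.shape[0]):
            for x in range(1, img.shape[1]):
                cands = (img[y, x - 1], img[y - 1, x], img[y - 1, x - 1])  # L, T, TL
                # Score each orientation by how well it predicted the left neighbour,
                # using only pixels the decoder has already reconstructed.
                xl = max(x - 2, 0)
                ctx = (img[y, xl], img[y - 1, x - 1], img[y - 1, xl])
                best = int(np.argmin([abs(int(c) - int(img[y, x - 1])) for c in ctx]))
                res[y, x] = img[y, x] - cands[best]
        return res   # small residuals compress well; an inverse pass reconstructs exactly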
VLSI Implementation of Fault Tolerance Multiplier based on Reversible Logic Gate
NASA Astrophysics Data System (ADS)
Ahmad, Nabihah; Hakimi Mokhtar, Ahmad; Othman, Nurmiza binti; Fhong Soon, Chin; Rahman, Ab Al Hadi Ab
2017-08-01
The multiplier is one of the essential components in the digital world, used in digital signal processing, microprocessors, quantum computing and, widely, in arithmetic units. Due to the complexity of the multiplier, the tendency for errors is very high. This paper aimed to design a 2×2-bit fault-tolerant multiplier based on reversible logic gates with low power consumption and high performance. The design has been implemented in 90 nm Complementary Metal Oxide Semiconductor (CMOS) technology using Synopsys Electronic Design Automation (EDA) tools. The multiplier architecture is implemented using reversible logic gates. The fault-tolerant multiplier uses a combination of three reversible logic gates, the Double Feynman gate (F2G), the New Fault Tolerant (NFT) gate and the Islam Gate (IG), with an area of 160 μm × 420.3 μm (≈0.067 mm2). The design achieves a low power consumption of 122.85 μW and a propagation delay of 16.99 ns. The proposed fault-tolerant multiplier achieves low power consumption and high performance, and its fault-tolerance capabilities make it suitable for modern computing applications.
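The property underlying such designs is easy to check exhaustively. The sketch below confirms that the Double Feynman gate F2G, which maps (A, B, C) to (A, A XOR B, A XOR C), is a bijection (and in fact its own inverse); the NFT and IG gates admit the same style of check.

    from itertools import product

    def f2g(a, b, c):
        """Double Feynman gate: the control line fans out XOR copies."""
        return (a, a ^ b, a ^ c)

    inputs = list(product((0, 1), repeat=3))
    assert len({f2g(*bits) for bits in inputs}) == 8         # bijective => reversible
    assert all(f2g(*f2g(*bits)) == bits for bits in inputs)  # self-inverse
    print("F2G is reversible and self-inverse")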
Temporal coding in a silicon network of integrate-and-fire neurons.
Liu, Shih-Chii; Douglas, Rodney
2004-09-01
Spatio-temporal processing of spike trains by neuronal networks depends on a variety of mechanisms distributed across synapses, dendrites, and somata. In natural systems, the spike trains and the processing mechanisms cohere through their common physical instantiation. This coherence is lost when the natural system is encoded for simulation on a general purpose computer. By contrast, analog VLSI circuits are, like neurons, inherently related by their real-time physics, and so could provide a useful substrate for exploring neuronlike event-based processing. Here, we describe a hybrid analog-digital VLSI chip comprising a set of integrate-and-fire neurons and short-term dynamical synapses that can be configured into simple network architectures with some properties of neocortical neuronal circuits. We show that, despite considerable fabrication variance in the properties of individual neurons, the chip offers a viable substrate for exploring real-time spike-based processing in networks of neurons.
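For context, the computational element the chip realizes in silicon is the leaky integrate-and-fire model sketched below; the time constant, threshold, and drive current are arbitrary illustrative values, not measurements from the chip.

    def lif_spike_times(i_in, dt=1e-4, tau=0.02, v_th=1.0, v_reset=0.0, t_end=0.5):
        """Simulate one leaky integrate-and-fire neuron; return its spike times."""
        v, spikes = 0.0, []
        for step in range(int(t_end / dt)):
            v += dt / tau * (i_in - v)     # leaky integration toward the input drive
            if v >= v_th:                  # threshold crossing emits a spike
                spikes.append(step * dt)
                v = v_reset                # membrane potential resets after the spike
        return spikes

    print(len(lif_spike_times(1.5)), "spikes in 0.5 s at constant drive")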
Impact of VLSI/VHSIC on satellite on-board signal processing
NASA Astrophysics Data System (ADS)
Aanstoos, J. V.; Ruedger, W. H.; Snyder, W. E.; Kelly, W. L.
Forecast improvements in IC fabrication techniques, such as the use of X-ray lithography, are expected to yield submicron circuit feature sizes within the decade of the 1980s. As dimensions decrease, improvements in reliability, cost, speed, power consumption, and density will be realized, with a significant impact on the capabilities of onboard spacecraft signal processing functions. This will in turn increase the intelligence that may be deployed on spaceborne remote sensing platforms. Among the programs oriented toward such goals are the silicon-based Very High Speed Integrated Circuit (VHSIC) research sponsored by the U.S. Department of Defense, and efforts toward the development of GaAs devices which will compete with silicon VLSI technology for future applications. GaAs has an electron mobility five to six times that of silicon, and promises commensurate computation speed increases under low-field conditions.
High data rate Reed-Solomon encoding and decoding using VLSI technology
NASA Technical Reports Server (NTRS)
Miller, Warner; Morakis, James
1987-01-01
An implementation of a Reed-Solomon encoder and decoder is presented; the code is 16-symbol error correcting, with each symbol 8 bits. This Reed-Solomon (RS) code is an efficient error correcting code that the National Aeronautics and Space Administration (NASA) will use in future space communications missions. A Very Large Scale Integration (VLSI) implementation of the encoder and decoder accepts data rates up to 80 Mbps. A total of seven chips are needed for the decoder (four of the seven decoding chips are customized using 3-micron Complementary Metal Oxide Semiconductor (CMOS) technology) and one chip is required for the encoder. The decoder operates with the symbol clock as the system clock for the chip set. Approximately 1.65 billion Galois Field (GF) operations per second are achieved with the decoder chip set, and 640 MOPS are achieved with the encoder chip.
VLSI Technology for Cognitive Radio
NASA Astrophysics Data System (ADS)
VIJAYALAKSHMI, B.; SIDDAIAH, P.
2017-08-01
One of the most challenging tasks in cognitive radio is making the spectrum sensing scheme efficient enough to overcome the spectrum scarcity problem. The most popular and widely used spectrum sensing technique is the energy detection scheme, as it is very simple and does not require any prior information about the signal. We propose one such approach: an optimised spectrum sensing scheme with a reduced filter structure. The optimisation targets the area and power performance of the spectrum sensing hardware. The VLSI structure of the optimised flexible spectrum sensing scheme is simulated in Verilog using the XILINX ISE software. Our method achieves a 13% reduction in area and a 66% reduction in power consumption in comparison to the flexible spectrum sensing scheme. All results are tabulated and comparisons are made. Our model thus opens up a new scheme for optimised and effective spectrum sensing.
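The energy detection scheme referred to above reduces to comparing windowed signal energy against a noise-derived threshold, as in this sketch; the threshold factor and test signal are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    def channel_occupied(samples, noise_var=1.0, factor=1.5):
        """Declare the band occupied if mean sample energy exceeds the threshold."""
        return np.mean(np.abs(samples) ** 2) > factor * noise_var

    n = 1024
    noise = rng.normal(size=n)                                        # noise-only band
    signal = noise + np.sqrt(2.0) * np.sin(2 * np.pi * 0.1 * np.arange(n))  # occupied band
    print(channel_occupied(noise), channel_occupied(signal))          # expect: False True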
Biophysical synaptic dynamics in an analog VLSI network of Hodgkin-Huxley neurons.
Yu, Theodore; Cauwenberghs, Gert
2009-01-01
We study synaptic dynamics in a biophysical network of four coupled spiking neurons implemented in an analog VLSI silicon microchip. The four neurons implement a generalized Hodgkin-Huxley model with individually configurable rate-based kinetics of opening and closing of Na+ and K+ ion channels. The twelve synapses implement a rate-based first-order kinetic model of neurotransmitter and receptor dynamics, accounting for NMDA and non-NMDA type chemical synapses. The implemented models on the chip are fully configurable by 384 parameters accounting for conductances, reversal potentials, and pre/post-synaptic voltage-dependence of the channel kinetics. We describe the models and present experimental results from the chip characterizing single neuron dynamics, single synapse dynamics, and multi-neuron network dynamics showing phase-locking behavior as a function of synaptic coupling strength. The 3mm x 3mm microchip consumes 1.29 mW power making it promising for applications including neuromorphic modeling and neural prostheses.
Modeling selective attention using a neuromorphic analog VLSI device.
Indiveri, G
2000-12-01
Attentional mechanisms are required to overcome the problem of flooding a limited processing capacity system with information. They are present in biological sensory systems and can be a useful engineering tool for artificial visual systems. In this article we present a hardware model of a selective attention mechanism implemented on a very large-scale integration (VLSI) chip, using analog neuromorphic circuits. The chip exploits a spike-based representation to receive, process, and transmit signals. It can be used as a transceiver module for building multichip neuromorphic vision systems. We describe the circuits that carry out the main processing stages of the selective attention mechanism and provide experimental data for each circuit. We demonstrate the expected behavior of the model at the system level by stimulating the chip with both artificially generated control signals and signals obtained from a saliency map, computed from an image containing several salient features.
Mixed-mode VLSI optic flow sensors for in-flight control of a micro air vehicle
NASA Astrophysics Data System (ADS)
Barrows, Geoffrey L.; Neely, C.
2000-11-01
NRL is developing compact optic flow sensors for use in a variety of small-scale navigation and collision avoidance tasks. These sensors are being developed for use in micro air vehicles (MAVs), which are autonomous aircraft whose maximum dimension is on the order of 15 cm. To achieve desired weight specifications of 1-2 grams, mixed-signal VLSI circuitry is being used to develop compact focal plane sensors that directly compute optic flow. As an interim proof of principle, we have constructed a sensor comprising a focal plane sensor head with on-chip processing and a back-end PIC microcontroller. This interim sensor weighs approximately 25 grams and is able to measure optic flow with real-world and low-contrast textures. Variations of this sensor have been used to control the flight of a glider in real time to avoid collisions with walls.
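The kind of measurement such focal-plane sensors compute can be summarized by the one-dimensional gradient method below, where image velocity is estimated from the ratio of temporal to spatial intensity gradients; this is a generic formulation, not NRL's mixed-signal circuit.

    import numpy as np

    def flow_1d(frame0, frame1, dt=1.0, dx=1.0):
        """Estimate 1-D image velocity from two frames via v = -It / Ix."""
        ix = np.gradient(frame0, dx)              # spatial intensity gradient
        it = (frame1 - frame0) / dt               # temporal intensity gradient
        ok = np.abs(ix) > 1e-3                    # skip low-contrast pixels
        return float(np.mean(-it[ok] / ix[ok]))

    x = np.linspace(0.0, 2.0 * np.pi, 200)
    print(flow_1d(np.sin(x), np.sin(x - 0.05), dx=x[1] - x[0]))   # ~0.05 units per frame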
Study of molybdenum-aluminum interdiffusion kinetics and contact resistance for VLSI applications
NASA Astrophysics Data System (ADS)
Singh, R. N.; Brown, D. M.; Kim, M. J.; Smith, G. A.
1985-12-01
Interdiffusion barrier characteristics of molybdenum thin films with aluminum-1% Si are studied between 733 and 763 K via sheet and contact resistance measurements, Rutherford backscattering spectrometry, secondary ion mass spectrometry, and x-ray diffraction analysis. The results indicate that thermal annealing of Mo/Al-1% Si thin film couples leads to MoAl12 compound formation, initially as a nonplanar front, but extensive annealing results in complete transformation of the Al-1% Si to MoAl12 and a significant increase in contact resistance. The interdiffusion kinetics is diffusion controlled and shows parabolic time dependence, incubation periods, and an extremely high activation energy of 5.9 eV. The incubation periods and high activation energy are explained by the presence of silicon precipitates at the Mo/Al-1% Si interface. Implications of these observations for VLSI device characteristics are discussed, and a safe time-temperature processing regime is proposed.
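In standard notation, the reported diffusion-controlled behaviour corresponds to parabolic layer growth with an Arrhenius rate constant (written here in the usual textbook form; only the value Q = 5.9 eV comes from the paper):

    x^2 = k\,(t - t_0), \qquad k = k_0 \exp\!\left(-\frac{Q}{k_B T}\right)

where x is the MoAl12 layer thickness, t_0 the incubation period, Q the activation energy, and T the annealing temperature.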
37 CFR 261.7 - Verification of royalty payments.
Code of Federal Regulations, 2010 CFR
2010-07-01
... may conduct a single audit of a Designated Agent upon reasonable notice and during reasonable business... COPYRIGHT ARBITRATION ROYALTY PANEL RULES AND PROCEDURES RATES AND TERMS FOR ELIGIBLE NONSUBSCRIPTION.... This section prescribes general rules pertaining to the verification by any Copyright Owner or...
20 CFR 632.77 - Participant eligibility determination.
Code of Federal Regulations, 2011 CFR
2011-04-01
... NATIVE AMERICAN EMPLOYMENT AND TRAINING PROGRAMS Program Design and Management § 632.77 Participant... maintaining a system which reasonably ensures an accurate determination and subsequent verification of... information is subject to verification and that falsification of the application shall be grounds for the...
20 CFR 632.77 - Participant eligibility determination.
Code of Federal Regulations, 2010 CFR
2010-04-01
... NATIVE AMERICAN EMPLOYMENT AND TRAINING PROGRAMS Program Design and Management § 632.77 Participant... maintaining a system which reasonably ensures an accurate determination and subsequent verification of... information is subject to verification and that falsification of the application shall be grounds for the...
Model-based engineering for medical-device software.
Ray, Arnab; Jetley, Raoul; Jones, Paul L; Zhang, Yi
2010-01-01
This paper demonstrates the benefits of adopting model-based design techniques for engineering medical device software. Using a patient-controlled analgesic (PCA) infusion pump as a candidate medical device, the authors show how using models to capture design information allows for (i) fast and efficient construction of executable device prototypes, (ii) creation of a standard, reusable baseline software architecture for a particular device family, (iii) formal verification of the design against safety requirements, and (iv) creation of a safety framework that reduces verification costs for future versions of the device software.
High-speed and low-power repeater for VLSI interconnects
NASA Astrophysics Data System (ADS)
Karthikeyan, A.; Mallick, P. S.
2017-10-01
This paper proposes a repeater for boosting the speed of interconnects with low power dissipation. We have designed and implemented it at the 45 and 32 nm technology nodes. Delay and power dissipation are analyzed for various voltage levels at these technology nodes using Spice simulations. A significant reduction in delay and power dissipation is observed compared to a conventional repeater. The results show that the proposed high-speed, low-power repeater has reduced delay for higher load capacitances. The proposed repeater is also compared with an LPTG CMOS repeater and again shows reduced delay. The proposed repeater is suitable for high-speed global interconnects and has the capacity to drive large loads.
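For orientation, the classical first-order repeater insertion result (due to Bakoglu; not the circuit proposed in this paper) gives the delay-optimal number and size of repeaters for a line of total resistance R_int and capacitance C_int driven by stages of output resistance R_0 and input capacitance C_0:

    k_{opt} = \sqrt{\frac{0.4\,R_{int}C_{int}}{0.7\,R_0 C_0}}, \qquad
    h_{opt} = \sqrt{\frac{R_0\,C_{int}}{R_{int}\,C_0}}

Designs such as the one proposed aim to beat this baseline in power while retaining its delay scaling.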
Programmable synaptic devices for electronic neural nets
NASA Technical Reports Server (NTRS)
Moopenn, A.; Thakoor, A. P.
1990-01-01
The architecture, design, and operational characteristics of custom VLSI and thin film synaptic devices are described. The devices include CMOS-based synaptic chips containing 1024 reprogrammable synapses with a 6-bit dynamic range, and nonvolatile, write-once, binary synaptic arrays based on memory switching in hydrogenated amorphous silicon films. Their suitability for embodiment of fully parallel and analog neural hardware is discussed. Specifically, a neural network solution to an assignment problem of combinatorial global optimization, implemented in fully parallel hardware using the synaptic chips, is described. The network's ability to provide optimal and near optimal solutions over a time scale of few neuron time constants has been demonstrated and suggests a speedup improvement of several orders of magnitude over conventional search methods.
Design verification test matrix development for the STME thrust chamber assembly
NASA Technical Reports Server (NTRS)
Dexter, Carol E.; Elam, Sandra K.; Sparks, David L.
1993-01-01
This report presents the results of the test matrix development for design verification at the component level for the National Launch System (NLS) space transportation main engine (STME) thrust chamber assembly (TCA) components including the following: injector, combustion chamber, and nozzle. A systematic approach was used in the development of the minimum recommended TCA matrix resulting in a minimum number of hardware units and a minimum number of hot fire tests.
Design and verification of distributed logic controllers with application of Petri nets
NASA Astrophysics Data System (ADS)
Wiśniewski, Remigiusz; Grobelna, Iwona; Grobelny, Michał; Wiśniewska, Monika
2015-12-01
The paper deals with the design and verification of distributed logic controllers. The control system is initially modelled with Petri nets and formally verified against structural and behavioral properties with the application of temporal logic and the model checking technique. After that, it is decomposed into separate sequential automata that work concurrently. Each of them is re-verified, and if the validation is successful, the system can finally be implemented.
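The modelling substrate is easy to make concrete: a Petri net is a marking (tokens on places) plus transitions that fire when all of their input places are marked. The toy net below is illustrative, not the paper's controller model.

    class PetriNet:
        def __init__(self, transitions):
            self.transitions = transitions          # name -> (input places, output places)

        def enabled(self, marking, name):
            ins, _ = self.transitions[name]
            return all(marking.get(p, 0) > 0 for p in ins)

        def fire(self, marking, name):
            assert self.enabled(marking, name), f"{name} is not enabled"
            ins, outs = self.transitions[name]
            m = dict(marking)
            for p in ins:
                m[p] -= 1                           # consume input tokens
            for p in outs:
                m[p] = m.get(p, 0) + 1              # produce output tokens
            return m

    net = PetriNet({"start": (["idle"], ["busy"]), "finish": (["busy"], ["idle"])})
    m = net.fire({"idle": 1}, "start")
    print(m, net.enabled(m, "finish"))   # {'idle': 0, 'busy': 1} True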
Formal Verification of Complex Systems based on SysML Functional Requirements
2014-12-23
Mehrpouyan, Hoda; Tumer, Irem Y.; Hoyle, Chris; Giannakopoulou, Dimitra
The work addresses the formal verification of functional requirements for the design of complex engineered systems. The proposed approach combines a SysML modeling approach, used to document and structure safety requirements, with methods and tools that support the integration of safety into the design solution.
NEXT Thruster Component Verification Testing
NASA Technical Reports Server (NTRS)
Pinero, Luis R.; Sovey, James S.
2007-01-01
Component testing is a critical part of thruster life validation activities under NASA's Evolutionary Xenon Thruster (NEXT) project. The high voltage propellant isolators were selected for design verification testing. Even though they are based on a heritage design, design changes were made because the isolators will be operated under different environmental conditions, including temperature, voltage, and pressure. The life test of two NEXT isolators was therefore initiated and has accumulated more than 10,000 hr of operation. Measurements to date indicate only a negligibly small increase in leakage current. The cathode heaters were also selected for verification testing. The technology to fabricate these heaters, developed for the International Space Station plasma contactor hollow cathode assembly, was transferred to Aerojet for the fabrication of the NEXT prototype model ion thrusters. Testing the contractor-fabricated heaters is necessary to validate fabrication processes for high reliability heaters. This paper documents the status of the propellant isolator and cathode heater tests.
Kant, Nasir Ali; Dar, Mohamad Rafiq; Khanday, Farooq Ahmad
2015-01-01
The output of every neuron in a neural network is specified by the employed activation function (AF), which therefore forms the heart of the network. As far as the design of artificial neural networks (ANNs) is concerned, a hardware approach is preferred over a software one because it promises full utilization of the application potential of ANNs. Therefore, besides some arithmetic blocks, designing the AF in hardware is the most important task in designing an ANN. While attempting to design the AF in hardware, the designs should be compatible with modern Very Large Scale Integration (VLSI) design techniques. In this regard, the implemented designs should be realised only in Metal Oxide Semiconductor (MOS) technology in order to be compatible with digital designs, should provide an electronic tunability feature, and should be able to operate at ultra-low voltage. Companding is one of the promising circuit design techniques for achieving these goals. In this paper, a 0.5 V design of Liao's AF using the sinh-domain technique is introduced. Furthermore, the function is tested by implementing an inertial neuron model. The performance of the AF and the inertial neuron model has been evaluated through simulation results, using the PSPICE software with MOS transistor models provided by the 0.18-μm Taiwan Semiconductor Manufacturing Company (TSMC) CMOS process.
Formal System Verification for Trustworthy Embedded Systems
2011-04-19
We had previously achieved code-level formal verification of the seL4 microkernel [3]. In the present project, over 12 months with 0.6 FTE, we designed and implemented a secure network access device (SAC) on top of the verified seL4 microkernel. The device allows a trusted front... [3] ... Engelhardt, Rafal Kolanski, Michael Norrish, Thomas Sewell, Harvey Tuch, and Simon Winwood. seL4: Formal verification of an OS kernel. CACM, 53(6):107.
Prakash, Varuna; Koczmara, Christine; Savage, Pamela; Trip, Katherine; Stewart, Janice; McCurdie, Tara; Cafazzo, Joseph A; Trbovich, Patricia
2014-11-01
Nurses are frequently interrupted during medication verification and administration; however, few interventions exist to mitigate resulting errors, and the impact of these interventions on medication safety is poorly understood. The study objectives were to (A) assess the effects of interruptions on medication verification and administration errors, and (B) design and test the effectiveness of targeted interventions at reducing these errors. The study focused on medication verification and administration in an ambulatory chemotherapy setting. A simulation laboratory experiment was conducted to determine interruption-related error rates during specific medication verification and administration tasks. Interventions to reduce these errors were developed through a participatory design process, and their error reduction effectiveness was assessed through a postintervention experiment. Significantly more nurses committed medication errors when interrupted than when uninterrupted. With use of interventions when interrupted, significantly fewer nurses made errors in verifying medication volumes contained in syringes (16/18; 89% preintervention error rate vs 11/19; 58% postintervention error rate; p=0.038; Fisher's exact test) and programmed in ambulatory pumps (17/18; 94% preintervention vs 11/19; 58% postintervention; p=0.012). The rate of error commission significantly decreased with use of interventions when interrupted during intravenous push (16/18; 89% preintervention vs 6/19; 32% postintervention; p=0.017) and pump programming (7/18; 39% preintervention vs 1/19; 5% postintervention; p=0.017). No statistically significant differences were observed for other medication verification tasks. Interruptions can lead to medication verification and administration errors. Interventions were highly effective at reducing unanticipated errors of commission in medication administration tasks, but showed mixed effectiveness at reducing predictable errors of detection in medication verification tasks. These findings can be generalised and adapted to mitigate interruption-related errors in other settings where medication verification and administration are required.
Lystrom, David J.
1972-01-01
Various methods of verifying real-time streamflow data are outlined in part II. Relatively large errors (those greater than 20-30 percent) can be detected readily by use of well-designed verification programs for a digital computer, and smaller errors can be detected only by discharge measurements and field observations. The capability to substitute a simulated discharge value for missing or erroneous data is incorporated in some of the verification routines described. The routines represent concepts ranging from basic statistical comparisons to complex watershed modeling and provide a selection from which real-time data users can choose a suitable level of verification.
Definition of ground test for Large Space Structure (LSS) control verification
NASA Technical Reports Server (NTRS)
Waites, H. B.; Doane, G. B., III; Tollison, D. K.
1984-01-01
An overview for the definition of a ground test for the verification of Large Space Structure (LSS) control is given. The definition contains information on the description of the LSS ground verification experiment, the project management scheme, the design, development, fabrication and checkout of the subsystems, the systems engineering and integration, the hardware subsystems, the software, and a summary which includes future LSS ground test plans. Upon completion of these items, NASA/Marshall Space Flight Center will have an LSS ground test facility which will provide sufficient data on dynamics and control verification of LSS so that LSS flight system operations can be reasonably ensured.
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Divito, Ben L.; Holloway, C. Michael
1994-01-01
In this paper the design and formal verification of the lower levels of the Reliable Computing Platform (RCP), a fault-tolerant computing system for digital flight control applications, are presented. The RCP uses NMR-style redundancy to mask faults and internal majority voting to flush the effects of transient faults. Two new layers of the RCP hierarchy are introduced: the Minimal Voting refinement (DA_minv) of the Distributed Asynchronous (DA) model and the Local Executive (LE) Model. Both the DA_minv model and the LE model are specified formally and have been verified using the Ehdm verification system. All specifications and proofs are available electronically via the Internet using anonymous FTP or World Wide Web (WWW) access.
The U.S. Environmental Protection Agency (EPA) has created the Environmental Technology Verification Program (ETV) to facilitate the deployment of innovative or improved environmental technologies and to design efficient processes for conducting performance tests of innovative technologies.
Validation (not just verification) of Deep Space Missions
NASA Technical Reports Server (NTRS)
Duren, Riley M.
2006-01-01
Verification & Validation (V&V) is a widely recognized and critical systems engineering function. However, the often-used definition 'Verification proves the design is right; validation proves it is the right design' is rather vague. And while verification is a reasonably well standardized systems engineering process, validation is a far more abstract concept, and the rigor and scope applied to it vary widely between organizations and individuals. This is reflected in the findings of recent Mishap Reports for several NASA missions, in which shortfalls in validation (not just verification) were cited as root or contributing factors in catastrophic mission loss. Furthermore, although there is strong agreement in the community that test is the preferred method for V&V, many people equate 'V&V' with 'Test', such that analysis and modeling aren't given comparable attention. Another strong motivator is the realization that the rapid growth in complexity of deep-space missions (particularly planetary landers and space observatories, given their inherent unknowns) is placing greater demands on systems engineers to 'get it right' with validation.
NASA Astrophysics Data System (ADS)
Zamani, K.; Bombardelli, F. A.
2013-12-01
The ADR equation describes many physical phenomena of interest in the field of water quality in natural streams and groundwater. In many cases, such as density-driven flow, multiphase reactive transport, and sediment transport, one or more terms in the ADR equation may become nonlinear. For that reason, numerical tools are the only practical choice for solving these PDEs. All numerical solvers developed for the transport equation need to undergo a code verification procedure before they are put into practice. Code verification is a mathematical activity that uncovers failures and checks for the rigorous discretization of PDEs and the implementation of initial/boundary conditions. In the computational PDE context, verification is not a well-defined procedure with a clear path. Thus, verification tests should be designed and implemented with in-depth knowledge of the numerical algorithms and the physics of the phenomena, as well as the mathematical behavior of the solution. Even test results need to be mathematically analyzed to distinguish between an inherent limitation of an algorithm and a coding error. Therefore, it is well known that code verification remains something of an art, in which innovative methods and case-based tricks are very common. This study presents the full verification of a general transport code. To that end, a complete test suite was designed to probe the ADR solver comprehensively and discover all possible imperfections. In this study we convey our experiences in finding several errors which were not detectable with routine verification techniques. We developed a test suite including hundreds of unit tests and system tests. The tests increase gradually in complexity, starting from simple cases and building to the most sophisticated level. Appropriate verification metrics are defined for the required capabilities of the solver as follows: mass conservation, convergence order, capability in handling stiff problems, nonnegative concentration, shape preservation, and absence of spurious wiggles. Thereby, we provide objective, quantitative values as opposed to subjective qualitative descriptions such as 'weak' or 'satisfactory' agreement with those metrics. Testing starts from a simple case of unidirectional advection, proceeds to bidirectional advection and tidal flow, and builds up to nonlinear cases. We design tests to check nonlinearity in velocity, dispersivity, and reactions. For all of the mentioned cases we conduct mesh convergence tests, which compare the observed order of accuracy against the formal order of accuracy of the discretization. The concealing effect of scales (Peclet and Damkohler numbers) on the mesh convergence study, and appropriate remedies, are also discussed. For cases in which appropriate benchmarks for a mesh convergence study are not available, we utilize symmetry, complete Richardson extrapolation, and the method of false injection to uncover bugs. Detailed discussions of the capabilities of the mentioned code verification techniques are given. Auxiliary subroutines for automation of the test suite and report generation were designed. All in all, the test package is not only a robust tool for code verification but also provides comprehensive insight into the capabilities of ADR solvers. Such information is essential for any rigorous computational modeling of the ADR equation for surface/subsurface pollution transport.
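The mesh-convergence metric used throughout such tests is the observed order of accuracy, computed from errors on two grids of spacing h and h/r and compared against the formal order p of the discretization:

    p_{obs} = \frac{\ln\!\left(e(h)/e(h/r)\right)}{\ln r} \approx p

A persistent gap between p_obs and p on smooth benchmark solutions is the signature such a suite uses to separate coding errors from inherent algorithm limitations.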
NASA Astrophysics Data System (ADS)
Mizuno, Hiroyuki
This lecture was delivered at the 15th anniversary of the JICST Chugoku Branch Office. Recent progress in VLSI technologies will make it possible to simulate some functions of human beings, and the results will prepare the next innovation for the coming new century.
Smart vision chips: An overview
NASA Technical Reports Server (NTRS)
Koch, Christof
1994-01-01
This viewgraph presentation describes four working analog VLSI vision chips: (1) a time-derivative retina, (2) a zero-crossing chip, (3) a resistive fuse, and (4) a figure-ground chip; work in progress on computing motion and neuromorphic systems; and conceptual and practical lessons learned.
Validation of a SysML based design for wireless sensor networks
NASA Astrophysics Data System (ADS)
Berrachedi, Amel; Rahim, Messaoud; Ioualalen, Malika; Hammad, Ahmed
2017-07-01
When developing complex systems, verification of the system design is one of the main challenges. Wireless Sensor Networks (WSNs) are examples of such systems. We address the problem of how WSNs must be designed to fulfil the system requirements. Using the SysML language, we propose a Model Based System Engineering (MBSE) specification and verification methodology for designing WSNs. The methodology uses SysML to describe the WSN requirements, structure, and behaviour; it then translates the SysML elements into an analytic model, specifically a Deterministic Stochastic Petri Net. The proposed approach makes it possible to design WSNs and to study their behaviour and energy performance.
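As a rough illustration of the analysis target, the sketch below simulates a single sensor node as a stochastic state machine with exponential sojourn times to estimate mean power draw. It is a deliberate simplification of a Deterministic Stochastic Petri Net (which also supports deterministic and immediate transitions), and all states, rates, and power figures are invented:

```python
import random

# Minimal sketch (not the authors' actual DSPN): a sensor node modelled as a
# stochastic state machine, used to estimate mean power draw over a horizon.
STATES = {
    # state: (mean sojourn time [s], power draw [mW], next state)
    "sleep":    (10.0, 0.05, "sense"),
    "sense":    (0.5,  5.0,  "transmit"),
    "transmit": (0.1,  60.0, "sleep"),
}

def simulate(horizon=3600.0, seed=42):
    rng = random.Random(seed)
    t, state, energy = 0.0, "sleep", 0.0   # energy in mJ (mW * s)
    while t < horizon:
        mean_dt, power, nxt = STATES[state]
        dt = rng.expovariate(1.0 / mean_dt)   # exponential sojourn time
        energy += power * min(dt, horizon - t)
        t += dt
        state = nxt
    return energy / horizon   # mean power in mW

print(f"estimated mean power: {simulate():.3f} mW")
```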
Design of the software development and verification system (SWDVS) for shuttle NASA study task 35
NASA Technical Reports Server (NTRS)
Drane, L. W.; Mccoy, B. J.; Silver, L. W.
1973-01-01
An overview of the Software Development and Verification System (SWDVS) for the space shuttle is presented. The design considerations, goals, assumptions, and major features of the design are examined. A scenario that shows three persons involved in flight software development using the SWDVS in response to a program change request is developed. The SWDVS is described from the standpoint of different groups of people with different responsibilities in the shuttle program to show the functional requirements that influenced the SWDVS design. The software elements of the SWDVS that satisfy the requirements of the different groups are identified.
Precision segmented reflector, figure verification sensor
NASA Technical Reports Server (NTRS)
Manhart, Paul K.; Macenka, Steve A.
1989-01-01
The Precision Segmented Reflector (PSR) program currently under way at the Jet Propulsion Laboratory is a test bed and technology demonstration program designed to develop and study the structural and material technologies required for lightweight, precision segmented reflectors. A Figure Verification Sensor (FVS) designed to monitor the active control system of the segments is described, a best-fit surface is defined, and the image or wavefront quality of the assembled array of reflecting panels is assessed.
Verification testing of the Hydro International Up-Flo™ Filter with one filter module and CPZ Mix™ filter media was conducted at the Penn State Harrisburg Environmental Engineering Laboratory in Middletown, Pennsylvania. The Up-Flo™ Filter is designed as a passive, modular filtr...
Skates, Steven J.; Gillette, Michael A.; LaBaer, Joshua; Carr, Steven A.; Anderson, N. Leigh; Liebler, Daniel C.; Ransohoff, David; Rifai, Nader; Kondratovich, Marina; Težak, Živana; Mansfield, Elizabeth; Oberg, Ann L.; Wright, Ian; Barnes, Grady; Gail, Mitchell; Mesri, Mehdi; Kinsinger, Christopher R.; Rodriguez, Henry; Boja, Emily S.
2014-01-01
Protein biomarkers are needed to deepen our understanding of cancer biology and to improve our ability to diagnose, monitor and treat cancers. Important analytical and clinical hurdles must be overcome to allow the most promising protein biomarker candidates to advance into clinical validation studies. Although contemporary proteomics technologies support the measurement of large numbers of proteins in individual clinical specimens, sample throughput remains comparatively low. This problem is amplified in typical clinical proteomics research studies, which routinely suffer from a lack of proper experimental design, resulting in analysis of too few biospecimens to achieve adequate statistical power at each stage of a biomarker pipeline. To address this critical shortcoming, a joint workshop was held by the National Cancer Institute (NCI), National Heart, Lung and Blood Institute (NHLBI), and American Association for Clinical Chemistry (AACC), with participation from the U.S. Food and Drug Administration (FDA). An important output from the workshop was a statistical framework for the design of biomarker discovery and verification studies. Herein, we describe the use of quantitative clinical judgments to set statistical criteria for clinical relevance, and the development of an approach to calculate biospecimen sample size for proteomic studies in discovery and verification stages prior to clinical validation stage. This represents a first step towards building a consensus on quantitative criteria for statistical design of proteomics biomarker discovery and verification research. PMID:24063748
Practicing universal design to actual hand tool design process.
Lin, Kai-Chieh; Wu, Chih-Fu
2015-09-01
UD evaluation principles are difficult to implement in product design. This study proposes a methodology for implementing UD in the design process through user participation. The original UD principles and user experience are used to develop the evaluation items, and differences among product types are taken into account. Factor analysis and Quantification Theory Type I were used to eliminate evaluation items judged inappropriate and to examine the relationship between the evaluation items and product design factors. Product design specifications were then established for verification. The results showed that converting user evaluations into crucial design verification factors, using a generalized evaluation scale based on product attributes, and applying those design factors in product design can improve users' UD evaluations. The design process of this study is expected to contribute to user-centered UD application. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.
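Quantification Theory Type I is, in essence, least-squares regression on dummy-coded categorical predictors; a minimal sketch with invented factors and scores shows the kind of relationship between design factors and user evaluations the study examines:

```python
import numpy as np

# Minimal sketch of Quantification Theory Type I: ordinary least squares on
# dummy-coded categorical design factors. Data and factor levels are invented.
# Rows: products; factors: grip type (A/B), surface (smooth/textured).
X = np.array([
    # intercept, grip=B, surface=textured
    [1, 0, 0],
    [1, 1, 0],
    [1, 0, 1],
    [1, 1, 1],
    [1, 1, 0],
])
y = np.array([3.2, 4.1, 3.8, 4.9, 4.0])   # mean user UD evaluation scores

coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(dict(zip(["baseline", "grip B effect", "textured effect"], coef.round(2))))
```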
Shuttle payload interface verification equipment study. Volume 2: Technical document, part 1
NASA Technical Reports Server (NTRS)
1976-01-01
The technical analysis performed during the shuttle payload interface verification equipment study is reported. It describes: (1) the background and intent of the study; (2) the study approach and philosophy, covering all facets of shuttle payload/cargo integration; (3) shuttle payload integration requirements; (4) the preliminary design of the horizontal IVE; (5) the vertical IVE concept; and (6) IVE program development plans, schedule, and cost. Also included is a payload integration analysis task to identify potential uses in addition to payload interface verification.
Formal Methods for Life-Critical Software
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Johnson, Sally C.
1993-01-01
The use of computer software in life-critical applications, such as for civil air transports, demands the use of rigorous formal mathematical verification procedures. This paper demonstrates how to apply formal methods to the development and verification of software by leading the reader step-by-step through requirements analysis, design, implementation, and verification of an electronic phone book application. The current maturity and limitations of formal methods tools and techniques are then discussed, and a number of examples of the successful use of formal methods by industry are cited.
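The paper's development is carried out in formal mathematical logic rather than code, but the flavor of checking an implementation against its requirements can be sketched with executable assertions (a hypothetical phone book, not the paper's actual development):

```python
# Hypothetical sketch: two phone-book requirements ("adding an entry makes it
# retrievable" and "deleting an entry removes it") rendered as runtime
# assertions. The paper itself proves such properties in formal logic.
class PhoneBook:
    def __init__(self):
        self._entries = {}

    def add(self, name, number):
        self._entries[name] = number

    def delete(self, name):
        self._entries.pop(name, None)

    def lookup(self, name):
        return self._entries.get(name)

pb = PhoneBook()
pb.add("alice", "555-0100")
assert pb.lookup("alice") == "555-0100"   # requirement 1 holds on this trace
pb.delete("alice")
assert pb.lookup("alice") is None          # requirement 2 holds on this trace
print("both requirements hold on this trace")
```

Runtime assertions check only the traces exercised; the point of the formal approach described above is to prove the properties for all possible traces.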
Applying Monte Carlo Simulation to Launch Vehicle Design and Requirements Verification
NASA Technical Reports Server (NTRS)
Hanson, John M.; Beard, Bernard B.
2010-01-01
This paper is focused on applying Monte Carlo simulation to probabilistic launch vehicle design and requirements verification. The approaches developed in this paper can be applied to other complex design efforts as well. Typically the verification must show that requirement "x" is met for at least "y" % of cases, with, say, 10% consumer risk or 90% confidence. Two particular aspects of making these runs for requirements verification will be explored in this paper. First, there are several types of uncertainties that should be handled in different ways, depending on when they become known (or not). The paper describes how to handle different types of uncertainties and how to develop vehicle models that can be used to examine their characteristics. This includes items that are not known exactly during the design phase but that will be known for each assembled vehicle (can be used to determine the payload capability and overall behavior of that vehicle), other items that become known before or on flight day (can be used for flight day trajectory design and go/no go decision), and items that remain unknown on flight day. Second, this paper explains a method (order statistics) for determining whether certain probabilistic requirements are met or not and enables the user to determine how many Monte Carlo samples are required. Order statistics is not new, but may not be known in general to the GN&C community. The methods also apply to determining the design values of parameters of interest in driving the vehicle design. The paper briefly discusses when it is desirable to fit a distribution to the experimental Monte Carlo results rather than using order statistics.
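The zero-failure case of the order-statistics argument has a closed form (N >= ln(alpha) / ln(p)), and the general case is a one-sided binomial computation. The sketch below (our own, not the paper's code) finds the smallest number of Monte Carlo runs needed to show a requirement is met for at least a fraction p of cases at a given confidence, allowing a chosen number of observed failures:

```python
from math import comb

def min_samples(p_req=0.9, confidence=0.9, max_failures=0):
    """Smallest Monte Carlo sample size N such that observing at most
    `max_failures` failures demonstrates P(requirement met) >= p_req at the
    given confidence (one-sided binomial argument). Illustrative sketch."""
    alpha = 1.0 - confidence
    q = 1.0 - p_req          # boundary-case failure probability
    N = max_failures + 1
    while True:
        # P(<= max_failures failures in N trials | failure prob q)
        tail = sum(comb(N, i) * q**i * (1 - q)**(N - i)
                   for i in range(max_failures + 1))
        if tail <= alpha:
            return N
        N += 1

print(min_samples(0.9, 0.9, 0))  # 22: the classic 90%/90% zero-failure result
print(min_samples(0.9, 0.9, 3))  # more runs are required if 3 failures are tolerated
```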
Stacked silicide/silicon mid- to long-wavelength infrared detector
NASA Technical Reports Server (NTRS)
Maserjian, Joseph (Inventor)
1990-01-01
The use of stacked Schottky barriers (16) with epitaxially grown thin silicides (10), combined with selective doping (22) of the barriers, provides high-quantum-efficiency infrared detectors (30) at longer wavelengths that are compatible with existing silicon VLSI technology.
Predicted and tested performance of durable TPS
NASA Technical Reports Server (NTRS)
Shideler, John L.
1992-01-01
The development of thermal protection systems (TPS) for aerospace vehicles involves combining material selection, concept design, and verification tests to evaluate the effectiveness of the system. The present paper reviews verification tests of two metallic and one carbon-carbon thermal protection system. The test conditions are, in general, representative of Space Shuttle design flight conditions, which may be more or less severe than conditions required for future space transportation systems. The results of this study are intended to help establish a preliminary database from which the designers of future entry vehicles can evaluate the applicability of these concepts to their vehicles.
User-friendly design approach for analog layout design
NASA Astrophysics Data System (ADS)
Li, Yongfu; Lee, Zhao Chuan; Tripathi, Vikas; Perez, Valerio; Ong, Yoong Seang; Hui, Chiu Wing
2017-03-01
Analog circuits are sensitive to changes in layout environment conditions, manufacturing processes, and process variations. This paper presents an analog verification flow with five types of analog-focused layout constraint checks to assist engineers in identifying potential device mismatches and layout drawing mistakes. Compared with other solutions, our approach requires only the layout design, which is sufficient to recognize all matched devices. The approach simplifies data preparation and allows seamless integration into the layout environment with minimum disruption to the custom layout flow. This user-friendly analog verification flow gives engineers more confidence in the quality of their layouts.
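The paper does not disclose the internals of its checks, but a layout-only matching check of the kind it describes can be sketched as follows (device fields, tolerances, and group semantics are invented):

```python
from dataclasses import dataclass

# Illustrative sketch (not the authors' tool): flag devices in a declared
# matched group whose drawn geometry or orientation differs from a reference.
@dataclass
class Device:
    name: str
    w: float          # drawn gate width (um)
    l: float          # drawn gate length (um)
    orientation: int  # placement rotation in degrees

def check_matched_group(devices, tol=1e-6):
    ref = devices[0]
    issues = []
    for d in devices[1:]:
        if abs(d.w - ref.w) > tol or abs(d.l - ref.l) > tol:
            issues.append(f"{d.name}: W/L differs from {ref.name}")
        if d.orientation != ref.orientation:
            issues.append(f"{d.name}: orientation differs from {ref.name}")
    return issues

pair = [Device("M1", 2.0, 0.18, 0), Device("M2", 2.0, 0.18, 180)]
print(check_matched_group(pair))  # flags the orientation mismatch for review
```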
Under EPA’s Environmental Technology Verification program, which provides objective and scientific third party analysis of new technology that can benefit the environment, a combined heat and power system designed by Martin Machinery was evaluated. This paper provides test result...
This report presents the results of the verification test of the Sharpe Platinum 2013 high-volume, low-pressure gravity-feed spray gun, hereafter referred to as the Sharpe Platinum, which is designed for use in automotive refinishing. The test coating chosen by Sharpe Manufacturi...
Verification testing of the Vortechnics, Inc. Vortechs® System, Model 1000 was conducted on a 0.25-acre portion of an elevated highway near downtown Milwaukee, Wisconsin. The Vortechs® System is designed to remove settleable and floatable pollutants from stormwater runoff. The Vortechs® ...
The Environmental Technology Verification report discusses the technology and performance of the Static Pac System, Phase II, natural gas reciprocating compressor rod packing manufactured by the C. Lee Cook Division, Dover Corporation. The Static Pac System is designed to seal th...
Details on the verification test design, measurement test procedures, and Quality Assurance/Quality Control (QA/QC) procedures can be found in the test plan titled Testing and Quality Assurance Plan, MIRATECH Corporation GECO 3100 Air/Fuel Ratio Controller (SRI 2001). It can be d...
EPA's Environmental Technology Verification program is designed to further environmental protection by accelerating the acceptance and use of improved and cost-effective technologies. This is done by providing high-quality, peer-reviewed data on technology performance to those in...
Verification testing of the Hydro International Downstream Defender® was conducted at the Madison Water Utility in Madison, Wisconsin. The system was designed for a drainage basin estimated at 1.9 acres in size, but during intense storm events, the system received water from an a...
The U.S. Environmental Protection Agency (EPA) has created the Environmental Technology Verification Program (ETV) to design efficient processes for conducting performance tests of innovative technologies and to facilitate the deployment of innovative or improved environmental techn...
Simulation-based MDP verification for leading-edge masks
NASA Astrophysics Data System (ADS)
Su, Bo; Syrel, Oleg; Pomerantsev, Michael; Hagiwara, Kazuyuki; Pearman, Ryan; Pang, Leo; Fujimura, Aki
2017-07-01
For IC design starts below the 20nm technology node, the assist features on photomasks shrink well below 60nm, and the printed patterns of those features on masks written by VSB eBeam writers show large deviations from the mask designs. Traditional geometry-based fracturing starts to show large errors for these small features. As a result, other mask data preparation (MDP) methods have become available and been adopted, such as rule-based Mask Process Correction (MPC), model-based MPC, and eventually model-based MDP. The new MDP methods may place shot edges slightly off target to compensate for mask process effects, so that the final patterns on a mask are much closer to the design (which can be viewed as the ideal mask), especially for assist features. Such an alteration generally produces better masks that are closer to the intended mask design, but traditional XOR-based MDP verification cannot detect problems caused by eBeam effects. Much like model-based OPC verification, which became a necessity for OPC a decade ago, we see the same trend in MDP today. A simulation-based MDP verification solution requires a GPU-accelerated computational geometry engine with simulation capabilities, and a meaningful simulation-based mask check needs a good mask process model. The TrueModel® system is a field-tested physical mask model developed by D2S. The GPU-accelerated D2S Computational Design Platform (CDP) is used to run simulation-based mask checks as well as model-based MDP. In addition to simulation-based checks such as mask EPE or dose margin, geometry-based rules are available to detect quality issues such as slivers or CD splits, and dose-margin-related hotspots can be detected by setting an appropriate detection threshold. In this paper, we demonstrate GPU acceleration for geometry processing and give examples of mask check results and performance data; GPU acceleration is necessary to make the runtime of simulation-based MDP verification acceptable.
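TrueModel® and the CDP are proprietary, but the shape of a simulation-based mask check can be sketched: compare simulated contour edge positions against design targets and flag edges whose EPE or dose margin violates a threshold (all identifiers, thresholds, and values below are invented):

```python
# Illustrative sketch only (not D2S's implementation): a simulation-based
# mask check flags edges whose edge-placement error (EPE) exceeds a limit or
# whose simulated dose margin falls below a minimum.
def mask_check(edges, epe_limit_nm=2.0, dose_margin_min=0.15):
    """edges: dicts with target and simulated edge positions (nm) and a
    simulated dose margin (fractional exposure latitude)."""
    hotspots = []
    for e in edges:
        epe = e["simulated_nm"] - e["target_nm"]
        if abs(epe) > epe_limit_nm:
            hotspots.append((e["id"], "EPE", round(epe, 2)))
        if e["dose_margin"] < dose_margin_min:
            hotspots.append((e["id"], "dose margin", e["dose_margin"]))
    return hotspots

edges = [
    {"id": "sraf_017", "target_nm": 50.0, "simulated_nm": 53.1, "dose_margin": 0.12},
    {"id": "main_204", "target_nm": 120.0, "simulated_nm": 120.8, "dose_margin": 0.22},
]
print(mask_check(edges))  # sraf_017 is flagged on both EPE and dose margin
```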
A digital flight control system verification laboratory
NASA Technical Reports Server (NTRS)
De Feo, P.; Saib, S.
1982-01-01
A NASA/FAA program has been established for the verification and validation of digital flight control systems (DFCS), with the primary objective being the development and analysis of automated verification tools. Software verification tools can be applied to enhance the capabilities, effectiveness, and ease of use of the test environment. The tool design includes a static analyzer, an assertion generator, a symbolic executor, a dynamic analysis instrument, and an automated documentation generator. Static and dynamic tools are integrated with error detection capabilities, resulting in a facility that analyzes a representative testbed of DFCS software. Future investigations will focus in particular on increasing the number of software test tools and on assessing cost effectiveness.