DOE Office of Scientific and Technical Information (OSTI.GOV)
Goebel, J
2004-02-27
Without stable hardware any program will fail. The frustration and expense of supporting bad hardware can drain an organization, delay progress, and frustrate everyone involved. At Stanford Linear Accelerator Center (SLAC), we have created a testing method that helps our group, SLAC Computer Services (SCS), weed out potentially bad hardware and purchase the best hardware at the best possible cost. Commodity hardware changes often, so new evaluations happen each time we purchase systems, and minor re-evaluations happen for revised systems for our clusters, about twice a year. This general framework helps SCS perform correct, efficient evaluations. This article outlines SCS's computer testing methods and our system acceptance criteria. We have expanded the basic ideas to other evaluations such as storage, and we think the methods outlined in this article have helped us choose hardware that is much more stable and supportable than our previous purchases. We have found that commodity hardware ranges in quality, so systematic methods and tools for hardware evaluation were necessary. This article is based on one instance of a hardware purchase, but the guidelines apply to the general problem of purchasing commodity computer systems for production computational work.
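The article describes a methodology rather than code, but the core of any such evaluation is an automated, repeatable burn-in pass. The sketch below is a minimal Python illustration of that idea, not SLAC's actual acceptance harness; the buffer size, duration, and pass/fail threshold are invented for the example.

```python
# Hypothetical burn-in pass: hash the same buffer twice and compare. On
# stable hardware the digests always match; a mismatch hints at flaky
# RAM or CPU. Workload and thresholds are illustrative only.
import hashlib
import os
import time

def burn_in(minutes: float = 1.0, max_errors: int = 0) -> bool:
    deadline = time.time() + minutes * 60
    errors = 0
    while time.time() < deadline:
        block = os.urandom(8 * 1024 * 1024)  # 8 MB stress buffer
        first = hashlib.sha256(block).hexdigest()
        second = hashlib.sha256(block).hexdigest()
        if first != second:
            errors += 1
    return errors <= max_errors

if __name__ == "__main__":
    print("PASS" if burn_in() else "FAIL")
```

A real evaluation would run many such passes (CPU, memory, disk, network) for days per candidate system and log the results for comparison across vendors.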
34 CFR 464.42 - What limit applies to purchasing computer hardware and software?
Code of Federal Regulations, 2013 CFR
2013-07-01
... software? 464.42 Section 464.42 Education Regulations of the Offices of the Department of Education... computer hardware and software? Not more than ten percent of funds received under any grant under this part may be used to purchase computer hardware or software. (Authority: 20 U.S.C. 1208aa(f)) ...
34 CFR 464.42 - What limit applies to purchasing computer hardware and software?
Code of Federal Regulations, 2012 CFR
2012-07-01
... software? 464.42 Section 464.42 Education Regulations of the Offices of the Department of Education... computer hardware and software? Not more than ten percent of funds received under any grant under this part may be used to purchase computer hardware or software. (Authority: 20 U.S.C. 1208aa(f)) ...
34 CFR 464.42 - What limit applies to purchasing computer hardware and software?
Code of Federal Regulations, 2011 CFR
2011-07-01
... software? 464.42 Section 464.42 Education Regulations of the Offices of the Department of Education... computer hardware and software? Not more than ten percent of funds received under any grant under this part may be used to purchase computer hardware or software. (Authority: 20 U.S.C. 1208aa(f)) ...
34 CFR 464.42 - What limit applies to purchasing computer hardware and software?
Code of Federal Regulations, 2010 CFR
2010-07-01
... software? 464.42 Section 464.42 Education Regulations of the Offices of the Department of Education... computer hardware and software? Not more than ten percent of funds received under any grant under this part may be used to purchase computer hardware or software. (Authority: 20 U.S.C. 1208aa(f)) ...
34 CFR 464.42 - What limit applies to purchasing computer hardware and software?
Code of Federal Regulations, 2014 CFR
2014-07-01
... software? 464.42 Section 464.42 Education Regulations of the Offices of the Department of Education... computer hardware and software? Not more than ten percent of funds received under any grant under this part may be used to purchase computer hardware or software. (Authority: 20 U.S.C. 1208aa(f)) ...
Code of Federal Regulations, 2010 CFR
2010-04-01
... increases. (b) At the owner's option, the cost of the computer software may include service contracts to... requirements. (c) The source of funds for the purchase of hardware or software, or contracting for services for... formatted data, including either the purchase and maintenance of computer hardware or software, or both, the...
Purchasing a Microprocessor System for Administrative Use in Schools.
ERIC Educational Resources Information Center
Marshall, David G.
1982-01-01
Describes a series of decision-making steps regarding the purchase of microcomputers for administrative use in schools. Includes such topics as defining information needs and purchasing computer hardware and software. (Author/JJD)
The Hidden Costs of Owning a Microcomputer.
ERIC Educational Resources Information Center
McDole, Thomas L.
Before purchasing computer hardware, individuals must consider the costs associated with the setup and operation of a microcomputer system. Included among the initial costs of purchasing a computer are the costs of the computer, one or more disk drives, a monitor, and a printer as well as the costs of such optional peripheral devices as a plotter…
Microcomputer & Software Use in Michigan's Vocational-Technical Facilities: A Status Report.
ERIC Educational Resources Information Center
Harris, Richard
This report is intended to help Michigan's vocational and technical teachers and administrators make decisions regarding the purchase of microcomputer hardware and software for professional use. Addressed in a discussion of computer hardware are current and planned inventories of microcomputer hardware located in the public vocational and…
34 CFR 647.30 - What are allowable costs?
Code of Federal Regulations, 2010 CFR
2010-07-01
... internships during the summer. (d) Purchase of computer hardware, computer software, or other equipment for student development, project administration, and recordkeeping, if the applicant demonstrates to the...
ERIC Educational Resources Information Center
Curtis, Rick
This paper summarizes information about using computer hardware and software to aid in making purchase decisions that are based on user needs. The two major options in hardware are IBM-compatible machines and the Apple Macintosh line. The three basic software applications include word processing, database management, and spreadsheet applications.…
34 CFR 643.30 - What are allowable costs?
Code of Federal Regulations, 2010 CFR
2010-07-01
... admissions applications or entrance examinations if— (1) A waiver of the fee is unavailable; and (2) The fee... rented space is not owned by the grantee. (f) Purchase of computer hardware, computer software, or other...
Small-College Software Survey.
ERIC Educational Resources Information Center
Birch, Anthony D.
1986-01-01
Computers have a great number of potential uses at the small college. A survey of the role of software in the effective use of computers is described. Hardware characteristics, spreadsheets, purchasing or developing software, and software information are discussed. (Author/MLW)
1993-08-01
pricing and sales, order processing, and purchasing. The class of manufacturing planning functions includes aggregate production planning, materials...level. Depending on the application, each control level will have a number of functions associated with it. For instance, order processing, purchasing...include accounting, sales forecasting, product costing, pricing and sales, order processing, and purchasing. The class of manufacturing planning functions
The IBM PC as an Online Search Machine--Part 2: Physiology for Searchers.
ERIC Educational Resources Information Center
Kolner, Stuart J.
1985-01-01
Enumerates "hardware problems" associated with use of the IBM personal computer as an online search machine: purchase of machinery, unpacking of parts, and assembly into a properly functioning computer. Components that allow transformations of computer into a search machine (combination boards, printer, modem) and diagnostics software…
ERIC Educational Resources Information Center
Shade, Daniel D.
1994-01-01
Provides advice and suggestions for educators or parents who are trying to decide what type of computer to buy to run the latest computer software for children. Suggests that purchasers should buy a computer with as large a hard drive as possible, at least 10 megabytes of RAM, and a CD-ROM drive. (MDM)
Making Informed Decisions: Management Issues Influencing Computers in the Classroom.
ERIC Educational Resources Information Center
Strickland, James
A number of noninstructional factors appear to determine the extent to which computers make a difference in writing instruction. Once computers have been purchased and installed, it is generally school administrators who make management decisions, often from an uninformed pedagogical orientation. Issues such as what hardware and software to buy,…
Budgeting and Funding School Technology: Essential Considerations
ERIC Educational Resources Information Center
Ireh, Maduakolam
2010-01-01
School districts need adequate financial resources to purchase hardware and software, wire their buildings to network computers and other information and communication devices, and connect to the Internet to provide students, teachers, and other school personnel with adequate access to technology. Computers and other peripherals, particularly,…
Computer Series, 4: Bits, Bytes, Boards, Buses, and Beyond.
ERIC Educational Resources Information Center
Gerhold, George; And Others
1979-01-01
Presents a general introduction to microprocessors and microcomputers. Guidelines for purchasing software and hardware are intended to familiarize potential buyers with the current equipment on the market and the terminology in use. (SA)
NASA Astrophysics Data System (ADS)
Lele, Sanjiva K.
2002-08-01
Funds were received in April 2001 under the Department of Defense DURIP program for construction of a 48 processor high performance computing cluster. This report details the hardware which was purchased and how it has been used to enable and enhance research activities directly supported by, and of interest to, the Air Force Office of Scientific Research and the Department of Defense. The report is divided into two major sections. The first section after this summary describes the computer cluster, its setup, and some cluster performance benchmark results. The second section explains ongoing research efforts which have benefited from the cluster hardware, and presents highlights of those efforts since installation of the cluster.
ERIC Educational Resources Information Center
Kinnaman, Daniel E.; And Others
1988-01-01
Reviews four educational software packages for Apple, IBM, and Tandy computers. Includes "How the West was One + Three x Four," "Mavis Beacon Teaches Typing," "Math and Me," and "Write On." Reviews list hardware requirements, emphasis, levels, publisher, purchase agreements, and price. Discusses the strengths…
A 3D-PIV System for Gas Turbine Applications
NASA Astrophysics Data System (ADS)
Acharya, Sumanta
2002-08-01
Funds were received in April 2001 under the Department of Defense DURIP program for construction of a 48 processor high performance computing cluster. This report details the hardware which was purchased and how it has been used to enable and enhance research activities directly supported by, and of interest to, the Air Force Office of Scientific Research and the Department of Defense. The report is divided into two major sections. The first section after the summary describes the computer cluster, its setup, and some cluster performance benchmark results. The second section explains ongoing research efforts which have benefited from the cluster hardware, and presents highlights of those efforts since installation of the cluster.
Accounting Systems for School Districts.
ERIC Educational Resources Information Center
Atwood, E. Barrett, Jr.
1983-01-01
Advises careful analysis and improvement of existing school district accounting systems prior to investment in new ones. Emphasizes the importance of attracting and maintaining quality financial staffs, developing an accounting policies and procedures manual, and designing a good core accounting system before purchasing computer hardware and…
NASA Technical Reports Server (NTRS)
Fournelle, John; Carpenter, Paul
2006-01-01
Modern electron microprobe systems have become increasingly sophisticated. These systems utilize either UNIX or PC computer systems for measurement, automation, and data reduction. These systems have undergone major improvements in processing, storage, display, and communications, due to increased capabilities of hardware and software. Instrument specifications are typically utilized at the time of purchase and concentrate on hardware performance. The microanalysis community includes analysts, researchers, software developers, and manufacturers, who could benefit from exchange of ideas and the ultimate development of core community specifications (CCS) for hardware and software components of microprobe instrumentation and operating systems.
Does Your Technology Support Measure Up?
ERIC Educational Resources Information Center
Abramson, Paul
1998-01-01
Provides tips on ways to keep computer systems in colleges and universities from failing while also controlling costs. Trinity College (Hartford) is used to illustrate how proper staffing, system maintenance, hardware purchasing decisions, technician compensation, and the use of students for maintenance work can effectively support a school's…
24 CFR 578.57 - Homeless Management Information System.
Code of Federal Regulations, 2013 CFR
2013-04-01
... System. 578.57 Section 578.57 Housing and Urban Development Regulations Relating to Housing and Urban... Eligible Costs § 578.57 Homeless Management Information System. (a) Eligible costs. (1) The recipient or... designated by the Continuum of Care, including the costs of: (i) Purchasing or leasing computer hardware; (ii...
24 CFR 578.57 - Homeless Management Information System.
Code of Federal Regulations, 2014 CFR
2014-04-01
... System. 578.57 Section 578.57 Housing and Urban Development Regulations Relating to Housing and Urban... Eligible Costs § 578.57 Homeless Management Information System. (a) Eligible costs. (1) The recipient or... designated by the Continuum of Care, including the costs of: (i) Purchasing or leasing computer hardware; (ii...
Cardiology office computer use: primer, pointers, pitfalls.
Shepard, R B; Blum, R I
1986-10-01
An office computer is a utility, like an automobile, with benefits and costs that are both direct and hidden, and with the potential for disaster. For the cardiologist or cardiovascular surgeon, the increasing power and decreasing costs of computer hardware and the availability of software make use of an office computer system an increasingly attractive possibility. Management of office business functions is common; handling and scientific analysis of practice medical information are less common. The cardiologist can also access national medical information systems for literature searches and for interactive further education. Selection and testing of programs and the entire computer system before purchase of computer hardware will reduce the chances of disappointment or serious problems. Personnel pretraining and planning for office information flow and medical information security are necessary. Some cardiologists design their own office systems, buy hardware and software as needed, write programs for themselves and carry out the implementation themselves. For most cardiologists, the better course will be to take advantage of the professional experience of expert advisors. This article provides a starting point from which the practicing cardiologist can approach considering, specifying or implementing an office computer system for business functions and for scientific analysis of practice results.
Transitioning to digital radiography.
Drost, Wm Tod
2011-04-01
To describe the different forms of digital radiography (DR), image file formats, supporting equipment and services required for DR, storage of digital images, and teleradiology. Purchasing a DR system is a major investment for a veterinary practice. Types of DR systems include computed radiography, charge coupled devices, and direct or indirect DR. Comparison of workflow for analog and DR is presented. On the surface, switching to DR involves the purchase of DR acquisition hardware. The X-ray machine, table and grids used in analog radiography are the same for DR. Realistically, a considerable infrastructure supports the image acquisition hardware. This infrastructure includes monitors, computer workstations, a robust computer network and internet connection, a plan for storage and back up of images, and service contracts. Advantages of DR compared with analog radiography include improved image quality (when used properly), ease of use (more forgiving to the errors of radiographic technique), speed of making a complete study (important for critically ill patients), fewer repeat radiographs, less time looking for imaging studies, less physical storage space, and the ability to easily send images for consultation. With an understanding of the infrastructure requirements, capabilities and limitations of DR, an informed veterinary practice should be better able to make a sound decision about transitioning to DR.
Final Technical Report for ARRA Funding
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rusack, Roger; Mans, Jeremiah; Poling, Ronald
Final technical report of the University of Minnesota experimental high energy physics group for ARRA support. The Cryogenic Dark Matter Experiment (CDMS) used the funds received to construct a new passive shield to protect a high-purity germanium detector located in the Soudan mine in Northern Minnesota from cosmic rays. The BESIII and the CMS groups purchased computing hardware to assemble computer farms for data analysis and to generate large volumes of simulated data for comparison with the data collected.
Technology Dollars from Pennies Saved.
ERIC Educational Resources Information Center
Anderson, Mary Alice
1996-01-01
Suggests ways to stretch media center budgets for technology, based on experiences at Winona Middle School (Minnesota). Topics include keeping statistics, hardware purchase and warranty information, centralizing purchases, planning for the reallocation of hardware and software, creative financing, working with business and community groups, staff…
Software for Managing Inventory of Flight Hardware
NASA Technical Reports Server (NTRS)
Salisbury, John; Savage, Scott; Thomas, Shirman
2003-01-01
The Flight Hardware Support Request System (FHSRS) is a computer program that relieves engineers at Marshall Space Flight Center (MSFC) of most of the non-engineering administrative burden of managing an inventory of flight hardware. The FHSRS can also be adapted to perform similar functions for other organizations. The FHSRS affords a combination of capabilities, including those formerly provided by three separate programs in purchasing, inventorying, and inspecting hardware. The FHSRS provides a Web-based interface with a server computer that supports a relational database of inventory; electronic routing of requests and approvals; and electronic documentation from initial request through implementation of quality criteria, acquisition, receipt, inspection, storage, and final issue of flight materials and components. The database lists both hardware acquired for current projects and residual hardware from previous projects. The increased visibility of residual flight components provided by the FHSRS has dramatically improved the re-utilization of materials in lieu of new procurements, resulting in a cost savings of over $1.7 million. The FHSRS includes subprograms for manipulating the data in the database, informing of the status of a request or an item of hardware, and searching the database on any physical or other technical characteristic of a component or material. The software structure forces normalization of the data to facilitate inquiries and searches for which users have entered mixed or inconsistent values.
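The FHSRS abstract describes a relational inventory database searchable on any technical characteristic of a component. As a rough sketch of that capability, the following uses SQLite in Python; the table layout and column names are invented, not the FHSRS schema.

```python
# Hypothetical inventory table: search residual stock on a material
# characteristic before procuring new hardware.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE hardware (
    part TEXT, project TEXT, material TEXT, status TEXT)""")
db.executemany("INSERT INTO hardware VALUES (?, ?, ?, ?)", [
    ("bracket-17", "legacy", "Ti-6Al-4V", "residual"),
    ("fastener-03", "current", "A286", "issued"),
])
rows = db.execute(
    "SELECT part FROM hardware WHERE material = ? AND status = 'residual'",
    ("Ti-6Al-4V",),
).fetchall()
print(rows)  # [('bracket-17',)]
```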
Use of Field Programmable Gate Array Technology in Future Space Avionics
NASA Technical Reports Server (NTRS)
Ferguson, Roscoe C.; Tate, Robert
2005-01-01
Fulfilling NASA's new vision for space exploration requires the development of sustainable, flexible and fault tolerant spacecraft control systems. The traditional development paradigm consists of the purchase or fabrication of hardware boards with fixed processor and/or Digital Signal Processing (DSP) components interconnected via a standardized bus system. This is followed by the purchase and/or development of software. This paradigm has several disadvantages for the development of systems to support NASA's new vision. Building a system to be fault tolerant increases the complexity and decreases the performance of included software. Standard bus design and conventional implementation produce natural bottlenecks. Configuring hardware components in systems containing common processors and DSPs is difficult initially and expensive or impossible to change later. The existence of Hardware Description Languages (HDLs), the recent increase in performance, density and radiation tolerance of Field Programmable Gate Arrays (FPGAs), and Intellectual Property (IP) Cores provides the technology for reprogrammable Systems on a Chip (SOC). This technology supports a paradigm better suited for NASA's vision. Hardware and software production are melded for more effective development; they can both evolve together over time. Designers incorporating this technology into future avionics can benefit from its flexibility. Systems can be designed with improved fault isolation and tolerance using hardware instead of software. Also, these designs can be protected from obsolescence problems where maintenance is compromised via component and vendor availability. To investigate the flexibility of this technology, the cores of the Central Processing Unit and Input/Output Processor of the Space Shuttle AP101S Computer were prototyped in Verilog HDL and synthesized into an Altera Stratix FPGA.
ERIC Educational Resources Information Center
Nevada Univ. and Community Coll. System, Reno. Office of the Chancellor.
In 1997 the Nevada State Legislature approved Assembly Bill 606 (AB606), which appropriated funds to the University and Community College System of Nevada (UCCSN) for the purchase of computer hardware and software, and communication services. In July of that year, the K-16 Partnership for Distance Learning adopted three priorities that have guided…
[Hardware for graphics systems].
Goetz, C
1991-02-01
In all personal computer applications, be it for private or professional use, the decision of which "brand" of computer to buy is of central importance. In the USA Apple computers are mainly used in universities, while in Europe computers of the so-called "industry standard" by IBM (or clones thereof) have been increasingly used for many years. Independently of any brand name considerations, the computer components purchased must meet the current (and projected) needs of the user. Graphic capabilities and standards, processor speed, the use of co-processors, as well as input and output devices such as "mouse", printers and scanners are discussed. This overview is meant to serve as a decision aid. Potential users are given a short but detailed summary of current technical features.
Cloud computing applications for biomedical science: A perspective.
Navale, Vivek; Bourne, Philip E
2018-06-01
Biomedical research has become a digital data-intensive endeavor, relying on secure and scalable computing, storage, and network infrastructure, which has traditionally been purchased, supported, and maintained locally. For certain types of biomedical applications, cloud computing has emerged as an alternative to locally maintained traditional computing approaches. Cloud computing offers users pay-as-you-go access to services such as hardware infrastructure, platforms, and software for solving common biomedical computational problems. Cloud computing services offer secure on-demand storage and analysis and are differentiated from traditional high-performance computing by their rapid availability and scalability of services. As such, cloud services are engineered to address big data problems and enhance the likelihood of data and analytics sharing, reproducibility, and reuse. Here, we provide an introductory perspective on cloud computing to help the reader determine its value to their own research.
Evaluation and purchase of confocal microscopes: numerous factors to consider.
Zucker, Robert M; Chua, Michael
2010-10-01
The purchase of a confocal microscope is a difficult decision. Many factors need to be considered, which include hardware, software, company, support, service, and price. These issues are discussed to help guide the purchasing process.
Large Data at Small Universities: Astronomical processing using a computer classroom
NASA Astrophysics Data System (ADS)
Fuller, Nathaniel James; Clarkson, William I.; Fluharty, Bill; Belanger, Zach; Dage, Kristen
2016-06-01
The use of large computing clusters for astronomy research is becoming more commonplace as datasets expand, but access to these required resources is sometimes difficult for research groups working at smaller Universities. As an alternative to purchasing processing time on an off-site computing cluster, or purchasing dedicated hardware, we show how one can easily build a crude on-site cluster by utilizing idle cycles on instructional computers in computer-lab classrooms. Since these computers are maintained as part of the educational mission of the University, the resource impact on the investigator is generally low. By using open source Python routines, it is possible to have a large number of desktop computers working together via a local network to sort through large data sets. By running traditional analysis routines in an “embarrassingly parallel” manner, gains in speed are accomplished without requiring the investigator to learn how to write routines using highly specialized methodology. We demonstrate this concept here applied to 1. photometry of large-format images and 2. statistical significance tests for X-ray lightcurve analysis. In these scenarios, we see a speed-up factor which scales almost linearly with the number of cores in the cluster. Additionally, we show that the usage of the cluster does not severely limit performance for a local user, and indeed the processing can be performed while the computers are in use for classroom purposes.
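The abstract mentions open source Python routines run in an embarrassingly parallel manner. A minimal sketch of that pattern on one multi-core machine is below; process_image is a hypothetical stand-in for the authors' photometry routine, and spreading the pool across networked lab machines follows the same shape.

```python
# Embarrassingly parallel processing: each image is independent, so the
# work maps cleanly onto one worker per idle core.
from multiprocessing import Pool

def process_image(path: str) -> tuple[str, float]:
    # Placeholder per-image photometry; returns (image name, dummy flux).
    flux = sum(ord(c) for c in path) * 1e-3
    return path, flux

if __name__ == "__main__":
    images = [f"frame_{i:04d}.fits" for i in range(100)]
    with Pool() as pool:
        results = pool.map(process_image, images)
    print(results[:3])
```

Speed-up scales almost linearly with core count precisely because no worker ever waits on another, matching the near-linear scaling the authors report.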
Evaluation and purchase of an analytical flow cytometer: some of the numerous factors to consider.
Zucker, Robert M; Fisher, Nancy C
2013-01-01
When purchasing a flow cytometer, the decision of which brand, model, specifications, and accessories to choose may be challenging. The decisions should initially be guided by the specific applications intended for the instrument. However, many other factors need to be considered, including hardware, software, quality assurance, support, service, and price, as well as recommendations from colleagues. These issues are discussed to help guide the purchasing process.
Internet-based computer technology on radiotherapy.
Chow, James C L
2017-01-01
Recent rapid development of Internet-based computer technologies has made possible many novel applications in radiation dose delivery. However, translational speed of applying these new technologies in radiotherapy could hardly catch up due to the complex commissioning process and quality assurance protocol. Implementing novel Internet-based technology in radiotherapy requires corresponding design of algorithm and infrastructure of the application, set up of related clinical policies, purchase and development of software and hardware, computer programming and debugging, and national to international collaboration. Although such implementation processes are time consuming, some recent computer advancements in the radiation dose delivery are still noticeable. In this review, we will present the background and concept of some recent Internet-based computer technologies such as cloud computing, big data processing and machine learning, followed by their potential applications in radiotherapy, such as treatment planning and dose delivery. We will also discuss the current progress of these applications and their impacts on radiotherapy. We will explore and evaluate the expected benefits and challenges in implementation as well.
Avionics Simulation, Development and Software Engineering
NASA Technical Reports Server (NTRS)
Francis, Ronald C.; Settle, Gray; Tobbe, Patrick A.; Kissel, Ralph; Glaese, John; Blanche, Jim; Wallace, L. D.
2001-01-01
This monthly report summarizes the work performed under contract NAS8-00114 for Marshall Space Flight Center in the following tasks: 1) Purchase Order No. H-32831D, Task Order 001A, GPB Program Software Oversight; 2) Purchase Order No. H-32832D, Task Order 002, ISS EXPRESS Racks Software Support; 3) Purchase Order No. H-32833D, Task Order 003, SSRMS Math Model Integration; 4) Purchase Order No. H-32834D, Task Order 004, GPB Program Hardware Oversight; 5) Purchase Order No. H-32835D, Task Order 005, Electrodynamic Tether Operations and Control Analysis; 6) Purchase Order No. H-32837D, Task Order 007, SRB Command Receiver/Decoder; and 7) Purchase Order No. H-32838D, Task Order 008, AVGS/DART SW and Simulation Support
The "Make or Buy" Decision: Five Main Points to Consider
ERIC Educational Resources Information Center
Archer, Mary Ann E.
1978-01-01
Five points which should be considered when making decisions about whether to purchase magnetic tapes for in-house searching by batch processing, purchase terminals and contract with on-line vendors, or contract with information brokers for retrospective searching or SDI are availability of information in the most useful form, hardware and…
David Florida Laboratory Thermal Vacuum Data Processing System
NASA Technical Reports Server (NTRS)
Choueiry, Elie
1994-01-01
During 1991, the Space Simulation Facility conducted a survey to assess the requirements and analyze the merits for purchasing a new thermal vacuum data processing system for its facilities. A new, integrated, cost effective PC-based system was purchased which uses commercial off-the-shelf software for operation and control. This system can be easily reconfigured and allows its users to access a local area network. In addition, it provides superior performance compared to that of the former system which used an outdated mini-computer and peripheral hardware. This paper provides essential background on the old data processing system's features, capabilities, and the performance criteria that drove the genesis of its successor. This paper concludes with a detailed discussion of the thermal vacuum data processing system's components, features, and its important role in supporting our space-simulation environment and our capabilities for spacecraft testing. The new system was tested during the ANIK E spacecraft test, and was fully operational in November 1991.
Signal and image processing algorithm performance in a virtual and elastic computing environment
NASA Astrophysics Data System (ADS)
Bennett, Kelly W.; Robertson, James
2013-05-01
The U.S. Army Research Laboratory (ARL) supports the development of classification, detection, tracking, and localization algorithms using multiple sensing modalities including acoustic, seismic, E-field, magnetic field, PIR, and visual and IR imaging. Multimodal sensors collect large amounts of data in support of algorithm development. The resulting large amounts of data, and their associated high-performance computing needs, increase and challenge existing computing infrastructures. Purchasing computer power as a commodity using a Cloud service offers low-cost, pay-as-you-go pricing models, scalability, and elasticity that may provide solutions to develop and optimize algorithms without having to procure additional hardware and resources. This paper provides a detailed look at using a commercial cloud service provider, such as Amazon Web Services (AWS), to develop and deploy simple signal and image processing algorithms in a cloud and run the algorithms on a large set of data archived in the ARL Multimodal Signatures Database (MMSDB). Analytical results will provide performance comparisons with existing infrastructure. A discussion on using cloud computing with government data will cover best security practices that exist within cloud services, such as AWS.
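As a hedged illustration of the pay-as-you-go workflow the paper describes, the snippet below stages data and requests compute capacity with boto3, the AWS SDK for Python. The bucket name and AMI ID are placeholders, not ARL's actual resources, and valid AWS credentials are assumed.

```python
# Stage a batch of signature data to S3, then scale out EC2 workers.
import boto3

s3 = boto3.client("s3")
s3.upload_file("signature_batch.tar", "example-mmsdb-staging", "in/batch.tar")

ec2 = boto3.client("ec2")
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI with the algorithms
    InstanceType="c5.4xlarge",
    MinCount=1,
    MaxCount=8,                       # elasticity: pay only while running
)
```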
Mobile Clinical Decision Support Systems in Our Hands - Great Potential but also a Concern.
Masic, Izet; Begic, Edin
2016-01-01
Due to powerful computer resources and the availability of today's mobile devices, a special field of mobile systems for clinical decision support in medicine has been developed. The benefits of these applications (systems) are: availability of the necessary hardware (mobile phones, tablets and phablets are widespread, and can be purchased at a relatively affordable price), availability of mobile applications (free or for a "small" amount of money), and the fact that mobile applications are tailored for easy use and save clinicians time in their daily work. In these systems lies huge potential, and certainly great economic benefit, so this issue must be approached in a multidisciplinary way.
An Automated Medical Information Management System (OpScan-MIMS) in a Clinical Setting
Margolis, S.; Baker, T.G.; Ritchey, M.G.; Alterescu, S.; Friedman, C.
1981-01-01
This paper describes an automated medical information management system within a clinic setting. The system includes an optically scanned data entry system (OpScan), a generalized, interactive retrieval and storage software system (Medical Information Management System, MIMS) and the use of time-sharing. The system has the advantages of minimal hardware purchase and maintenance, rapid data entry and retrieval, user-created programs, and no need for user knowledge of computer language or technology, and it is cost effective. The OpScan-MIMS system has been operational for approximately 16 months in a sexually transmitted disease clinic. The system's applications to medical audit, quality assurance, clinic management and clinical training are demonstrated.
Hardware-Assisted Large-Scale Neuroevolution for Multiagent Learning
2014-12-30
This DURIP equipment award was used to purchase, install, and bring on-line two Berkeley Emulation Engines (BEEs) and two mini-BEE machines to establish an FPGA-based high-performance multiagent training platform and its associated software. This acquisition of BEE4-W... Keywords: Platform; Probabilistic Domain Transformation; Hardware-Assisted; FPGA; BEE; Hive Brain; Multiagent.
Communications Support for National Flight Data Center Information System.
1980-11-01
functions: establishment and termination, message transfer, retransmission of blocks. Establishment and termination: the establishment procedure...relate to hardware components, transmission facilities and cost relationships. The costs are grouped into one-time and recurring costs. L.2 HARDWARE...the NADIN switching center in Atlanta. The purchase and installation costs are estimated to be $1000. L.4 COST RELATIONSHIPS In order to accurately
Features of commercial computer software systems for medical examiners and coroners.
Hanzlick, R L; Parrish, R G; Ing, R
1993-12-01
There are many ways of automating medical examiner and coroner offices, one of which is to purchase commercial software products specifically designed for death investigation. We surveyed four companies that offer such products and requested information regarding each company and its hardware, software, operating systems, peripheral devices, applications, networking options, programming language, querying capability, coding systems, prices, customer support, and number and size of offices using the product. Although the four products (CME2, ForenCIS, InQuest, and Medical Examiner's Software System) are similar in many respects and each can be installed on personal computers, there are differences among the products with regard to cost, applications, and the other features. Death investigators interested in office automation should explore these products to determine the usefulness of each in comparison with the others and in comparison with general-purpose, off-the-shelf databases and software adaptable to death investigation needs.
The Evolution of Dust in the Multiphase ISM: Grain Destruction Processes
NASA Technical Reports Server (NTRS)
Wolfire, Mark
1999-01-01
This proposal covered year one of a long term project in which we acquired the necessary hardware and software needed to calculate grain destruction processes in the interstellar medium (ISM). The long term goal of this research is to develop a model for the dust evolution in the ISM capable of explaining observations of elemental depletions, the grain size distribution, and the emission characteristics of the ISM from the X-ray through the FIR. We purchased a SUN Ultra 10 workstation and peripheral devices including an Exabyte Tape drive, HP Laser Printer, and Seagate External Hard Disk. The PI installed the hardware and Solaris operating system on the workstation and integrated the hardware into the network. Software was also purchased to enable connections to the workstation from a PC (Hummingbird Exceed). Additional freeware required to carry out the proposed program was installed on the system including compilers (g77, gcc, g++), editors (emacs), a markup language (LaTeX), and display programs (WIP, XV, SAOtng). We have also successfully modified the required plot files to work with our system to display the results of grain processing.
Accelerating statistical image reconstruction algorithms for fan-beam x-ray CT using cloud computing
NASA Astrophysics Data System (ADS)
Srivastava, Somesh; Rao, A. Ravishankar; Sheinin, Vadim
2011-03-01
Statistical image reconstruction algorithms potentially offer many advantages to x-ray computed tomography (CT), e.g. lower radiation dose. But, their adoption in practical CT scanners requires extra computation power, which is traditionally provided by incorporating additional computing hardware (e.g. CPU-clusters, GPUs, FPGAs etc.) into a scanner. An alternative solution is to access the required computation power over the internet from a cloud computing service, which is orders-of-magnitude more cost-effective. This is because users only pay a small pay-as-you-go fee for the computation resources used (i.e. CPU time, storage etc.), and completely avoid purchase, maintenance and upgrade costs. In this paper, we investigate the benefits and shortcomings of using cloud computing for statistical image reconstruction. We parallelized the most time-consuming parts of our application, the forward and back projectors, using MapReduce, the standard parallelization library on clouds. From preliminary investigations, we found that a large speedup is possible at a very low cost. But, communication overheads inside MapReduce can limit the maximum speedup, and a better MapReduce implementation might become necessary in the future. All the experiments for this paper, including development and testing, were completed on the Amazon Elastic Compute Cloud (EC2) for less than $20.
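The paper parallelizes the forward and back projectors with MapReduce. The toy sketch below mimics that decomposition using Python built-ins: map tasks compute partial projections over image strips and a reduce step combines them. The projector is a one-line stand-in, not the authors' fan-beam operator.

```python
# MapReduce-style decomposition of a toy forward projection.
from functools import reduce
import numpy as np

def project_strip(strip: np.ndarray) -> np.ndarray:
    # Stand-in projector: collapse the strip along one axis.
    return strip.sum(axis=0)

image = np.random.rand(512, 512)
strips = np.array_split(image, 8)        # 8 map tasks
partials = map(project_strip, strips)
projection = reduce(np.add, partials)    # reduce: combine partial results
print(projection.shape)                  # (512,)
```

The communication overhead the authors mention shows up here as the cost of shipping strips to mappers and partials back to the reducer, which bounds the achievable speedup.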
The Effect of Computer Automation on Institutional Review Board (IRB) Office Efficiency
ERIC Educational Resources Information Center
Oder, Karl; Pittman, Stephanie
2015-01-01
Companies purchase computer systems to make their processes more efficient through automation. Some academic medical centers (AMC) have purchased computer systems for their institutional review boards (IRB) to increase efficiency and compliance with regulations. IRB computer systems are expensive to purchase, deploy, and maintain. An AMC should…
Hardware based redundant multi-threading inside a GPU for improved reliability
Sridharan, Vilas; Gurumurthi, Sudhanva
2015-05-05
A system and method for verifying computation output using computer hardware are provided. Instances of computation are generated and processed on hardware-based processors. As instances of computation are processed, each instance of computation receives a load accessible to other instances of computation. Instances of output are generated by processing the instances of computation. The instances of output are verified against each other in a hardware based processor to ensure accuracy of the output.
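The patent describes hardware-based redundancy inside a GPU, but the verify-by-comparison idea can be illustrated in ordinary software. The sketch below is that illustration only: two redundant instances of a computation receive the same load, and their outputs are checked against each other.

```python
# Software analogue of redundant multi-threading: run the same computation
# twice and flag any disagreement as a possible hardware fault.
from concurrent.futures import ThreadPoolExecutor

def computation(load: list[float]) -> float:
    return sum(x * x for x in load)

load = [0.5, 1.5, 2.5]
with ThreadPoolExecutor(max_workers=2) as pool:
    a, b = pool.map(computation, [load, load])  # two redundant instances
assert a == b, "redundant instances disagree: possible fault"
print("output verified:", a)
```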
Redding Responder Field Test - UTC
DOT National Transportation Integrated Search
2008-10-30
This UTC project facilitated field testing and evaluation of the "Responder" system between Phases 1 and 2 of the Redding Responder Project, sponsored by the California Department of Transportation. A pilot system, with hardware purchased by Caltrans...
NASA Technical Reports Server (NTRS)
Butler, C.; Kindle, E. C.
1984-01-01
The capabilities of the DIAL data acquisition system (DAS) for the remote measurement of atmospheric trace gas concentrations from ground and aircraft platforms were extended through the purchase and integration of other hardware and the implementation of improved software. An operational manual for the current system is presented. Hardware and peripheral device registers are outlined only as an aid in debugging any DAS problems which may arise.
Appel, R D; Palagi, P M; Walther, D; Vargas, J R; Sanchez, J C; Ravier, F; Pasquali, C; Hochstrasser, D F
1997-12-01
Although two-dimensional electrophoresis (2-DE) computer analysis software packages have existed ever since 2-DE technology was developed, it is only now that the hardware and software technology allows large-scale studies to be performed on low-cost personal computers or workstations, and that setting up a 2-DE computer analysis system in a small laboratory is no longer considered a luxury. After a first attempt in the seventies and early eighties to develop 2-DE analysis software systems on hardware that had poor or even no graphical capabilities, followed in the late eighties by a wave of innovative software developments that were possible thanks to new graphical interface standards such as XWindows, a third generation of 2-DE analysis software packages has now come to maturity. It can be run on a variety of low-cost, general-purpose personal computers, thus making the purchase of a 2-DE analysis system easily attainable for even the smallest laboratory that is involved in proteome research. Melanie II 2-D PAGE, developed at the University Hospital of Geneva, is such a third-generation software system for 2-DE analysis. Based on unique image processing algorithms, this user-friendly object-oriented software package runs on multiple platforms, including Unix, MS-Windows 95 and NT, and Power Macintosh. It provides efficient spot detection and quantitation, state-of-the-art image comparison, statistical data analysis facilities, and is Internet-ready. Linked to proteome databases such as those available on the World Wide Web, it represents a valuable tool for the "Virtual Lab" of the post-genome area.
Graphics supercomputer for computational fluid dynamics research
NASA Astrophysics Data System (ADS)
Liaw, Goang S.
1994-11-01
The objective of this project is to purchase a state-of-the-art graphics supercomputer to improve the Computational Fluid Dynamics (CFD) research capability at Alabama A & M University (AAMU) and to support the Air Force research projects. A cutting-edge graphics supercomputer system, Onyx VTX, from Silicon Graphics Computer Systems (SGI), was purchased and installed. Other equipment including a desktop personal computer, PC-486 DX2 with a built-in 10-BaseT Ethernet card, a 10-BaseT hub, an Apple Laser Printer Select 360, and a notebook computer from Zenith were also purchased. A reading room has been converted to a research computer lab by adding some furniture and an air conditioning unit in order to provide an appropriate working environment for researchers and the purchased equipment. All the purchased equipment was successfully installed and is fully functional. Several research projects, including two existing Air Force projects, are being performed using these facilities.
Profiling an application for power consumption during execution on a compute node
Archer, Charles J; Blocksome, Michael A; Peters, Amanda E; Ratterman, Joseph D; Smith, Brian E
2013-09-17
Methods, apparatus, and products are disclosed for profiling an application for power consumption during execution on a compute node that include: receiving an application for execution on a compute node; identifying a hardware power consumption profile for the compute node, the hardware power consumption profile specifying power consumption for compute node hardware during performance of various processing operations; determining a power consumption profile for the application in dependence upon the application and the hardware power consumption profile for the compute node; and reporting the power consumption profile for the application.
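The claimed method combines a per-operation hardware power profile with the application's observed behavior. A back-of-envelope sketch of that combination follows; the energy coefficients and operation counts are invented numbers, not values from the patent.

```python
# Derive an application power profile from a hardware profile (J/op) and
# counted operations, following the steps the abstract lists.
hardware_profile = {"flop": 0.9e-9, "mem_access": 2.5e-9}  # assumed J/op
app_op_counts = {"flop": 4.0e12, "mem_access": 1.2e12}     # from a trace

app_profile = {op: n * hardware_profile[op] for op, n in app_op_counts.items()}
print(app_profile, "total J:", sum(app_profile.values()))
```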
Code of Federal Regulations, 2011 CFR
2011-01-01
... and core capital. (b) Computation of core and tangible capital. (1) Purchased credit card relationships may be included (that is, not deducted) in computing core capital in accordance with the... restrictions in this section, mortgage servicing assets may be included in computing core and tangible capital...
How to Purchase, Set Up, & Safeguard a CD-ROM Network.
ERIC Educational Resources Information Center
Almquist, Arne J.
1996-01-01
Presents an overview of the hardware and software required to network CD-ROMs in schools. Topics include network infrastructures, networking software, file server-based systems, CD-ROM servers, vendors of network components, workstations, network utilities, and network management. (LRW)
Profiling an application for power consumption during execution on a plurality of compute nodes
Archer, Charles J.; Blocksome, Michael A.; Peters, Amanda E.; Ratterman, Joseph D.; Smith, Brian E.
2012-08-21
Methods, apparatus, and products are disclosed for profiling an application for power consumption during execution on a compute node that include: receiving an application for execution on a compute node; identifying a hardware power consumption profile for the compute node, the hardware power consumption profile specifying power consumption for compute node hardware during performance of various processing operations; determining a power consumption profile for the application in dependence upon the application and the hardware power consumption profile for the compute node; and reporting the power consumption profile for the application.
Goddard Space Flight Center's Structural Dynamics Data Acquisition System
NASA Technical Reports Server (NTRS)
McLeod, Christopher
2004-01-01
Turnkey Commercial Off The Shelf (COTS) data acquisition systems typically perform well and meet most of the objectives of the manufacturer. The problem is that they seldom meet most of the objectives of the end user. The analysis software, if any, is unlikely to be tailored to the end users specific application; and there is seldom the chance of incorporating preferred algorithms to solve unique problems. Purchasing a customized system allows the end user to get a system tailored to the actual application, but the cost can be prohibitive. Once the system has been accepted, future changes come with a cost and response time that's often not workable. When it came time to replace the primary digital data acquisition system used in the Goddard Space Flight Center's Structural Dynamics Test Section, the decision was made to use a combination of COTS hardware and in-house developed software. The COTS hardware used is the DataMAX II Instrumentation Recorder built by R.C. Electronics Inc. and a desktop Pentium 4 computer system. The in-house software was developed using MATLAB from The MathWorks. This paper will describe the design and development of the new data acquisition and analysis system.
Round Girls in Square Computers: Feminist Perspectives on the Aesthetics of Computer Hardware.
ERIC Educational Resources Information Center
Carr-Chellman, Alison A.; Marra, Rose M.; Roberts, Shari L.
2002-01-01
Considers issues related to computer hardware, aesthetics, and gender. Explores how gender has influenced the design of computer hardware and how these gender-driven aesthetics may have worked to maintain, extend, or alter gender distinctions, roles, and stereotypes; discusses masculine media representations; and presents an alternative model.…
Cellular computational platform and neurally inspired elements thereof
Okandan, Murat
2016-11-22
A cellular computational platform is disclosed that includes a multiplicity of functionally identical, repeating computational hardware units that are interconnected electrically and optically. Each computational hardware unit includes a reprogrammable local memory and has interconnections to other such units that have reconfigurable weights. Each computational hardware unit is configured to transmit signals into the network for broadcast in a protocol-less manner to other such units in the network, and to respond to protocol-less broadcast messages that it receives from the network. Each computational hardware unit is further configured to reprogram the local memory in response to incoming electrical and/or optical signals.
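As a toy model of the platform this patent abstract describes, the sketch below simulates identical units whose interconnection weights are reconfigurable and whose signals are broadcast with no protocol. It is purely illustrative; the patent covers hardware, not Python objects.

```python
# Identical units, reconfigurable weights, protocol-less broadcast.
import random

class Unit:
    def __init__(self, n_inputs: int):
        self.weights = [random.uniform(-1, 1) for _ in range(n_inputs)]
        self.state = 0.0

    def receive(self, broadcast: list[float]) -> None:
        # Weighted sum of every signal heard on the network this step.
        self.state = sum(w * x for w, x in zip(self.weights, broadcast))

units = [Unit(n_inputs=4) for _ in range(4)]
for _ in range(3):
    signals = [u.state + random.gauss(0.0, 0.1) for u in units]
    for u in units:
        u.receive(signals)
print([round(u.state, 3) for u in units])
```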
Code of Federal Regulations, 2010 CFR
2010-07-01
... for Series I savings bonds. (b) Computation of amount for gifts. Bonds purchased or transferred as gifts will be included in the computation of the purchase limitation for the account of the recipient... and Series I savings bonds may I purchase in one year? 363.52 Section 363.52 Money and Finance...
Computer hardware fault administration
Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.
2010-09-14
Computer hardware fault administration carried out in a parallel computer, where the parallel computer includes a plurality of compute nodes. The compute nodes are coupled for data communications by at least two independent data communications networks, where each data communications network includes data communications links connected to the compute nodes. Typical embodiments carry out hardware fault administration by identifying a location of a defective link in the first data communications network of the parallel computer and routing communications data around the defective link through the second data communications network of the parallel computer.
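The key step in the abstract is routing around a defective link via the second network. A minimal sketch of that decision follows, with a two-link toy topology standing in for the real interconnects.

```python
# Route over the primary network when healthy; fall back to the secondary
# network around a defective link. Topology is an invented toy example.
primary = {("n0", "n1"): "ok", ("n1", "n2"): "defective"}
secondary = {("n0", "n1"): "ok", ("n1", "n2"): "ok"}

def route(src: str, dst: str) -> str:
    if primary.get((src, dst)) == "ok":
        return "primary"
    if secondary.get((src, dst)) == "ok":
        return "secondary"  # detour around the defective link
    raise RuntimeError("no healthy path between nodes")

print(route("n1", "n2"))  # -> secondary
```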
ERIC Educational Resources Information Center
Dessy, Raymond E., Ed.
1986-01-01
Discusses topics involved in the selection of an analog-to-digital (ADC) converter and associated front-end signal conditioning hardware. Reviews what types of ADC's are available, best type for particular application, conversion rates, amplification, filtering, noise, and compatibility issues. Suggests purchase strategy and supplies names and…
Computer-Based Instruction for Purchasing Skills
ERIC Educational Resources Information Center
Ayres, Kevin M.; Langone, John; Boon, Richard T.; Norman, Audrey
2006-01-01
The purpose of this study was to investigate use of computers and video technologies to teach students to correctly make purchases in a community grocery store using the dollar plus purchasing strategy. Four middle school students diagnosed with intellectual disabilities participated in this study. A multiple probe across participants research…
Neylon, J; Min, Y; Kupelian, P; Low, D A; Santhanam, A
2017-04-01
In this paper, a multi-GPU cloud-based server (MGCS) framework is presented for dose calculations, exploring the feasibility of remote computing power for parallelization and acceleration of computationally and time intensive radiotherapy tasks in moving toward online adaptive therapies. An analytical model was developed to estimate theoretical MGCS performance acceleration and intelligently determine workload distribution. Numerical studies were performed with a computing setup of 14 GPUs distributed over 4 servers interconnected by a 1 gigabit per second (Gbps) network. Inter-process communication methods were optimized to facilitate resource distribution and minimize data transfers over the server interconnect. The analytically predicted computation times matched experimental observations within 1-5%. MGCS performance approached a theoretical limit of acceleration proportional to the number of GPUs utilized when computational tasks far outweighed memory operations. The MGCS implementation reproduced ground-truth dose computations with negligible differences by distributing the work among several processes and implementing optimization strategies. The results showed that a cloud-based computation engine is a feasible solution for enabling clinics to make use of fast dose calculations for advanced treatment planning and adaptive radiotherapy. The cloud-based system was able to exceed the performance of a local machine even for optimized calculations, and provided significant acceleration for computationally intensive tasks. Such a framework can provide access to advanced technology and computational methods to many clinics, providing an avenue for standardization across institutions without the requirements of purchasing, maintaining, and continually updating hardware.
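The abstract's analytical model is not given in closed form, but its reported behavior (acceleration approaching the GPU count when compute dominates, capped by interconnect transfers) is consistent with a simple compute-plus-communication estimate, sketched below with invented constants.

```python
# Assumed form of the scaling estimate: ideal parallel compute time plus
# serialized transfer time over the 1 Gbps interconnect.
def predicted_time(work_s: float, n_gpus: int, transfer_bytes: float,
                   link_bps: float = 1e9) -> float:
    compute = work_s / n_gpus             # perfectly parallel compute
    comm = transfer_bytes * 8 / link_bps  # data transfer is not parallel
    return compute + comm

for n in (1, 4, 14):
    t = predicted_time(work_s=120.0, n_gpus=n, transfer_bytes=2e8)
    print(f"{n:2d} GPUs: {t:.2f} s")
```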
Strategic directions of computing at Fermilab
NASA Astrophysics Data System (ADS)
Wolbers, Stephen
1998-05-01
Fermilab computing has changed a great deal over the years, driven by the demands of the Fermilab experimental community to record and analyze larger and larger datasets, by the desire to take advantage of advances in computing hardware and software, and by the advances coming from the R&D efforts of the Fermilab Computing Division. The strategic directions of Fermilab Computing continue to be driven by the needs of the experimental program. The current fixed-target run will produce over 100 TBytes of raw data and systems must be in place to allow the timely analysis of the data. The collider run II, beginning in 1999, is projected to produce of order 1 PByte of data per year. There will be a major change in methodology and software language as the experiments move away from FORTRAN and into object-oriented languages. Increased use of automation and the reduction of operator-assisted tape mounts will be required to meet the needs of the large experiments and large data sets. Work will continue on higher-rate data acquisition systems for future experiments and projects. R&D projects will be pursued as necessary to provide software, tools, or systems which cannot be purchased or acquired elsewhere. A closer working relation with other high energy laboratories will be pursued to reduce duplication of effort and to allow effective collaboration on many aspects of HEP computing.
Inventory and Billing Systems for Multiple Users.
ERIC Educational Resources Information Center
Frazier, Lavon
Washington State University developed a comprehensive supplies inventory system and a generalized billing system with multiple users in mind. The supplies inventory control system developed for Central Stores, a self-sustaining service center that purchases and warehouses office, laboratory, and hardware supplies, was called AIMS, An Inventory…
CANES Contracting Strategies for Full Deployment
2012-01-01
9 CANES Program Functions in Full Deployment...contractors will design CANES, identifying specific hardware and developing the integration software necessary to consolidate existing C4I functions. At...would be responsible for executing the purchased design and assembling the systems, ensuring that the integration software is functioning. An
Film versus Video: Comparing the Forms.
ERIC Educational Resources Information Center
Kreamer, Jean T.
1985-01-01
Compares specific aspects of 16mm film and the videocassette that should be considered in making media software and hardware purchase decisions for library media collections: aesthetics and intended audience size, compatibility and format, costs, copyright, market trends, and major advantages and disadvantages. A four-item bibliography is…
Programmable hardware for reconfigurable computing systems
NASA Astrophysics Data System (ADS)
Smith, Stephen
1996-10-01
In 1945 the work of J. von Neumann and H. Goldstein created the principal architecture for electronic computation that has now lasted fifty years. Nevertheless alternative architectures have been created that have computational capability, for special tasks, far beyond that feasible with von Neumann machines. The emergence of high capacity programmable logic devices has made the realization of these architectures practical. The original ENIAC and EDVAC machines were conceived to solve special mathematical problems that were far from today's concept of 'killer applications.' In a similar vein programmable hardware computation is being used today to solve unique mathematical problems. Our programmable hardware activity is focused on the research and development of novel computational systems based upon the reconfigurability of our programmable logic devices. We explore our programmable logic architectures and their implications for programmable hardware. One programmable hardware board implementation is detailed.
Microcomputers in the Learning Center: Guidelines for Avoiding Costly Mistakes.
ERIC Educational Resources Information Center
Belair, Charles
Guidelines for the planning and purchase of microcomputer systems are presented in outline form. These guidelines are directed to educators in developmental education, special education, and learning center environments; however, they are applicable to all educators intending to use microcomputers. First, a discussion of hardware considerations is…
ERIC Educational Resources Information Center
Clement, Russell T.
1985-01-01
Reports results of survey of 21 smaller to medium-sized (50,000-200,000 volumes) libraries which was conducted to determine factors used to make selection decisions in purchase of automated turnkey systems from established vendors. Factors examined include costs, software, hardware, and vendors for parts A (actual experience) and B (hypothetical…
Laser Printing for a Variety of Library Applications.
ERIC Educational Resources Information Center
Kelly, Glen J.
1988-01-01
Summarizes the current status of laser printers in terms of cost, hardware and software requirements, measurement and operational considerations, ease of use, and maintenance. The cost effectiveness of laser printing in libraries for applications such as spine labels, purchase orders, and reports, is explored. (9 notes with references) (CLB)
Questions to Answer before You Branch out on a CD-ROM Network.
ERIC Educational Resources Information Center
Simpson, Carol Mann
1992-01-01
Examines issues that librarians must address when purchasing databases on CD-ROM for networking. Highlights include network licenses; costs; restrictions on network rights; ownership of CD-ROMs; hardware requirements; fees for upgrading software; CD-ROM servers; pricing options; training materials; and disk drives. (LRW)
ERIC Educational Resources Information Center
Villano, Matt
2006-01-01
By this time of the year, springtime rituals are blossoming like begonias, and that's true for higher education, too. Students move inexorably toward the end of another year; professors get ready for summer session; and in campus technology departments, CIOs and other decision-makers furiously set their plans to purchase hardware and software for…
Computer technology forecast study for general aviation
NASA Technical Reports Server (NTRS)
Seacord, C. L.; Vaughn, D.
1976-01-01
A multi-year, multi-faceted program is underway to investigate and develop potential improvements in airframes, engines, and avionics for general aviation aircraft. The objective of this study was to assemble information that will allow the government to assess the trends in computer and computer/operator interface technology that may have application to general aviation in the 1980's and beyond. The current state of the art of computer hardware is assessed, technical developments in computer hardware are predicted, and nonaviation large volume users of computer hardware are identified.
Computational System For Rapid CFD Analysis In Engineering
NASA Technical Reports Server (NTRS)
Barson, Steven L.; Ascoli, Edward P.; Decroix, Michelle E.; Sindir, Munir M.
1995-01-01
Computational system comprising modular hardware and software sub-systems developed to accelerate and facilitate use of techniques of computational fluid dynamics (CFD) in engineering environment. Addresses integration of all aspects of CFD analysis process, including definition of hardware surfaces, generation of computational grids, CFD flow solution, and postprocessing. Incorporates interfaces for integration of all hardware and software tools needed to perform complete CFD analysis. Includes tools for efficient definition of flow geometry, generation of computational grids, computation of flows on grids, and postprocessing of flow data. System accepts geometric input from any of three basic sources: computer-aided design (CAD), computer-aided engineering (CAE), or definition by user.
A simple approach to a vision-guided unmanned vehicle
NASA Astrophysics Data System (ADS)
Archibald, Christopher; Millar, Evan; Anderson, Jon D.; Archibald, James K.; Lee, Dah-Jye
2005-10-01
This paper describes the design and implementation of a vision-guided autonomous vehicle that represented BYU in the 2005 Intelligent Ground Vehicle Competition (IGVC), in which autonomous vehicles navigate a course marked with white lines while avoiding obstacles consisting of orange construction barrels, white buckets and potholes. Our project began in the context of a senior capstone course in which multi-disciplinary teams of five students were responsible for the design, construction, and programming of their own robots. Each team received a computer motherboard, a camera, and a small budget for the purchase of additional hardware, including a chassis and motors. The resource constraints resulted in a simple vision-based design that processes the sequence of images from the single camera to determine motor controls. Color segmentation separates white and orange from each image, and then the segmented image is examined using a 10x10 grid system, effectively creating a low-resolution picture for each of the two colors. Depending on its position, each filled grid square influences the selection of an appropriate turn magnitude. Motor commands determined from the white and orange images are then combined to yield the final motion command for each video frame. We describe the complete algorithm and the robot hardware, and we present results that show the overall effectiveness of our control approach.
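The grid-based steering rule lends itself to a compact implementation. The following NumPy sketch is illustrative only (the team's thresholds, weights, and image sizes are not given in the abstract): it downsamples one segmented color mask to a 10x10 occupancy grid and sums position-dependent weights into a signed turn magnitude.

    import numpy as np

    def turn_magnitude(mask, rows=10, cols=10, fill=0.2):
        # mask: boolean image for one segmented color (white or orange);
        # image dimensions are assumed divisible by the grid size
        h, w = mask.shape
        grid = mask.reshape(rows, h // rows, cols, w // cols).mean(axis=(1, 3)) > fill
        col_w = np.linspace(1.0, -1.0, cols)   # obstacles on the left steer right
        row_w = np.linspace(0.2, 1.0, rows)    # nearer (lower) rows weigh more
        return float((grid * row_w[:, None] * col_w[None, :]).sum())

    # the final motion command combines the two color channels per frame:
    # steer = turn_magnitude(white_mask) + turn_magnitude(orange_mask)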
On the use of inexact, pruned hardware in atmospheric modelling
Düben, Peter D.; Joven, Jaume; Lingamneni, Avinash; McNamara, Hugh; De Micheli, Giovanni; Palem, Krishna V.; Palmer, T. N.
2014-01-01
Inexact hardware design, which advocates trading the accuracy of computations in exchange for significant savings in area, power and/or performance of computing hardware, has received increasing prominence in several error-tolerant application domains, particularly those involving perceptual or statistical end-users. In this paper, we evaluate inexact hardware for its applicability in weather and climate modelling. We expand previous studies on inexact techniques, in particular probabilistic pruning, to floating point arithmetic units and derive several simulated set-ups of pruned hardware with reasonable levels of error for applications in atmospheric modelling. The set-up is tested on the Lorenz '96 model, a toy model for atmospheric dynamics, using software emulation for the proposed hardware. The results show that large parts of the computation tolerate the use of pruned hardware blocks without major changes in the quality of short- and long-time diagnostics, such as forecast errors and probability density functions. This could open the door to significant savings in computational cost and to higher resolution simulations with weather and climate models. PMID:24842031
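For readers unfamiliar with the test problem: the Lorenz '96 model evolves N variables by dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F. A minimal software emulation of inexact arithmetic, in the spirit of (but much cruder than) the probabilistic pruning studied in the paper, is to truncate each significand after every update:

    import numpy as np

    N, F, dt = 40, 8.0, 0.01

    def lorenz96_rhs(x):
        return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

    def degrade(x, bits=10):
        # crude stand-in for reduced-precision hardware: keep only
        # `bits` bits of each significand (not true probabilistic pruning)
        m, e = np.frexp(x)
        return np.ldexp(np.round(m * 2.0**bits) / 2.0**bits, e)

    x = F + 0.01 * np.random.randn(N)
    for _ in range(10000):
        x = degrade(x + dt * lorenz96_rhs(x))   # Euler step on "inexact" hardware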
50 CFR 660.15 - Equipment requirements.
Code of Federal Regulations, 2010 CFR
2010-10-01
... receivers, computer hardware for electronic fish ticket software and computer hardware for electronic logbook software. (b) Performance and technical requirements for scales used to weigh catch at sea... ticket software provided by Pacific States Marine Fish Commission are required to meet the hardware and...
ERIC Educational Resources Information Center
Erdogan, Yavuz
2009-01-01
The purpose of this paper is to compare the effects of paper-based and computer-based concept mappings on the computer hardware achievement, computer anxiety and computer attitude of eighth-grade secondary school students. The students were randomly allocated to three groups and were given instruction on computer hardware. The teaching methods used…
Speed challenge: a case for hardware implementation in soft-computing
NASA Technical Reports Server (NTRS)
Daud, T.; Stoica, A.; Duong, T.; Keymeulen, D.; Zebulum, R.; Thomas, T.; Thakoor, A.
2000-01-01
For over a decade, JPL has been actively involved in soft computing research on theory, architecture, applications, and electronics hardware. The driving force in all our research activities, in addition to the potential enabling technology promise, has been creation of a niche that imparts orders of magnitude speed advantage by implementation in parallel processing hardware with algorithms made especially suitable for hardware implementation. We review our work on neural networks, fuzzy logic, and evolvable hardware with selected application examples requiring real time response capabilities.
Estimating the Reliability of the CITAR Computer Courseware Evaluation System.
ERIC Educational Resources Information Center
Micceri, Theodore
In today's complex computer-based teaching (CBT)/computer-assisted instruction market, flashy presentations frequently prove the most important purchasing element, while instructional design and content are secondary to form. Courseware purchasers must base decisions upon either a vendor's presentation or some published evaluator rating.…
How Easy Is It for Adult Educators To Use the Information Superhighway?
ERIC Educational Resources Information Center
Rosen, David J.
In November 1995, an online survey was conducted of 113 adult literacy practitioners who were actively using the Internet. Respondents reported the following difficulties encountered in learning to use the Internet: purchasing and learning to use hardware or software; getting access to a telephone line; getting an Internet account; learning…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-26
... Corporation, a Subsidiary of Spectrum Brands, Formerly Known as a Subsidiary of Stanley Black & Decker... workers of the subject firm. New information shows that on December 17, 2012, Spectrum Brands purchased... Spectrum Brands, formerly known as a Subsidiary of Stanley Black & Decker. Some workers separated from...
Hardware architecture design of image restoration based on time-frequency domain computation
NASA Astrophysics Data System (ADS)
Wen, Bo; Zhang, Jing; Jiao, Zipeng
2013-10-01
Image restoration algorithms based on time-frequency domain computation (TFDC) are mature and widely applied in engineering. To enable high-speed implementation of these algorithms, a TFDC hardware architecture is proposed. First, the main module is designed by analyzing the processing and numerical calculations the algorithms have in common. Then, to improve commonality, an iteration control module is planned for iterative algorithms. In addition, to reduce computational cost and memory requirements, the necessary optimizations are suggested for the time-consuming modules, which include the two-dimensional FFT/IFFT and complex-number calculations. Finally, the TFDC hardware architecture is adopted for the hardware design of a real-time image restoration system. The result proves that the TFDC hardware architecture and its optimizations can be applied to image restoration algorithms based on TFDC, with good algorithm commonality, hardware realizability and high efficiency.
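The paper's specific architecture cannot be reproduced from the abstract, but the class of algorithms it accelerates centers on 2-D FFT/IFFT pairs and complex arithmetic. A representative member of that class is Wiener deconvolution, sketched here in NumPy with an illustrative regularization constant:

    import numpy as np

    def wiener_restore(blurred, psf, k=0.01):
        # restore an image in the frequency domain: one forward 2-D FFT per
        # input, a complex-valued pointwise filter, and one inverse FFT
        H = np.fft.fft2(psf, s=blurred.shape)
        G = np.fft.fft2(blurred)
        F_hat = np.conj(H) * G / (np.abs(H) ** 2 + k)
        return np.real(np.fft.ifft2(F_hat))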
Hybrid Architectures for Evolutionary Computing Algorithms
2008-01-01
other EC algorithms to FPGA core. Genetic algorithm hardware references: S. Scott, A. Samal, and S. Seth, "HGA: A Hardware-Based Genetic Algorithm," Proceedings of the 1995 ACM Third...; ...on Parallel and Distributed Processing (IPPS/SPDP), pp. 316-320, IEEE Computer Society, 1998; [12] Scott, S. D., Samal, A., and...
Electronic processing and control system with programmable hardware
NASA Technical Reports Server (NTRS)
Alkalaj, Leon (Inventor); Fang, Wai-Chi (Inventor); Newell, Michael A. (Inventor)
1998-01-01
A computer system with reprogrammable hardware that allows hardware resources to be allocated dynamically to different functions and adapted to different processors and different operating platforms. All hardware resources are physically partitioned into system-user hardware and application-user hardware depending on the specific operation requirements. A reprogrammable interface preferably interconnects the system-user hardware and application-user hardware.
Hardware packet pacing using a DMA in a parallel computer
Chen, Dong; Heidelberger, Phillip; Vranas, Pavlos
2013-08-13
Method and system for hardware packet pacing using a direct memory access controller in a parallel computer which, in one aspect, keeps track of a total number of bytes put on the network as a result of a remote get operation, using a hardware token counter.
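The patent text above is terse. The sketch below is a hypothetical software model (not the patented hardware) of pacing with a byte-token counter: a packet resulting from a remote get may enter the network only while the outstanding-byte budget holds.

    class PacingTokenCounter:
        def __init__(self, max_outstanding_bytes):
            self.tokens = max_outstanding_bytes

        def try_inject(self, nbytes):
            # inject a packet only if it fits the outstanding-byte budget
            if nbytes <= self.tokens:
                self.tokens -= nbytes
                return True
            return False                     # the DMA retries later

        def on_delivered(self, nbytes):
            self.tokens += nbytes            # delivered bytes free their tokens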
A Survey of Display Hardware and Software.
ERIC Educational Resources Information Center
Poore, Jesse H., Jr.; And Others
Reported are two papers which deal with the fundamentals of display hardware and software in computer systems. The first report presents the basic principles of display hardware in terms of image generation from buffers presumed to be loaded and controlled by a digital computer. The concepts surrounding the electrostatic tube, the electromagnetic…
NASA Astrophysics Data System (ADS)
Protheroe, Mark; Sloggett, David R.; Sieber, Alois J.
1994-12-01
Traditionally, the production of high-quality Synthetic Aperture Radar (SAR) imagery has been an area where a potential user would have to expend large amounts of money either on the bespoke development of a processing chain dedicated to his requirements or on the purchase of a dedicated hardware platform adapted using accelerator boards and enhanced memory management. Whichever option the user adopted, there were limitations based on the desire for a realistic throughput in data load and time. The user had a choice, made early in the purchase, between a system that adopted innovative algorithmic manipulation to limit the processing time, and the purchase of expensive hardware. The former limits the quality of the product, while the latter excludes the user from any visibility into the processing chain. Clearly there was a need for a SAR processing architecture that gave the user a choice in the methodology to be adopted for a particular processing sequence, allowing him to decide on either a quick (lower-quality) product or a slower, detailed (higher-quality) product, without having to change the algorithmic base of his processor or the hardware platform. The European Commission, through the Advanced Techniques unit of the Joint Research Centre (JRC) Institute for Remote Sensing at Ispra in Italy, realizing the limitations of current processing abilities, initiated its own program to build airborne SAR and Electro-Optical (EO) sensor systems. This program is called the European Airborne Remote Sensing Capabilities (EARSEC) program. This paper describes the processing system developed for the airborne SAR sensor system. The paper considers the requirements for the system and the design of the EARSEC Airborne SAR Processing System. It highlights the development of an open SAR processing architecture where users have full access to intermediate products that arise from each of the major processing stages. It also describes the main processing stages in the overall architecture and illustrates the results of each of the key stages in the processor.
NASA Technical Reports Server (NTRS)
Schulte, Erin
2017-01-01
As augmented and virtual reality grows in popularity, and more researchers focus on its development, other fields of technology have grown in the hopes of integrating with the up-and-coming hardware currently on the market. Namely, there has been a focus on how to make an intuitive, hands-free human-computer interaction (HCI) utilizing AR and VR that allows users to control their technology with little to no physical interaction with hardware. Computer vision, which is utilized in devices such as the Microsoft Kinect, webcams and other similar hardware has shown potential in assisting with the development of a HCI system that requires next to no human interaction with computing hardware and software. Object and facial recognition are two subsets of computer vision, both of which can be applied to HCI systems in the fields of medicine, security, industrial development and other similar areas.
ERIC Educational Resources Information Center
Wodarz, Nan
1997-01-01
Explores how to avoid common pitfalls when schools purchase computer equipment. Purchasing tips are provided in the areas of choosing multiple platforms, buying the cheapest model available, choosing a proprietary design, falling for untested technology, purchasing systems that are not upgradable, ignoring extended warranties, and failing to plan…
PREPARING FOR EXASCALE: ORNL Leadership Computing Application Requirements and Strategy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Joubert, Wayne; Kothe, Douglas B; Nam, Hai Ah
2009-12-01
In 2009 the Oak Ridge Leadership Computing Facility (OLCF), a U.S. Department of Energy (DOE) facility at the Oak Ridge National Laboratory (ORNL) National Center for Computational Sciences (NCCS), elicited petascale computational science requirements from leading computational scientists in the international science community. This effort targeted science teams whose projects received large computer allocation awards on OLCF systems. A clear finding of this process was that in order to reach their science goals over the next several years, multiple projects will require computational resources in excess of an order of magnitude more powerful than those currently available. Additionally, for the longer term, next-generation science will require computing platforms of exascale capability in order to reach DOE science objectives over the next decade. It is generally recognized that achieving exascale in the proposed time frame will require disruptive changes in computer hardware and software. Processor hardware will necessarily become heterogeneous and will include accelerator technologies. Software must undergo the concomitant changes needed to extract the available performance from this heterogeneous hardware. This disruption promises to be substantial, not unlike the change to the message-passing paradigm in the computational science community over 20 years ago. Since technological disruptions take time to assimilate, we must aggressively embark on this course of change now, to ensure that science applications and their underlying programming models are mature and ready when exascale computing arrives. This includes initiation of application readiness efforts to adapt existing codes to heterogeneous architectures, support of relevant software tools, and procurement of next-generation hardware testbeds for porting and testing codes. The 2009 OLCF requirements process identified numerous actions necessary to meet this challenge: (1) Hardware capabilities must be advanced on multiple fronts, including peak flops, node memory capacity, interconnect latency, interconnect bandwidth, and memory bandwidth. (2) Effective parallel programming interfaces must be developed to exploit the power of emerging hardware. (3) Science application teams must now begin to adapt and reformulate application codes to the new hardware and software, typified by hierarchical and disparate layers of compute, memory and concurrency. (4) Algorithm research must be realigned to exploit this hierarchy. (5) When possible, mathematical libraries must be used to encapsulate the required operations in an efficient and useful way. (6) Software tools must be developed to make the new hardware more usable. (7) Science application software must be improved to cope with the increasing complexity of computing systems. (8) Data management efforts must be readied for the larger quantities of data generated by larger, more accurate science models. Requirements elicitation, analysis, validation, and management comprise a difficult and inexact process, particularly in periods of technological change. Nonetheless, the OLCF requirements modeling process is becoming increasingly quantitative and actionable, as the process becomes more developed and mature, and this year's process has identified clear and concrete steps to be taken.
This report discloses (1) the fundamental science case driving the need for the next generation of computer hardware, (2) application usage trends that illustrate the science need, (3) application performance characteristics that drive the need for increased hardware capabilities, (4) resource and process requirements that make the development and deployment of science applications on next-generation hardware successful, and (5) summary recommendations for the required next steps within the computer and computational science communities.
From Here to Technology. How To Fund Hardware, Software, and More.
ERIC Educational Resources Information Center
Hunter, Barbara M.
Faced with shrinking state and local tax support and an increased demand for K-12 educational reform, school leaders must use creative means to find money to improve and deliver instruction and services to their schools. This handbook describes innovative strategies that school leaders have used to find scarce dollars for purchasing educational…
Networking K-12 Schools: Architecture Models and Evaluation of Costs and Benefits.
ERIC Educational Resources Information Center
Rothstein, Russell Isaac
This thesis examines the cost and benefits of communication networks in K-12 schools using cost analysis of five technology models with increasing levels of connectivity. Data indicate that the cost of the network hardware is only a small fraction of the overall networking costs. PC purchases, initial training, and retrofitting are the largest…
A NASA family of minicomputer systems, Appendix A
NASA Technical Reports Server (NTRS)
Deregt, M. P.; Dulfer, J. E.
1972-01-01
This investigation was undertaken to establish sufficient specifications, or standards, for minicomputer hardware and software to provide NASA with realizable economies in quantity purchases, interchangeability of minicomputers, software, storage and peripherals, and a uniformly high quality. The standards will define minicomputer system component types, each specialized to its intended NASA application, in as many levels of capacity as required.
Library iTour: Introducing the iPod Generation to the Academic Library
ERIC Educational Resources Information Center
Cairns, Virginia; Dean, Toni C.
2009-01-01
Purpose: For many years, the Lupton Library offered a traditional library introduction class to first year students participating in the Freshman Seminar Program at the University of Tennessee at Chattanooga. In 2007, the library applied for and received a campus grant to purchase thirty iPod Touches, along with accompanying hardware and software.…
BlueSky Cloud - rapid infrastructure capacity using Amazon's Cloud for wildfire emergency response
NASA Astrophysics Data System (ADS)
Haderman, M.; Larkin, N. K.; Beach, M.; Cavallaro, A. M.; Stilley, J. C.; DeWinter, J. L.; Craig, K. J.; Raffuse, S. M.
2013-12-01
During peak fire season in the United States, many large wildfires often burn simultaneously across the country. Smoke from these fires can produce air quality emergencies. It is vital that incident commanders, air quality agencies, and public health officials have smoke impact information at their fingertips for evaluating where fires and smoke are and where the smoke will go next. To address the need for this kind of information, the U.S. Forest Service AirFire Team created the BlueSky Framework, a modeling system that predicts concentrations of particle pollution from wildfires. During emergency response, decision makers use BlueSky predictions to make public outreach and evacuation decisions. The models used in BlueSky predictions are computationally intensive, and the peak fire season requires significantly more computer resources than off-peak times. Purchasing enough hardware to run the number of BlueSky Framework runs that are needed during fire season is expensive and leaves idle servers running the majority of the year. The AirFire Team and STI developed BlueSky Cloud to take advantage of Amazon's virtual servers hosted in the cloud. With BlueSky Cloud, as demand increases and decreases, servers can be easily spun up and spun down at a minimal cost. Moving standard BlueSky Framework runs into the Amazon Cloud made it possible for the AirFire Team to rapidly increase the number of BlueSky Framework instances that could be run simultaneously without the costs associated with purchasing and managing servers. In this presentation, we provide an overview of the features of BlueSky Cloud, describe how the system uses Amazon Cloud, and discuss the costs and benefits of moving from privately hosted servers to a cloud-based infrastructure.
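The AirFire/STI implementation is not shown in this abstract. As a sketch of the elasticity pattern it describes, the following Python uses boto3 against hypothetical resource names (the AMI ID, region, and instance type are placeholders) to spin worker servers up for a smoke episode and back down afterwards:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-west-2")   # placeholder region

    def scale_up(n_workers, ami_id="ami-00000000", instance_type="c4.2xlarge"):
        # launch n workers from an image pre-built with the modeling stack
        resp = ec2.run_instances(ImageId=ami_id, MinCount=n_workers,
                                 MaxCount=n_workers, InstanceType=instance_type)
        return [inst["InstanceId"] for inst in resp["Instances"]]

    def scale_down(instance_ids):
        # stop paying for capacity once the day's runs are finished
        ec2.terminate_instances(InstanceIds=instance_ids)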
Locating hardware faults in a data communications network of a parallel computer
Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.
2010-01-12
Locating hardware faults in a data communications network of a parallel computer. Such a parallel computer includes a plurality of compute nodes and a data communications network that couples the compute nodes for data communications and organizes the compute nodes as a tree. Locating hardware faults includes identifying a next compute node as a parent node and a root of a parent test tree, identifying for each child compute node of the parent node a child test tree having the child compute node as root, running a same test suite on the parent test tree and each child test tree, and identifying the parent compute node as having a defective link connected from the parent compute node to a child compute node if the test suite fails on the parent test tree and succeeds on all the child test trees.
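Read as pseudocode, the patented procedure is a recursion over the tree. The sketch below assumes two caller-supplied helpers with hypothetical names: run_suite(root), which runs the same test suite on the tree rooted at root and reports success, and children(node).

    def locate_defective_links(node, run_suite, children):
        faulty_parents = []
        kids = children(node)
        if kids and not run_suite(node) and all(run_suite(c) for c in kids):
            # the parent tree fails while every child tree passes, so a
            # link from this parent to one of its children is defective
            faulty_parents.append(node)
        for c in kids:
            faulty_parents.extend(locate_defective_links(c, run_suite, children))
        return faulty_parents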
ERIC Educational Resources Information Center
Hansen, David L.; Morgan, Robert L.
2008-01-01
This research evaluated effects of a multi-media computer-based instruction (CBI) program designed to teach grocery store purchasing skills to three high-school students with intellectual disabilities. A multiple baseline design across participants used measures of computer performance mastery and grocery store probes to evaluate the CBI. All…
Commonsense System Pricing; Or, How Much Will that $1,200 Computer Really Cost?
ERIC Educational Resources Information Center
Crawford, Walt
1984-01-01
Three methods employed to price and sell computer equipment are discussed: computer pricing, hardware pricing, system pricing (system includes complete computer and support hardware system and relatively complete software package). Advantages of system pricing are detailed, the author's system is described, and 10 systems currently available are…
OS friendly microprocessor architecture: Hardware level computer security
NASA Astrophysics Data System (ADS)
Jungwirth, Patrick; La Fratta, Patrick
2016-05-01
We present an introduction to the patented OS Friendly Microprocessor Architecture (OSFA) and hardware-level computer security. Conventional microprocessors have not tried to balance hardware performance and OS performance at the same time, and they have depended on the Operating System for computer security and information assurance. The goal of the OS Friendly Architecture is to provide a high-performance and secure microprocessor and OS system. We are interested in cyber security, information technology (IT), and SCADA control professionals reviewing the hardware-level security features. The OS Friendly Architecture is a switched set of cache memory banks in a pipeline configuration. For light-weight threads, the memory pipeline configuration provides near-instantaneous context switching times. The pipelining and parallelism provided by the cache memory pipeline allow background cache read and write operations while the microprocessor's execution pipeline is running instructions. The cache bank selection controllers provide arbitration to prevent the memory pipeline and the microprocessor's execution pipeline from accessing the same cache bank at the same time. This separation allows cache memory pages to transfer to and from level 1 (L1) caching while the microprocessor pipeline is executing instructions. Computer security operations are implemented in hardware. By extending Unix file permission bits to each cache memory bank and memory address, the OSFA provides hardware-level computer security.
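As a toy model of the last idea, extending Unix-style permission bits to each cache bank, the Python below checks an access against owner/other read-write-execute bits. In the OSFA this check happens in hardware; the class here is purely illustrative.

    RWX = {"r": 4, "w": 2, "x": 1}          # classic Unix permission bits

    class CacheBank:
        def __init__(self, owner_uid, mode=0o640):
            self.owner_uid, self.mode = owner_uid, mode

        def allowed(self, uid, op):
            # owner bits for the owning process, "other" bits for the rest
            bits = (self.mode >> 6) & 0o7 if uid == self.owner_uid else self.mode & 0o7
            return bool(bits & RWX[op])

    # e.g. CacheBank(owner_uid=1000).allowed(1001, "w") -> False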
Spacelab experiment computer study. Volume 1: Executive summary (presentation)
NASA Technical Reports Server (NTRS)
Lewis, J. L.; Hodges, B. C.; Christy, J. O.
1976-01-01
A quantitative cost for various Spacelab flight hardware configurations is provided, along with varied software development options. A cost analysis of Spacelab computer hardware and software is presented; the cost study is based on utilization of a central experiment computer with optional auxiliary equipment. Ground rules and assumptions used in deriving the costing methods for all options in the Spacelab experiment study are presented and analyzed, and the options, along with their cost considerations, are discussed. It is concluded that Spacelab program cost for software development and maintenance is independent of experimental hardware and software options, that the distributed standard computer concept simplifies software integration without a significant increase in cost, and that decisions on flight computer hardware configurations should not be made until payload selection for a given mission and a detailed analysis of the mission requirements are completed.
Parallel Rendering of Large Time-Varying Volume Data
NASA Technical Reports Server (NTRS)
Garbutt, Alexander E.
2005-01-01
Interactive visualization of large time-varying 3D volume datasets has been and still is a great challenge to the modern computational world. It stretches the limits of the memory capacity, the disk space, the network bandwidth and the CPU speed of a conventional computer. In this SURF project, we propose to develop a parallel volume rendering program on SGI's Prism, a cluster computer equipped with state-of-the-art graphics hardware. The proposed program combines both parallel computing and hardware rendering in order to achieve an interactive rendering rate. We use 3D texture mapping and a hardware shader to implement 3D volume rendering on each workstation. We use SGI's VisServer to enable remote rendering using Prism's graphics hardware. And last, we will integrate this new program with ParVox, a parallel distributed visualization system developed at JPL. At the end of the project, we will demonstrate remote interactive visualization using this new hardware volume renderer on JPL's Prism system using a time-varying dataset from selected JPL applications.
Distributed Hybrid Information and Plan Consensus HIPC for Semi-autonomous UAV Teams
2015-09-18
finalized. To do all of the onboard computations we are using Raspberry Pi B+'s (hardware shown in Fig. 16). These computers are used to do all... [Figure captions: Fig. 16, Raspberry Pi hardware; Fig. 17, Raspberry Pi hardware with case and DigiMesh XBee; Fig. 18, team of 11 Raspberry Pi powered agents with DigiMesh XBee communication hardware; Fig. 19, Raspberry Pi network in real...]
Simulation training tools for nonlethal weapons using gaming environments
NASA Astrophysics Data System (ADS)
Donne, Alexsana; Eagan, Justin; Tse, Gabriel; Vanderslice, Tom; Woods, Jerry
2006-05-01
Modern simulation techniques have a growing role for evaluating new technologies and for developing cost-effective training programs. A mission simulator facilitates the productive exchange of ideas by demonstration of concepts through compellingly realistic computer simulation. Revolutionary advances in 3D simulation technology have made it possible for desktop computers to process strikingly realistic and complex interactions with results depicted in real time. Computer games now allow for multiple real human players and "artificially intelligent" (AI) simulated robots to play together. Advances in computer processing power have compensated for the inherent intensive calculations required for complex simulation scenarios. The main components of the leading game engines have been released for user modifications, enabling game enthusiasts and amateur programmers to advance the state of the art in AI and computer simulation technologies. It is now possible to simulate sophisticated and realistic conflict situations in order to evaluate the impact of non-lethal devices as well as conflict resolution procedures using such devices. Simulations can reduce training costs as end users learn what a device does and doesn't do prior to use, understand responses to the device prior to deployment, determine if the device is appropriate for their situational responses, and train with new devices and techniques before purchasing hardware. This paper will present the status of SARA's mission simulation development activities, based on the Half-Life game engine, for the purpose of evaluating the latest non-lethal weapon devices and developing training tools for such devices.
NASA Astrophysics Data System (ADS)
Cork, Chris; Lugg, Robert; Chacko, Manoj; Levi, Shimon
2005-06-01
With the exponential increase in output database size due to the aggressive optical proximity correction (OPC) and resolution enhancement techniques (RET) required for deep sub-wavelength process nodes, the CPU time required for mask tape-out continues to increase significantly. For integrated device manufacturers (IDMs), this can impact the time-to-market for their products, where even a few days' delay could have a huge commercial impact and loss of market window opportunity. For foundries, a shorter turnaround time provides a competitive advantage in their demanding market: being too slow could mean customers looking elsewhere for these services, while a fast turnaround may even command a higher price. With FAB turnaround of a mature, plain-vanilla CMOS process at around 20-30 days, a delay of several days in mask tape-out would contribute a significant fraction to the total time to deliver prototypes. Unlike silicon processing, mask tape-out time can be decreased by simply purchasing extra computing resources and software licenses. Mask tape-out groups are taking advantage of the ever-decreasing hardware cost and increasing power of commodity processors. The significant distributability inherent in some commercial mask synthesis software can be leveraged to address this critical business issue. Different implementations have different fractions of the code that cannot be parallelized, and this affects the efficiency with which they scale, as described by Amdahl's law. Very few are efficient enough to allow the effective use of 1000s of processors, enabling run times to drop from days to only minutes. What follows is a cost-aware methodology to quantify the scalability of this class of software, and thus act as a guide to estimating the optimal investment in terms of hardware and software licenses.
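Amdahl's law makes the scalability point concrete: with parallelizable fraction p, the speedup on n processors is 1/((1 - p) + p/n). A few lines of Python (illustrative numbers, not measurements from the paper) show why only codes with very small serial fractions profit from thousands of CPUs:

    def amdahl_speedup(p, n):
        # p: parallelizable fraction of the run; n: CPUs (and licenses)
        return 1.0 / ((1.0 - p) + p / n)

    for n in (10, 100, 1000):
        print(n, round(amdahl_speedup(0.999, n), 1))
    # -> 9.9, 91.0, 500.3: even 0.1% serial code halves the
    #    ideal speedup at 1000 processors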
Computer-aided design and computer science technology
NASA Technical Reports Server (NTRS)
Fulton, R. E.; Voigt, S. J.
1976-01-01
A description is presented of computer-aided design requirements and the resulting computer science advances needed to support aerospace design. The aerospace design environment is examined, taking into account problems of data handling and aspects of computer hardware and software. The interactive terminal is normally the primary interface between the computer system and the engineering designer. Attention is given to user aids, interactive design, interactive computations, the characteristics of design information, data management requirements, hardware advancements, and computer science developments.
Stream-based Hebbian eigenfilter for real-time neuronal spike discrimination
2012-01-01
Background: Principal component analysis (PCA) has been widely employed for automatic neuronal spike sorting. Calculating principal components (PCs) is computationally expensive and requires complex numerical operations and large memory resources, so substantial hardware resources are needed for hardware implementations of PCA. The generalized Hebbian algorithm (GHA) was proposed for calculating the PCs of neuronal spikes in our previous work, which eliminates the need for the computationally expensive covariance analysis and eigenvalue decomposition of conventional PCA algorithms. However, large memory resources are still inherently required for storing a large volume of aligned spikes for training PCs. The large memory consumes substantial hardware resources and contributes significant power dissipation, which makes GHA difficult to implement in portable or implantable multi-channel recording micro-systems. Method: In this paper, we present a new algorithm for PCA-based spike sorting based on GHA, namely the stream-based Hebbian eigenfilter, which eliminates the inherent memory requirements of GHA while keeping the accuracy of spike sorting by utilizing the pseudo-stationarity of neuronal spikes. Because of the reduction of large hardware storage requirements, the proposed algorithm can lead to ultra-low hardware resources and power consumption in hardware implementations, which is critical for future multi-channel micro-systems. Both clinical and synthetic neural recording data sets were employed to evaluate the accuracy of the stream-based Hebbian eigenfilter. The performance of spike sorting using the stream-based eigenfilter and the computational complexity of the eigenfilter were rigorously evaluated and compared with conventional PCA algorithms. Field-programmable gate arrays (FPGAs) were employed to implement the proposed algorithm, evaluate the hardware implementations and demonstrate the reduction in both power consumption and hardware memories achieved by the streaming computing. Results and discussion: Results demonstrate that the stream-based eigenfilter can achieve the same accuracy and is 10 times more computationally efficient when compared with conventional PCA algorithms. Hardware evaluations show that 90.3% logic resources, 95.1% power consumption and 86.8% computing latency can be reduced by the stream-based eigenfilter when compared with PCA hardware. By utilizing the streaming method, 92% memory resources and 67% power consumption can be saved when compared with the direct implementation of GHA. Conclusion: The stream-based Hebbian eigenfilter presents a novel approach to enable real-time spike sorting with reduced computational complexity and hardware costs. This new design can be further utilized for multi-channel neuro-physiological experiments or chronic implants. PMID:22490725
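The streaming character of GHA is easy to see in code: each update touches only the current spike, with no covariance matrix, eigendecomposition, or stored spike buffer. A minimal NumPy sketch of Sanger's generalized Hebbian rule (learning rate and sizes are illustrative):

    import numpy as np

    def gha_update(W, x, lr=1e-3):
        # one streaming update of Sanger's rule:
        #   dW = lr * (y x^T - lower_triangular(y y^T) W),  with y = W x
        y = W @ x
        W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
        return W

    W = 0.01 * np.random.randn(3, 64)         # 3 PCs of 64-sample waveforms
    for spike in np.random.randn(10000, 64):  # stand-in for aligned spikes
        W = gha_update(W, spike)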
Affordable Emerging Computer Hardware for Neuromorphic Computing Applications
2011-09-01
...speedup over software [3, 4]. Table 1 shows a comparison of the computing performance, communication performance, power consumption... time is probably 5 frames per second, corresponding to 5 saccades. III. RESULTS AND DISCUSSION: The use of IBM Cell-BE technology (Sony PlayStation
Trends in computer hardware and software.
Frankenfeld, F M
1993-04-01
Previously identified and current trends in the development of computer systems and in the use of computers for health care applications are reviewed. Trends identified in a 1982 article were increasing miniaturization and archival ability, increasing software costs, increasing software independence, user empowerment through new software technologies, shorter computer-system life cycles, and more rapid development and support of pharmaceutical services. Most of these trends continue today. Current trends in hardware and software include the increasing use of reduced instruction-set computing, migration to the UNIX operating system, the development of large software libraries, microprocessor-based smart terminals that allow remote validation of data, speech synthesis and recognition, application generators, fourth-generation languages, computer-aided software engineering, object-oriented technologies, and artificial intelligence. Current trends specific to pharmacy and hospitals are the withdrawal of vendors of hospital information systems from the pharmacy market, improved linkage of information systems within hospitals, and increased regulation by government. The computer industry and its products continue to undergo dynamic change. Software development continues to lag behind hardware, and its high cost is offsetting the savings provided by hardware.
Minho Won; Albalawi, Hassan; Xin Li; Thomas, Donald E
2014-01-01
This paper describes a low-power hardware implementation for movement decoding in a brain-computer interface. Our proposed hardware design is facilitated by two novel ideas: (i) an efficient feature extraction method based on a reduced-resolution discrete cosine transform (DCT), and (ii) a new dual look-up-table hardware architecture that performs the discrete cosine transform without explicit multiplication. The proposed hardware implementation has been validated for movement decoding of electrocorticography (ECoG) signals using a Xilinx FPGA Zynq-7000 board. It achieves more than 56× energy reduction over a reference design using band-pass filters for feature extraction.
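The reduced-resolution DCT idea can be summarized in a few lines: transform each analysis window and keep only the first few low-frequency coefficients as features. Sizes and the kept-coefficient count below are illustrative; the fixed-point, multiplierless look-up-table version in the paper is a hardware refinement of this same computation.

    import numpy as np
    from scipy.fft import dct

    def dct_features(windows, keep=8):
        # windows: (n_channels, n_samples) ECoG analysis windows;
        # low-order DCT coefficients act as compact band-energy features
        coeffs = dct(windows, axis=-1, norm="ortho")
        return coeffs[..., :keep]

    feats = dct_features(np.random.randn(16, 256))   # -> shape (16, 8)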
Projected Defense Purchases: Detail by Industry and State Calendar Years 2000 Through 2005
2000-07-01
Industry/SIC table fragments (industry number, industry name, SIC codes): ... 2087; 14 Other Food Products; 44 Vegetable Oil Mills 2074, 2075, 2076; 45 Animal and Marine Fats and Oils 2077; 46 Shortening, Table Oils, and Edible Fats ...; Stampings 3465; 155 Crowns and Closures 3466; 156 Metal Stampings, n.e.c. 3469; 157 Cutlery and Hand Tools 3421, 3423, 3425; 158 Hardware, n.e.c. 3429; 159 Metal ...
Comparison of the LEGO Mindstorms NXT and EV3 Robotics Education Platforms
ERIC Educational Resources Information Center
Sherrard, Ann; Rhodes, Amy
2014-01-01
The release of the latest LEGO Mindstorms EV3 robotics platform in September 2013 has provided a dilemma for many youth robotics leaders. There is a need to understand the differences in the Mindstorms NXT and EV3 in order to make future robotics purchases. In this article the differences are identified regarding software, hardware, sensors, the…
Noonan, P
1991-01-01
As always, you'll have to fight for the dollars to buy systems, competing with departments who produce revenue. Financial managers and hospital boards respond most favorably to a good return on investment (ROI) presentation. Join forces with your software vendor of choice and give your board the best ROI argument they'll hear this year. Keep two things in mind: 1) today's innovations, particularly in reducing inventory, interfacing and lower hardware costs will help you make your case and 2) make sure the system is growing consistent with the industry to ensure you won't be asking for a similar purchase three years from now.
ERIC Educational Resources Information Center
Rourke, Martha; Rourke, Patrick
1974-01-01
The school district business manager can make sound, cost-conscious decisions in the purchase of computer equipment by developing a list of cost-justified applications for automation, considering the software, writing performance specifications for bidding or negotiating a contract, and choosing the vendor wisely prior to the purchase; and by…
Real-time computing platform for spiking neurons (RT-spike).
Ros, Eduardo; Ortigosa, Eva M; Agís, Rodrigo; Carrillo, Richard; Arnold, Michael
2006-07-01
A computing platform is described for simulating arbitrary networks of spiking neurons in real time. A hybrid computing scheme is adopted that uses both software and hardware components to manage the tradeoff between flexibility and computational power; the neuron model is implemented in hardware and the network model and the learning are implemented in software. The incremental transition of the software components into hardware is supported. We focus on a spike response model (SRM) for a neuron where the synapses are modeled as input-driven conductances. The temporal dynamics of the synaptic integration process are modeled with a synaptic time constant that results in a gradual injection of charge. This type of model is computationally expensive and is not easily amenable to existing software-based event-driven approaches. As an alternative we have designed an efficient time-based computing architecture in hardware, where the different stages of the neuron model are processed in parallel. Further improvements occur by computing multiple neurons in parallel using multiple processing units. This design is tested using reconfigurable hardware and its scalability and performance evaluated. Our overall goal is to investigate biologically realistic models for the real-time control of robots operating within closed action-perception loops, and so we evaluate the performance of the system on simulating a model of the cerebellum where the emulation of the temporal dynamics of the synaptic integration process is important.
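The computationally expensive part singled out above, input-driven conductances with a synaptic time constant, reduces to a per-step state update like the following (parameter values are illustrative, not the paper's):

    import numpy as np

    def step(v, g, spikes_in, dt=1e-4, tau_syn=5e-3, tau_m=20e-3,
             w=5e-9, c_m=200e-12, e_syn=0.0, e_l=-70e-3):
        # the conductance decays with tau_syn, so each input spike injects
        # its charge gradually rather than as an instantaneous jump
        g = g * np.exp(-dt / tau_syn) + w * spikes_in
        i_syn = g * (e_syn - v)               # input-driven synaptic current
        v = v + dt * (-(v - e_l) / tau_m + i_syn / c_m)
        return v, g

Because this update must run for every neuron and synapse at every time step, it maps naturally onto the parallel hardware stages the paper describes.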
NASA Technical Reports Server (NTRS)
Keltner, D. J.
1975-01-01
The stowage list and hardware tracking system, a computer-based information management system used in support of the space shuttle orbiter stowage configuration and Johnson Space Center hardware tracking, is described. The input, processing, and output requirements that serve as a baseline for system development are defined.
ERIC Educational Resources Information Center
Mechling, Linda C.; Pridgen, Leslie S.; Cronin, Beth A.
2005-01-01
Computer-based video instruction (CBVI) was used to teach verbal responses to questions presented by cashiers and purchasing skills in fast food restaurants. A multiple probe design across participants was used to evaluate the effectiveness of CBVI. Instruction occurred through simulations of three fast food restaurants on the computer using video…
Efficient Phase Unwrapping Architecture for Digital Holographic Microscopy
Hwang, Wen-Jyi; Cheng, Shih-Chang; Cheng, Chau-Jern
2011-01-01
This paper presents a novel phase unwrapping architecture for accelerating the computational speed of digital holographic microscopy (DHM). A fast Fourier transform (FFT) based phase unwrapping algorithm providing a minimum squared error solution is adopted for hardware implementation because of its simplicity and robustness to noise. The proposed architecture is realized in a pipeline fashion to maximize throughput of the computation. Moreover, the number of hardware multipliers and dividers are minimized to reduce the hardware costs. The proposed architecture is used as a custom user logic in a system on programmable chip (SOPC) for physical performance measurement. Experimental results reveal that the proposed architecture is effective for expediting the computational speed while consuming low hardware resources for designing an embedded DHM system. PMID:22163688
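The minimum-squared-error algorithm adopted in the paper belongs to the Ghiglia-Romero family: wrap the phase gradients, take their divergence, and invert the discrete Laplacian with a fast cosine transform. Below is a compact NumPy/SciPy sketch of that software baseline; the paper's contribution is the pipelined hardware, which this does not capture.

    import numpy as np
    from scipy.fft import dctn, idctn

    def wrap(a):
        return (a + np.pi) % (2 * np.pi) - np.pi

    def unwrap_lsq(psi):
        M, N = psi.shape
        gx = np.zeros_like(psi); gy = np.zeros_like(psi)
        gx[:, :-1] = wrap(np.diff(psi, axis=1))   # wrapped phase gradients
        gy[:-1, :] = wrap(np.diff(psi, axis=0))
        rho = np.diff(gx, axis=1, prepend=0) + np.diff(gy, axis=0, prepend=0)
        d = dctn(rho, norm="ortho")               # Poisson solve by DCT
        i = np.arange(M)[:, None]; j = np.arange(N)[None, :]
        denom = 2 * (np.cos(np.pi * i / M) + np.cos(np.pi * j / N) - 2)
        d[0, 0] = 0.0; denom[0, 0] = 1.0          # phase is defined up to a constant
        return idctn(d / denom, norm="ortho")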
Development of simulation computer complex specification
NASA Technical Reports Server (NTRS)
1973-01-01
The Training Simulation Computer Complex Study was one of three studies contracted in support of preparations for procurement of a shuttle mission simulator for shuttle crew training. The subject study was concerned with definition of the software loads to be imposed on the computer complex to be associated with the shuttle mission simulator and the development of procurement specifications based on the resulting computer requirements. These procurement specifications cover the computer hardware and system software as well as the data conversion equipment required to interface the computer to the simulator hardware. The development of the necessary hardware and software specifications required the execution of a number of related tasks which included, (1) simulation software sizing, (2) computer requirements definition, (3) data conversion equipment requirements definition, (4) system software requirements definition, (5) a simulation management plan, (6) a background survey, and (7) preparation of the specifications.
The Administrative Use of Microcomputers. Technical Report.
ERIC Educational Resources Information Center
Alabama Univ., University. Coll. of Education.
Citing the growing interest in using microcomputers as an aid in educational administration, this report discusses factors that must be considered when purchasing a computer and describes an actual case of computer implementation for administrative purposes. The first steps in the purchasing process are to assess and to prioritize the…
Distributed computing environments for future space control systems
NASA Technical Reports Server (NTRS)
Viallefont, Pierre
1993-01-01
The aim of this paper is to present the results of a CNES research project on distributed computing systems. The purpose of this research was to study the impact of the use of new computer technologies in the design and development of future space applications. The first part of this study was a state-of-the-art review of distributed computing systems. One of the interesting ideas arising from this review is the concept of a 'virtual computer' allowing the distributed hardware architecture to be hidden from a software application. The 'virtual computer' can improve system performance by adapting the best architecture (addition of computers) to the software application without having to modify its source code. This concept can also decrease the cost and obsolescence of the hardware architecture. In order to verify the feasibility of the 'virtual computer' concept, a prototype representative of a distributed space application is being developed independently of the hardware architecture.
76 FR 9349 - Jim Woodruff Project
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-17
... month. Southeastern would compute its purchased power obligation for each delivery point monthly... rates to include a pass-through of purchased power expenses. The capacity and energy charges to preference customers can be reduced because purchased power expenses will be recovered in a separate, pass...
The Infrared Automatic Mass Screening (IRAMS) System For Printed Circuit Board Fault Detection
NASA Astrophysics Data System (ADS)
Hugo, Perry W.
1987-05-01
Office of the Program Manager for TMDE (OPM TMDE) has initiated a program to develop techniques for evaluating the performance of printed circuit boards (PCB's) using infrared thermal imaging. It is OPM TMDE's expectation that the standard thermal profile (STP) will become the basis for the future rapid automatic detection and isolation of gross failure mechanisms on units under test (UUT's). To accomplish this, OPM TMDE has purchased two Infrared Automatic Mass Screening (IRAMS) systems which are scheduled for delivery in 1987. The IRAMS system combines a high resolution infrared thermal imager with a test bench and diagnostic computer hardware and software. Its purpose is to rapidly and automatically compare the thermal profiles of a UUT with the STP of that unit, recalled from memory, in order to detect thermally responsive failure mechanisms in PCB's. This paper will review the IRAMS performance requirements, outline the plan for implementing the two systems and report on progress to date.
Oppenheimer, David H.
2000-01-01
In 1999 the Northern California Seismic Network (NCSN) purchased a Libra satellite seismograph system from Nanometrics, Inc to assess whether this technology was a cost-effective and robust replacement for their analog microwave system. The system was purchased subject to it meeting the requirements, criteria and tests described in Appendix A. In early 2000, Nanometrics began delivery of various components of the system, such as the hub and remote satellite dish and mounting hardware, and the NCSN installed and assembled most equipment in advance of the arrival of Nanometrics engineers to facilitate the configuration of the system. The hub was installed in its permanent location, but for logistical reasons the "remote" satellite hardware was initially configured at the NCSN for testing. During the first week of April Nanometrics engineers came to Menlo Park to configure the system and train NCSN staff. The two dishes were aligned with the satellite, and the system was fully operational in 2 days with little problem. Nanometrics engineers spent the remaining 3 days providing hands-on training to NCSN staff in hardware/software operation, configuration, and maintenance. During the second week of April 2000, NCSN staff moved the entire remote system of digitizers, dish assembly, and mounting hardware to Mammoth Lakes, California. The system was reinstalled at the Mammoth Lakes water treatment plant and communications successfully reestablished with the hub via the satellite on 14 April 2000. The system has been in continuous operation since then. This report reviews the performance of the Libra system for the three-month period 20 April 2000 through 20 July 2000. The purpose of the report is to assess whether the system passed the acceptance tests described in Appendix A. We examine all data gaps reported by NCSN "gap list" software and discuss their cause.
Performance Prediction Toolkit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chennupati, Gopinath; Santhi, Nanadakishore; Eidenbenz, Stephen
The Performance Prediction Toolkit (PPT) is a scalable co-design tool that contains the hardware and middleware models, which accept proxy applications as input for runtime prediction. PPT relies on Simian, a parallel discrete event simulation engine in Python or Lua that uses the process concept, where each computing unit (host, node, core) is a Simian entity. Processes perform their task through message exchanges to remain active, sleep, wake up, begin and end. The PPT hardware model of a compute core (such as a Haswell core) consists of a set of parameters, such as clock speed, memory hierarchy levels, their respective sizes, cache lines, access times for different cache levels, average cycle counts of ALU operations, etc. These parameters are ideally read off a spec sheet or are learned using regression models trained on hardware counter (PAPI) data. The compute core model offers an API to the software model: a function called time_compute(), which takes a tasklist as input. A tasklist is an unordered set of ALU and other CPU-type operations (in particular virtual memory loads and stores). The PPT application model mimics the loop structure of the application and replaces the computational kernels with calls to the hardware model's time_compute() function, giving as input tasklists that model the compute kernels. A PPT application model thus consists of tasklists representing kernels and the higher-level loop structure that we like to think of as pseudo code. The key challenge for the hardware model's time_compute() function is to translate virtual memory accesses into actual cache hierarchy level hits and misses. PPT also contains another CPU-core-level hardware model, the Analytical Memory Model (AMM). The AMM solves this challenge soundly, where our previous alternatives explicitly included the L1, L2, L3 hit rates as inputs to the tasklists. Explicit hit rates inevitably only reflect the application modeler's best guess, perhaps informed by a few small test problems using hardware counters; also, hard-coded hit rates make the hardware model insensitive to changes in cache sizes. Alternatively, we use reuse distance distributions in the tasklists. In general, reuse profiles require the application modeler to run a very expensive trace analysis on the real code, which realistically can be done at best for small examples.
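PPT's real interfaces live in its code base. Purely as a toy model of the tasklist/time_compute() idea (invented parameter names, and explicit per-level access counts rather than the reuse-distance treatment the AMM uses), the pattern looks like this:

    # hardware model: spec-sheet-style parameters for one core (illustrative)
    CORE = {"clock_hz": 2.3e9, "cycles_per_alu": 1.0,
            "access_ns": {"L1": 1.2, "L2": 4.0, "L3": 16.0, "DRAM": 60.0}}

    def time_compute(tasklist, core=CORE):
        t = tasklist["alu_ops"] * core["cycles_per_alu"] / core["clock_hz"]
        for level, n in tasklist["mem_accesses"].items():
            t += n * core["access_ns"][level] * 1e-9
        return t

    # application model: keep the loop structure, replace each kernel
    # with a tasklist summarizing its operations
    t_total = sum(time_compute({"alu_ops": 5e6,
                                "mem_accesses": {"L1": 4e6, "DRAM": 2e4}})
                  for _ in range(100))        # 100 simulated timesteps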
Practical Applications of Data Processing to School Purchasing.
ERIC Educational Resources Information Center
California Association of School Business Officials, San Diego. Imperial Section.
Electronic data processing provides a fast and accurate system for handling large volumes of routine data. If properly employed, computers can perform myriad functions for purchasing operations, including purchase order writing; equipment inventory control; vendor inventory; and equipment acquisition, transfer, and retirement. The advantages of…
ERIC Educational Resources Information Center
Breetzke, Gregory; Eksteen, Sanet; Pretorius, Erika
2011-01-01
Geographical information systems (GIS) were phased into the geography curriculum of South African schools from 2006-2008 as part of the National Curriculum Statement (NCS) for grades 10-12. Since its introduction, GIS education in schools across the country has been met with a number of challenges including the cost of purchasing the hardware and…
Data Mining and Homeland Security: An Overview
2006-01-27
which government agencies should use and mix commercial data with government data, whether data sources are being used for purposes other than those...example, a hardware store may compare their customers' tool purchases with home ownership, type of...John Makulowich, "Government Data Mining"...cleaning, data integration, data selection, data transformation, (data mining), pattern evaluation, and knowledge presentation. A number of advances in
On-Chip Hardware for Cell Monitoring: Contact Imaging and Notch Filtering
2005-07-07
a polymer carrier. Spectrophotometer chosen and purchased for testing optical filters and materials. Characterization and comparison of fabricated...reproducibility of behavior. Multi-level SU8 process developed. Optimization of actuator for closing vial lids and development of lid sealing technology is...bending angles characterized as a function of temperature in NaDBS solution. Photopatternable polymers are a viable interim packaging solution; through
Using a software-defined computer in teaching the basics of computer architecture and operation
NASA Astrophysics Data System (ADS)
Kosowska, Julia; Mazur, Grzegorz
2017-08-01
The paper describes the concept and implementation of the SDC_One software-defined computer, designed for experimental and didactic purposes. Equipped with extensive hardware monitoring mechanisms, the device enables students to monitor the computer's operation on a bus-transfer-cycle or instruction-cycle basis, providing a practical illustration of basic aspects of a computer's operation. In the paper, we describe the hardware monitoring capabilities of SDC_One and some scenarios for using it in teaching the basics of computer architecture and microprocessor operation.
A software methodology for compiling quantum programs
NASA Astrophysics Data System (ADS)
Häner, Thomas; Steiger, Damian S.; Svore, Krysta; Troyer, Matthias
2018-04-01
Quantum computers promise to transform our notions of computation by offering a completely new paradigm. To achieve scalable quantum computation, optimizing compilers and a corresponding software design flow will be essential. We present a software architecture for compiling quantum programs from a high-level language program to hardware-specific instructions. We describe the necessary layers of abstraction and their differences and similarities to classical layers of a computer-aided design flow. For each layer of the stack, we discuss the underlying methods for compilation and optimization. Our software methodology facilitates more rapid innovation among quantum algorithm designers, quantum hardware engineers, and experimentalists. It enables scalable compilation of complex quantum algorithms and can be targeted to any specific quantum hardware implementation.
Buying Your Next (or First) PC: What Matters Now?
ERIC Educational Resources Information Center
Crawford, Walt
1993-01-01
Discussion of factors to consider in purchasing a personal computer covers present and future needs, computing environments, memory, processing performance, disk size, and display quality. Issues such as bundled systems, where and when to purchase, and vendor support are addressed; and an annotated bibliography of 28 recent articles is included.…
The Higher-Education Market: Patterns of Responsibility, Purchasing, & Influence. A Market Study.
ERIC Educational Resources Information Center
John Minter Associates, Inc., Boulder, CO.
Roles played by the executive team in higher education, including their involvement in planning, purchasing, computer use on campus, and management, were surveyed. Tables and summaries address the roles of chief executive officers, chief academic officers, chief business officers, department heads, and computer center directors, as well as how…
Scalable digital hardware for a trapped ion quantum computer
NASA Astrophysics Data System (ADS)
Mount, Emily; Gaultney, Daniel; Vrijsen, Geert; Adams, Michael; Baek, So-Young; Hudek, Kai; Isabella, Louis; Crain, Stephen; van Rynbach, Andre; Maunz, Peter; Kim, Jungsang
2016-12-01
Many of the challenges of scaling quantum computer hardware lie at the interface between the qubits and the classical control signals used to manipulate them. Modular ion trap quantum computer architectures address scalability by constructing individual quantum processors interconnected via a network of quantum communication channels. Successful operation of such quantum hardware requires a fully programmable classical control system capable of frequency stabilizing the continuous wave lasers necessary for loading, cooling, initialization, and detection of the ion qubits, stabilizing the optical frequency combs used to drive logic gate operations on the ion qubits, providing a large number of analog voltage sources to drive the trap electrodes, and a scheme for maintaining phase coherence among all the controllers that manipulate the qubits. In this work, we describe scalable solutions to these hardware development challenges.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Busbey, A.B.
A number of methods and products, both hardware and software, allow data exchange between Apple Macintosh computers and MS-DOS based systems. These include serial null-modem connections, MS-DOS hardware and/or software emulation, MS-DOS disk-reading hardware, and networking.
Shorebird Migration Patterns in Response to Climate Change: A Modeling Approach
NASA Technical Reports Server (NTRS)
Smith, James A.
2010-01-01
The availability of satellite remote sensing observations at multiple spatial and temporal scales, coupled with advances in climate modeling and information technologies, offers new opportunities for the application of mechanistic models to predict how continental-scale bird migration patterns may change in response to environmental change. In earlier studies, we explored the phenotypic plasticity of a migratory population of Pectoral sandpipers by simulating the movement patterns of an ensemble of 10,000 individual birds in response to changes in stopover locations as an indicator of the impacts of wetland loss and inter-annual variability on the fitness of migratory shorebirds. We used an individual-based, biophysical migration model, driven by remotely sensed land surface data, climate data, and biological field data. Mean stopover durations and stopover frequency with latitude predicted from our model for nominal cases were consistent with results reported in the literature and available field data. In this study, we take advantage of new computing capabilities enabled by recent GP-GPU (general-purpose computing on graphics processing units) paradigms and commodity hardware. Several aspects of our individual-based (agent modeling) approach lend themselves well to GP-GPU computing. We have been able to allocate compute-intensive tasks to the graphics processing units, and now simulate ensembles of 400,000 birds at varying spatial resolutions along the central North American flyway. We are incorporating additional, species-specific, mechanistic processes to better reflect the processes underlying bird phenotypic plasticity responses to different climate change scenarios in the central U.S.
Costs of cloud computing for a biometry department. A case study.
Knaus, J; Hieke, S; Binder, H; Schwarzer, G
2013-01-01
"Cloud" computing providers, such as the Amazon Web Services (AWS), offer stable and scalable computational resources based on hardware virtualization, with short, usually hourly, billing periods. The idea of pay-as-you-use seems appealing for biometry research units which have only limited access to university or corporate data center resources or grids. This case study compares the costs of an existing heterogeneous on-site hardware pool in a Medical Biometry and Statistics department to a comparable AWS offer. The "total cost of ownership", including all direct costs, is determined for the on-site hardware, and hourly prices are derived, based on actual system utilization during the year 2011. Indirect costs, which are difficult to quantify are not included in this comparison, but nevertheless some rough guidance from our experience is given. To indicate the scale of costs for a methodological research project, a simulation study of a permutation-based statistical approach is performed using AWS and on-site hardware. In the presented case, with a system utilization of 25-30 percent and 3-5-year amortization, on-site hardware can result in smaller costs, compared to hourly rental in the cloud dependent on the instance chosen. Renting cloud instances with sufficient main memory is a deciding factor in this comparison. Costs for on-site hardware may vary, depending on the specific infrastructure at a research unit, but have only moderate impact on the overall comparison and subsequent decision for obtaining affordable scientific computing resources. Overall utilization has a much stronger impact as it determines the actual computing hours needed per year. Taking this into ac count, cloud computing might still be a viable option for projects with limited maturity, or as a supplement for short peaks in demand.
Real-time orthorectification by FPGA-based hardware acceleration
NASA Astrophysics Data System (ADS)
Kuo, David; Gordon, Don
2010-10-01
Orthorectification, which corrects the perspective distortion of remote sensing imagery and provides accurate geolocation and ease of correlation to other images, is a valuable first step in image processing for information extraction. However, the large amount of metadata and the floating-point matrix transformations required to operate on each pixel make this a computation- and I/O (Input/Output)-intensive process. As a result, much imagery is either left unprocessed or loses time-sensitive value in the long processing cycle. However, the computation on each pixel can be reduced substantially by using the computational results of the neighboring pixels, and accelerated by a special pipelined hardware architecture by one to two orders of magnitude. A specialized coprocessor that is implemented inside an FPGA (Field Programmable Gate Array) chip and surrounded by vendor-supported hardware IP (Intellectual Property) shares the computation workload with the CPU through a PCI-Express interface. The ultimate speed of one pixel per clock (125 MHz) is achieved by the pipelined systolic array architecture. The optimal partition between software and hardware, the timing profile among image I/O and computation, and the highly automated GUI (Graphical User Interface) that fully exploits this speed increase to maximize overall image production throughput will also be discussed. The software that runs on a workstation with the acceleration hardware orthorectifies 16 Megapixels per second, which is 16 times faster than without the hardware. It turns the production time from months to days. A real-life success story of an imaging satellite company that adopted such workstations for their orthorectified imagery production will be presented. The potential candidates among image processing computations that can be accelerated more efficiently by the same approach will also be analyzed.
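The abstract notes that per-pixel cost drops when results from neighboring pixels are reused. One common form of that optimization, sketched below under my own simplifying assumption of a locally affine mapping (this is not the authors' code), replaces a full per-pixel matrix transform with one increment per pixel along a scanline:

    # Incremental evaluation of a locally affine image-to-ground mapping:
    # within a scanline, each pixel's source coordinate follows from its
    # neighbor's by a constant step, so the full transform is computed
    # only once per line.
    def scanline_source_coords(x0, y0, dx_per_col, dy_per_col, ncols):
        coords = []
        x, y = x0, y0
        for _ in range(ncols):
            coords.append((x, y))
            x += dx_per_col   # one addition per pixel
            y += dy_per_col   # instead of a matrix multiply per pixel
        return coords

    print(scanline_source_coords(100.0, 250.0, 0.98, 0.02, 4))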
Fault tolerance in a supercomputer through dynamic repartitioning
Chen, Dong; Coteus, Paul W.; Gara, Alan G.; Takken, Todd E.
2007-02-27
A multiprocessor, parallel computer is made tolerant to hardware failures by providing extra groups of redundant standby processors and by designing the system so that these extra groups of processors can be swapped with any group which experiences a hardware failure. This swapping can be under software control, thereby permitting the entire computer to sustain a hardware failure but, after swapping in the standby processors, to still appear to software as a pristine, fully functioning system.
Llanes, Antonio; Muñoz, Andrés; Bueno-Crespo, Andrés; García-Valverde, Teresa; Sánchez, Antonia; Arcas-Túnez, Francisco; Pérez-Sánchez, Horacio; Cecilia, José M
2016-01-01
The protein-folding problem has been extensively studied during the last fifty years. Understanding the dynamics of the global shape of a protein and its influence on biological function can help us to discover new and more effective drugs for diseases of pharmacological relevance. Different computational approaches have been developed by different researchers in order to foresee the three-dimensional arrangement of atoms of proteins from their sequences. However, the computational complexity of this problem makes the search for new models, novel algorithmic strategies, and hardware platforms that provide solutions in a reasonable time frame mandatory. We present in this review the past and latest trends regarding protein folding simulations from both perspectives: hardware and software. Of particular interest to us are both the use of inexact solutions to this computationally hard problem and the hardware platforms that have been used for running this kind of Soft Computing technique.
Hardware Acceleration of Adaptive Neural Algorithms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
James, Conrad D.
As traditional numerical computing has faced challenges, researchers have turned towards alternative computing approaches to reduce power-per-computation metrics and improve algorithm performance. Here, we describe an approach towards non-conventional computing that strengthens the connection between machine learning and neuroscience concepts. The Hardware Acceleration of Adaptive Neural Algorithms (HAANA) project has developed neural machine learning algorithms and hardware for applications in image processing and cybersecurity. While machine learning methods are effective at extracting relevant features from many types of data, the effectiveness of these algorithms degrades when subjected to real-world conditions. Our team has generated novel neural-inspired approaches to improve the resiliency and adaptability of machine learning algorithms. In addition, we have also designed and fabricated hardware architectures and microelectronic devices specifically tuned towards the training and inference operations of neural-inspired algorithms. Finally, our multi-scale simulation framework allows us to assess the impact of microelectronic device properties on algorithm performance.
Trainable hardware for dynamical computing using error backpropagation through physical media.
Hermans, Michiel; Burm, Michaël; Van Vaerenbergh, Thomas; Dambre, Joni; Bienstman, Peter
2015-03-24
Neural networks are currently implemented on digital Von Neumann machines, which do not fully leverage their intrinsic parallelism. We demonstrate how to use a novel class of reconfigurable dynamical systems for analogue information processing, mitigating this problem. Our generic hardware platform for dynamic, analogue computing consists of a reciprocal linear dynamical system with nonlinear feedback. Thanks to reciprocity, a ubiquitous property of many physical phenomena like the propagation of light and sound, the error backpropagation (a crucial step for tuning such systems towards a specific task) can happen in hardware. This can potentially speed up the optimization process significantly, offering important benefits for the scalability of neuro-inspired hardware. In this paper, we show, using one experimentally validated and one conceptual example, that such systems may provide a straightforward mechanism for constructing highly scalable, fully dynamical analogue computers.
Trainable hardware for dynamical computing using error backpropagation through physical media
NASA Astrophysics Data System (ADS)
Hermans, Michiel; Burm, Michaël; van Vaerenbergh, Thomas; Dambre, Joni; Bienstman, Peter
2015-03-01
Neural networks are currently implemented on digital Von Neumann machines, which do not fully leverage their intrinsic parallelism. We demonstrate how to use a novel class of reconfigurable dynamical systems for analogue information processing, mitigating this problem. Our generic hardware platform for dynamic, analogue computing consists of a reciprocal linear dynamical system with nonlinear feedback. Thanks to reciprocity, a ubiquitous property of many physical phenomena like the propagation of light and sound, the error backpropagation—a crucial step for tuning such systems towards a specific task—can happen in hardware. This can potentially speed up the optimization process significantly, offering important benefits for the scalability of neuro-inspired hardware. In this paper, we show, using one experimentally validated and one conceptual example, that such systems may provide a straightforward mechanism for constructing highly scalable, fully dynamical analogue computers.
Spacelab dedicated discipline laboratory (DDL) utilization concept
NASA Technical Reports Server (NTRS)
Wunsch, P.; De Sanctis, C.
1984-01-01
The dedicated discipline laboratory (DDL) concept is a new approach for implementing Spacelab missions that involves the grouping of science instruments into mission complements of single or compatible disciplines. These complements are evolved in such a way that the DDL payloads can be left intact between flights. This requires the dedication of flight hardware to specific payloads on a long-term basis and raises the concern that the purchase of additional flight hardware will be required to implement the DDL program. However, the payoff is expected to result in significant savings in mission engineering and assembly effort. A study was recently conducted to quantify both the requirements for new hardware and the projected mission cost savings. It was found that some incremental additions to the current inventory will be needed to fly the mission model assumed. Cost savings of $2M to $6.5M per mission were projected in areas analyzed in depth, and additional savings may occur in areas for which detailed cost data were not available.
2014-03-27
Fault Detection and Isolation; GUI, Graphical User Interface; IGRF, International Geomagnetic Reference Field; IMU, Inertial Measurement Unit; IR, infrared...ADCS hardware components were either commercially purchased or built in-house and include an Inertial Measurement Unit (IMU), external magnetometer, 4...
Environmentally Sustainable Yellow Smoke Formulations for Use in the M194 Hand Held Signal
2012-06-10
of the pyrotechnic train through the system hardware, potassium chlorate (oxidizer) and the sugar (fuel, or reducing agent) engage in a reduction...Table 1: Composition of M194 control formulation: potassium chlorate, 35.0 wt.% (oxidizer); sugar (sucrose), 20.0 wt.% (fuel); vat yellow...Experimental Section 1. Materials: Potassium chlorate, potassium nitrate, sodium bicarbonate, stearic acid, and sugar were purchased from Hummel
Mitigating Communication Delays in Remotely Connected Hardware-in-the-loop Experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cale, James; Johnson, Brian; Dall'Anese, Emiliano
Here, this paper introduces a potential approach for mitigating the effects of communication delays between multiple closed-loop hardware-in-the-loop experiments that are virtually connected yet physically separated. The approach combines an analytical method for compensating communication delays with the supporting computational and communication infrastructure. The control design leverages tools for the design of observers for the compensation of measurement errors in systems with time-varying delays. The proposed methodology is validated through computer simulation and hardware experimentation connecting hardware-in-the-loop experiments conducted between laboratories separated by a distance of over 100 km.
Mitigating Communication Delays in Remotely Connected Hardware-in-the-loop Experiments
Cale, James; Johnson, Brian; Dall'Anese, Emiliano; ...
2018-03-30
Here, this paper introduces a potential approach for mitigating the effects of communication delays between multiple closed-loop hardware-in-the-loop experiments that are virtually connected yet physically separated. The approach combines an analytical method for compensating communication delays with the supporting computational and communication infrastructure. The control design leverages tools for the design of observers for the compensation of measurement errors in systems with time-varying delays. The proposed methodology is validated through computer simulation and hardware experimentation connecting hardware-in-the-loop experiments conducted between laboratories separated by a distance of over 100 km.
Accelerating artificial intelligence with reconfigurable computing
NASA Astrophysics Data System (ADS)
Cieszewski, Radoslaw
Reconfigurable computing is emerging as an important area of research in computer architectures and software systems. Many algorithms can be greatly accelerated by placing the computationally intense portions of an algorithm into reconfigurable hardware. Reconfigurable computing combines many benefits of both software and ASIC implementations. Like software, the mapped circuit is flexible and can be changed over the lifetime of the system. Similar to an ASIC, reconfigurable systems provide a method to map circuits into hardware. Reconfigurable systems therefore have the potential to achieve far greater performance than software as a result of bypassing the fetch-decode-execute operations of traditional processors, and possibly exploiting a greater level of parallelism. Artificial intelligence is one such field, with many different algorithms that can be accelerated. This paper presents example hardware implementations of Artificial Neural Networks, Genetic Algorithms, and Expert Systems.
ERIC Educational Resources Information Center
Gardill, M. Cathleen; Browder, Diane M.
1995-01-01
Three students (ages 12 and 13) with developmental disabilities and severe behavior disorders were taught independent purchasing skills. To bypass the need for money computation skills, discrimination between three stimulus classes of frequent purchases (vending machine snack, convenience store snack, lunch) was trained. Two students mastered the…
26 CFR 1.863-3 - Allocation and apportionment of income from certain sales of inventory.
Code of Federal Regulations, 2010 CFR
2010-04-01
... income from sources within and without the United States determined under the 50/50 method. Research and... Possession Purchase Sales—(A) Business activity method. Gross income from Possession Purchase Sales is... from Possession Purchase Sales computed under the business activity method, the amounts of expenses...
The Ruggedized STD Bus Microcomputer - A low cost computer suitable for Space Shuttle experiments
NASA Technical Reports Server (NTRS)
Budney, T. J.; Stone, R. W.
1982-01-01
Previous space flight computers have been costly in terms of both hardware and software. The Ruggedized STD Bus Microcomputer is based on the commercial Mostek/Pro-Log STD Bus. Ruggedized PC cards can be based on commercial cards from more than 60 manufacturers, reducing hardware cost and design time. Software costs are minimized by using standard 8-bit microprocessors and by debugging code using commercial versions of the ruggedized flight boards while the flight hardware is being fabricated.
Research in nonlinear structural and solid mechanics
NASA Technical Reports Server (NTRS)
Mccomb, H. G., Jr. (Compiler); Noor, A. K. (Compiler)
1981-01-01
Recent and projected advances in applied mechanics, numerical analysis, computer hardware and engineering software, and their impact on modeling and solution techniques in nonlinear structural and solid mechanics are discussed. The fields covered are rapidly changing and are strongly impacted by current and projected advances in computer hardware. To foster effective development of the technology, perceptions of computing systems and nonlinear analysis software systems are presented.
Chemical calculations on Cray computers
NASA Technical Reports Server (NTRS)
Taylor, Peter R.; Bauschlicher, Charles W., Jr.; Schwenke, David W.
1989-01-01
The influence of recent developments in supercomputing on computational chemistry is discussed, with particular reference to Cray computers and their pipelined vector/limited parallel architectures. After reviewing Cray hardware and software, the performance of different elementary program structures is examined, and effective methods for improving program performance are outlined. The computational strategies appropriate for obtaining optimum performance in applications to quantum chemistry and dynamics are discussed. Finally, some discussion is given of new developments and future hardware and software improvements.
26 CFR 301.7216-1 - Penalty for disclosure or use of tax return information.
Code of Federal Regulations, 2011 CFR
2011-04-01
..., including a person who develops software that is used to prepare or file a tax return and any Authorized IRS... purchases computer software designed to assist with the preparation and filing of her income tax return. When A loads the software onto her computer, it prompts her to register her purchase of the software...
26 CFR 301.7216-1 - Penalty for disclosure or use of tax return information.
Code of Federal Regulations, 2012 CFR
2012-04-01
..., including a person who develops software that is used to prepare or file a tax return and any Authorized IRS... purchases computer software designed to assist with the preparation and filing of her income tax return. When A loads the software onto her computer, it prompts her to register her purchase of the software...
Exploring Cloud Computing for Large-scale Scientific Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Guang; Han, Binh; Yin, Jian
This paper explores cloud computing for large-scale data-intensive scientific applications. Cloud computing is attractive because it provides hardware and software resources on demand, which relieves the burden of acquiring and maintaining a huge amount of resources that may be used only once by a scientific application. However, unlike typical commercial applications that often require just a moderate amount of ordinary resources, large-scale scientific applications often need to process enormous amounts of data in the terabyte or even petabyte range and require special high performance hardware with low latency connections to complete computation in a reasonable amount of time. To address these challenges, we build an infrastructure that can dynamically select high performance computing hardware across institutions and dynamically adapt the computation to the selected resources to achieve high performance. We have also demonstrated the effectiveness of our infrastructure by building a system biology application and an uncertainty quantification application for carbon sequestration, which can efficiently utilize data and computation resources across several institutions.
Proceedings, Conference on the Computing Environment for Mathematical Software
NASA Technical Reports Server (NTRS)
1981-01-01
Recent advances in software and hardware technology that make it economical to create computing environments appropriate for specialized applications are addressed. Topics included software tools, FORTRAN standards activity, and features of languages, operating systems, and hardware that are important for the development, testing, and maintenance of mathematical software.
ERIC Educational Resources Information Center
Chandramouli, Magesh; Chittamuru, Siva-Teja
2016-01-01
This paper explains the design of a graphics-based virtual environment for instructing computer hardware concepts to students, especially those at the beginner level. Photorealistic visualizations and simulations are designed and programmed with interactive features allowing students to practice, explore, and test themselves on computer hardware…
9 CFR 205.101 - Certification-request and processing.
Code of Federal Regulations, 2012 CFR
2012-01-01
... required by subsection (c)(2)(F) (details of computer hardware and software need not be furnished but the... of computer hardware and software need not be furnished but the results it will produce must be..., and requirements issued under such legislation or other legal authority and governing operation of the...
NASA Astrophysics Data System (ADS)
Lawry, B. J.; Encarnacao, A.; Hipp, J. R.; Chang, M.; Young, C. J.
2011-12-01
With the rapid growth of multi-core computing hardware, it is now possible for scientific researchers to run complex, computationally intensive software on affordable, in-house commodity hardware. Multi-core CPUs (Central Processing Unit) and GPUs (Graphics Processing Unit) are now commonplace in desktops and servers. Developers today have access to extremely powerful hardware that enables the execution of software that could previously only be run on expensive, massively-parallel systems. It is no longer cost-prohibitive for an institution to build a parallel computing cluster consisting of commodity multi-core servers. In recent years, our research team has developed a distributed, multi-core computing system and used it to construct global 3D earth models using seismic tomography. Traditionally, computational limitations forced certain assumptions and shortcuts in the calculation of tomographic models; however, with the recent rapid growth in computational hardware, including faster CPUs, increased RAM, and the development of multi-core computers, we are now able to perform seismic tomography, 3D ray tracing, and seismic event location using distributed parallel algorithms running on commodity hardware, thereby eliminating the need for many of these shortcuts. We describe Node Resource Manager (NRM), a system we developed that leverages the capabilities of a parallel computing cluster. NRM is a software-based parallel computing management framework that works in tandem with the Java Parallel Processing Framework (JPPF, http://www.jppf.org/), a third-party library that provides a flexible and innovative way to take advantage of modern multi-core hardware. NRM enables multiple applications to use and share a common set of networked computers, regardless of their hardware platform or operating system. Using NRM, algorithms can be parallelized to run on multiple processing cores of a distributed computing cluster of servers and desktops, which results in a dramatic speedup in execution time. NRM is sufficiently generic to support applications in any domain, as long as the application is parallelizable (i.e., can be subdivided into multiple individual processing tasks). At present, NRM has been effective in decreasing the overall runtime of several algorithms: 1) the generation of a global 3D model of the compressional velocity distribution in the Earth using tomographic inversion, 2) the calculation of the model resolution matrix, model covariance matrix, and travel time uncertainty for the aforementioned velocity model, and 3) the correlation of waveforms with archival data on a massive scale for seismic event detection. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
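NRM's own API is not shown in the abstract, so the sketch below uses only the Python standard library to illustrate the underlying pattern: a parallelizable algorithm is split into independent tasks (here, toy zero-lag waveform correlations with invented payloads) that are farmed out across processing cores.

    # Task-farm pattern behind systems like NRM/JPPF: split work into
    # independent tasks and distribute them over available cores.
    from concurrent.futures import ProcessPoolExecutor

    def correlate(pair):
        template, waveform = pair
        # Toy zero-lag correlation; real detection pipelines slide lags.
        return sum(a * b for a, b in zip(template, waveform))

    if __name__ == "__main__":
        archive = [[float(i % 7) for i in range(1000)] for _ in range(100)]
        template = [1.0] * 1000
        with ProcessPoolExecutor() as pool:   # one worker per local core
            scores = list(pool.map(correlate,
                                   ((template, w) for w in archive)))
        print("best match score:", max(scores))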
Towards fully analog hardware reservoir computing for speech recognition
NASA Astrophysics Data System (ADS)
Smerieri, Anteo; Duport, François; Paquot, Yvan; Haelterman, Marc; Schrauwen, Benjamin; Massar, Serge
2012-09-01
Reservoir computing is a very recent, neural-network-inspired unconventional computation technique, where a recurrent nonlinear system is used in conjunction with a linear readout to perform complex calculations, leveraging its inherent internal dynamics. In this paper we show the operation of an optoelectronic reservoir computer in which both the nonlinear recurrent part and the readout layer are implemented in hardware for a speech recognition application. The performance obtained is close to that of state-of-the-art digital reservoirs, while the analog architecture opens the way to ultrafast computation.
Dynamically allocating sets of fine-grained processors to running computations
NASA Technical Reports Server (NTRS)
Middleton, David
1988-01-01
Researchers explore an approach to using general purpose parallel computers which involves mapping hardware resources onto computations instead of mapping computations onto hardware. Problems such as processor allocation, task scheduling and load balancing, which have traditionally proven to be challenging, change significantly under this approach and may become amenable to new attacks. Researchers describe the implementation of this approach used by the FFP Machine whose computation and communication resources are repeatedly partitioned into disjoint groups that match the needs of available tasks from moment to moment. Several consequences of this system are examined.
Performance/price estimates for cortex-scale hardware: a design space exploration.
Zaveri, Mazad S; Hammerstrom, Dan
2011-04-01
In this paper, we revisit the concept of virtualization. Virtualization is useful for understanding and investigating the performance/price and other trade-offs related to the hardware design space. Moreover, it is perhaps the most important aspect of a hardware design space exploration. Such a design space exploration is a necessary part of the study of hardware architectures for large-scale computational models for intelligent computing, including AI, Bayesian, bio-inspired and neural models. A methodical exploration is needed to identify potentially interesting regions in the design space, and to assess the relative performance/price points of these implementations. As an example, in this paper we investigate the performance/price of (digital and mixed-signal) CMOS and hypothetical CMOL (nanogrid) technology based hardware implementations of human cortex-scale spiking neural systems. Through this analysis, and the resulting performance/price points, we demonstrate, in general, the importance of virtualization, and of doing these kinds of design space explorations. The specific results suggest that hybrid nanotechnology such as CMOL is a promising candidate to implement very large-scale spiking neural systems, providing a more efficient utilization of the density and storage benefits of emerging nano-scale technologies. In general, we believe that the study of such hypothetical designs/architectures will guide the neuromorphic hardware community towards building large-scale systems, and help guide research trends in intelligent computing, and computer engineering. Copyright © 2010 Elsevier Ltd. All rights reserved.
Digital avionics design and reliability analyzer
NASA Technical Reports Server (NTRS)
1981-01-01
The description and specifications for a digital avionics design and reliability analyzer are given. Its basic function is to provide for the simulation and emulation of the various fault-tolerant digital avionic computer designs that are developed. It has been established that hardware emulation at the gate level will be utilized. The primary benefit of emulation to reliability analysis is the fact that it provides the capability to model a system at a very detailed level. Emulation allows the direct insertion of faults into the system, rather than waiting for actual hardware failures to occur. This allows for controlled and accelerated testing of system reaction to hardware failures. A trade study led to the decision to specify a two-machine system, comprising an emulation computer connected to a general-purpose computer. Potential computers to serve as the emulation computer are also evaluated.
Hardware realization of an SVM algorithm implemented in FPGAs
NASA Astrophysics Data System (ADS)
Wiśniewski, Remigiusz; Bazydło, Grzegorz; Szcześniak, Paweł
2017-08-01
The paper proposes a technique for the hardware realization of space vector modulation (SVM) of state function switching in a matrix converter (MC), oriented towards implementation in a single field programmable gate array (FPGA). In an MC, the SVM method is based on the instantaneous space-vector representation of input currents and output voltages. The traditional computation algorithms usually involve digital signal processors (DSPs), which must handle the large number of power transistors (18 transistors and 18 independent PWM outputs) and "non-standard positions of control pulses" during the switching sequence. Recently, hardware implementations have become popular, since computed operations may be executed much faster and more efficiently due to the nature of digital devices (especially their concurrency). In the paper, we propose a hardware algorithm for SVM computation. In contrast to existing techniques, the presented solution applies the COordinate Rotation DIgital Computer (CORDIC) method to solve the trigonometric operations. Furthermore, adequate arithmetic modules (that is, sub-devices) used for intermediate calculations, such as code converters or proper sector selectors (for output voltages and input currents), are presented in detail. The proposed technique has been implemented as a design described with the use of the Verilog hardware description language. The preliminary results of a logic implementation oriented towards Xilinx FPGAs (particularly, a low-cost device from the Xilinx Artix-7 family) are also presented.
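Since the abstract's CORDIC details are not given, the following floating-point Python sketch shows the rotation-mode CORDIC idea the design relies on: sine and cosine computed with only shifts, adds, and a small arctangent table, which is what makes the method FPGA-friendly. The paper's fixed-point, pipelined Verilog version differs.

    import math

    # Rotation-mode CORDIC: rotate (1, 0) towards angle theta using
    # shift-and-add micro-rotations; K compensates the accumulated gain.
    def cordic_sin_cos(theta, n=24):
        K = 1.0
        for i in range(n):
            K /= math.sqrt(1.0 + 2.0 ** (-2 * i))
        x, y, z = 1.0, 0.0, theta
        for i in range(n):
            d = 1.0 if z >= 0.0 else -1.0
            x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
            z -= d * math.atan(2.0 ** -i)
        return y * K, x * K          # (sin(theta), cos(theta))

    # Valid for |theta| <= ~1.74 rad; fold other angles into this range.
    print(cordic_sin_cos(math.pi / 5))   # ~ (0.5878, 0.8090)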
12 CFR 226.5a - Credit and charge card applications and solicitations.
Code of Federal Regulations, 2010 CFR
2010-01-01
... used to compute the finance charge on an outstanding balance for purchases, a cash advance, or a... applicable shall also be disclosed. The annual percentage rate for purchases disclosed pursuant to this... for the use of the card for purchases. (5) Grace period. The date by which or the period within which...
NASA Technical Reports Server (NTRS)
1973-01-01
Techniques are considered which would be used to characterize aerospace computers, with the space shuttle application as end usage. The system-level digital problems which have been encountered and documented are surveyed. From the large cross section of tests, an optimum set is recommended that has a high probability of discovering documented system-level digital problems within laboratory environments. A baseline hardware/software system is defined, which is required as a laboratory tool to test aerospace computers. Hardware and software baselines and the additions necessary to interface the UTE to aerospace computers for test purposes are outlined.
Interfacing laboratory instruments to multiuser, virtual memory computers
NASA Technical Reports Server (NTRS)
Generazio, Edward R.; Stang, David B.; Roth, Don J.
1989-01-01
Incentives, problems, and solutions associated with interfacing laboratory equipment with multiuser, virtual memory computers are presented. The major difficulty concerns how to utilize these computers effectively in a medium-sized research group. This entails optimization of hardware interconnections and software to facilitate multiple instrument control, data acquisition, and processing. The architecture of the system that was devised, and the associated programming and subroutines, are described. An example program involving computer-controlled hardware for ultrasonic scan imaging is provided to illustrate the operational features.
DANoC: An Efficient Algorithm and Hardware Codesign of Deep Neural Networks on Chip.
Zhou, Xichuan; Li, Shengli; Tang, Fang; Hu, Shengdong; Lin, Zhi; Zhang, Lei
2017-07-18
Deep neural networks (NNs) are the state-of-the-art models for understanding the content of images and videos. However, implementing deep NNs in embedded systems is a challenging task, e.g., a typical deep belief network could exhaust gigabytes of memory and result in bandwidth and computational bottlenecks. To address this challenge, this paper presents an algorithm and hardware codesign for efficient deep neural computation. A hardware-oriented deep learning algorithm, named the deep adaptive network, is proposed to explore the sparsity of neural connections. By adaptively removing the majority of neural connections and robustly representing the reserved connections using binary integers, the proposed algorithm can save up to 99.9% of memory utility and computational resources without undermining classification accuracy. An efficient sparse-mapping-memory-based hardware architecture is proposed to fully take advantage of the algorithmic optimization. Different from the traditional Von Neumann architecture, the deep-adaptive network on chip (DANoC) brings communication and computation into close proximity to avoid power-hungry parameter transfers between on-board memory and on-chip computational units. Experiments over different image classification benchmarks show that the DANoC system achieves competitively high accuracy and efficiency compared with state-of-the-art approaches.
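As a toy illustration of the pruning-plus-compact-storage idea (my own threshold and int8 quantization, not the paper's exact scheme), the sketch below drops weak connections and keeps the survivors as sparse (row, column, integer value) triples:

    import numpy as np

    # Prune weak connections, then store the survivors as (row, col, int8)
    # triples, a compact form suited to sparse-mapping memories.
    rng = np.random.default_rng(0)
    W = rng.normal(0.0, 1.0, (256, 256)).astype(np.float32)

    threshold = np.quantile(np.abs(W), 0.999)      # keep ~0.1% of weights
    rows, cols = np.nonzero(np.abs(W) >= threshold)
    vals = W[rows, cols]
    q = np.round(vals * 127.0 / np.abs(vals).max()).astype(np.int8)

    kept = rows.nbytes + cols.nbytes + q.nbytes
    print("memory kept: %.2f%%" % (100.0 * kept / W.nbytes))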
NASA Astrophysics Data System (ADS)
Xue, Xinwei; Cheryauka, Arvi; Tubbs, David
2006-03-01
CT imaging in interventional and minimally-invasive surgery requires high-performance computing solutions that meet operating room demands, healthcare business requirements, and the constraints of a mobile C-arm system. The computational requirements of clinical procedures using CT-like data are increasing rapidly, mainly due to the need for rapid access to medical imagery during critical surgical procedures. The highly parallel nature of Radon transform and CT algorithms enables embedded computing solutions utilizing a parallel processing architecture to realize a significant gain in computational intensity with comparable hardware and program coding/testing expenses. In this paper, using a sample 2D and 3D CT problem, we explore the programming challenges and the potential benefits of embedded computing using commodity hardware components. The accuracy and performance results obtained on three computational platforms (a single CPU, a single GPU, and a solution based on FPGA technology) have been analyzed. We have shown that hardware-accelerated CT image reconstruction can be achieved with similar levels of noise and clarity of feature when compared to program execution on a CPU, while gaining a performance increase of one or more orders of magnitude. 3D cone-beam or helical CT reconstruction and a variety of volumetric image processing applications will benefit from similar accelerations.
Choosing a Microcomputer: A Success Story.
ERIC Educational Resources Information Center
Mason, Robert M.
1983-01-01
Documentation of the author's personal experience in purchasing a microcomputer discusses background learning, the purchase decision, needs assessment, computer literacy, general information on microcomputers, the situation assessment, and the final check. (EJS)
HullBUG Technology Development for Underwater Hull Cleaning
2014-01-15
Grooming Tool would be generated. These drawings would be vended out to approved vendors for quoting. Quotes would be obtained for different levels...determined, parts would be vended out and manufactured. These parts would then be assembled at SRC. A test program would then follow to fully...identified all machined parts as well as all purchased hardware Top Assembly Based on total cost, a decision was made to make 1 grooming tool and 1
Hamlet, Jason R; Bauer, Todd M; Pierson, Lyndon G
2014-09-30
Deterrence of device subversion by substitution may be achieved by including a cryptographic fingerprint unit within a computing device for authenticating a hardware platform of the computing device. The cryptographic fingerprint unit includes a physically unclonable function ("PUF") circuit disposed in or on the hardware platform. The PUF circuit is used to generate a PUF value. A key generator is coupled to generate a private key and a public key based on the PUF value while a decryptor is coupled to receive an authentication challenge posed to the computing device and encrypted with the public key and coupled to output a response to the authentication challenge decrypted with the private key.
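The challenge-response flow described above can be sketched with standard tooling. In the patented design the keypair is derived from the PUF value; in the sketch below a freshly generated RSA keypair stands in, since standard libraries expose no PUF-seeded key generation, and it assumes the third-party 'cryptography' package is installed.

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    # Stand-in keypair: the real design derives it from the PUF value.
    device_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = device_key.public_key()      # published at enrollment

    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    # Verifier poses a challenge encrypted with the device's public key.
    challenge = os.urandom(16)
    posed = public_key.encrypt(challenge, oaep)

    # Only hardware holding the PUF-derived private key can answer.
    response = device_key.decrypt(posed, oaep)
    assert response == challenge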
NASA Astrophysics Data System (ADS)
Zaveri, Mazad Shaheriar
The semiconductor/computer industry has been following Moore's law for several decades and has reaped the benefits in speed and density of the resultant scaling. Transistor density has reached almost one billion per chip, and transistor delays are in picoseconds. However, scaling has slowed down, and the semiconductor industry is now facing several challenges. Hybrid CMOS/nano technologies, such as CMOL, are considered as an interim solution to some of the challenges. Another potential architectural solution includes specialized architectures for applications/models in the intelligent computing domain, one aspect of which includes abstract computational models inspired from the neuro/cognitive sciences. Consequently in this dissertation, we focus on the hardware implementations of Bayesian Memory (BM), which is a (Bayesian) Biologically Inspired Computational Model (BICM). This model is a simplified version of George and Hawkins' model of the visual cortex, which includes an inference framework based on Judea Pearl's belief propagation. We then present a "hardware design space exploration" methodology for implementing and analyzing the (digital and mixed-signal) hardware for the BM. This particular methodology involves: analyzing the computational/operational cost and the related micro-architecture, exploring candidate hardware components, proposing various custom hardware architectures using both traditional CMOS and hybrid nanotechnology - CMOL, and investigating the baseline performance/price of these architectures. The results suggest that CMOL is a promising candidate for implementing a BM. Such implementations can utilize the very high density storage/computation benefits of these new nano-scale technologies much more efficiently; for example, the throughput per 858 mm2 (TPM) obtained for CMOL based architectures is 32 to 40 times better than the TPM for a CMOS based multiprocessor/multi-FPGA system, and almost 2000 times better than the TPM for a PC implementation. We later use this methodology to investigate the hardware implementations of cortex-scale spiking neural system, which is an approximate neural equivalent of BICM based cortex-scale system. The results of this investigation also suggest that CMOL is a promising candidate to implement such large-scale neuromorphic systems. In general, the assessment of such hypothetical baseline hardware architectures provides the prospects for building large-scale (mammalian cortex-scale) implementations of neuromorphic/Bayesian/intelligent systems using state-of-the-art and beyond state-of-the-art silicon structures.
Best bang for your buck: GPU nodes for GROMACS biomolecular simulations
Páll, Szilárd; Fechner, Martin; Esztermann, Ansgar; de Groot, Bert L.; Grubmüller, Helmut
2015-01-01
The molecular dynamics simulation package GROMACS runs efficiently on a wide variety of hardware from commodity workstations to high performance computing clusters. Hardware features are well‐exploited with a combination of single instruction multiple data, multithreading, and message passing interface (MPI)‐based single program multiple data/multiple program multiple data parallelism while graphics processing units (GPUs) can be used as accelerators to compute interactions off‐loaded from the CPU. Here, we evaluate which hardware produces trajectories with GROMACS 4.6 or 5.0 in the most economical way. We have assembled and benchmarked compute nodes with various CPU/GPU combinations to identify optimal compositions in terms of raw trajectory production rate, performance‐to‐price ratio, energy efficiency, and several other criteria. Although hardware prices are naturally subject to trends and fluctuations, general tendencies are clearly visible. Adding any type of GPU significantly boosts a node's simulation performance. For inexpensive consumer‐class GPUs this improvement equally reflects in the performance‐to‐price ratio. Although memory issues in consumer‐class GPUs could pass unnoticed as these cards do not support error checking and correction memory, unreliable GPUs can be sorted out with memory checking tools. Apart from the obvious determinants for cost‐efficiency like hardware expenses and raw performance, the energy consumption of a node is a major cost factor. Over the typical hardware lifetime until replacement of a few years, the costs for electrical power and cooling can become larger than the costs of the hardware itself. Taking that into account, nodes with a well‐balanced ratio of CPU and consumer‐class GPU resources produce the maximum amount of GROMACS trajectory over their lifetime. © 2015 The Authors. Journal of Computational Chemistry Published by Wiley Periodicals, Inc. PMID:26238484
Best bang for your buck: GPU nodes for GROMACS biomolecular simulations.
Kutzner, Carsten; Páll, Szilárd; Fechner, Martin; Esztermann, Ansgar; de Groot, Bert L; Grubmüller, Helmut
2015-10-05
The molecular dynamics simulation package GROMACS runs efficiently on a wide variety of hardware from commodity workstations to high performance computing clusters. Hardware features are well-exploited with a combination of single instruction multiple data, multithreading, and message passing interface (MPI)-based single program multiple data/multiple program multiple data parallelism while graphics processing units (GPUs) can be used as accelerators to compute interactions off-loaded from the CPU. Here, we evaluate which hardware produces trajectories with GROMACS 4.6 or 5.0 in the most economical way. We have assembled and benchmarked compute nodes with various CPU/GPU combinations to identify optimal compositions in terms of raw trajectory production rate, performance-to-price ratio, energy efficiency, and several other criteria. Although hardware prices are naturally subject to trends and fluctuations, general tendencies are clearly visible. Adding any type of GPU significantly boosts a node's simulation performance. For inexpensive consumer-class GPUs this improvement equally reflects in the performance-to-price ratio. Although memory issues in consumer-class GPUs could pass unnoticed as these cards do not support error checking and correction memory, unreliable GPUs can be sorted out with memory checking tools. Apart from the obvious determinants for cost-efficiency like hardware expenses and raw performance, the energy consumption of a node is a major cost factor. Over the typical hardware lifetime until replacement of a few years, the costs for electrical power and cooling can become larger than the costs of the hardware itself. Taking that into account, nodes with a well-balanced ratio of CPU and consumer-class GPU resources produce the maximum amount of GROMACS trajectory over their lifetime. © 2015 The Authors. Journal of Computational Chemistry Published by Wiley Periodicals, Inc.
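A simple cost model captures the study's point that lifetime energy can outweigh the purchase price; all numbers below are invented placeholders, not the paper's benchmark data.

    # Trajectory produced per unit of money over a node's lifetime,
    # counting hardware plus electrical power and cooling.
    def ns_per_currency(ns_per_day, hw_cost, watts, years, price_per_kwh=0.25):
        energy_cost = watts / 1000.0 * 24 * 365 * years * price_per_kwh
        return ns_per_day * 365 * years / (hw_cost + energy_cost)

    cpu_only = ns_per_currency(ns_per_day=20.0, hw_cost=2000.0,
                               watts=250.0, years=4)
    cpu_gpu = ns_per_currency(ns_per_day=60.0, hw_cost=3000.0,
                              watts=450.0, years=4)
    print("CPU-only: %.1f ns, CPU+GPU: %.1f ns per currency unit"
          % (cpu_only, cpu_gpu))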
Virtualization and cloud computing in dentistry.
Chow, Frank; Muftu, Ali; Shorter, Richard
2014-01-01
The use of virtualization and cloud computing has changed the way we use computers. Virtualization is a method of placing software called a hypervisor on the hardware of a computer or a host operating system. It allows a guest operating system to run on top of the physical computer with a virtual machine (i.e., a virtual computer). Virtualization allows multiple virtual computers to run on top of one physical computer and to share its hardware resources, such as printers, scanners, and modems. This increases the efficient use of the computer by decreasing costs (e.g., hardware, electricity, administration, and management) since only one physical computer is needed and running. This virtualization platform is the basis for cloud computing. It has expanded into areas of server and storage virtualization. One of the commonly used dental storage systems is cloud storage. Patient information is encrypted as required by the Health Insurance Portability and Accountability Act (HIPAA) and stored on off-site private cloud services for a monthly service fee. As computer costs continue to increase, so too will the need for more storage and processing power. Virtual and cloud computing will be a method for dentists to minimize costs and maximize computer efficiency in the near future. This article will provide some useful information on current uses of cloud computing.
ERIC Educational Resources Information Center
Goo, Minkowan; Therrien, William J.; Hua, Youjia
2016-01-01
The purpose of this study was to evaluate the effects of computer-based video instruction (CBVI) on teaching grocery purchasing skills to students with moderate intellectual disability (ID). Four high school students with mild to moderate ID participated in the study. A multiple-probe design across students was used to examine the effects. Results…
Biomolecular computers with multiple restriction enzymes.
Sakowski, Sebastian; Krasinski, Tadeusz; Waldmajer, Jacek; Sarnik, Joanna; Blasiak, Janusz; Poplawski, Tomasz
2017-01-01
The development of conventional, silicon-based computers has several limitations, including some related to the Heisenberg uncertainty principle and the von Neumann "bottleneck". Biomolecular computers based on DNA and proteins are largely free of these disadvantages and, along with quantum computers, are reasonable alternatives to their conventional counterparts in some applications. The idea of a DNA computer proposed by Ehud Shapiro's group at the Weizmann Institute of Science was developed using one restriction enzyme as hardware and DNA fragments (the transition molecules) as software and input/output signals. This computer represented a two-state, two-symbol finite automaton that was subsequently extended by using two restriction enzymes. In this paper, we propose the idea of a multistate biomolecular computer with multiple commercially available restriction enzymes as hardware. Additionally, an algorithmic method for the construction of transition molecules in the DNA computer based on the use of multiple restriction enzymes is presented. We use this method to construct multistate, biomolecular, nondeterministic finite automata with four commercially available restriction enzymes as hardware. We also describe an experimental application of this theoretical model to a biomolecular finite automaton made of four endonucleases.
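A software analogue of such a two-state, two-symbol finite automaton is easy to state. The sketch below is my illustration only: the wet-lab machine encodes states in DNA sticky ends and uses restriction enzymes as hardware, whereas this version simply runs the transition table and accepts words with an even number of b's.

    # Two-state, two-symbol finite automaton in software form.
    transitions = {("S0", "a"): "S0", ("S0", "b"): "S1",
                   ("S1", "a"): "S1", ("S1", "b"): "S0"}

    def accepts(word, start="S0", accepting=("S0",)):
        state = start
        for symbol in word:
            state = transitions[(state, symbol)]
        return state in accepting

    print(accepts("abba"))   # True: even number of b's
    print(accepts("ab"))     # False: odd number of b's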
Animation of finite element models and results
NASA Technical Reports Server (NTRS)
Lipman, Robert R.
1992-01-01
This is not intended as a complete review of computer hardware and software that can be used for animation of finite element models and results, but is instead a demonstration of the benefits of visualization using selected hardware and software. The role of raw computational power, graphics speed, and the use of videotape are discussed.
Computer hardware for radiologists: Part I
Indrajit, IK; Alam, A
2010-01-01
Computers are an integral part of modern radiology practice. They are used in different radiology modalities to acquire, process, and postprocess imaging data. They have had a dramatic influence on contemporary radiology practice. Their impact has extended further with the emergence of Digital Imaging and Communications in Medicine (DICOM), Picture Archiving and Communication System (PACS), Radiology Information System (RIS) technology, and Teleradiology. A basic overview of computer hardware relevant to radiology practice is presented here. The key hardware components in a computer are the motherboard, central processing unit (CPU), the chipset, the random access memory (RAM), the memory modules, bus, storage drives, and ports. The personal computer (PC) has a rectangular case that contains important components called hardware, many of which are integrated circuits (ICs). The fiberglass motherboard is the main printed circuit board and has a variety of important hardware mounted on it, which are connected by electrical pathways called "buses". The CPU is the largest IC on the motherboard and contains millions of transistors. Its principal function is to execute "programs". A Pentium® 4 CPU has transistors that execute a billion instructions per second. The chipset is completely different from the CPU in design and function; it controls data and the interaction of buses between the motherboard and the CPU. Memory (RAM) is fundamentally semiconductor chips storing data and instructions for access by a CPU. RAM is classified by storage capacity, access speed, data rate, and configuration. PMID:21042437
Computational adaptive optics for broadband interferometric tomography of tissues and cells
NASA Astrophysics Data System (ADS)
Adie, Steven G.; Mulligan, Jeffrey A.
2016-03-01
Adaptive optics (AO) can shape aberrated optical wavefronts to physically restore the constructive interference needed for high-resolution imaging. With access to the complex optical field, however, many functions of optical hardware can be achieved computationally, including focusing and the compensation of optical aberrations to restore the constructive interference required for diffraction-limited imaging performance. Holography, which employs interferometric detection of the complex optical field, was developed based on this connection between hardware and computational image formation, although this link has only recently been exploited for 3D tomographic imaging in scattering biological tissues. This talk will present the underlying imaging science behind computational image formation with optical coherence tomography (OCT) -- a beam-scanned version of broadband digital holography. Analogous to hardware AO (HAO), we demonstrate computational adaptive optics (CAO) and optimization of the computed pupil correction in 'sensorless mode' (Zernike polynomial corrections with feedback from image metrics) or with the use of 'guide-stars' in the sample. We discuss the concept of an 'isotomic volume' as the volumetric extension of the 'isoplanatic patch' introduced in astronomical AO. Recent CAO results and ongoing work are highlighted to point to the potential biomedical impact of computed broadband interferometric tomography. We also discuss the advantages and disadvantages of HAO vs. CAO for the effective shaping of optical wavefronts, and highlight opportunities for hybrid approaches that synergistically combine the unique advantages of hardware and computational methods for rapid volumetric tomography with cellular resolution.
Discovery & Interaction in Astro 101 Laboratory Experiments
NASA Astrophysics Data System (ADS)
Maloney, Frank Patrick; Maurone, Philip; DeWarf, Laurence E.
2016-01-01
The availability of low-cost, high-performance computing hardware and software has transformed the manner by which astronomical concepts can be re-discovered and explored in a laboratory that accompanies an astronomy course for arts students. We report on a strategy, begun in 1992, for allowing each student to understand fundamental scientific principles by interactively confronting astronomical and physical phenomena, through direct observation and by computer simulation. These experiments have evolved as: (a) the quality and speed of the hardware have greatly increased; (b) the corresponding hardware costs have decreased; (c) the students have become computer and Internet literate; and (d) the importance of computationally and scientifically literate arts graduates in the workplace has increased. We present the current suite of laboratory experiments, and describe the nature, procedures, and goals in this two-semester laboratory for liberal arts majors at the Astro 101 university level.
Vendors' Summit '88: A Special Report on the Hardware Industry.
ERIC Educational Resources Information Center
Goodspeed, Jonathan
1988-01-01
Presents a report of the Hardware Vendors/Educators Forum, which was convened to discuss microcomputer hardware in elementary and secondary schools. Representatives from Commodore, IBM, Tandy/Radio Shack, and Apple Computer addressed topics including sales and service, integrating technology into the curriculum, college versus secondary level…
Furniture for a Technology-Infused School.
ERIC Educational Resources Information Center
Fickes, Michael
1998-01-01
Discusses how one New Mexico school district weighed the choices in selecting and purchasing computer furniture for its classrooms. The purchasing process is described, as well as the types of, and reasons for, the furniture bought. (GR)
48 CFR 1913.505-2 - Board order forms in lieu of Optional and Standard Forms.
Code of Federal Regulations, 2011 CFR
2011-10-01
... BROADCASTING BOARD OF GOVERNORS CONTRACTING METHODS AND CONTRACT TYPES SMALL PURCHASES AND OTHER SIMPLIFIED...-case basis, in order to accommodate computer-generated purchase order forms. Exception approval for...
48 CFR 1913.505-2 - Board order forms in lieu of Optional and Standard Forms.
Code of Federal Regulations, 2010 CFR
2010-10-01
... BROADCASTING BOARD OF GOVERNORS CONTRACTING METHODS AND CONTRACT TYPES SMALL PURCHASES AND OTHER SIMPLIFIED...-case basis, in order to accommodate computer-generated purchase order forms. Exception approval for...
Archer, Charles J [Rochester, MN; Blocksome, Michael A [Rochester, MN; Peters, Amanda A [Rochester, MN; Ratterman, Joseph D [Rochester, MN; Smith, Brian E [Rochester, MN
2012-01-10
Methods, apparatus, and products are disclosed for reducing power consumption while synchronizing a plurality of compute nodes during execution of a parallel application that include: beginning, by each compute node, performance of a blocking operation specified by the parallel application, each compute node beginning the blocking operation asynchronously with respect to the other compute nodes; reducing, for each compute node, power to one or more hardware components of that compute node in response to that compute node beginning the performance of the blocking operation; and restoring, for each compute node, the power to the hardware components having power reduced in response to all of the compute nodes beginning the performance of the blocking operation.
Archer, Charles J [Rochester, MN; Blocksome, Michael A [Rochester, MN; Peters, Amanda E [Cambridge, MA; Ratterman, Joseph D [Rochester, MN; Smith, Brian E [Rochester, MN
2012-04-17
Methods, apparatus, and products are disclosed for reducing power consumption while synchronizing a plurality of compute nodes during execution of a parallel application that include: beginning, by each compute node, performance of a blocking operation specified by the parallel application, each compute node beginning the blocking operation asynchronously with respect to the other compute nodes; reducing, for each compute node, power to one or more hardware components of that compute node in response to that compute node beginning the performance of the blocking operation; and restoring, for each compute node, the power to the hardware components having power reduced in response to all of the compute nodes beginning the performance of the blocking operation.
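The claimed control flow (each node cuts power as it enters the blocking operation; power is restored once every node has entered) can be paraphrased in a short illustrative sketch. In this Python stand-in, threads play the compute nodes, a barrier plays the blocking collective, and set_power() is a hypothetical stub rather than a real power-management API.

```python
# Illustrative paraphrase of the patented pattern: each "compute node"
# (thread) lowers power to some of its hardware when it begins a blocking
# operation, and restores power once ALL nodes have begun the operation.
# set_power() is a hypothetical stub, not a real power-management API.
import threading

NUM_NODES = 4
all_begun = threading.Barrier(NUM_NODES)   # trips when every node has begun

def set_power(node, level):
    print(f"node {node}: power -> {level}")

def node_main(node):
    # The node begins the blocking operation asynchronously w.r.t. the others.
    set_power(node, "reduced")   # reduce power on entry
    all_begun.wait()             # blocking operation (e.g., a collective)
    set_power(node, "nominal")   # restore once all nodes have begun

threads = [threading.Thread(target=node_main, args=(n,)) for n in range(NUM_NODES)]
for t in threads: t.start()
for t in threads: t.join()
```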
Human-machine interface hardware: The next decade
NASA Technical Reports Server (NTRS)
Marcus, Elizabeth A.
1991-01-01
In order to understand where human-machine interface hardware is headed, it is important to understand where we are today, how we got there, and what our goals for the future are. As computers become more capable, faster, and programs become more sophisticated, it becomes apparent that the interface hardware is the key to an exciting future in computing. How can a user interact and control a seemingly limitless array of parameters effectively? Today, the answer is most often a limitless array of controls. The link between these controls and human sensory motor capabilities does not utilize existing human capabilities to their full extent. Interface hardware for teleoperation and virtual environments is now facing a crossroad in design. Therefore, we as developers need to explore how the combination of interface hardware, human capabilities, and user experience can be blended to get the best performance today and in the future.
CE-ACCE: The Cloud Enabled Advanced sCience Compute Environment
NASA Astrophysics Data System (ADS)
Cinquini, L.; Freeborn, D. J.; Hardman, S. H.; Wong, C.
2017-12-01
Traditionally, Earth Science data from NASA remote sensing instruments has been processed by building custom data processing pipelines (often based on a common workflow engine or framework) which are typically deployed and run on an internal cluster of computing resources. This approach has some intrinsic limitations: it requires each mission to develop and deploy a custom software package on top of the adopted framework; it makes use of dedicated hardware, network and storage resources, which must be specifically purchased, maintained and re-purposed at mission completion; and computing services cannot be scaled on demand beyond the capability of the available servers. More recently, the rise of Cloud computing, coupled with other advances in containerization technology (most prominently, Docker) and micro-services architecture, has enabled a new paradigm, whereby space mission data can be processed through standard system architectures, which can be seamlessly deployed and scaled on demand on either on-premise clusters, or commercial Cloud providers. In this talk, we will present one such architecture named CE-ACCE ("Cloud Enabled Advanced sCience Compute Environment"), which we have been developing at the NASA Jet Propulsion Laboratory over the past year. CE-ACCE is based on the Apache OODT ("Object Oriented Data Technology") suite of services for full data lifecycle management, which are turned into a composable array of Docker images, and complemented by a plug-in model for mission-specific customization. We have applied this infrastructure to both flying and upcoming NASA missions, such as ECOSTRESS and SMAP, and demonstrated deployment on the Amazon Cloud, either using simple EC2 instances, or advanced AWS services such as Amazon Lambda and ECS (EC2 Container Services).
Spin-based quantum computation in multielectron quantum dots
NASA Astrophysics Data System (ADS)
Hu, Xuedong; Das Sarma, S.
2001-10-01
In a quantum computer the hardware and software are intrinsically connected because the quantum Hamiltonian (or more precisely its time development) is the code that runs the computer. We demonstrate this subtle and crucial relationship by considering the example of electron-spin-based solid-state quantum computer in semiconductor quantum dots. We show that multielectron quantum dots with one valence electron in the outermost shell do not behave simply as an effective single-spin system unless special conditions are satisfied. Our work compellingly demonstrates that a delicate synergy between theory and experiment (between software and hardware) is essential for constructing a quantum computer.
Examining the architecture of cellular computing through a comparative study with a computer.
Wang, Degeng; Gribskov, Michael
2005-06-22
The computer and the cell both use information embedded in simple coding, the binary software code and the quadruple genomic code, respectively, to support system operations. A comparative examination of their system architecture as well as their information storage and utilization schemes is performed. On top of the code, both systems display a modular, multi-layered architecture, which, in the case of a computer, arises from human engineering efforts through a combination of hardware implementation and software abstraction. Using the computer as a reference system, a simplistic mapping of the architectural components between the two is easily detected. This comparison also reveals that a cell abolishes the software-hardware barrier through genomic encoding for the constituents of the biochemical network, a cell's "hardware" equivalent to the computer central processing unit (CPU). The information loading (gene expression) process acts as a major determinant of the encoded constituent's abundance, which, in turn, often determines the "bandwidth" of a biochemical pathway. Cellular processes are implemented in biochemical pathways in parallel manners. In a computer, on the other hand, the software provides only instructions and data for the CPU. A process represents just sequentially ordered actions by the CPU and only virtual parallelism can be implemented through CPU time-sharing. Whereas process management in a computer may simply mean job scheduling, coordinating pathway bandwidth through the gene expression machinery represents a major process management scheme in a cell. In summary, a cell can be viewed as a super-parallel computer, which computes through controlled hardware composition. While we have, at best, a very fragmented understanding of cellular operation, we have a thorough understanding of the computer throughout the engineering process. The potential utilization of this knowledge to the benefit of systems biology is discussed.
The K-12 Hardware Industry: A Heated Race that Shows No Sign of Letting Up.
ERIC Educational Resources Information Center
McCarthy, Robert
1989-01-01
This overview of the computer industry vendors that supply microcomputer hardware to educators for use in kindergarten through high school focuses on Apple, Tandy, Commodore, and IBM. The use of MS-DOS versus the operating system used in Apple computers is discussed, and pricing and service issues are raised. (LRW)
Three-Dimensional Nanobiocomputing Architectures With Neuronal Hypercells
2007-06-01
Neumann architectures, and CMOS fabrication. Novel solutions of massive parallel distributed computing and processing (pipelined due to systolic... and processing platforms utilizing molecular hardware within an enabling organization and architecture. The design technology is based on utilizing a...Microsystems and Nanotechnologies investigated a novel 3D3 (Hardware Software Nanotechnology) technology to design super-high performance computing
A Model for Minimizing Numeric Function Generator Complexity and Delay
2007-12-01
allow computation of difficult mathematical functions in less time and with less hardware than commonly employed methods. They compute piecewise...Programmable Gate Arrays (FPGAs). The algorithms and estimation techniques apply to various NFG architectures and mathematical functions. This...thesis compares hardware utilization and propagation delay for various NFG architectures, mathematical functions, word widths, and segmentation methods
Computer hardware description languages - A tutorial
NASA Technical Reports Server (NTRS)
Shiva, S. G.
1979-01-01
The paper introduces hardware description languages (HDL) as useful tools for hardware design and documentation. The capabilities and limitations of HDLs are discussed along with the guidelines needed in selecting an appropriate HDL. The directions for future work are provided and attention is given to the implementation of HDLs in microcomputers.
Code of Federal Regulations, 2013 CFR
2013-07-01
... combination of electronic hardware and software integrated in a variety of forms (firmware, programmable... electronic hardware and computer software integrated in a variety of forms (firmware, programmable software...
Code of Federal Regulations, 2011 CFR
2011-07-01
... combination of electronic hardware and software integrated in a variety of forms (firmware, programmable... electronic hardware and computer software integrated in a variety of forms (firmware, programmable software...
Code of Federal Regulations, 2014 CFR
2014-07-01
... combination of electronic hardware and software integrated in a variety of forms (firmware, programmable... electronic hardware and computer software integrated in a variety of forms (firmware, programmable software...
Code of Federal Regulations, 2012 CFR
2012-07-01
... combination of electronic hardware and software integrated in a variety of forms (firmware, programmable... electronic hardware and computer software integrated in a variety of forms (firmware, programmable software...
Modern Computational Techniques for the HMMER Sequence Analysis
2013-01-01
This paper focuses on the latest research and critical reviews on modern computing architectures, software and hardware accelerated algorithms for bioinformatics data analysis with an emphasis on one of the most important sequence analysis applications—hidden Markov models (HMM). We show a detailed performance comparison of sequence analysis tools on various computing platforms recently developed in the bioinformatics community. The characteristics of sequence analysis, such as its data- and compute-intensive nature, make it very attractive to optimize and parallelize using both traditional software approaches and innovative hardware acceleration technologies. PMID:25937944
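For readers unfamiliar with the model class, the dynamic-programming recurrence at the heart of HMM sequence analysis can be sketched compactly; the two-state model, emission table, and probabilities below are hypothetical illustrations, not HMMER's profile HMM.

```python
# Toy Viterbi decoding for a two-state HMM: the dynamic-programming
# recurrence that profile-HMM tools accelerate in software and hardware.
# States, emissions, and probabilities are hypothetical illustrations.
import math

states = ("match", "insert")
start  = {"match": 0.6, "insert": 0.4}
trans  = {"match": {"match": 0.7, "insert": 0.3},
          "insert": {"match": 0.4, "insert": 0.6}}
emit   = {"match": {"A": 0.4, "C": 0.1, "G": 0.4, "T": 0.1},
          "insert": {"A": 0.25, "C": 0.25, "G": 0.25, "T": 0.25}}

def viterbi(seq):
    """Return the most probable state path (log-space to avoid underflow)."""
    v = [{s: math.log(start[s]) + math.log(emit[s][seq[0]]) for s in states}]
    back = []
    for obs in seq[1:]:
        col, ptr = {}, {}
        for s in states:
            prev, score = max(((p, v[-1][p] + math.log(trans[p][s])) for p in states),
                              key=lambda x: x[1])
            col[s] = score + math.log(emit[s][obs])
            ptr[s] = prev
        v.append(col); back.append(ptr)
    state = max(v[-1], key=v[-1].get)
    path = [state]
    for ptr in reversed(back):     # follow back-pointers to recover the path
        state = ptr[state]; path.append(state)
    return list(reversed(path))

print(viterbi("GATTACA"))
```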
Distribution of a Generic Mission Planning and Scheduling Toolkit for Astronomical Spacecraft
NASA Technical Reports Server (NTRS)
Kleiner, Steven C.
1996-01-01
Work is progressing as outlined in the proposal for this contract. A working planning and scheduling system has been documented, packaged, and made available to the WIRE Small Explorer group at JPL, the FUSE group at JHU, the NASA/GSFC Laboratory for Astronomy and Solar Physics, and the Advanced Planning and Scheduling Branch at STScI. The package is running successfully on the WIRE computer system. It is expected that the WIRE group will reuse significant portions of the SWAS code in its system. The scheduling system itself was tested successfully against the spacecraft hardware in December 1995. A fully automatic scheduling module has been developed and is being added to the toolkit. In order to maximize reuse, the code is being reorganized during the current build into object-oriented class libraries. A paper describing the toolkit has been written and is included in the software distribution. We have experienced interference between the export and production versions of the toolkit. We will be requesting permission to reprogram funds in order to purchase a standalone PC onto which to offload the export version.
Intraoral Scanner Technologies: A Review to Make a Successful Impression
Richert, Raphaël; Goujat, Alexis; Venet, Laurent; Viguie, Gilbert; Viennot, Stéphane; Robinson, Philip; Farges, Jean-Christophe; Fages, Michel
2017-01-01
To overcome difficulties associated with conventional techniques, impressions with IOS (intraoral scanner) and CAD/CAM (computer-aided design and manufacturing) technologies were developed for dental practice. The last decade has seen an increasing number of optical IOS devices based on different technologies, the choice of which may impact clinical use. To allow an informed choice before purchasing or renewing an IOS, this article first summarizes the technologies currently used (light projection, distance object determination, and reconstruction). In the second section, the clinical considerations of each strategy, such as handling, learning curve, powdering, scanning paths, tracking, and mesh quality, are discussed. The last section is dedicated to the accuracy of files and of the intermaxillary relationship registered with IOS, as the rendering of files in the graphical user interface is often misleading. This overview leads to the conclusion that current IOS devices are adapted to common practice, although differences exist between the technologies employed. An important aspect highlighted in this review is the reduction in the volume of hardware, which has led to an increase in the importance of software-based technologies. PMID:29065652
A New Multiplier Structure for Digital Signal Processing
1980-01-01
[The text of this record is OCR-garbled; only the following fragment is recoverable:] A higher pipelined execution rate can be purchased at a small hardware cost. Example: p = 2^11 = 2048, x = 1040, y = 352; then Z = <xy>_p = 1536.
A microcomputer system for clinical bacteriology: experience of 12 months' trial.
Courcol, R J; Roussel-Delvallez, M; Martin, G R
1982-01-01
A data processing system using microcomputers was developed in a hospital bacteriology laboratory processing more than 60 000 specimens yearly. The purchase price of the hardware was frs 200 000 (17 500 pounds), and the software was written by the authors. The system has been running since May 1980 without general breakdown. The present configuration allows the processing of specimens, enquiries, and scientific and administrative tasks, but multiprogramming and cumulative reports are not possible. PMID:7107962
Efficient architecture for spike sorting in reconfigurable hardware.
Hwang, Wen-Jyi; Lee, Wei-Hao; Lin, Shiow-Jyu; Lai, Sheng-Ying
2013-11-01
This paper presents a novel hardware architecture for fast spike sorting. The architecture is able to perform both the feature extraction and clustering in hardware. The generalized Hebbian algorithm (GHA) and fuzzy C-means (FCM) algorithm are used for feature extraction and clustering, respectively. The employment of GHA allows efficient computation of principal components for subsequent clustering operations. The FCM is able to achieve near optimal clustering for spike sorting, and its performance is insensitive to the selection of initial cluster centers. The hardware implementations of GHA and FCM feature low area costs and high throughput. In the GHA architecture, the computation of different weight vectors shares the same circuit to lower the area costs. Moreover, in the FCM hardware implementation, the usual iterative operations for updating the membership matrix and cluster centroids are merged into one single updating process to avoid the large storage requirement. To show the effectiveness of the circuit, the proposed architecture is physically implemented on a field-programmable gate array (FPGA). It is embedded in a System-on-Chip (SOC) platform for performance measurement. Experimental results show that the proposed architecture is an efficient spike sorting design, attaining a high classification correct rate and high-speed computation.
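As a plain-software reference for what the merged FCM update computes (the paper's contribution is the hardware restructuring, not this textbook recursion), here is a minimal NumPy sketch of the alternating membership and centroid updates; the data, cluster count, and fuzzifier are hypothetical.

```python
# Minimal fuzzy C-means (FCM) iteration in NumPy: the membership-matrix
# and centroid updates that the cited architecture merges into a single
# hardware updating process. The data, c, and m are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))          # 200 feature vectors (e.g., spike features)
c, m = 3, 2.0                          # cluster count and fuzzifier
centers = X[rng.choice(len(X), c, replace=False)]

for _ in range(20):
    # distances of every sample to every center, shape (n, c)
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
    # membership update: u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
    u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)), axis=2)
    # centroid update: weighted mean of the samples with weights u^m
    w = u ** m
    centers = (w.T @ X) / w.sum(axis=0)[:, None]

print(np.round(centers, 3))
```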
Color graphics, interactive processing, and the supercomputer
NASA Technical Reports Server (NTRS)
Smith-Taylor, Rudeen
1987-01-01
The development of a common graphics environment for the NASA Langley Research Center user community and the integration of a supercomputer into this environment is examined. The initial computer hardware, the software graphics packages, and their configurations are described. The addition of improved computer graphics capability to the supercomputer, and the utilization of the graphic software and hardware are discussed. Consideration is given to the interactive processing system which supports the computer in an interactive debugging, processing, and graphics environment.
Inexact hardware for modelling weather & climate
NASA Astrophysics Data System (ADS)
Düben, Peter D.; McNamara, Hugh; Palmer, Tim
2014-05-01
The use of stochastic processing hardware and low precision arithmetic in atmospheric models is investigated. Stochastic processors allow hardware-induced faults in calculations, sacrificing exact calculations in exchange for improvements in performance, potentially accuracy, and a reduction in power consumption. A similar trade-off is achieved using low precision arithmetic, with improvements in computation and communication speed and savings in storage and memory requirements. As high-performance computing becomes more massively parallel and power intensive, these two approaches may be important stepping stones in the pursuit of global cloud-resolving atmospheric modelling. The impact of both hardware-induced faults and low precision arithmetic is tested in the dynamical core of a global atmosphere model. Our simulations show that both approaches to inexact calculations do not substantially affect the quality of the model simulations, provided they are restricted to act only on smaller scales. This suggests that inexact calculations at the small scale could reduce computation and power costs without adversely affecting the quality of the simulations.
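A common software stand-in for such inexact hardware is to truncate the significand of each intermediate result; the sketch below, with an arbitrarily chosen 10-bit significand, illustrates the idea but is not the emulator used in the cited experiments.

```python
# Emulating low-precision arithmetic by rounding doubles to a reduced
# number of significand bits, a common software stand-in for inexact
# hardware. The 10-bit choice below is an arbitrary illustration.
import math

def reduce_precision(x, bits=10):
    """Keep only `bits` significand bits of a float (round-to-nearest)."""
    if x == 0.0 or not math.isfinite(x):
        return x
    mantissa, exponent = math.frexp(x)          # x = mantissa * 2**exponent
    scaled = round(mantissa * (1 << bits))      # quantize the significand
    return math.ldexp(scaled / (1 << bits), exponent)

exact = sum(1.0 / n for n in range(1, 10000))
inexact = 0.0
for n in range(1, 10000):
    inexact = reduce_precision(inexact + reduce_precision(1.0 / n))

print(exact, inexact, abs(exact - inexact))    # accumulated rounding error
```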
Programs for Testing Processor-in-Memory Computing Systems
NASA Technical Reports Server (NTRS)
Katz, Daniel S.
2006-01-01
The Multithreaded Microbenchmarks for Processor-In-Memory (PIM) Compilers, Simulators, and Hardware are computer programs arranged in a series for use in testing the performances of PIM computing systems, including compilers, simulators, and hardware. The programs at the beginning of the series test basic functionality; the programs at subsequent positions in the series test increasingly complex functionality. The programs are intended to be used while designing a PIM system, and can be used to verify that compilers, simulators, and hardware work correctly. The programs can also be used to enable designers of these system components to examine tradeoffs in implementation. Finally, these programs can be run on non-PIM hardware (either single-threaded or multithreaded) using the POSIX pthreads standard to verify that the benchmarks themselves operate correctly. [POSIX (Portable Operating System Interface for UNIX) is a set of standards that define how programs and operating systems interact with each other. pthreads is a library of pre-emptive thread routines that comply with one of the POSIX standards.]
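The real suite is written against POSIX pthreads; purely as an illustration of the shape of such a multithreaded microbenchmark, a Python analogue might look like the following (the buffer sizes and the kernel are hypothetical).

```python
# Rough Python analogue of a multithreaded microbenchmark: each thread
# sweeps a private array, and the harness times the run. The suite
# described above uses POSIX pthreads in C; this only mirrors its shape.
import threading, time

results = []
buffers = [list(range(100_000)) for _ in range(4)]

def worker(buf):
    s = 0
    for v in buf:          # simple read sweep standing in for a kernel
        s += v
    results.append(s)      # list.append is thread-safe in CPython

threads = [threading.Thread(target=worker, args=(b,)) for b in buffers]
t0 = time.perf_counter()
for t in threads: t.start()
for t in threads: t.join()
print(f"4 threads, {time.perf_counter() - t0:.3f} s, checksum {results[0]}")
```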
Reconfigurable Hardware Adapts to Changing Mission Demands
NASA Technical Reports Server (NTRS)
2003-01-01
A new class of computing architectures and processing systems, which use reconfigurable hardware, is creating a revolutionary approach to implementing future spacecraft systems. With the increasing complexity of electronic components, engineers must design next-generation spacecraft systems with new technologies in both hardware and software. Derivation Systems, Inc., of Carlsbad, California, has been working through NASA's Small Business Innovation Research (SBIR) program to develop key technologies in reconfigurable computing and Intellectual Property (IP) soft cores. Founded in 1993, Derivation Systems has received several SBIR contracts from NASA's Langley Research Center and the U.S. Department of Defense Air Force Research Laboratories in support of its mission to develop hardware and software for high-assurance systems. Through these contracts, Derivation Systems began developing leading-edge technology in formal verification, embedded Java, and reconfigurable computing for its PF3100, Derivational Reasoning System (DRS), FormalCORE IP, FormalCORE PCI/32, FormalCORE DES, and LavaCORE Configurable Java Processor, which are designed for greater flexibility and security on all space missions.
Superconducting Optoelectronic Circuits for Neuromorphic Computing
NASA Astrophysics Data System (ADS)
Shainline, Jeffrey M.; Buckley, Sonia M.; Mirin, Richard P.; Nam, Sae Woo
2017-03-01
Neural networks have proven effective for solving many difficult computational problems, yet implementing complex neural networks in software is computationally expensive. To explore the limits of information processing, it is necessary to implement new hardware platforms with large numbers of neurons, each with a large number of connections to other neurons. Here we propose a hybrid semiconductor-superconductor hardware platform for the implementation of neural networks and large-scale neuromorphic computing. The platform combines semiconducting few-photon light-emitting diodes with superconducting-nanowire single-photon detectors to behave as spiking neurons. These processing units are connected via a network of optical waveguides, and variable weights of connection can be implemented using several approaches. The use of light as a signaling mechanism overcomes fanout and parasitic constraints on electrical signals while simultaneously introducing physical degrees of freedom which can be employed for computation. The use of supercurrents achieves the low power density (1 mW/cm^2 at a 20-MHz firing rate) necessary to scale to systems with enormous entropy. Estimates comparing the proposed hardware platform to a human brain show that with the same number of neurons (10^11) and 700 independent connections per neuron, the hardware presented here may achieve an order of magnitude improvement in synaptic events per second per watt.
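The throughput claim can be sanity-checked with back-of-envelope arithmetic using only the numbers quoted above; note that events per watt cannot be derived from the abstract alone, since total power depends on device area, which is not quoted.

```python
# Back-of-envelope check using the figures quoted in the abstract:
# 10**11 neurons, 700 connections per neuron, 20 MHz firing rate.
neurons     = 1e11
connections = 700
firing_rate = 20e6        # Hz

synaptic_events_per_s = neurons * connections * firing_rate
print(f"{synaptic_events_per_s:.2e} synaptic events per second")  # 1.40e+21
```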
Power Efficient Hardware Architecture of SHA-1 Algorithm for Trusted Mobile Computing
NASA Astrophysics Data System (ADS)
Kim, Mooseop; Ryou, Jaecheol
The Trusted Mobile Platform (TMP) is developed and promoted by the Trusted Computing Group (TCG), which is an industry standard body to enhance the security of the mobile computing environment. The built-in SHA-1 engine in TMP is one of the most important circuit blocks and contributes to the performance of the whole platform because it is used as a key primitive supporting platform integrity and command authentication. Mobile platforms have very stringent limitations with respect to available power, physical circuit area, and cost. Therefore, special architecture and design methods for a low power SHA-1 circuit are required. In this paper, we present a novel and efficient hardware architecture of a low power SHA-1 design for TMP. Our low power SHA-1 hardware can compute a 512-bit data block using fewer than 7,000 gates and draws a current of about 1.1 mA on a 0.25μm CMOS process.
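For comparison with the hardware figures, the same primitive is available in software via Python's standard library; this shows the algorithm's input/output shape (a 512-bit message in, a 160-bit digest out), not the TMP circuit.

```python
# Software reference for the primitive the circuit implements: SHA-1 over
# a 512-bit (64-byte) message, via Python's standard library.
import hashlib

block = bytes(64)                       # a 512-bit message of zero bytes
digest = hashlib.sha1(block).hexdigest()
print(digest)                           # 160-bit digest as 40 hex chars
```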
James, Conrad D.; Aimone, James B.; Miner, Nadine E.; ...
2017-01-04
In this study, biological neural networks continue to inspire new developments in algorithms and microelectronic hardware to solve challenging data processing and classification problems. Here in this research, we survey the history of neural-inspired and neuromorphic computing in order to examine the complex and intertwined trajectories of the mathematical theory and hardware developed in this field. Early research focused on adapting existing hardware to emulate the pattern recognition capabilities of living organisms. Contributions from psychologists, mathematicians, engineers, neuroscientists, and other professions were crucial to maturing the field from narrowly-tailored demonstrations to more generalizable systems capable of addressing difficult problem classes such as object detection and speech recognition. Algorithms that leverage fundamental principles found in neuroscience such as hierarchical structure, temporal integration, and robustness to error have been developed, and some of these approaches are achieving world-leading performance on particular data classification tasks. Additionally, novel microelectronic hardware is being developed to perform logic and to serve as memory in neuromorphic computing systems with optimized system integration and improved energy efficiency. Key to such advancements was the incorporation of new discoveries in neuroscience research, the transition away from strict structural replication and towards the functional replication of neural systems, and the use of mathematical theory frameworks to guide algorithm and hardware developments.
Hardware synthesis from DDL. [Digital Design Language for computer aided design and test of LSI
NASA Technical Reports Server (NTRS)
Shah, A. M.; Shiva, S. G.
1981-01-01
The details of the digital systems can be conveniently input into the design automation system by means of Hardware Description Languages (HDL). The Computer Aided Design and Test (CADAT) system at NASA MSFC is used for the LSI design. The Digital Design Language (DDL) has been selected as HDL for the CADAT System. DDL translator output can be used for the hardware implementation of the digital design. This paper addresses problems of selecting the standard cells from the CADAT standard cell library to realize the logic implied by the DDL description of the system.
Biomolecular computers with multiple restriction enzymes
Sakowski, Sebastian; Krasinski, Tadeusz; Waldmajer, Jacek; Sarnik, Joanna; Blasiak, Janusz; Poplawski, Tomasz
2017-01-01
Abstract The development of conventional, silicon-based computers has several limitations, including some related to the Heisenberg uncertainty principle and the von Neumann “bottleneck”. Biomolecular computers based on DNA and proteins are largely free of these disadvantages and, along with quantum computers, are reasonable alternatives to their conventional counterparts in some applications. The idea of a DNA computer proposed by Ehud Shapiro’s group at the Weizmann Institute of Science was developed using one restriction enzyme as hardware and DNA fragments (the transition molecules) as software and input/output signals. This computer represented a two-state two-symbol finite automaton that was subsequently extended by using two restriction enzymes. In this paper, we propose the idea of a multistate biomolecular computer with multiple commercially available restriction enzymes as hardware. Additionally, an algorithmic method for the construction of transition molecules in the DNA computer based on the use of multiple restriction enzymes is presented. We use this method to construct multistate, biomolecular, nondeterministic finite automata with four commercially available restriction enzymes as hardware. We also describe an experimental application of this theoretical model to a biomolecular finite automaton made of four endonucleases. PMID:29064510
Criteria-based evaluation of group 3 level memory telefacsimile equipment for interlibrary loan.
Bennett, V M; Wood, M S; Malcom, D L
1990-01-01
The Interlibrary Loan, Document Delivery, and Union List Task Force of the Health Sciences Libraries Consortium (HSLC)--with nineteen libraries located in Philadelphia, Pittsburgh, and Hershey, Pennsylvania, and Delaware--accepted the charge of evaluating and recommending for purchase telefacsimile hardware to further interlibrary loan among HSLC members. To allow a thorough and scientific evaluation of group 3 level telefacsimile equipment, the task force identified ninety-six hardware features, which were grouped into nine broad criteria. These features formed the basis of a weighted analysis that identified three final candidates, with one model recommended to the HSLC board. This article details each of the criteria and discusses features in terms of library applications. The evaluation grid developed in the weighted analysis process should aid librarians charged with the selection of level 3 telefacsimile equipment. PMID:2328361
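The "weighted analysis" described here is a standard weighted-sum scoring model. A minimal sketch follows; the criteria, weights, and candidate scores are hypothetical placeholders, not the task force's actual ninety-six-feature grid.

```python
# Minimal weighted-sum scoring model of the kind described above. The
# criteria, weights, and candidate scores are hypothetical placeholders,
# not the task force's actual evaluation grid.
weights = {"image quality": 0.25, "transmission speed": 0.20,
           "memory features": 0.20, "ease of use": 0.15,
           "service/support": 0.10, "cost": 0.10}

candidates = {                # criterion -> score on a 1-10 scale
    "model A": {"image quality": 8, "transmission speed": 7, "memory features": 9,
                "ease of use": 6, "service/support": 8, "cost": 5},
    "model B": {"image quality": 7, "transmission speed": 9, "memory features": 6,
                "ease of use": 8, "service/support": 7, "cost": 8},
}

for name, scores in candidates.items():
    total = sum(weights[c] * scores[c] for c in weights)  # weighted sum
    print(f"{name}: {total:.2f}")
```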
NASA Technical Reports Server (NTRS)
Kizhner, Semion; Flatley, Thomas P.; Hestnes, Phyllis; Jentoft-Nilsen, Marit; Petrick, David J.; Day, John H. (Technical Monitor)
2001-01-01
Spacecraft telemetry rates have steadily increased over the last decade presenting a problem for real-time processing by ground facilities. This paper proposes a solution to a related problem for the Geostationary Operational Environmental Spacecraft (GOES-8) image processing application. Although large super-computer facilities are the obvious heritage solution, they are very costly, making it imperative to seek a feasible alternative engineering solution at a fraction of the cost. The solution is based on a Personal Computer (PC) platform and synergy of optimized software algorithms and re-configurable computing hardware technologies, such as Field Programmable Gate Arrays (FPGA) and Digital Signal Processing (DSP). It has been shown in [1] and [2] that this configuration can provide superior inexpensive performance for a chosen application on the ground station or on-board a spacecraft. However, since this technology is still maturing, intensive pre-hardware steps are necessary to achieve the benefits of hardware implementation. This paper describes these steps for the GOES-8 application, a software project developed using Interactive Data Language (IDL) (Trademark of Research Systems, Inc.) on a Workstation/UNIX platform. The solution involves converting the application to a PC/Windows/RC platform, selected mainly by the availability of low cost, adaptable high-speed RC hardware. In order for the hybrid system to run, the IDL software was modified to account for platform differences. It was interesting to examine the gains and losses in performance on the new platform, as well as unexpected observations before implementing hardware. After substantial pre-hardware optimization steps, the necessity of hardware implementation for bottleneck code in the PC environment became evident and solvable beginning with the methodology described in [1], [2], and implementing a novel methodology for this specific application [6]. The PC-RC interface bandwidth problem for the class of applications with moderate input-output data rates but large intermediate multi-thread data streams has been addressed and mitigated. This opens a new class of satellite image processing applications for bottleneck problems solution using RC technologies. The issue of a science algorithm level of abstraction necessary for RC hardware implementation is also described. Selected Matlab functions already implemented in hardware were investigated for their direct applicability to the GOES-8 application with the intent to create a library of Matlab and IDL RC functions for ongoing work. A complete class of spacecraft image processing applications using embedded re-configurable computing technology to meet real-time requirements, including performance results and comparison with the existing system, is described in this paper.
ERIC Educational Resources Information Center
Bessey, Barbara L.; And Others
Graphical methods for displaying data, as well as available computer software and hardware, are reviewed. The authors have emphasized the types of graphs which are most relevant to the needs of the National Center for Education Statistics (NCES) and its readers. The following types of graphs are described: tabulations, stem-and-leaf displays,…
Automating quantum experiment control
NASA Astrophysics Data System (ADS)
Stevens, Kelly E.; Amini, Jason M.; Doret, S. Charles; Mohler, Greg; Volin, Curtis; Harter, Alexa W.
2017-03-01
The field of quantum information processing is rapidly advancing. As the control of quantum systems approaches the level needed for useful computation, the physical hardware underlying the quantum systems is becoming increasingly complex. It is already becoming impractical to manually code control for the larger hardware implementations. In this chapter, we will employ an approach to the problem of system control that parallels compiler design for a classical computer. We will start with a candidate quantum computing technology, the surface electrode ion trap, and build a system instruction language which can be generated from a simple machine-independent programming language via compilation. We incorporate compile time generation of ion routing that separates the algorithm description from the physical geometry of the hardware. Extending this approach to automatic routing at run time allows for automated initialization of qubit number and placement and additionally allows for automated recovery after catastrophic events such as qubit loss. To show that these systems can handle real hardware, we present a simple demonstration system that routes two ions around a multi-zone ion trap and handles ion loss and ion placement. While we will mainly use examples from transport-based ion trap quantum computing, many of the issues and solutions are applicable to other architectures.
OER Approach for Specific Student Groups in Hardware-Based Courses
ERIC Educational Resources Information Center
Ackovska, Nevena; Ristov, Sasko
2014-01-01
Hardware-based courses in computer science studies require much effort from both students and teachers. The most important part of students' learning is attending in person and actively working on laboratory exercises on hardware equipment. This paper deals with a specific group of students, those who are marginalized by not being able to…
Cooley, Philip C.; Turner, Charles F.; O'Reilly, James M.; Allen, Danny R.; Hamill, David N.; Paddock, Richard E.
2011-01-01
This article reviews a multimedia application in the area of survey measurement research: adding audio capabilities to a computer-assisted interviewing system. Hardware and software issues are discussed, and potential hardware devices that operate from DOS platforms are reviewed. Three types of hardware devices are considered: PCMCIA devices, parallel port attachments, and laptops with built-in sound. PMID:22096271
The Sociotechnical Boundaries of Hardware and Software: A Humpty Dumpty History
ERIC Educational Resources Information Center
Jesiek, Brent K.
2006-01-01
This article traces the historical development of the boundaries around computer software and hardware. On one hand, the author documents ongoing discussions about the technical equivalence of hardware and software. On the other hand, he accounts for the stubborn persistence of these terms as markers for two distinct spheres of technology,…
Store-operate-coherence-on-value
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Dong; Heidelberger, Philip; Kumar, Sameer
A system, method and computer program product for performing various store-operate instructions in a parallel computing environment that includes a plurality of processors and at least one cache memory device. A queue in the system receives, from a processor, a store-operate instruction that specifies under which condition a cache coherence operation is to be invoked. A hardware unit in the system runs the received store-operate instruction. The hardware unit evaluates whether the result of running the received store-operate instruction satisfies the condition. The hardware unit invokes a cache coherence operation on a cache memory address associated with the received store-operate instruction if the result satisfies the condition. Otherwise, the hardware unit does not invoke the cache coherence operation on the cache memory device.
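The claimed behavior can be paraphrased in a few lines: run the store-operate, test its result against the condition carried by the instruction, and invoke coherence only on success. Everything in this sketch (the memory model, invalidate(), store_add()) is an illustrative stand-in for the hardware unit, not a real API.

```python
# Illustrative paraphrase of the claimed control flow: execute a
# store-operate instruction, then invoke a cache-coherence operation on
# its address only if the result satisfies the instruction's condition.

memory = {0x100: 7}

def invalidate(address):
    print(f"coherence op (invalidate) on {address:#x}")

def store_add(address, operand, condition):
    memory[address] += operand          # the store-operate itself
    result = memory[address]
    if condition(result):               # condition carried by the instruction
        invalidate(address)             # coherence invoked conditionally
    return result

store_add(0x100, 1, condition=lambda r: r % 2 == 0)   # 8 is even -> coherence
store_add(0x100, 1, condition=lambda r: r % 2 == 0)   # 9 is odd  -> skipped
```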
Reconfigurable vision system for real-time applications
NASA Astrophysics Data System (ADS)
Torres-Huitzil, Cesar; Arias-Estrada, Miguel
2002-03-01
Recently, a growing community of researchers has used reconfigurable systems to solve computationally intensive problems. Reconfigurability provides optimized processors for systems-on-chip designs, and makes it easy to import technology to a new system through reusable modules. The main objective of this work is the investigation of a reconfigurable computer system targeted at computer vision and real-time applications. The system is intended to circumvent the inherent computational load of most window-based computer vision algorithms. It aims to build a system for such tasks by providing an FPGA-based hardware architecture for task-specific vision applications with enough processing power, using as few hardware resources as possible, and a mechanism for building systems using this architecture. Regarding the software part of the system, a library of pre-designed and general-purpose modules that implement common window-based computer vision operations is being investigated. A common generic interface is established for these modules in order to define hardware/software components. These components can be interconnected to develop more complex applications, providing an efficient mechanism for transferring image and result data among modules. Some preliminary results are presented and discussed.
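A "window-based operation" of the kind the module library targets can be stated compactly in software; the 3x3 mean filter below is a generic member of that class, offered only to fix ideas, not one of the library's hardware modules.

```python
# Generic example of the window-based class of operations the library
# targets: a 3x3 mean filter over an image, in plain NumPy. This is an
# algorithmic illustration, not one of the library's hardware modules.
import numpy as np

def mean_filter_3x3(img):
    """Average each pixel with its 3x3 neighborhood (edges replicated)."""
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += padded[1 + dy : 1 + dy + img.shape[0],
                          1 + dx : 1 + dx + img.shape[1]]
    return out / 9.0

img = np.arange(25, dtype=float).reshape(5, 5)
print(mean_filter_3x3(img))
```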
Addressing Failures in Exascale Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Snir, Marc; Wisniewski, Robert; Abraham, Jacob
2014-01-01
We present here a report produced by a workshop on 'Addressing failures in exascale computing' held in Park City, Utah, 4-11 August 2012. The charter of this workshop was to establish a common taxonomy about resilience across all the levels in a computing system, discuss existing knowledge on resilience across the various hardware and software layers of an exascale system, and build on those results, examining potential solutions from both a hardware and software perspective and focusing on a combined approach. The workshop brought together participants with expertise in applications, system software, and hardware; they came from industry, government, and academia, and their interests ranged from theory to implementation. The combination allowed broad and comprehensive discussions and led to this document, which summarizes and builds on those discussions.
Waterlander, Wilma Elzeline; Rayner, Mike; Scarborough, Peter
2017-01-01
Background The majority of food in the United Kingdom is purchased in supermarkets, and therefore, supermarket interventions provide an opportunity to improve diets. Randomized controlled trials are costly, time-consuming, and difficult to conduct in real stores. Alternative approaches of assessing the impact of supermarket interventions on food purchases are needed, especially with respect to assessing differential impacts on population subgroups. Objective The aim of this study was to assess the feasibility of using the United Kingdom Virtual Supermarket (UKVS), a three-dimensional (3D) computer simulation of a supermarket, to measure food purchasing behavior across income groups. Methods Participants (primary household shoppers in the United Kingdom with computer access) were asked to conduct two shopping tasks using the UKVS and complete questionnaires on demographics, food purchasing habits, and feedback on the UKVS software. Data on recruitment method and rate, completion of study procedure, purchases, and feedback on usability were collected to inform future trial protocols. Results A total of 98 participants were recruited, and 46 (47%) fully completed the study procedure. Low-income participants were less likely to complete the study (P=.02). Most participants found the UKVS easy to use (38/46, 83%) and reported that UKVS purchases resembled their usual purchases (41/46, 89%). Conclusions The UKVS is likely to be a useful tool to examine the effects of nutrition interventions using randomized controlled designs. Feedback was positive from participants who completed the study and did not differ by income group. However, retention was low and needs to be addressed in future studies. This study provides purchasing data to establish sample size requirements for full trials using the UKVS. PMID:28993301
Electronics Environmental Benefits Calculator
The Electronics Environmental Benefits Calculator (EEBC) was developed to assist organizations in estimating the environmental benefits of greening their purchase, use and disposal of electronics. The EEBC estimates the environmental and economic benefits of: Purchasing Electronic Product Environmental Assessment Tool (EPEAT)-registered products; Enabling power management features on computers and monitors above default percentages; Extending the life of equipment beyond baseline values; Reusing computers, monitors and cell phones; and Recycling computers, monitors, cell phones and loads of mixed electronic products. The EEBC may be downloaded as a Microsoft Excel spreadsheet. See https://www.federalelectronicschallenge.net/resources/bencalc.htm for more details.
Demonstration of a small programmable quantum computer with atomic qubits.
Debnath, S; Linke, N M; Figgatt, C; Landsman, K A; Wright, K; Monroe, C
2016-08-04
Quantum computers can solve certain problems more efficiently than any possible conventional computer. Small quantum algorithms have been demonstrated on multiple quantum computing platforms, many specifically tailored in hardware to implement a particular algorithm or execute a limited number of computational paths. Here we demonstrate a five-qubit trapped-ion quantum computer that can be programmed in software to implement arbitrary quantum algorithms by executing any sequence of universal quantum logic gates. We compile algorithms into a fully connected set of gate operations that are native to the hardware and have a mean fidelity of 98 per cent. Reconfiguring these gate sequences provides the flexibility to implement a variety of algorithms without altering the hardware. As examples, we implement the Deutsch-Jozsa and Bernstein-Vazirani algorithms with average success rates of 95 and 90 per cent, respectively. We also perform a coherent quantum Fourier transform on five trapped-ion qubits for phase estimation and period finding with average fidelities of 62 and 84 per cent, respectively. This small quantum computer can be scaled to larger numbers of qubits within a single register, and can be further expanded by connecting several such modules through ion shuttling or photonic quantum channels.
Demonstration of a small programmable quantum computer with atomic qubits
NASA Astrophysics Data System (ADS)
Debnath, S.; Linke, N. M.; Figgatt, C.; Landsman, K. A.; Wright, K.; Monroe, C.
2016-08-01
Quantum computers can solve certain problems more efficiently than any possible conventional computer. Small quantum algorithms have been demonstrated on multiple quantum computing platforms, many specifically tailored in hardware to implement a particular algorithm or execute a limited number of computational paths. Here we demonstrate a five-qubit trapped-ion quantum computer that can be programmed in software to implement arbitrary quantum algorithms by executing any sequence of universal quantum logic gates. We compile algorithms into a fully connected set of gate operations that are native to the hardware and have a mean fidelity of 98 per cent. Reconfiguring these gate sequences provides the flexibility to implement a variety of algorithms without altering the hardware. As examples, we implement the Deutsch-Jozsa and Bernstein-Vazirani algorithms with average success rates of 95 and 90 per cent, respectively. We also perform a coherent quantum Fourier transform on five trapped-ion qubits for phase estimation and period finding with average fidelities of 62 and 84 per cent, respectively. This small quantum computer can be scaled to larger numbers of qubits within a single register, and can be further expanded by connecting several such modules through ion shuttling or photonic quantum channels.
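The quantum Fourier transform reported above has a simple numerical description: the n-qubit QFT is the unitary with entries omega^(jk)/sqrt(N) for N = 2^n. The sketch below simulates that ideal unitary with NumPy (including the period-finding intuition) and says nothing about the ion-trap hardware or its native gate decomposition.

```python
# Numerical sketch of the quantum Fourier transform: the n-qubit QFT is
# the N x N unitary with entries omega^(jk)/sqrt(N), where N = 2**n and
# omega = exp(2*pi*i/N). This simulates the ideal gate only.
import numpy as np

def qft_matrix(n):
    N = 2 ** n
    omega = np.exp(2j * np.pi / N)
    j, k = np.meshgrid(np.arange(N), np.arange(N))
    return omega ** (j * k) / np.sqrt(N)

F = qft_matrix(3)                               # 3-qubit QFT on 8 amplitudes
print(np.allclose(F @ F.conj().T, np.eye(8)))   # unitarity check: True

# Period-finding intuition: a period-2 amplitude pattern transforms into
# probability peaks at frequencies 0 and N/2.
state = np.array([1, 0, 1, 0, 1, 0, 1, 0], dtype=complex) / 2.0
print(np.round(np.abs(F @ state) ** 2, 3))      # concentrates at indices 0 and 4
```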
Real-time range generation for ladar hardware-in-the-loop testing
NASA Astrophysics Data System (ADS)
Olson, Eric M.; Coker, Charles F.
1996-05-01
Real-time closed loop simulation of LADAR seekers in a hardware-in-the-loop facility can reduce program risk and cost. This paper discusses an implementation of real-time range imagery generated in a synthetic environment at the Kinetic Kill Vehicle Hardware-in-the-Loop facility at Eglin AFB, for the stimulation of LADAR seekers and algorithms. The computer hardware platform used was a Silicon Graphics Incorporated Onyx Reality Engine. This computer contains graphics hardware, and is optimized for generating visible or infrared imagery in real-time. A by-product of the rendering process, the depth buffer, is generated from all objects in view. The depth buffer is an array of integer values that contributes to the proper rendering of overlapping objects and can be converted to range values using a mathematical formula. This paper presents an optimized software approach to the generation of the scenes, calculation of the range values, and outputting the range data for a LADAR seeker.
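The depth-to-range conversion alluded to above is, under common OpenGL conventions, the standard inversion of the perspective depth buffer; the sketch below uses hypothetical near/far plane values, whereas the cited system worked with an SGI-specific integer depth buffer.

```python
# Standard inversion of a perspective depth buffer to eye-space range,
# under common OpenGL conventions: window depth z_w in [0,1] maps back to
# range = (near*far) / (far - z_w*(far - near)). Plane distances here are
# hypothetical; the cited system used an SGI-specific integer buffer.
import numpy as np

near, far = 1.0, 10_000.0               # hypothetical clip planes (meters)

def depth_to_range(z_window):
    """Convert normalized window depth in [0,1] to eye-space range."""
    return (near * far) / (far - z_window * (far - near))

z = np.array([0.0, 0.5, 0.9, 1.0])
print(depth_to_range(z))                # near ... far, nonlinearly spaced
```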
NASA Technical Reports Server (NTRS)
Shiva, S. G.
1978-01-01
Several high-level languages that have evolved over the past few years for describing and simulating, on digital computers, the structure and behavior of digital systems are assessed. The characteristics of the four prominent languages (CDL, DDL, AHPL, ISP) are summarized. A criterion for selecting a suitable hardware description language for use in an automatic integrated circuit design environment is provided.
Department of Defense Computer Technology. A Report to Congress.
1983-08-01
system and evolves his employment tactics. (8) Lack of adequate competition. Conclusions Based on both software and hardware arguments, it is...environments, Services should be encouraged to use either common-commercial, ruggedized-commercial or "off-the-shelf" militarized computers based upon...the performance requirements of the specific application. Full consideration should be given to Ada- based systems where there is no strict hardware
Stellar Inertial Navigation Workstation
NASA Technical Reports Server (NTRS)
Johnson, W.; Johnson, B.; Swaminathan, N.
1989-01-01
Software and hardware assembled to support specific engineering activities. Stellar Inertial Navigation Workstation (SINW) is integrated computer workstation providing systems and engineering support functions for Space Shuttle guidance and navigation-system logistics, repair, and procurement activities. Consists of personal-computer hardware, packaged software, and custom software integrated together into user-friendly, menu-driven system. Designed to operate on IBM PC XT. Applied in business and industry to develop similar workstations.
The Triangle: a Multiprocessor Architecture for Fast Curve and Surface Generation.
1987-08-01
Keywords: curves, B-splines, computer-aided geometric design; curves and surfaces; graphics hardware. [The remainder of this record is OCR-garbled; one recoverable reference: M. A. Penna and R. R. Patterson, Projective Geometry and its Applications to Computer Graphics, Prentice-Hall, Englewood Cliffs, N.J., 1985.]
Bruemmer, David J [Idaho Falls, ID; Few, Douglas A [Idaho Falls, ID
2010-09-21
The present invention provides methods, computer readable media, and apparatuses for a generic robot architecture providing a framework that is easily portable to a variety of robot platforms and is configured to provide hardware abstractions, abstractions for generic robot attributes, environment abstractions, and robot behaviors. The generic robot architecture includes a hardware abstraction level and a robot abstraction level. The hardware abstraction level is configured for developing hardware abstractions that define, monitor, and control hardware modules available on a robot platform. The robot abstraction level is configured for defining robot attributes and provides a software framework for building robot behaviors from the robot attributes. Each of the robot attributes includes hardware information from at least one hardware abstraction. In addition, each robot attribute is configured to substantially isolate the robot behaviors from the at least one hardware abstraction.
MO-AB-204-01: IHE RO Overview [Health Care
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hadley, S.
You’ve experienced the frustration: vendor A’s device claims to work with vendor B’s device, but the practice doesn’t match the promise. Getting devices working together is the hidden art that Radiology and Radiation Oncology staff have to master. To assist with that difficult process, the Integrating the Healthcare Enterprise (IHE) effort was established in 1998, with the coordination of the Radiological Society of North America. Integrating the Healthcare Enterprise (IHE) is a consortium of healthcare professionals and industry partners focused on improving the way computer systems interconnect and exchange information. This is done by coordinating the use of published standards like DICOM and HL7. Several clinical and operational IHE domains exist in the healthcare arena, including Radiology and Radiation Oncology. The ASTRO-sponsored IHE Radiation Oncology (IHE-RO) domain focuses on radiation oncology specific information exchange. This session will explore the IHE Radiology and IHE-RO processes for: soliciting new profiles; improving the way computer systems interconnect and exchange information in the healthcare enterprise; supporting interconnectivity descriptions and proof of adherence by vendors; testing and assuring vendor solutions to connectivity problems; and including IHE profiles in RFPs for future software and hardware purchases. Learning Objectives: Understand IHE’s role in improving interoperability in health care. Understand the process of profile development and implementation. Understand how vendors prove adherence to IHE-RO profiles. S. Hadley, ASTRO Supported Activity.
MO-AB-204-02: IHE RAD [Health Care]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seibert, J.
You've experienced the frustration: vendor A's device claims to work with vendor B's device, but the practice doesn't match the promise. Getting devices working together is the hidden art that Radiology and Radiation Oncology staff have to master. To assist with that difficult process, the Integrating the Healthcare Enterprise (IHE) effort was established in 1998 under the coordination of the Radiological Society of North America. IHE is a consortium of healthcare professionals and industry partners focused on improving the way computer systems interconnect and exchange information, by coordinating the use of published standards like DICOM and HL7. Several clinical and operational IHE domains exist in the healthcare arena, including Radiology and Radiation Oncology. The ASTRO-sponsored IHE Radiation Oncology (IHE-RO) domain focuses on radiation-oncology-specific information exchange. This session will explore the IHE Radiology and IHE-RO processes for: soliciting new profiles; improving the way computer systems interconnect and exchange information in the healthcare enterprise; supporting interconnectivity descriptions and proof of adherence by vendors; testing and assuring vendor solutions to connectivity problems; and including IHE profiles in RFPs for future software and hardware purchases. Learning Objectives: Understand the IHE role in improving interoperability in health care. Understand the process of profile development and implementation. Understand how vendors prove adherence to IHE-RO profiles. S. Hadley, ASTRO Supported Activity.
MO-AB-204-04: Connectathons and Testing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bosch, W.
You've experienced the frustration: vendor A's device claims to work with vendor B's device, but the practice doesn't match the promise. Getting devices working together is the hidden art that Radiology and Radiation Oncology staff have to master. To assist with that difficult process, the Integrating the Healthcare Enterprise (IHE) effort was established in 1998 under the coordination of the Radiological Society of North America. IHE is a consortium of healthcare professionals and industry partners focused on improving the way computer systems interconnect and exchange information, by coordinating the use of published standards like DICOM and HL7. Several clinical and operational IHE domains exist in the healthcare arena, including Radiology and Radiation Oncology. The ASTRO-sponsored IHE Radiation Oncology (IHE-RO) domain focuses on radiation-oncology-specific information exchange. This session will explore the IHE Radiology and IHE-RO processes for: soliciting new profiles; improving the way computer systems interconnect and exchange information in the healthcare enterprise; supporting interconnectivity descriptions and proof of adherence by vendors; testing and assuring vendor solutions to connectivity problems; and including IHE profiles in RFPs for future software and hardware purchases. Learning Objectives: Understand the IHE role in improving interoperability in health care. Understand the process of profile development and implementation. Understand how vendors prove adherence to IHE-RO profiles. S. Hadley, ASTRO Supported Activity.
MO-AB-204-03: Profile Development and IHE Process
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pauer, C.
You've experienced the frustration: vendor A's device claims to work with vendor B's device, but the practice doesn't match the promise. Getting devices working together is the hidden art that Radiology and Radiation Oncology staff have to master. To assist with that difficult process, the Integrating the Healthcare Enterprise (IHE) effort was established in 1998 under the coordination of the Radiological Society of North America. IHE is a consortium of healthcare professionals and industry partners focused on improving the way computer systems interconnect and exchange information, by coordinating the use of published standards like DICOM and HL7. Several clinical and operational IHE domains exist in the healthcare arena, including Radiology and Radiation Oncology. The ASTRO-sponsored IHE Radiation Oncology (IHE-RO) domain focuses on radiation-oncology-specific information exchange. This session will explore the IHE Radiology and IHE-RO processes for: soliciting new profiles; improving the way computer systems interconnect and exchange information in the healthcare enterprise; supporting interconnectivity descriptions and proof of adherence by vendors; testing and assuring vendor solutions to connectivity problems; and including IHE profiles in RFPs for future software and hardware purchases. Learning Objectives: Understand the IHE role in improving interoperability in health care. Understand the process of profile development and implementation. Understand how vendors prove adherence to IHE-RO profiles. S. Hadley, ASTRO Supported Activity.
A hardware implementation of the discrete Pascal transform for image processing
NASA Astrophysics Data System (ADS)
Goodman, Thomas J.; Aburdene, Maurice F.
2006-02-01
The discrete Pascal transform is a polynomial transform with applications in pattern recognition, digital filtering, and digital image processing. It already has been shown that the Pascal transform matrix can be decomposed into a product of binary matrices. Such a factorization leads to a fast and efficient hardware implementation without the use of multipliers, which consume large amounts of hardware. We recently developed a field-programmable gate array (FPGA) implementation to compute the Pascal transform. Our goal was to demonstrate the computational efficiency of the transform while keeping hardware requirements at a minimum. Images are uploaded into memory from a remote computer prior to processing, and the transform coefficients can be offloaded from the FPGA board for analysis. Design techniques like as-soon-as-possible scheduling and adder sharing allowed us to develop a fast and efficient system. An eight-point, one-dimensional transform completes in 13 clock cycles and requires only four adders. An 8x8 two-dimensional transform completes in 240 cycles and requires only a top-level controller in addition to the one-dimensional transform hardware. Finally, through minor modifications to the controller, the transform operations can be pipelined to achieve 100% utilization of the four adders, allowing one eight-point transform to complete every seven clock cycles.
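The multiplier-free factorization described above lends itself to a short software model. The sketch below computes the transform with additions only, one pass per bidiagonal (binary) factor, and checks it against an explicit Pascal matrix; the unsigned lower-triangular convention is assumed here, since published definitions differ by an alternating-sign convention.

# Addition-only discrete Pascal transform, mirroring the decomposition of the
# Pascal matrix into binary (bidiagonal) factors; no multipliers are used.
import numpy as np
from math import comb

def pascal_transform(x):
    y = list(x)
    n = len(y)
    for k in range(n - 1):            # one bidiagonal factor per pass
        for i in range(n - 1, k, -1):
            y[i] += y[i - 1]          # adders only
    return y

x = [1.0, 2.0, 3.0, 4.0]
P = np.array([[comb(i, j) for j in range(4)] for i in range(4)])
assert np.allclose(pascal_transform(x), P @ x)   # matches the matrix form
print(pascal_transform(x))                       # [1.0, 3.0, 8.0, 20.0]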
Software environment for implementing engineering applications on MIMD computers
NASA Technical Reports Server (NTRS)
Lopez, L. A.; Valimohamed, K. A.; Schiff, S.
1990-01-01
In this paper the concept for a software environment for developing engineering application systems for multiprocessor hardware (MIMD) is presented. The philosophy employed is to solve the largest problems possible in a reasonable amount of time, rather than solve existing problems faster. In the proposed environment most of the problems concerning parallel computation and handling of large distributed data spaces are hidden from the application program developer, thereby facilitating the development of large-scale software applications. Applications developed under the environment can be executed on a variety of MIMD hardware; it protects the application software from the effects of a rapidly changing MIMD hardware technology.
NASA Technical Reports Server (NTRS)
Srivas, Mandayam; Bickford, Mark
1991-01-01
The design and formal verification of a hardware system for a task that is an important component of a fault-tolerant computer architecture for flight control systems is presented. The hardware system implements an algorithm for obtaining interactive consistency (Byzantine agreement) among four microprocessors as a special instruction on the processors. The property verified ensures that an execution of the special instruction by the processors correctly accomplishes interactive consistency, provided certain preconditions hold. An assumption is made that the processors execute synchronously. For verification, the authors used a computer-aided hardware design verification tool, Spectool, and the theorem prover, Clio. A major contribution of the work is the demonstration of a significant fault-tolerant hardware design that is mechanically verified by a theorem prover.
NASA Technical Reports Server (NTRS)
Logan, Cory; Maida, James; Goldsby, Michael; Clark, Jim; Wu, Liew; Prenger, Henk
1993-01-01
The Space Station Freedom (SSF) Data Management System (DMS) consists of distributed hardware and software which monitor and control the many onboard systems. Virtual environment and off-the-shelf computer technologies can be used at critical points in project development to aid in objectives and requirements development. Geometric models (images) coupled with off-the-shelf hardware and software technologies were used in The Space Station Mockup and Trainer Facility (SSMTF) Crew Operational Assessment Project. Rapid prototyping is shown to be a valuable tool for operational procedure and system hardware and software requirements development. The project objectives, hardware and software technologies used, data gained, current activities, future development and training objectives shall be discussed. The importance of defining prototyping objectives and staying focused while maintaining schedules are discussed along with project pitfalls.
Computer hardware for radiologists: Part 2
Indrajit, IK; Alam, A
2010-01-01
Computers are an integral part of modern radiology equipment. In the first half of this two-part article, we dwelt upon some fundamental concepts regarding computer hardware, covering components like motherboard, central processing unit (CPU), chipset, random access memory (RAM), and memory modules. In this article, we describe the remaining computer hardware components that are of relevance to radiology. “Storage drive” is a term describing a “memory” hardware used to store data for later retrieval. Commonly used storage drives are hard drives, floppy drives, optical drives, flash drives, and network drives. The capacity of a hard drive is dependent on many factors, including the number of disk sides, number of tracks per side, number of sectors on each track, and the amount of data that can be stored in each sector. “Drive interfaces” connect hard drives and optical drives to a computer. The connections of such drives require both a power cable and a data cable. The four most popular “input/output devices” used commonly with computers are the printer, monitor, mouse, and keyboard. The “bus” is a built-in electronic signal pathway in the motherboard to permit efficient and uninterrupted data transfer. A motherboard can have several buses, including the system bus, the PCI express bus, the PCI bus, the AGP bus, and the (outdated) ISA bus. “Ports” are the location at which external devices are connected to a computer motherboard. All commonly used peripheral devices, such as printers, scanners, and portable drives, need ports. A working knowledge of computers is necessary for the radiologist if the workflow is to realize its full potential and, besides, this knowledge will prepare the radiologist for the coming innovations in the ‘ever increasing’ digital future. PMID:21423895
Computer hardware for radiologists: Part 2.
Indrajit, Ik; Alam, A
2010-11-01
Computers are an integral part of modern radiology equipment. In the first half of this two-part article, we dwelt upon some fundamental concepts regarding computer hardware, covering components like motherboard, central processing unit (CPU), chipset, random access memory (RAM), and memory modules. In this article, we describe the remaining computer hardware components that are of relevance to radiology. "Storage drive" is a term describing a "memory" hardware used to store data for later retrieval. Commonly used storage drives are hard drives, floppy drives, optical drives, flash drives, and network drives. The capacity of a hard drive is dependent on many factors, including the number of disk sides, number of tracks per side, number of sectors on each track, and the amount of data that can be stored in each sector. "Drive interfaces" connect hard drives and optical drives to a computer. The connections of such drives require both a power cable and a data cable. The four most popular "input/output devices" used commonly with computers are the printer, monitor, mouse, and keyboard. The "bus" is a built-in electronic signal pathway in the motherboard to permit efficient and uninterrupted data transfer. A motherboard can have several buses, including the system bus, the PCI express bus, the PCI bus, the AGP bus, and the (outdated) ISA bus. "Ports" are the location at which external devices are connected to a computer motherboard. All commonly used peripheral devices, such as printers, scanners, and portable drives, need ports. A working knowledge of computers is necessary for the radiologist if the workflow is to realize its full potential and, besides, this knowledge will prepare the radiologist for the coming innovations in the 'ever increasing' digital future.
Towards real-time photon Monte Carlo dose calculation in the cloud
NASA Astrophysics Data System (ADS)
Ziegenhein, Peter; Kozin, Igor N.; Kamerling, Cornelis Ph; Oelfke, Uwe
2017-06-01
Near real-time application of Monte Carlo (MC) dose calculation in clinic and research is hindered by the long computational runtimes of established software. Currently, fast MC software solutions are available utilising accelerators such as graphical processing units (GPUs) or clusters based on central processing units (CPUs). Both platforms are expensive in terms of purchase costs and maintenance and, in case of the GPU, provide only limited scalability. In this work we propose a cloud-based MC solution, which offers high scalability of accurate photon dose calculations. The MC simulations run on a private virtual supercomputer that is formed in the cloud. Computational resources can be provisioned dynamically at low cost without upfront investment in expensive hardware. A client-server software solution has been developed which controls the simulations and transports data to and from the cloud efficiently and securely. The client application integrates seamlessly into a treatment planning system. It runs the MC simulation workflow automatically and securely exchanges simulation data with the server side application that controls the virtual supercomputer. Advanced encryption standards were used to add an additional security layer, which encrypts and decrypts patient data on-the-fly at the processor register level. We could show that our cloud-based MC framework enables near real-time dose computation. It delivers excellent linear scaling for high-resolution datasets with absolute runtimes of 1.1 seconds to 10.9 seconds for simulating a clinical prostate and liver case up to 1% statistical uncertainty. The computation runtimes include the transportation of data to and from the cloud as well as process scheduling and synchronisation overhead. Cloud-based MC simulations offer a fast, affordable and easily accessible alternative for near real-time accurate dose calculations to currently used GPU or cluster solutions.
Towards real-time photon Monte Carlo dose calculation in the cloud.
Ziegenhein, Peter; Kozin, Igor N; Kamerling, Cornelis Ph; Oelfke, Uwe
2017-06-07
Near real-time application of Monte Carlo (MC) dose calculation in clinic and research is hindered by the long computational runtimes of established software. Currently, fast MC software solutions are available utilising accelerators such as graphical processing units (GPUs) or clusters based on central processing units (CPUs). Both platforms are expensive in terms of purchase costs and maintenance and, in case of the GPU, provide only limited scalability. In this work we propose a cloud-based MC solution, which offers high scalability of accurate photon dose calculations. The MC simulations run on a private virtual supercomputer that is formed in the cloud. Computational resources can be provisioned dynamically at low cost without upfront investment in expensive hardware. A client-server software solution has been developed which controls the simulations and transports data to and from the cloud efficiently and securely. The client application integrates seamlessly into a treatment planning system. It runs the MC simulation workflow automatically and securely exchanges simulation data with the server side application that controls the virtual supercomputer. Advanced encryption standards were used to add an additional security layer, which encrypts and decrypts patient data on-the-fly at the processor register level. We could show that our cloud-based MC framework enables near real-time dose computation. It delivers excellent linear scaling for high-resolution datasets with absolute runtimes of 1.1 seconds to 10.9 seconds for simulating a clinical prostate and liver case up to 1% statistical uncertainty. The computation runtimes include the transportation of data to and from the cloud as well as process scheduling and synchronisation overhead. Cloud-based MC simulations offer a fast, affordable and easily accessible alternative for near real-time accurate dose calculations to currently used GPU or cluster solutions.
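As a rough illustration of the client-side encryption step described in the two records above, the sketch below encrypts a simulation input blob with AES-GCM before it would leave the client. It uses the third-party Python cryptography package and is only a conceptual stand-in for the authors' register-level implementation; the key-distribution step is assumed.

# Illustrative client-side encryption of simulation data before cloud upload.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # assumed shared with the cloud head node
aesgcm = AESGCM(key)

def encrypt_blob(plaintext: bytes) -> bytes:
    nonce = os.urandom(12)                  # unique nonce per message
    return nonce + aesgcm.encrypt(nonce, plaintext, None)

def decrypt_blob(blob: bytes) -> bytes:
    return aesgcm.decrypt(blob[:12], blob[12:], None)

ct = encrypt_blob(b"CT voxel data ...")
assert decrypt_blob(ct) == b"CT voxel data ..."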
Pre-Hardware Optimization of Spacecraft Image Processing Algorithms and Hardware Implementation
NASA Technical Reports Server (NTRS)
Kizhner, Semion; Petrick, David J.; Flatley, Thomas P.; Hestnes, Phyllis; Jentoft-Nilsen, Marit; Day, John H. (Technical Monitor)
2002-01-01
Spacecraft telemetry rates and telemetry product complexity have steadily increased over the last decade presenting a problem for real-time processing by ground facilities. This paper proposes a solution to a related problem for the Geostationary Operational Environmental Spacecraft (GOES-8) image data processing and color picture generation application. Although large super-computer facilities are the obvious heritage solution, they are very costly, making it imperative to seek a feasible alternative engineering solution at a fraction of the cost. The proposed solution is based on a Personal Computer (PC) platform and synergy of optimized software algorithms, and reconfigurable computing hardware (RC) technologies, such as Field Programmable Gate Arrays (FPGA) and Digital Signal Processors (DSP). It has been shown that this approach can provide superior inexpensive performance for a chosen application on the ground station or on-board a spacecraft.
NASA Astrophysics Data System (ADS)
Aditya, K.; Biswadeep, G.; Kedar, S.; Sundar, S.
2017-11-01
Human-computer communication is in growing demand. The new generation of autonomous technology aspires to give computer interfaces emotional states that account for both the user and the system environment. The existing computational model is based on artificial intelligence augmented externally by multi-modal expression with semi-human characteristics. The main problem with multi-modal expression is that the hardware control given to the Artificial Intelligence (AI) is very limited, so in this project we try to give the AI more control over the hardware. Two main components, Speech-to-Text (STT) and Text-to-Speech (TTS) engines, are used to accomplish this. The hardware consists of a Raspberry Pi 3, a speaker, and a microphone; the programming is done with Python scripting.
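A minimal sketch of such an STT/TTS loop on the Pi, assuming the third-party SpeechRecognition and pyttsx3 packages; the abstract does not name its libraries, so these are illustrative choices.

# Listen on the microphone, transcribe, and speak the result back.
import speech_recognition as sr
import pyttsx3

recognizer = sr.Recognizer()
tts = pyttsx3.init()

with sr.Microphone() as mic:                   # assumed USB mic on the Pi
    recognizer.adjust_for_ambient_noise(mic)
    audio = recognizer.listen(mic)

try:
    text = recognizer.recognize_google(audio)  # STT engine
except sr.UnknownValueError:
    text = "Sorry, I did not catch that."

tts.say(text)                                  # TTS engine, out the speaker
tts.runAndWait()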
NASA Astrophysics Data System (ADS)
Moon, Hongsik
What is the impact of multicore and associated advanced technologies on computational software for science? Most researchers and students have multicore laptops or desktops for their research, and they need computing power to run computational software packages. Computing power was initially derived from Central Processing Unit (CPU) clock speed. That changed when increases in clock speed became constrained by power requirements. Chip manufacturers turned to multicore CPU architectures and associated technological advancements to create the CPUs for the future. Most software applications benefited by the increased computing power the same way that increases in clock speed helped applications run faster. However, for Computational ElectroMagnetics (CEM) software developers, this change was not an obvious benefit - it appeared to be a detriment. Developers were challenged to find a way to correctly utilize the advancements in hardware so that their codes could benefit. The solution was parallelization, and this dissertation details the investigation to address these challenges. Prior to multicore CPUs, advanced computer technologies were compared using benchmark software, and the metric was Floating-point Operations Per Second (FLOPS), which indicates system performance for scientific applications that make heavy use of floating-point calculations. Is FLOPS an effective metric for parallelized CEM simulation tools on new multicore systems? Parallel CEM software needs to be benchmarked not only by FLOPS but also by the performance of other parameters related to the type and utilization of the hardware, such as the CPU, Random Access Memory (RAM), hard disk, network, etc. The codes need to be optimized for more than just FLOPS, and new parameters must be included in benchmarking. In this dissertation, the parallel CEM software named High Order Basis Based Integral Equation Solver (HOBBIES) is introduced. This code was developed to address the needs of changing computer hardware platforms in order to provide fast, accurate and efficient solutions to large, complex electromagnetic problems. The research in this dissertation proves that the performance of parallel code is intimately related to the configuration of the computer hardware and can be maximized for different hardware platforms. To benchmark and optimize the performance of parallel CEM software, a variety of large, complex projects are created and executed on a variety of computer platforms. The computer platforms used in this research are detailed in this dissertation. The projects run as benchmarks are also described in detail and results are presented. The parameters that affect parallel CEM software on High Performance Computing Clusters (HPCC) are investigated. This research demonstrates methods to maximize the performance of parallel CEM software code.
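For context on the FLOPS metric discussed above, the following sketch shows how an achieved-FLOPS figure is commonly measured: time a dense matrix multiply and divide the nominal operation count (2n^3 for an n x n matmul) by the wall time. This is a generic illustration, not material from the dissertation.

# Measure achieved GFLOPS from a dense matrix multiplication.
import time
import numpy as np

n = 2048
a = np.random.rand(n, n)
b = np.random.rand(n, n)

t0 = time.perf_counter()
c = a @ b
elapsed = time.perf_counter() - t0

gflops = 2.0 * n**3 / elapsed / 1e9   # 2n^3 floating-point operations
print(f"{gflops:.1f} GFLOPS in {elapsed:.3f} s")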
Computer vision camera with embedded FPGA processing
NASA Astrophysics Data System (ADS)
Lecerf, Antoine; Ouellet, Denis; Arias-Estrada, Miguel
2000-03-01
Traditional computer vision is based on a camera-computer system in which the image understanding algorithms are embedded in the computer. To circumvent the computational load of vision algorithms, low-level processing and imaging hardware can be integrated in a single compact module where a dedicated architecture is implemented. This paper presents a Computer Vision Camera based on an open architecture implemented in an FPGA. The system is targeted to real-time computer vision tasks where low-level processing and feature extraction tasks can be implemented in the FPGA device. The camera integrates a CMOS image sensor, an FPGA device, two memory banks, and an embedded PC for communication and control tasks. The FPGA device is a medium-size one, equivalent to 25,000 logic gates. The device is connected to two high-speed memory banks, an IS interface, and an imager interface. The camera can be accessed for architecture programming, data transfer, and control through an Ethernet link from a remote computer. A hardware architecture can be defined in a Hardware Description Language (like VHDL), simulated, and synthesized into digital structures that can be programmed into the FPGA and tested on the camera. The architecture of a classical multi-scale edge detection algorithm based on a Laplacian of Gaussian convolution has been developed to show the capabilities of the system.
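A software reference for the multi-scale Laplacian-of-Gaussian edge detection the camera implements might look like the sketch below (NumPy/SciPy); it models the algorithm only, not the VHDL architecture described in the paper.

# Multi-scale LoG edge detection: edges are zero crossings of the response.
import numpy as np
from scipy import ndimage

def log_edges(image, sigma):
    """Edges at one scale: sign changes of the Laplacian-of-Gaussian response."""
    response = ndimage.gaussian_laplace(image.astype(float), sigma=sigma)
    sign = response > 0
    # a pixel is marked where the response changes sign against a neighbor
    return (np.roll(sign, 1, axis=0) != sign) | (np.roll(sign, 1, axis=1) != sign)

image = np.random.rand(128, 128)                      # placeholder frame
edges_per_scale = [log_edges(image, s) for s in (1.0, 2.0, 4.0)]  # multi-scale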
Programming languages and compiler design for realistic quantum hardware.
Chong, Frederic T; Franklin, Diana; Martonosi, Margaret
2017-09-13
Quantum computing sits at an important inflection point. For years, high-level algorithms for quantum computers have shown considerable promise, and recent advances in quantum device fabrication offer hope of utility. A gap still exists, however, between the hardware size and reliability requirements of quantum computing algorithms and the physical machines foreseen within the next ten years. To bridge this gap, quantum computers require appropriate software to translate and optimize applications (toolflows) and abstraction layers. Given the stringent resource constraints in quantum computing, information passed between layers of software and implementations will differ markedly from that in classical computing. Quantum toolflows must expose more physical details between layers, so the challenge is to find abstractions that expose key details while hiding enough complexity.
An emulator for minimizing computer resources for finite element analysis
NASA Technical Reports Server (NTRS)
Melosh, R.; Utku, S.; Islam, M.; Salama, M.
1984-01-01
A computer code, SCOPE, has been developed for predicting the computer resources required for a given analysis code, computer hardware, and structural problem. The cost of running the code is a small fraction (about 3 percent) of the cost of performing the actual analysis. However, its accuracy in predicting the CPU and I/O resources depends intrinsically on the accuracy of calibration data that must be developed once for the computer hardware and the finite element analysis code of interest. Testing of the SCOPE code on the AMDAHL 470 V/8 computer and the ELAS finite element analysis program indicated small I/O errors (3.2 percent), larger CPU errors (17.8 percent), and negligible total errors (1.5 percent).
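The calibration-then-predict idea behind SCOPE can be illustrated generically: fit a model of resource use against problem-size features from a few calibration runs, then evaluate it for a planned analysis. The features, data, and linear model form below are invented for illustration and are not SCOPE's actual model.

# Predict CPU seconds for a planned run from calibration measurements.
import numpy as np

# calibration runs: (degrees of freedom, matrix bandwidth) -> measured CPU seconds
features = np.array([[1e3, 50], [5e3, 120], [2e4, 300], [1e5, 800]])
cpu_sec = np.array([0.4, 2.1, 9.5, 52.0])            # synthetic measurements

# least-squares fit of cpu ~ c0 + c1*dof + c2*bandwidth
A = np.column_stack([np.ones(len(features)), features])
coef, *_ = np.linalg.lstsq(A, cpu_sec, rcond=None)

new_problem = np.array([1.0, 4e4, 500])              # a planned analysis
print("predicted CPU seconds:", new_problem @ coef)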
Programming languages and compiler design for realistic quantum hardware
NASA Astrophysics Data System (ADS)
Chong, Frederic T.; Franklin, Diana; Martonosi, Margaret
2017-09-01
Quantum computing sits at an important inflection point. For years, high-level algorithms for quantum computers have shown considerable promise, and recent advances in quantum device fabrication offer hope of utility. A gap still exists, however, between the hardware size and reliability requirements of quantum computing algorithms and the physical machines foreseen within the next ten years. To bridge this gap, quantum computers require appropriate software to translate and optimize applications (toolflows) and abstraction layers. Given the stringent resource constraints in quantum computing, information passed between layers of software and implementations will differ markedly from that in classical computing. Quantum toolflows must expose more physical details between layers, so the challenge is to find abstractions that expose key details while hiding enough complexity.
Resource Efficient Hardware Architecture for Fast Computation of Running Max/Min Filters
Torres-Huitzil, Cesar
2013-01-01
Running max/min filters on rectangular kernels are widely used in many digital signal and image processing applications. Filtering with a k × k kernel requires k² − 1 comparisons per sample for a direct implementation; thus, performance scales expensively with the kernel size k. Faster computations can be achieved by kernel decomposition and using constant-time one-dimensional algorithms on custom hardware. This paper presents a hardware architecture for real-time computation of running max/min filters based on the van Herk/Gil-Werman (HGW) algorithm. The proposed architecture design uses less computation and memory resources than previously reported architectures when targeted to Field Programmable Gate Array (FPGA) devices. Implementation results show that the architecture is able to compute max/min filters, on 1024 × 1024 images with up to 255 × 255 kernels, in around 8.4 milliseconds, 120 frames per second, at a clock frequency of 250 MHz. The implementation is highly scalable for the kernel size with a good performance/area tradeoff suitable for embedded applications. The applicability of the architecture is shown for local adaptive image thresholding. PMID:24288456
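A software reference model of the van Herk/Gil-Werman recurrence the architecture implements, which needs roughly three comparisons per sample regardless of kernel size, is sketched below and checked against brute force; it is an illustration only, not the FPGA design.

# 1D van Herk/Gil-Werman running max: blockwise forward and backward maxima.
import numpy as np

def hgw_max_1d(a, w):
    n = len(a)
    g = np.empty(n)   # running max, forward within blocks of w
    h = np.empty(n)   # running max, backward within blocks of w
    for i in range(n):
        g[i] = a[i] if i % w == 0 else max(g[i - 1], a[i])
    for i in range(n - 1, -1, -1):
        h[i] = a[i] if (i == n - 1 or (i + 1) % w == 0) else max(h[i + 1], a[i])
    # window [i, i+w-1] spans at most two blocks: combine the two partial maxima
    return np.array([max(h[i], g[i + w - 1]) for i in range(n - w + 1)])

a = np.random.rand(1024)
w = 7
brute = np.array([a[i:i + w].max() for i in range(len(a) - w + 1)])
assert np.allclose(hgw_max_1d(a, w), brute)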
An open-hardware platform for optogenetics and photobiology
Gerhardt, Karl P.; Olson, Evan J.; Castillo-Hair, Sebastian M.; Hartsough, Lucas A.; Landry, Brian P.; Ekness, Felix; Yokoo, Rayka; Gomez, Eric J.; Ramakrishnan, Prabha; Suh, Junghae; Savage, David F.; Tabor, Jeffrey J.
2016-01-01
In optogenetics, researchers use light and genetically encoded photoreceptors to control biological processes with unmatched precision. However, outside of neuroscience, the impact of optogenetics has been limited by a lack of user-friendly, flexible, accessible hardware. Here, we engineer the Light Plate Apparatus (LPA), a device that can deliver two independent 310 to 1550 nm light signals to each well of a 24-well plate with intensity control over three orders of magnitude and millisecond resolution. Signals are programmed using an intuitive web tool named Iris. All components can be purchased for under $400 and the device can be assembled and calibrated by a non-expert in one day. We use the LPA to precisely control gene expression from blue, green, and red light responsive optogenetic tools in bacteria, yeast, and mammalian cells and simplify the entrainment of cyanobacterial circadian rhythm. The LPA dramatically reduces the entry barrier to optogenetics and photobiology experiments. PMID:27805047
An open-hardware platform for optogenetics and photobiology
Gerhardt, Karl P.; Olson, Evan J.; Castillo-Hair, Sebastian M.; ...
2016-11-02
In optogenetics, researchers use light and genetically encoded photoreceptors to control biological processes with unmatched precision. However, outside of neuroscience, the impact of optogenetics has been limited by a lack of user-friendly, flexible, accessible hardware. Here, we engineer the Light Plate Apparatus (LPA), a device that can deliver two independent 310 to 1550 nm light signals to each well of a 24-well plate with intensity control over three orders of magnitude and millisecond resolution. Signals are programmed using an intuitive web tool named Iris. All components can be purchased for under $400 and the device can be assembled and calibrated by a non-expert in one day. We use the LPA to precisely control gene expression from blue, green, and red light responsive optogenetic tools in bacteria, yeast, and mammalian cells and simplify the entrainment of cyanobacterial circadian rhythm. Lastly, the LPA dramatically reduces the entry barrier to optogenetics and photobiology experiments.
An open-hardware platform for optogenetics and photobiology.
Gerhardt, Karl P; Olson, Evan J; Castillo-Hair, Sebastian M; Hartsough, Lucas A; Landry, Brian P; Ekness, Felix; Yokoo, Rayka; Gomez, Eric J; Ramakrishnan, Prabha; Suh, Junghae; Savage, David F; Tabor, Jeffrey J
2016-11-02
In optogenetics, researchers use light and genetically encoded photoreceptors to control biological processes with unmatched precision. However, outside of neuroscience, the impact of optogenetics has been limited by a lack of user-friendly, flexible, accessible hardware. Here, we engineer the Light Plate Apparatus (LPA), a device that can deliver two independent 310 to 1550 nm light signals to each well of a 24-well plate with intensity control over three orders of magnitude and millisecond resolution. Signals are programmed using an intuitive web tool named Iris. All components can be purchased for under $400 and the device can be assembled and calibrated by a non-expert in one day. We use the LPA to precisely control gene expression from blue, green, and red light responsive optogenetic tools in bacteria, yeast, and mammalian cells and simplify the entrainment of cyanobacterial circadian rhythm. The LPA dramatically reduces the entry barrier to optogenetics and photobiology experiments.
An open-hardware platform for optogenetics and photobiology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gerhardt, Karl P.; Olson, Evan J.; Castillo-Hair, Sebastian M.
In optogenetics, researchers use light and genetically encoded photoreceptors to control biological processes with unmatched precision. However, outside of neuroscience, the impact of optogenetics has been limited by a lack of user-friendly, flexible, accessible hardware. Here, we engineer the Light Plate Apparatus (LPA), a device that can deliver two independent 310 to 1550 nm light signals to each well of a 24-well plate with intensity control over three orders of magnitude and millisecond resolution. Signals are programmed using an intuitive web tool named Iris. All components can be purchased for under $400 and the device can be assembled and calibrated by a non-expert in one day. We use the LPA to precisely control gene expression from blue, green, and red light responsive optogenetic tools in bacteria, yeast, and mammalian cells and simplify the entrainment of cyanobacterial circadian rhythm. Lastly, the LPA dramatically reduces the entry barrier to optogenetics and photobiology experiments.
Engineering Study and Concept Development for a Humanitarian Aid and Disaster Relief Operations Management Platform
Reed, Julie A.
2016-09-01
The computing and network hardware are identified and include routers, servers, firewalls, laptops, backup hard drives, smart phones... deployable hardware units will be necessary. This includes the use of ruggedized laptops and desktop computers, a projector system, and a communications system...
Scout: high-performance heterogeneous computing made simple
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jablin, James; Mc Cormick, Patrick; Herlihy, Maurice
2011-01-26
Researchers must often write their own simulation and analysis software. During this process they simultaneously confront both computational and scientific problems. Current strategies for aiding the generation of performance-oriented programs do not abstract the software development from the science. Furthermore, the problem is becoming increasingly complex and pressing with the continued development of many-core and heterogeneous (CPU-GPU) architectures. To achieve high performance, scientists must expertly navigate both software and hardware. Co-design between computer scientists and research scientists can alleviate but not solve this problem. The science community requires better tools for developing, optimizing, and future-proofing codes, allowing scientists to focus on their research while still achieving high computational performance. Scout is a parallel programming language and extensible compiler framework targeting heterogeneous architectures. It provides the abstraction required to buffer scientists from the constantly-shifting details of hardware while still realizing high performance by encapsulating software and hardware optimization within a compiler framework.
Integrating Reconfigurable Hardware-Based Grid for High Performance Computing
Dondo Gazzano, Julio; Sanchez Molina, Francisco; Rincon, Fernando; López, Juan Carlos
2015-01-01
FPGAs have shown several characteristics that make them very attractive for high performance computing (HPC). The impressive speed-up factors that they are able to achieve, the reduced power consumption, and the easiness and flexibility of the design process with fast iterations between consecutive versions are examples of benefits obtained with their use. However, there are still some difficulties when using reconfigurable platforms as accelerators that need to be addressed: the need for an in-depth application study to identify potential acceleration, the lack of tools for the deployment of computational problems in distributed hardware platforms, and the low portability of components, among others. This work proposes a complete grid infrastructure for distributed high performance computing based on dynamically reconfigurable FPGAs. Besides, a set of services designed to facilitate application deployment is described. An example application and a comparison with other hardware and software implementations are shown. Experimental results show that the proposed architecture offers encouraging advantages for the deployment of high performance distributed applications, simplifying the development process. PMID:25874241
Neuromorphic Computing for Very Large Test and Evaluation Data Analysis
2014-05-01
...analysis and utilization of newly available hardware-based artificial neural network chips. These two aspects of the program are complementary. The... neuromorphic architectures research focused on long-term disruptive technologies with high risk but revolutionary potential. The hardware-based neural... today. Overall, hardware-based neural processing research allows us to study the fundamental system and architectural issues relevant for employing...
Hardware survey for the avionics test bed
NASA Technical Reports Server (NTRS)
Cobb, J. M.
1981-01-01
A survey of major hardware items that could possibly be used in the development of an avionics test bed for space shuttle attached or autonomous large space structures was conducted in NASA Johnson Space Center building 16. The results of the survey are organized to show the hardware by laboratory usage. Computer systems in each laboratory are described in some detail.
Operational Suitability Guide. Volume 2. Templates
1990-05-01
...intended mission, and the required technical and operational characteristics. The mission must be adequately defined and key hardware and software... operational availability. With the use of fault-tolerant computer hardware and software, the system R&M will significantly improve end-to-end... should include both hardware and software elements, as appropriate. Unique characteristics or unique support concepts should be identified if they result...
Cloud Computing for radiologists.
Kharat, Amit T; Safvi, Amjad; Thind, Ss; Singh, Amarjit
2012-07-01
Cloud computing is a concept wherein a computer grid is created using the Internet with the sole purpose of utilizing shared resources such as computer software and hardware on a pay-per-use model. Using Cloud computing, radiology users can efficiently manage multimodality imaging units by using the latest software and hardware without paying huge upfront costs. Cloud computing systems usually work on public, private, hybrid, or community models. Using the various components of a Cloud, such as applications, client, infrastructure, storage, services, and processing power, Cloud computing can help imaging units rapidly scale and descale operations and avoid huge spending on maintenance of costly applications and storage. Cloud computing allows flexibility in imaging. It sets free radiology from the confines of a hospital and creates a virtual mobile office. The downsides to Cloud computing involve security and privacy issues which need to be addressed to ensure the success of Cloud computing in the future.
Cloud Computing for radiologists
Kharat, Amit T; Safvi, Amjad; Thind, SS; Singh, Amarjit
2012-01-01
Cloud computing is a concept wherein a computer grid is created using the Internet with the sole purpose of utilizing shared resources such as computer software and hardware on a pay-per-use model. Using Cloud computing, radiology users can efficiently manage multimodality imaging units by using the latest software and hardware without paying huge upfront costs. Cloud computing systems usually work on public, private, hybrid, or community models. Using the various components of a Cloud, such as applications, client, infrastructure, storage, services, and processing power, Cloud computing can help imaging units rapidly scale and descale operations and avoid huge spending on maintenance of costly applications and storage. Cloud computing allows flexibility in imaging. It sets free radiology from the confines of a hospital and creates a virtual mobile office. The downsides to Cloud computing involve security and privacy issues which need to be addressed to ensure the success of Cloud computing in the future. PMID:23599560
ERIC Educational Resources Information Center
Grant, Deborah R.
1999-01-01
Examines the factors involved in purchasing school furnishings that will help ensure its long-time use, safety, and ability to resist abuse. Cost and safety factors discussed include resisting trendy colors to reduce cost in furniture matching, managing computer and office wiring for safety, considering ergonomics in the purchasing decision, and…
Real-Time Hardware-in-the-Loop Simulation of Ares I Launch Vehicle
NASA Technical Reports Server (NTRS)
Tobbe, Patrick; Matras, Alex; Walker, David; Wilson, Heath; Fulton, Chris; Alday, Nathan; Betts, Kevin; Hughes, Ryan; Turbe, Michael
2009-01-01
The Ares Real-Time Environment for Modeling, Integration, and Simulation (ARTEMIS) has been developed for use by the Ares I launch vehicle System Integration Laboratory at the Marshall Space Flight Center. The primary purpose of the Ares System Integration Laboratory is to test the vehicle avionics hardware and software in a hardware-in-the-loop environment to certify that the integrated system is prepared for flight. ARTEMIS has been designed to be the real-time simulation backbone to stimulate all required Ares components for verification testing. ARTEMIS provides high-fidelity dynamics, actuator, and sensor models to simulate an accurate flight trajectory in order to ensure realistic test conditions. ARTEMIS has been designed to take advantage of the advances in underlying computational power now available to support hardware-in-the-loop testing to achieve real-time simulation with unprecedented model fidelity. A modular real-time design relying on a fully distributed computing architecture has been implemented.
NASA Astrophysics Data System (ADS)
Keshet, Aviv; Ketterle, Wolfgang
2013-01-01
Atomic physics experiments often require a complex sequence of precisely timed computer controlled events. This paper describes a distributed graphical user interface-based control system designed with such experiments in mind, which makes use of off-the-shelf output hardware from National Instruments. The software makes use of a client-server separation between a user interface for sequence design and a set of output hardware servers. Output hardware servers are designed to use standard National Instruments output cards, but the client-server nature should allow this to be extended to other output hardware. Output sequences running on multiple servers and output cards can be synchronized using a shared clock. By using a field programmable gate array-generated variable frequency clock, redundant buffers can be dramatically shortened, and a time resolution of 100 ns achieved over effectively arbitrary sequence lengths.
Keshet, Aviv; Ketterle, Wolfgang
2013-01-01
Atomic physics experiments often require a complex sequence of precisely timed computer controlled events. This paper describes a distributed graphical user interface-based control system designed with such experiments in mind, which makes use of off-the-shelf output hardware from National Instruments. The software makes use of a client-server separation between a user interface for sequence design and a set of output hardware servers. Output hardware servers are designed to use standard National Instruments output cards, but the client-server nature should allow this to be extended to other output hardware. Output sequences running on multiple servers and output cards can be synchronized using a shared clock. By using a field programmable gate array-generated variable frequency clock, redundant buffers can be dramatically shortened, and a time resolution of 100 ns achieved over effectively arbitrary sequence lengths.
Computer Technology: State of the Art.
ERIC Educational Resources Information Center
Withington, Frederic G.
1981-01-01
Describes the nature of modern general-purpose computer systems, including hardware, semiconductor electronics, microprocessors, computer architecture, input/output technology, and system control programs. Seven suggested readings are cited. (FM)
Computer Literacy for Teachers.
ERIC Educational Resources Information Center
Sarapin, Marvin I.; Post, Paul E.
Basic concepts of computer literacy are discussed as they relate to industrial arts/technology education. Computer hardware development is briefly examined, and major software categories are defined, including database management, computer graphics, spreadsheet programs, telecommunications and networking, word processing, and computer assisted and…
Biomorphic Multi-Agent Architecture for Persistent Computing
NASA Technical Reports Server (NTRS)
Lodding, Kenneth N.; Brewster, Paul
2009-01-01
A multi-agent software/hardware architecture, inspired by the multicellular nature of living organisms, has been proposed as the basis of design of a robust, reliable, persistent computing system. Just as a multicellular organism can adapt to changing environmental conditions and can survive despite the failure of individual cells, a multi-agent computing system, as envisioned, could adapt to changing hardware, software, and environmental conditions. In particular, the computing system could continue to function (perhaps at a reduced but still reasonable level of performance) if one or more component(s) of the system were to fail. One of the defining characteristics of a multicellular organism is unity of purpose. In biology, the purpose is survival of the organism. The purpose of the proposed multi-agent architecture is to provide a persistent computing environment in harsh conditions in which repair is difficult or impossible. A multi-agent, organism-like computing system would be a single entity built from agents or cells. Each agent or cell would be a discrete hardware processing unit that would include a data processor with local memory, an internal clock, and a suite of communication equipment capable of both local line-of-sight communications and global broadcast communications. Some cells, denoted specialist cells, could contain such additional hardware as sensors and emitters. Each cell would be independent in the sense that there would be no global clock, no global (shared) memory, no pre-assigned cell identifiers, no pre-defined network topology, and no centralized brain or control structure. Like each cell in a living organism, each agent or cell of the computing system would contain a full description of the system encoded as genes, but in this case, the genes would be components of a software genome.
More About Software for No-Loss Computing
NASA Technical Reports Server (NTRS)
Edmonds, Iarina
2007-01-01
A document presents some additional information on the subject matter of "Integrated Hardware and Software for No- Loss Computing" (NPO-42554), which appears elsewhere in this issue of NASA Tech Briefs. To recapitulate: The hardware and software designs of a developmental parallel computing system are integrated to effectuate a concept of no-loss computing (NLC). The system is designed to reconfigure an application program such that it can be monitored in real time and further reconfigured to continue a computation in the event of failure of one of the computers. The design provides for (1) a distributed class of NLC computation agents, denoted introspection agents, that effects hierarchical detection of anomalies; (2) enhancement of the compiler of the parallel computing system to cause generation of state vectors that can be used to continue a computation in the event of a failure; and (3) activation of a recovery component when an anomaly is detected.
Implementation of Multispectral Image Classification on a Remote Adaptive Computer
NASA Technical Reports Server (NTRS)
Figueiredo, Marco A.; Gloster, Clay S.; Stephens, Mark; Graves, Corey A.; Nakkar, Mouna
1999-01-01
As the demand for higher performance computers for the processing of remote sensing science algorithms increases, the need to investigate new computing paradigms is justified. Field Programmable Gate Arrays enable the implementation of algorithms at the hardware gate level, leading to orders-of-magnitude performance increases over microprocessor-based systems. The automatic classification of spaceborne multispectral images is an example of a computation-intensive application that can benefit from implementation on an FPGA-based custom computing machine (adaptive or reconfigurable computer). A probabilistic neural network is used here to classify pixels of a multispectral LANDSAT-2 image. The implementation described utilizes Java client/server application programs to access the adaptive computer from a remote site. Results verify that a remote hardware version of the algorithm (implemented on an adaptive computer) is significantly faster than a local software version of the same algorithm implemented on a typical general-purpose computer.
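A minimal probabilistic-neural-network (Parzen-window) pixel classifier of the kind described can be written in a few lines; the synthetic two-class data below stands in for LANDSAT bands and is not from the paper.

# PNN: score each class by the mean Gaussian kernel over its training pixels.
import numpy as np

def pnn_classify(train_x, train_y, pixels, sigma=0.1):
    classes = np.unique(train_y)
    scores = []
    for c in classes:
        xc = train_x[train_y == c]                        # (n_c, bands)
        d2 = ((pixels[:, None, :] - xc[None, :, :]) ** 2).sum(-1)
        scores.append(np.exp(-d2 / (2 * sigma**2)).mean(axis=1))
    return classes[np.argmax(np.stack(scores, axis=1), axis=1)]

rng = np.random.default_rng(0)
water = rng.normal(0.2, 0.05, (100, 4))   # synthetic 4-band "water" pixels
urban = rng.normal(0.7, 0.05, (100, 4))   # synthetic 4-band "urban" pixels
train_x = np.vstack([water, urban])
train_y = np.array([0] * 100 + [1] * 100)
print(pnn_classify(train_x, train_y, rng.normal(0.7, 0.05, (5, 4))))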
FPGA-Based Stochastic Echo State Networks for Time-Series Forecasting.
Alomar, Miquel L; Canals, Vincent; Perez-Mora, Nicolas; Martínez-Moll, Víctor; Rosselló, Josep L
2016-01-01
Hardware implementation of artificial neural networks (ANNs) allows exploiting the inherent parallelism of these systems. Nevertheless, they require a large amount of resources in terms of area and power dissipation. Recently, Reservoir Computing (RC) has arisen as a strategic technique to design recurrent neural networks (RNNs) with simple learning capabilities. In this work, we show a new approach to implement RC systems with digital gates. The proposed method is based on the use of probabilistic computing concepts to reduce the hardware required to implement different arithmetic operations. The result is the development of a highly functional system with low hardware resources. The presented methodology is applied to chaotic time-series forecasting.
FPGA-Based Stochastic Echo State Networks for Time-Series Forecasting
Alomar, Miquel L.; Canals, Vincent; Perez-Mora, Nicolas; Martínez-Moll, Víctor; Rosselló, Josep L.
2016-01-01
Hardware implementation of artificial neural networks (ANNs) allows exploiting the inherent parallelism of these systems. Nevertheless, they require a large amount of resources in terms of area and power dissipation. Recently, Reservoir Computing (RC) has arisen as a strategic technique to design recurrent neural networks (RNNs) with simple learning capabilities. In this work, we show a new approach to implement RC systems with digital gates. The proposed method is based on the use of probabilistic computing concepts to reduce the hardware required to implement different arithmetic operations. The result is the development of a highly functional system with low hardware resources. The presented methodology is applied to chaotic time-series forecasting. PMID:26880876
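For readers unfamiliar with reservoir computing, the sketch below is a plain floating-point echo state network applied to toy one-step-ahead forecasting; the papers' actual contribution, the stochastic-logic hardware arithmetic, is not reproduced here.

# Echo state network: fixed random reservoir, trained linear readout.
import numpy as np

rng = np.random.default_rng(1)
n_res, n_in = 200, 1
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()   # spectral radius below 1

def run_reservoir(u_seq):
    x = np.zeros(n_res)
    states = []
    for u in u_seq:
        x = np.tanh(W @ x + W_in @ np.atleast_1d(u))
        states.append(x.copy())
    return np.array(states)

u = np.sin(np.linspace(0, 50, 1000))            # toy time series
X, y = run_reservoir(u[:-1]), u[1:]             # predict the next sample
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)  # ridge readout
print("train MSE:", np.mean((X @ W_out - y) ** 2))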
76 FR 21393 - Agency Information Collection Activities: Proposed Collection; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-15
... startup cost components or annual operation, maintenance, and purchase of service components. You should describe the methods you use to estimate major cost factors, including system and technology acquisition.... Capital and startup costs include, among other items, computers and software you purchase to prepare for...
76 FR 25367 - Agency Information Collection Activities: Proposed Collection, Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-04
... startup cost components or annual operation, maintenance, and purchase of service components. You should describe the methods you use to estimate major cost factors, including system and technology acquisition.... Capital and startup costs include, among other items, computers and software you purchase to prepare for...
Cloud Computing and Your Library
ERIC Educational Resources Information Center
Mitchell, Erik T.
2010-01-01
One of the first big shifts in how libraries manage resources was the move from print-journal purchasing models to database-subscription and electronic-journal purchasing models. Libraries found that this transition helped them scale their resources and provide better service just by thinking a bit differently about their services. Likewise,…
48 CFR 49.112-1 - Partial payments.
Code of Federal Regulations, 2011 CFR
2011-10-01
..., purchased parts, supplies, and direct labor; (4) 90 percent of other allowable costs (including settlement... terminated portion of the contract; and (2) The amounts of all credits arising from the purchase, retention... interest shall be computed at the rate established by the Secretary of the Treasury under 50 U.S.C. App...
Army Needs to Identify Government Purchase Card High-Risk Transactions
2012-01-20
Purchase Card Program Data Mining Process Needs Improvement... The 17 transactions that were noncompliant occurred because cardholders ignored the GPC business rules so the... Report sections include Scope and Methodology, Use of Computer-Processed Data, Use of Technical Assistance, and Prior Coverage.
Automated data acquisition technology development:Automated modeling and control development
NASA Technical Reports Server (NTRS)
Romine, Peter L.
1995-01-01
This report documents the completion of, and improvements made to, the software developed for automated data acquisition and automated modeling and control development on the Texas Micro rackmounted PCs. This research was initiated because a need was identified by the Metal Processing Branch of NASA Marshall Space Flight Center for a mobile data acquisition and data analysis system, customized for welding measurement and calibration. Several hardware configurations were evaluated and a PC-based system was chosen. The Welding Measurement System (WMS) is a dedicated instrument strictly for data acquisition and data analysis. In addition to the data acquisition functions described in this report, WMS also supports many functions associated with process control. The hardware and software requirements for an automated acquisition system for welding process parameters, welding equipment checkout, and welding process modeling were determined in 1992. From these recommendations, NASA purchased the necessary hardware and software. The new welding acquisition system is designed to collect welding parameter data and perform analysis to determine the voltage versus current arc-length relationship for VPPA welding. Once the results of this analysis are obtained, they can then be used to develop a RAIL function to control welding startup and shutdown without torch crashing.
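The voltage-versus-current arc-length characterization mentioned above is, in outline, a regression problem. The sketch below fits V as a linear function of current I and arc length L by least squares on synthetic samples; the model form and all numbers are illustrative assumptions, not WMS measurements.

# Least-squares fit of arc voltage against current and arc length.
import numpy as np

rng = np.random.default_rng(2)
current = rng.uniform(100, 200, 50)       # A (synthetic)
arc_len = rng.uniform(2, 6, 50)           # mm (synthetic)
voltage = 10 + 0.05 * current + 1.2 * arc_len + rng.normal(0, 0.1, 50)

A = np.column_stack([np.ones(50), current, arc_len])
(a, b, c), *_ = np.linalg.lstsq(A, voltage, rcond=None)
print(f"V ~ {a:.2f} + {b:.3f}*I + {c:.2f}*L")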
Restricted Authentication and Encryption for Cyber-physical Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kirkpatrick, Michael S; Bertino, Elisa; Sheldon, Frederick T
2009-01-01
Cyber-physical systems (CPS) are characterized by the close linkage of computational resources and physical devices. These systems can be deployed in a number of critical infrastructure settings. As a result, the security requirements of CPS are different than traditional computing architectures. For example, critical functions must be identified and isolated from interference by other functions. Similarly, lightweight schemes may be required, as CPS can include devices with limited computing power. One approach that offers promise for CPS security is the use of lightweight, hardware-based authentication. Specifically, we consider the use of Physically Unclonable Functions (PUFs) to bind an access request to specific hardware with device-specific keys. PUFs are implemented in hardware, such as SRAM, and can be used to uniquely identify the device. This technology could be used in CPS to ensure location-based access control and encryption, both of which would be desirable for CPS implementations.
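As a conceptual illustration of PUF-based key derivation, the sketch below majority-votes repeated noisy SRAM reads and hashes the stabilized response into a device-specific key. Real designs use fuzzy extractors with proper error correction, so this is a deliberate simplification, not the scheme from the record above.

# Derive a device key from a noisy SRAM-PUF-like response.
import hashlib
import numpy as np

rng = np.random.default_rng(3)
true_response = rng.integers(0, 2, 256)            # simulated device fingerprint

def read_puf(noise=0.05):
    flips = rng.random(256) < noise                # noisy power-up read
    return true_response ^ flips

stable = (sum(read_puf() for _ in range(9)) > 4).astype(np.uint8)  # majority of 9
key = hashlib.sha256(stable.tobytes()).digest()    # device-specific key
print(key.hex())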
NASA Astrophysics Data System (ADS)
Loveless, R.; Erhard, P.; Ficenec, J.; Gather, K.; Heath, G.; Iacovacci, M.; Kehres, J.; Mobayyen, M.; Notz, D.; Orr, R.; Orr, R.; Sephton, A.; Stroili, R.; Tokushuku, K.; Vogel, W.; Whitmore, J.; Wiggers, L.
1989-12-01
The ZEUS collaboration is building a system to monitor, control and document the hardware of the ZEUS detector. This system is based on a network of VAX computers and microprocessors connected via ethernet. The database for the hardware values will be ADAMO tables; the ethernet connection will be DECNET, TCP/IP, or RPC. Most of the documentation will also be kept in ADAMO tables for easy access by users.
Towards composition of verified hardware devices
NASA Technical Reports Server (NTRS)
Schubert, E. Thomas; Levitt, K.; Cohen, G. C.
1991-01-01
Computers are being used where no affordable level of testing is adequate. Safety- and life-critical systems must find a replacement for exhaustive testing to guarantee their correctness: a mathematical proof. Hardware verification research has focused on device verification and has largely ignored verification of system composition. To address these deficiencies, we examine how the current hardware verification methodology can be extended to verify complete systems.
Development of IS2100: An Information Systems Laboratory.
1985-03-01
systems for digital logic; hardware architecture; machine, assembly, and high order language programming; and application packages such as database... applications and limitations. They should be able to define, demonstrate and/or discuss how computers are used, how they do their work, how to use them, and... limitations. Hands-on operation of the hardware and software provides experience that aids in future selection of hardware systems and applications
Computer Technology Directory.
ERIC Educational Resources Information Center
Exceptional Parent, 1990
1990-01-01
This directory lists approximately 300 commercial vendors that offer computer hardware, software, and communication aids for children with disabilities. The company listings indicate computer compatibility and specific disabilities served by their products. (JDD)
Neely, Alice N.; Sittig, Dean F.
2002-01-01
Computer technology from the management of individual patient medical records to the tracking of epidemiologic trends has become an essential part of all aspects of modern medicine. Consequently, computers, including bedside components, point-of-care testing equipment, and handheld computer devices, are increasingly present in patients’ rooms. Recent articles have indicated that computer hardware, just as other medical equipment, may act as a reservoir for microorganisms and contribute to the transfer of pathogens to patients. This article presents basic microbiological concepts relative to infection, reviews the present literature concerning possible links between computer contamination and nosocomial colonizations and infections, discusses basic principles for the control of contamination, and provides guidelines for reducing the risk of transfer of microorganisms to susceptible patient populations. PMID:12223502
QCE: A Simulator for Quantum Computer Hardware
NASA Astrophysics Data System (ADS)
Michielsen, Kristel; de Raedt, Hans
2003-09-01
The Quantum Computer Emulator (QCE) described in this paper consists of a simulator of a generic, general purpose quantum computer and a graphical user interface. The latter is used to control the simulator, to define the hardware of the quantum computer and to debug and execute quantum algorithms. QCE runs in a Windows 98/NT/2000/ME/XP environment. It can be used to validate designs of physically realizable quantum processors and as an interactive educational tool to learn about quantum computers and quantum algorithms. A detailed exposition is given of the implementation of the CNOT and the Toffoli gate, the quantum Fourier transform, Grover's database search algorithm, an order finding algorithm, Shor's algorithm, a three-input adder and a number partitioning algorithm. We also review the results of simulations of an NMR-like quantum computer.
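QCE itself is a Windows GUI application, but the gate-level arithmetic it emulates is easy to show. The NumPy sketch below (not QCE code) applies the standard CNOT matrix to a two-qubit state vector:

```python
import numpy as np

# Basis ordering |00>, |01>, |10>, |11>, with the left qubit as control.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

state = np.zeros(4, dtype=complex)
state[2] = 1.0                      # prepare |10>: control set, target clear

state = CNOT @ state                # CNOT flips the target: |10> -> |11>
print(np.round(state.real, 3))      # [0. 0. 0. 1.]
```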
Integrated Hardware and Software for No-Loss Computing
NASA Technical Reports Server (NTRS)
James, Mark
2007-01-01
When an algorithm is distributed across multiple threads executing on many distinct processors, a loss of one of those threads or processors can potentially result in the total loss of all the incremental results up to that point. When the implementation is massively distributed across hardware, the probability of a hardware failure during the course of a long execution is potentially high. Traditionally, this problem has been addressed by establishing checkpoints where the current state of some or all of the execution is saved. Then, in the event of a failure, this state information can be used to recompute that point in the execution and resume the computation from there. A serious problem that arises when one distributes a problem across multiple threads and physical processors is that the likelihood of the algorithm failing increases, through no fault of the scientist, as a result of hardware faults coupled with operating-system problems. With good reason, scientists expect their computing tools to serve them and not the other way around. What is novel here is a unique combination of hardware and software that reformulates an application into a monolithic structure that can be monitored in real time and dynamically reconfigured in the event of a failure. This unique reformulation of hardware and software will provide advanced aeronautical technologies to meet the challenges of next-generation systems in aviation, for civilian and scientific purposes, in our atmosphere and in the atmospheres of other worlds. In particular, with respect to NASA's manned flight to Mars, this technology addresses the critical requirements for improving safety and increasing reliability of manned spacecraft.
Support for Diagnosis of Custom Computer Hardware
NASA Technical Reports Server (NTRS)
Molock, Dwaine S.
2008-01-01
The Coldfire SDN Diagnostics software is a flexible means of exercising, testing, and debugging custom computer hardware. The software is a set of routines that, collectively, serve as a common software interface through which one can gain access to various parts of the hardware under test and/or cause the hardware to perform various functions. The routines can be used to construct tests to exercise, and verify the operation of, various processors and hardware interfaces. More specifically, the software can be used to gain access to memory, to execute timer delays, to configure interrupts, and to configure processor cache, floating-point, and direct-memory-access units. The software is designed to be used on diverse NASA projects and can be customized for use with different processors and interfaces. The routines are supported regardless of the architecture of the processor that one seeks to diagnose. The present version of the software is configured for Coldfire processors on the Subsystem Data Node processor boards of the Solar Dynamics Observatory. The software also supports Mongoose V, RAD750, and PPC405 processors or their equivalents.
Using quantum chemistry muscle to flex massive systems: How to respond to something perturbing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bertoni, Colleen
Computational chemistry uses the theoretical advances of quantum mechanics and the algorithmic and hardware advances of computer science to give insight into chemical problems. It is currently possible to do highly accurate quantum chemistry calculations, but the most accurate methods are very computationally expensive. Thus it is only feasible to do highly accurate calculations on small molecules, since typically more computationally efficient methods are also less accurate. The overall goal of my dissertation work has been to try to decrease the computational expense of calculations without decreasing the accuracy. In particular, my dissertation work focuses on fragmentation methods, intermolecular interaction methods, analytic gradients, and taking advantage of new hardware.
A probability-based approach for assessment of roadway safety hardware.
DOT National Transportation Integrated Search
2017-03-14
This report presents a general probability-based approach for assessment of roadway safety hardware (RSH). It was achieved using a reliability analysis method and computational techniques. With the development of high-fidelity finite element (FE) m...
Mizdrak, Anja; Waterlander, Wilma Elzeline; Rayner, Mike; Scarborough, Peter
2017-10-09
The majority of food in the United Kingdom is purchased in supermarkets, and therefore, supermarket interventions provide an opportunity to improve diets. Randomized controlled trials are costly, time-consuming, and difficult to conduct in real stores. Alternative approaches of assessing the impact of supermarket interventions on food purchases are needed, especially with respect to assessing differential impacts on population subgroups. The aim of this study was to assess the feasibility of using the United Kingdom Virtual Supermarket (UKVS), a three-dimensional (3D) computer simulation of a supermarket, to measure food purchasing behavior across income groups. Participants (primary household shoppers in the United Kingdom with computer access) were asked to conduct two shopping tasks using the UKVS and complete questionnaires on demographics, food purchasing habits, and feedback on the UKVS software. Data on recruitment method and rate, completion of study procedure, purchases, and feedback on usability were collected to inform future trial protocols. A total of 98 participants were recruited, and 46 (47%) fully completed the study procedure. Low-income participants were less likely to complete the study (P=.02). Most participants found the UKVS easy to use (38/46, 83%) and reported that UKVS purchases resembled their usual purchases (41/46, 89%). The UKVS is likely to be a useful tool to examine the effects of nutrition interventions using randomized controlled designs. Feedback was positive from participants who completed the study and did not differ by income group. However, retention was low and needs to be addressed in future studies. This study provides purchasing data to establish sample size requirements for full trials using the UKVS.
Study of efficient video compression algorithms for space shuttle applications
NASA Technical Reports Server (NTRS)
Poo, Z.
1975-01-01
Results are presented of a study on video data compression techniques applicable to space flight communication. This study is directed towards monochrome (black and white) picture communication, with special emphasis on feasibility of hardware implementation. The primary factors for such a communication system in space flight application are: picture quality, system reliability, power consumption, and hardware weight. In terms of hardware implementation, these are directly related to hardware complexity, effectiveness of the hardware algorithm, immunity of the source code to channel noise, and data transmission rate (or transmission bandwidth). A system is recommended, and its hardware requirement summarized. Simulations of the study were performed on the improved LIM video controller, which is computer-controlled by the META-4 CPU.
Quit and Smoking Reduction Rates in Vape Shop Consumers: A Prospective 12-Month Survey
Polosa, Riccardo; Caponnetto, Pasquale; Cibella, Fabio; Le-Houezec, Jacques
2015-01-01
Aims: Here, we present results from a prospective pilot study that was aimed at surveying changes in daily cigarette consumption in smokers making their first purchase at vape shops. Changes in product purchases were also noted. Design: Participants were instructed how to charge, fill, activate and use their e-cigarettes (e-cigs). Participants were encouraged to use these products in the anticipation of reducing the number of cig/day smoked. Settings: Staff from LIAF contacted 10 vape shops in the province of the city of Catania (Italy) that acted as sponsors to the 2013 No Tobacco Day. Participants: 71 adult smokers (≥18 years old) making their first purchase at local participating vape shops were asked by professional retail staff to complete a form. Measurements: Their cigarette consumption was followed up prospectively at 6 and 12 months. Details of product purchases (i.e., e-cig hardware, e-liquid nicotine strengths and flavours) were also noted. Findings: Retention was high, with 69% of participants attending their final follow-up visit. At 12 months, 40.8% of subjects could be classified as quitters, 25.4% as reducers and 33.8% as failures. Switching from standard refillables (initial choice) to more advanced devices (MODs) was observed in this study (from 8.5% at baseline to 18.4% at 12 months), as well as a trend towards decreasing e-liquid nicotine strength, with more participants adopting low nicotine strengths (from 49.3% at baseline to 57.1% at 12 months). Conclusions: We have found that smokers purchasing e-cigarettes from vape shops with professional advice and support can achieve high success rates. PMID:25811767
Calibration Issues and Operating System Requirements for Electron-Probe Microanalysis
NASA Technical Reports Server (NTRS)
Carpenter, P.
2006-01-01
Instrument purchase requirements and dialogue with manufacturers have established hardware parameters for alignment, stability, and reproducibility, which have helped improve the precision and accuracy of electron microprobe analysis (EPMA). The development of correction algorithms and the accurate solution to quantitative analysis problems requires the minimization of systematic errors and relies on internally consistent data sets. Improved hardware and computer systems have resulted in better automation of vacuum systems, stage and wavelength-dispersive spectrometer (WDS) mechanisms, and x-ray detector systems which have improved instrument stability and precision. Improved software now allows extended automated runs involving diverse setups and better integrates digital imaging and quantitative analysis. However, instrumental performance is not regularly maintained, as WDS are aligned and calibrated during installation but few laboratories appear to check and maintain this calibration. In particular, detector deadtime (DT) data is typically assumed rather than measured, due primarily to the difficulty and inconvenience of the measurement process. This is a source of fundamental systematic error in many microprobe laboratories and is unknown to the analyst, as the magnitude of DT correction is not listed in output by microprobe operating systems. The analyst must remain vigilant to deviations in instrumental alignment and calibration, and microprobe system software must conveniently verify the necessary parameters. Microanalysis of mission critical materials requires an ongoing demonstration of instrumental calibration. Possible approaches to improvements in instrument calibration, quality control, and accuracy will be discussed. Development of a set of core requirements based on discussions with users, researchers, and manufacturers can yield documents that improve and unify the methods by which instruments can be calibrated. These results can be used to continue improvements of EPMA.
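The dead-time correction at issue is, in its simplest non-paralyzable form, a one-line formula. A minimal sketch follows; the τ value is a placeholder, since the article's point is precisely that dead time should be measured rather than assumed.

```python
def deadtime_correct(measured_cps, tau_s=1.5e-6):
    """Non-paralyzable dead-time correction: true = measured / (1 - measured * tau).

    tau_s (detector dead time, seconds) is a placeholder here; in practice
    it must be measured for the actual detector, as the text argues."""
    loss_factor = 1.0 - measured_cps * tau_s
    if loss_factor <= 0:
        raise ValueError("count rate saturates the detector for this tau")
    return measured_cps / loss_factor

# At 50 kcps, a 1.5 us dead time hides roughly 8% of the true counts.
print(round(deadtime_correct(50_000)))   # ~54054 counts/s
```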
Performance Comparison of Mainframe, Workstations, Clusters, and Desktop Computers
NASA Technical Reports Server (NTRS)
Farley, Douglas L.
2005-01-01
A performance evaluation of a variety of computers frequently found in a scientific or engineering research environment was conducted using synthetic and application-program benchmarks. From a performance perspective, emerging commodity processors have superior performance relative to legacy mainframe computers. In many cases, the PC clusters exhibited comparable performance with traditional mainframe hardware when 8-12 processors were used. The main advantage of the PC clusters was related to their cost. Regardless of whether the clusters were built from new computers or created from retired computers, their performance-to-cost ratio was superior to the legacy mainframe computers. Finally, the typical annual maintenance cost of legacy mainframe computers is several times the cost of new equipment such as multiprocessor PC workstations. The savings from eliminating the annual maintenance fee on legacy hardware can result in a yearly increase in total computational capability for an organization.
A New Look at NASA: Strategic Research In Information Technology
NASA Technical Reports Server (NTRS)
Alfano, David; Tu, Eugene (Technical Monitor)
2002-01-01
This viewgraph presentation provides information on research undertaken by NASA to facilitate the development of information technologies. Specific ideas covered here include: 1) Bio/nano technologies: biomolecular and nanoscale systems and tools for assembly and computing; 2) Evolvable hardware: autonomous self-improving, self-repairing hardware and software for survivable space systems in extreme environments; 3) High Confidence Software Technologies: formal methods, high-assurance software design, and program synthesis; 4) Intelligent Controls and Diagnostics: Next generation machine learning, adaptive control, and health management technologies; 5) Revolutionary computing: New computational models to increase capability and robustness to enable future NASA space missions.
Certification Process for Commercial Batteries for Payloads
NASA Technical Reports Server (NTRS)
Jeevarajan, Judith
2007-01-01
This viewgraph document reviews the use of electric batteries in space applications. Batteries are high energy devices that are used to power hardware for space applications The applications include IVA (Intra-Vehicular Activity) and EVA (Extra-Vehicular Activity) use. High energy batteries pose hazards such as cell/battery venting leading to electrolyte (liquid or gas) leakage, high temperatures, fire and explosion (shrapnel). It reviews the process of certifying of Commercial batteries for space applications in view of the multi-national purchasing for the International Space Station. The documentation used in the certification is reviewed.
A multiarchitecture parallel-processing development environment
NASA Technical Reports Server (NTRS)
Townsend, Scott; Blech, Richard; Cole, Gary
1993-01-01
A description is given of the hardware and software of a multiprocessor test bed - the second generation Hypercluster system. The Hypercluster architecture consists of a standard hypercube distributed-memory topology, with multiprocessor shared-memory nodes. By using standard, off-the-shelf hardware, the system can be upgraded to use rapidly improving computer technology. The Hypercluster's multiarchitecture nature makes it suitable for researching parallel algorithms in computational field simulation applications (e.g., computational fluid dynamics). The dedicated test-bed environment of the Hypercluster and its custom-built software allow experiments with various parallel-processing concepts such as message-passing algorithms, debugging tools, and computational 'steering'. Such research would be difficult, if not impossible, to achieve on shared, commercial systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keller, Todd M.; Benjamin, Jacob S.; Wright, Virginia L.
This paper will describe a practical methodology for understanding the cyber risk of a digital asset. This research attempts to gain a greater understanding of the cyber risk posed by a hardware-based computer asset by considering it as the sum of its hardware- and software-based sub-components.
Systems Suitable for Information Professionals.
ERIC Educational Resources Information Center
Blair, John C., Jr.
1983-01-01
Describes computer operating systems applicable to microcomputers, noting hardware components, advantages and disadvantages of each system, local area networks, distributed processing, and a fully configured system. Lists of hardware components (disk drives, solid state disk emulators, input/output and memory components, and processors) and…
Micros for the 1990's: An Update.
ERIC Educational Resources Information Center
Grosch, Audrey N.
1991-01-01
Discusses new hardware and software developments for microcomputers and considers strategies for future library microcomputing. Topics discussed include developments with Macintosh computers; the importance of local area networks (LANs); upgrading options for hardware; operating system upgrades; dynamic data exchange (DDE); microcomputer…
Upgrades at the NASA Langley Research Center National Transonic Facility
NASA Technical Reports Server (NTRS)
Paryz, Roman W.
2012-01-01
Several projects have been completed or are nearing completion at the NASA Langley Research Center (LaRC) National Transonic Facility (NTF). The addition of a Model Flow-Control/Propulsion Simulation test capability to the NTF provides a unique, transonic, high-Reynolds-number test capability that is well suited for research in propulsion airframe integration studies, circulation control high-lift concepts, powered lift, and cruise separation flow control. A 1992-vintage Facility Automation System (FAS) that performs the control functions for tunnel pressure, temperature, Mach number, model position, safety interlock and supervisory controls was replaced using current, commercially available components. This FAS upgrade also involved a design study for the replacement of the facility Mach measurement system and the development of a software-based simulation model of NTF processes and control systems. The FAS upgrades were validated by a post-upgrade verification wind tunnel test. The data acquisition system (DAS) upgrade project involves the design, purchase, build, integration, installation and verification of a new DAS by replacing several early-1990s-vintage computer systems with state-of-the-art hardware/software. This paper provides an update on the progress made in these efforts. See reference 1.
Orbiter subsystem hardware/software interaction analysis. Volume 8: Forward reaction control system
NASA Technical Reports Server (NTRS)
Becker, D. D.
1980-01-01
The results of the orbiter hardware/software interaction analysis for the aft reaction control system are presented. The interaction between hardware failure modes and software is examined in order to identify associated issues and risks. All orbiter subsystems and interfacing program elements which interact with the orbiter computer flight software are analyzed. The failure modes identified in the subsystem/element failure mode and effects analysis are discussed.
2008-03-01
executables. The current roadblock to detecting Type I malware consistently is the practice of legitimate software, such as antivirus programs, using this... [Remainder is table-of-contents residue: Software Security Systems; Advantages of Hardware; Trustworthiness of Information; Towards a Hardware Security Backplane; Review of State of the Art Computer Security Solutions.]
Current trends in hardware and software for brain-computer interfaces (BCIs)
NASA Astrophysics Data System (ADS)
Brunner, P.; Bianchi, L.; Guger, C.; Cincotti, F.; Schalk, G.
2011-04-01
A brain-computer interface (BCI) provides a non-muscular communication channel to people with and without disabilities. BCI devices consist of hardware and software. BCI hardware records signals from the brain, either invasively or non-invasively, using a series of device components. BCI software then translates these signals into device output commands and provides feedback. BCI applications fall into four categories: basic research, clinical/translational research, consumer products, and emerging applications. These four categories use BCI hardware and software but have different sets of requirements. For example, while basic research needs to explore a wide range of system configurations, and thus requires a wide range of hardware and software capabilities, applications in the other three categories may be designed for relatively narrow purposes and thus may need only a very limited subset of capabilities. This paper summarizes technical aspects for each of these four categories of BCI applications. The results indicate that BCI technology is in transition from isolated demonstrations to systematic research and commercial development. This process requires several multidisciplinary efforts, including the development of better integrated and more robust BCI hardware and software, the definition of standardized interfaces, and the development of certification, dissemination and reimbursement procedures.
ELIPS: Toward a Sensor Fusion Processor on a Chip
NASA Technical Reports Server (NTRS)
Daud, Taher; Stoica, Adrian; Tyson, Thomas; Li, Wei-te; Fabunmi, James
1998-01-01
The paper presents the concept and initial tests from the hardware implementation of a low-power, high-speed reconfigurable sensor fusion processor. The Extended Logic Intelligent Processing System (ELIPS) processor is developed to seamlessly combine rule-based systems, fuzzy logic, and neural networks to achieve parallel fusion of sensor data in compact, low-power VLSI. The first demonstration of the ELIPS concept targets interceptor functionality; other applications, mainly in robotics and autonomous systems, are considered for the future. The main assumption behind ELIPS is that fuzzy, rule-based and neural forms of computation can serve as the main primitives of an "intelligent" processor. Thus, in the same way classic processors are designed to optimize the hardware implementation of a set of fundamental operations, ELIPS is developed as an efficient implementation of computational intelligence primitives, and relies on a set of fuzzy set, fuzzy inference and neural modules, built in programmable analog hardware. The hardware programmability allows the processor to reconfigure into different machines, taking the most efficient hardware implementation during each phase of information processing. Following software demonstrations on several interceptor data sets, three important ELIPS building blocks (a fuzzy set preprocessor, a rule-based fuzzy system and a neural network) have been fabricated in analog VLSI hardware and demonstrated microsecond processing times.
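The fuzzy primitives ELIPS realizes in analog hardware can be caricatured in a few lines of software. A minimal Mamdani-style sketch follows; the rule, membership ranges, and input values are all invented for illustration, not taken from the ELIPS interceptor demonstrations.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function: rises from a to b, falls from b to c."""
    return float(np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0))

# Rule: IF range is CLOSE AND closing speed is HIGH THEN threat is HIGH.
# Classic Mamdani inference uses min for AND; all numbers are invented.
range_close = tri(120.0, 0.0, 100.0, 500.0)    # membership of 120 m in CLOSE
speed_high  = tri(280.0, 100.0, 400.0, 700.0)  # membership of 280 m/s in HIGH
threat_high = min(range_close, speed_high)
print(round(threat_high, 2))                   # 0.6
```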
French, Simone A; Rydell, Sarah A; Mitchell, Nathan R; Michael Oakes, J; Elbel, Brian; Harnack, Lisa
2017-09-16
This research evaluated the effects of financial incentives and purchase restrictions on food purchasing in a food benefit program for low-income people. Participants (n=279) were randomized to groups: 1) Incentive- 30% financial incentive for fruits and vegetables purchased with food benefits; 2) Restriction- no purchase of sugar-sweetened beverages, sweet baked goods, or candies with food benefits; 3) Incentive plus Restriction; or 4) Control- no incentive or restrictions. Participants received a study-specific debit card to which funds were added monthly for 12 weeks. Food purchase receipts were collected over 16 weeks. Total dollars spent on grocery purchases and by targeted food categories were computed from receipts. Group differences were examined using general linear models. Weekly purchases of fruit significantly increased in the Incentive plus Restriction ($4.8) compared to the Restriction ($1.7) and Control ($2.1) groups (p <.01). Sugar-sweetened beverage purchases significantly decreased in the Incentive plus Restriction (-$0.8 per week) and Restriction (-$1.4 per week) groups compared to the Control group (+$1.5; p< .0001). Sweet baked goods purchases significantly decreased in the Restriction (-$0.70 per week) compared to the Control group (+$0.82 per week; p < .01). Paired financial incentives and restrictions on foods and beverages purchased with food program funds may support more healthful food purchases compared to no incentives or restrictions. Clinicaltrials.gov Identifier: NCT02643576.
16 CFR 255.0 - Purpose and definitions.
Code of Federal Regulations, 2011 CFR
2011-01-01
... individuals generally acquire. Example 1: A film critic's review of a movie is excerpted in an advertisement... Guides. Assume that rather than purchase the dog food with her own money, the consumer gets it for free because the store routinely tracks her purchases and its computer has generated a coupon for a free trial...
16 CFR 255.0 - Purpose and definitions.
Code of Federal Regulations, 2014 CFR
2014-01-01
... individuals generally acquire. Example 1: A film critic's review of a movie is excerpted in an advertisement... Guides. Assume that rather than purchase the dog food with her own money, the consumer gets it for free because the store routinely tracks her purchases and its computer has generated a coupon for a free trial...
16 CFR 255.0 - Purpose and definitions.
Code of Federal Regulations, 2013 CFR
2013-01-01
... individuals generally acquire. Example 1: A film critic's review of a movie is excerpted in an advertisement... Guides. Assume that rather than purchase the dog food with her own money, the consumer gets it for free because the store routinely tracks her purchases and its computer has generated a coupon for a free trial...
16 CFR 255.0 - Purpose and definitions.
Code of Federal Regulations, 2012 CFR
2012-01-01
... individuals generally acquire. Example 1: A film critic's review of a movie is excerpted in an advertisement... Guides. Assume that rather than purchase the dog food with her own money, the consumer gets it for free because the store routinely tracks her purchases and its computer has generated a coupon for a free trial...
Key Issues in Instructional Computer Graphics.
ERIC Educational Resources Information Center
Wozny, Michael J.
1981-01-01
Addresses key issues facing universities which plan to establish instructional computer graphics facilities, including computer-aided design/computer aided manufacturing systems, role in curriculum, hardware, software, writing instructional software, faculty involvement, operations, and research. Thirty-seven references and two appendices are…
75 FR 25185 - Broadband Initiatives Program
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-07
..., excluding desktop or laptop computers, computer hardware and software (including anti-virus, anti-spyware, and other security software), audio or video equipment, computer network components... 10 desktop or laptop computers and individual workstations to be located within the rural library...
Education: AIChE Probes Impact of Computer on Future Engineering Education.
ERIC Educational Resources Information Center
Krieger, James
1983-01-01
Evaluates influence of computer assisted instruction on engineering education, considering use of computers to remove burden of doing calculations and to provide interactive self-study programs of a tutorial/remedial nature. Cites universities requiring personal computer purchase, pointing out possibility for individualized design assignments.…
Garfield Computer Survey-1983.
ERIC Educational Resources Information Center
Semple, Ed, Jr.
In November 1983, a questionnaire was mailed to 1,761 addresses in the J. A. Garfield school district to ascertain citizens' awareness of computers in schools and their support for school computer purchases and provision of instruction in computer programming. A total of 125 questionnaires were returned (a 7.09% response rate). Findings showed…
NASA Technical Reports Server (NTRS)
Johnson, Paul K.
2007-01-01
NASA Glenn Research Center (GRC) contracted Barber-Nichols, Arvada, CO to construct a dual Brayton power conversion system for use as a hardware proof of concept and to validate results from a computational code known as the Closed Cycle System Simulation (CCSS). Initial checkout tests were performed at Barber-Nichols to ready the system for delivery to GRC. This presentation describes the system hardware components and lists the types of checkout tests performed, along with a couple of issues encountered while conducting the tests. A description of the CCSS model is also presented. The checkout tests did not focus on generating data; therefore, no test data or model analyses are presented.
Managing coherence via put/get windows
Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton on Hudson, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Philip [Cortlandt Manor, NY; Hoenicke, Dirk [Ossining, NY; Ohmacht, Martin [Yorktown Heights, NY
2011-01-11
A method and apparatus for managing coherence between two processors of a two-processor node of a multi-processor computer system. Generally, the present invention relates to a software algorithm that simplifies and significantly speeds the management of cache coherence in a message-passing parallel computer, and to hardware apparatus that assists this cache coherence algorithm. The software algorithm uses the opening and closing of put/get windows to coordinate the activities required to achieve cache coherence. The hardware apparatus may be an extension to the hardware address decode that creates, in the physical memory address space of the node, an area of virtual memory that (a) does not actually exist, and (b) is therefore able to respond instantly to read and write requests from the processing elements.
Managing coherence via put/get windows
Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton on Hudson, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Philip [Cortlandt Manor, NY; Hoenicke, Dirk [Ossining, NY; Ohmacht, Martin [Yorktown Heights, NY
2012-02-21
A method and apparatus for managing coherence between two processors of a two-processor node of a multi-processor computer system. Generally, the present invention relates to a software algorithm that simplifies and significantly speeds the management of cache coherence in a message-passing parallel computer, and to hardware apparatus that assists this cache coherence algorithm. The software algorithm uses the opening and closing of put/get windows to coordinate the activities required to achieve cache coherence. The hardware apparatus may be an extension to the hardware address decode that creates, in the physical memory address space of the node, an area of virtual memory that (a) does not actually exist, and (b) is therefore able to respond instantly to read and write requests from the processing elements.
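The coordination idea in these two related patents can be caricatured at the software level. The toy sketch below uses invented structure and is not IBM's implementation: remote "puts" are accepted only between window open and close, and close() marks the point where the real hardware would flush or invalidate the affected cache lines before computation resumes.

```python
import threading

class PutGetWindow:
    """Toy model of put/get-window coordination (a caricature, not the
    patented mechanism): remote puts are only applied between open() and
    close(), and close() is where cache flush/invalidate for the window's
    address range would happen before compute resumes."""

    def __init__(self):
        self._open = threading.Event()
        self._pending = []

    def open(self):
        self._open.set()                    # remote processors may now deposit

    def put(self, addr, value):
        if not self._open.is_set():
            raise RuntimeError("put outside an open window")
        self._pending.append((addr, value))

    def close(self, memory):
        self._open.clear()
        for addr, value in self._pending:   # apply deposits; real hardware
            memory[addr] = value            # would flush/invalidate the
        self._pending.clear()               # affected cache lines here

mem = {}
w = PutGetWindow()
w.open(); w.put(0x10, 42); w.close(mem)
print(mem)                                  # {16: 42}
```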
Silicon synaptic transistor for hardware-based spiking neural network and neuromorphic system
NASA Astrophysics Data System (ADS)
Kim, Hyungjin; Hwang, Sungmin; Park, Jungjin; Park, Byung-Gook
2017-10-01
Brain-inspired neuromorphic systems have attracted much attention as new computing paradigms for power-efficient computation. Here, we report a silicon synaptic transistor with two electrically independent gates to realize a hardware-based neural network system without any switching components. The spike-timing-dependent plasticity characteristics of the synaptic devices are measured and analyzed. With the help of the device model based on the measured data, the pattern recognition capability of the hardware-based spiking neural network systems is demonstrated using the Modified National Institute of Standards and Technology (MNIST) handwritten dataset. By comparing systems with and without the inhibitory synapse part, it is confirmed that the inhibitory synapse part is essential for obtaining effective, high pattern-classification capability.
Silicon synaptic transistor for hardware-based spiking neural network and neuromorphic system.
Kim, Hyungjin; Hwang, Sungmin; Park, Jungjin; Park, Byung-Gook
2017-10-06
Brain-inspired neuromorphic systems have attracted much attention as new computing paradigms for power-efficient computation. Here, we report a silicon synaptic transistor with two electrically independent gates to realize a hardware-based neural network system without any switching components. The spike-timing-dependent plasticity characteristics of the synaptic devices are measured and analyzed. With the help of the device model based on the measured data, the pattern recognition capability of the hardware-based spiking neural network systems is demonstrated using the Modified National Institute of Standards and Technology (MNIST) handwritten dataset. By comparing systems with and without the inhibitory synapse part, it is confirmed that the inhibitory synapse part is essential for obtaining effective, high pattern-classification capability.
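The spike-timing-dependent plasticity measured in the two records above follows the familiar pair-based form: potentiation when the presynaptic spike precedes the postsynaptic spike, depression otherwise. A sketch with illustrative constants (not fitted to the reported device):

```python
import numpy as np

def stdp_dw(dt_ms, a_plus=0.1, a_minus=0.12, tau_ms=20.0):
    """Pair-based STDP kernel: potentiate when the presynaptic spike precedes
    the postsynaptic one (dt = t_post - t_pre > 0), depress otherwise.
    The amplitudes and time constant are illustrative placeholders."""
    dt = np.asarray(dt_ms, dtype=float)
    return np.where(dt > 0,
                    a_plus * np.exp(-dt / tau_ms),
                    -a_minus * np.exp(dt / tau_ms))

print(stdp_dw([5.0, -5.0]))   # potentiation for +5 ms, depression for -5 ms
```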
An Agent Inspired Reconfigurable Computing Implementation of a Genetic Algorithm
NASA Technical Reports Server (NTRS)
Weir, John M.; Wells, B. Earl
2003-01-01
Many software systems have been successfully implemented using an agent paradigm which employs a number of independent entities that communicate with one another to achieve a common goal. The distributed nature of such a paradigm makes it an excellent candidate for use in high-speed reconfigurable computing hardware environments such as those present in modern FPGAs. In this paper, a distributed genetic algorithm that can be applied to the agent-based reconfigurable hardware model is introduced. The effectiveness of this new algorithm is evaluated by comparing the quality of the solutions found by the new algorithm with those found by traditional genetic algorithms. The performance of a reconfigurable hardware implementation of the new algorithm on an FPGA is compared to traditional single-processor implementations.
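A common software analogue of such a distributed GA is the island model: independent populations that occasionally exchange their best members, much as the paper's agents communicate toward a common goal. The minimal sketch below uses a toy one-max objective and invented parameters; it is not the paper's algorithm.

```python
import random

N_BITS = 32

def fitness(bits):                      # toy one-max objective
    return sum(bits)

def evolve(pop, n_keep=4, p_mut=0.05):
    """One generation: truncation selection, one-point crossover, bit-flip
    mutation. All parameters are invented for illustration."""
    pop = sorted(pop, key=fitness, reverse=True)
    parents, children = pop[:n_keep], []
    while len(children) < len(pop) - n_keep:
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, N_BITS)
        children.append([bit ^ (random.random() < p_mut)
                         for bit in a[:cut] + b[cut:]])
    return parents + children

random.seed(1)
islands = [[[random.randint(0, 1) for _ in range(N_BITS)] for _ in range(12)]
           for _ in range(4)]
for gen in range(60):
    islands = [evolve(p) for p in islands]
    if gen % 10 == 9:                   # migration: pass each island's best on
        best = [max(p, key=fitness) for p in islands]
        for i, p in enumerate(islands):
            p[-1] = list(best[(i + 1) % len(islands)])
print(max(fitness(max(p, key=fitness)) for p in islands))
```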
NASA Technical Reports Server (NTRS)
Keymeulen, Didier; Ferguson, Michael I.; Fink, Wolfgang; Oks, Boris; Peay, Chris; Terrile, Richard; Cheng, Yen; Kim, Dennis; MacDonald, Eric; Foor, David
2005-01-01
We propose a tuning method for MEMS gyroscopes based on evolutionary computation to efficiently increase the sensitivity of MEMS gyroscopes through tuning. The tuning method was tested for the second generation JPL/Boeing Post-resonator MEMS gyroscope using the measurement of the frequency response of the MEMS device in open-loop operation. We also report on the development of a hardware platform for integrated tuning and closed loop operation of MEMS gyroscopes. The control of this device is implemented through a digital design on a Field Programmable Gate Array (FPGA). The hardware platform easily transitions to an embedded solution that allows for the miniaturization of the system to a single chip.
New Directions for Hardware-assisted Trusted Computing Policies (Position Paper)
NASA Astrophysics Data System (ADS)
Bratus, Sergey; Locasto, Michael E.; Ramaswamy, Ashwin; Smith, Sean W.
The basic technological building blocks of the TCG architecture seem to be stabilizing. As a result, we believe that the focus of the Trusted Computing (TC) discipline must naturally shift from the design and implementation of the hardware root of trust (and the subsequent trust chain) to the higher-level application policies. Such policies must build on these primitives to express new sets of security goals. We highlight the relationship between enforcing these types of policies and debugging, since both activities establish the link between expected and actual application behavior. We argue that this new class of policies better fits developers' mental models of expected application behaviors, and we suggest a hardware design direction for enabling the efficient interpretation of such policies.
Robotic laboratory for distance education
NASA Astrophysics Data System (ADS)
Luciano, Sarah C.; Kost, Alan R.
2016-09-01
This project involves the construction of a remote-controlled laboratory experiment that can be accessed by online students. The project addresses the need to give students taking online courses a laboratory experience comparable to the in-class one. The chosen task for the remote user is an optical engineering experiment, specifically aligning a spatial filter. We instrument the physical laboratory setup in Tucson, AZ at the University of Arizona. The hardware in the spatial filter experiment is augmented by motors and cameras to allow the user to remotely control the hardware. The user interacts with software on their computer, which communicates via an Internet connection, through a server, with the host computer in the Optics Laboratory at the University of Arizona. Our final overall system comprises several subsystems. These are the optical experiment setup, which is a spatial filter experiment; the mechanical subsystem, which interfaces the motors with the micrometers to move the optical hardware; the electrical subsystem, which allows for the electrical communications from the remote computer to the host computer to the hardware; and finally the software subsystem, which is the means by which messages are communicated throughout the system. The goal of the project is to convey as much of an in-lab experience as possible by allowing the user to directly manipulate hardware and receive visual feedback in real time. Thus, the remote user is able to learn important concepts from this particular experiment and is able to connect theory to the physical world by actually seeing the outcome of a procedure. The latter is a learning experience that is often lost with distance learning and is one that this project hopes to provide.
Quantum Heterogeneous Computing for Satellite Positioning Optimization
NASA Astrophysics Data System (ADS)
Bass, G.; Kumar, V.; Dulny, J., III
2016-12-01
Hard optimization problems occur in many fields of academic study and practical situations. We present results in which quantum heterogeneous computing is used to solve a real-world optimization problem: satellite positioning. Optimization problems like this can scale very rapidly with problem size, and become unsolvable with traditional brute-force methods. Typically, such problems have been approximately solved with heuristic approaches; however, these methods can take a long time to calculate and are not guaranteed to find optimal solutions. Quantum computing offers the possibility of producing significant speed-up and improved solution quality. There are now commercially available quantum annealing (QA) devices that are designed to solve difficult optimization problems. These devices have 1000+ quantum bits, but they have significant hardware size and connectivity limitations. We present a novel heterogeneous computing stack that combines QA and classical machine learning and allows the use of QA on problems larger than the quantum hardware could solve in isolation. We begin by analyzing the satellite positioning problem with a heuristic solver, the genetic algorithm. The classical computer's comparatively large available memory can explore the full problem space and converge to a solution relatively close to the true optimum. The QA device can then evolve directly to the optimal solution within this more limited space. Preliminary experiments, using the Quantum Monte Carlo (QMC) algorithm to simulate QA hardware, have produced promising results. Working with problem instances with known global minima, we find a solution within 8% in a matter of seconds, and within 5% in a few minutes. Future studies include replacing QMC with commercially available quantum hardware and exploring more problem sets and model parameters. Our results have important implications for how heterogeneous quantum computing can be used to solve difficult optimization problems in any field.
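The division of labor in this stack, a classical search that finds a promising neighborhood followed by an intensive search restricted to it, can be illustrated with a toy one-dimensional objective. In the sketch below the genetic algorithm's role is played by cheap random sampling and the annealer's role by dense local sampling; both stand-ins and all parameters are invented.

```python
import random, math

def cost(x):                                   # toy 1-D objective
    return math.sin(5 * x) + 0.1 * (x - 2.0) ** 2

random.seed(0)
# Stage 1 (the classical solver's role): cheap global sampling of the full
# search space to find a promising neighborhood.
samples = [random.uniform(-10, 10) for _ in range(200)]
center = min(samples, key=cost)

# Stage 2 (the annealer's role): intensive search restricted to the
# neighborhood the first stage selected.
best = min((center + random.gauss(0, 0.2) for _ in range(2000)), key=cost)
print(round(best, 3), round(cost(best), 3))
```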
Accelerated Adaptive MGS Phase Retrieval
NASA Technical Reports Server (NTRS)
Lam, Raymond K.; Ohara, Catherine M.; Green, Joseph J.; Bikkannavar, Siddarayappa A.; Basinger, Scott A.; Redding, David C.; Shi, Fang
2011-01-01
The Modified Gerchberg-Saxton (MGS) algorithm is an image-based wavefront-sensing method that can turn any science-instrument focal plane into a wavefront sensor. MGS characterizes optical systems by estimating the wavefront errors in the exit pupil using only intensity images of a star or other point source of light. This innovative implementation of MGS significantly accelerates the MGS phase retrieval algorithm by using stream-processing hardware on conventional graphics cards. Stream processing is a relatively new, yet powerful, paradigm that allows parallel processing of certain applications that apply single instructions to multiple data (SIMD). These stream processors are designed specifically to support large-scale parallel computing on a single graphics chip. Computationally intensive algorithms, such as the Fast Fourier Transform (FFT), are particularly well suited for this computing environment. This high-speed version of MGS exploits commercially available hardware to accomplish the same objective in a fraction of the original time. The approach performs matrix calculations on nVidia graphics cards. The graphics processing unit (GPU) is hardware that is specialized for computationally intensive, highly parallel computation. From the software perspective, a parallel programming model called CUDA is used to transparently scale multicore parallelism in hardware. This technology gives computationally intensive applications access to the processing power of the nVidia GPUs through a C/C++ programming interface. The AAMGS (Accelerated Adaptive MGS) software takes advantage of these advanced technologies to accelerate the optical phase error characterization. With a single PC that contains four nVidia GTX-280 graphics cards, the new implementation can process four images simultaneously to produce a JWST (James Webb Space Telescope) wavefront measurement 60 times faster than the previous code.
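MGS descends from the classic Gerchberg-Saxton iteration, which alternates between pupil and focal planes, imposing the known amplitude in each plane while keeping the evolving phase. A minimal NumPy sketch of that ancestor follows (the MGS modifications and the CUDA acceleration are not reproduced here; the test arrays are arbitrary):

```python
import numpy as np

def gerchberg_saxton(source_amp, target_amp, n_iter=200):
    """Classic Gerchberg-Saxton loop: recover a phase that maps a known
    pupil amplitude to a measured focal-plane amplitude. The inputs here
    are arbitrary test arrays, not JWST data."""
    phase = np.zeros_like(source_amp)
    for _ in range(n_iter):
        field = np.fft.fft2(source_amp * np.exp(1j * phase))
        field = target_amp * np.exp(1j * np.angle(field))   # impose image amp
        phase = np.angle(np.fft.ifft2(field))               # keep pupil phase
    return phase

rng = np.random.default_rng(0)
src = np.ones((64, 64))
tgt = np.abs(np.fft.fft2(np.exp(1j * rng.uniform(-0.5, 0.5, (64, 64)))))
print(gerchberg_saxton(src, tgt).shape)                     # (64, 64)
```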
Advanced information processing system: Local system services
NASA Technical Reports Server (NTRS)
Burkhardt, Laura; Alger, Linda; Whittredge, Roy; Stasiowski, Peter
1989-01-01
The Advanced Information Processing System (AIPS) is a multi-computer architecture composed of hardware and software building blocks that can be configured to meet a broad range of application requirements. The hardware building blocks are fault-tolerant, general-purpose computers; fault- and damage-tolerant networks (both computer and input/output); and interfaces between the networks and the computers. The software building blocks are the major software functions: local system services, input/output, system services, inter-computer system services, and the system manager. The foundation of the local system services is an operating system with the functions required for a traditional real-time multi-tasking computer, such as task scheduling, inter-task communication, memory management, interrupt handling, and time maintenance. Resting on this foundation are the redundancy management functions necessary in a redundant computer and the status reporting functions required for an operator interface. The functional requirements, functional design and detailed specifications for all the local system services are documented.
Locating hardware faults in a parallel computer
Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.
2010-04-13
Locating hardware faults in a parallel computer, including defining within a tree network of the parallel computer two or more sets of non-overlapping test levels of compute nodes of the network that together include all the data communications links of the network, each non-overlapping test level comprising two or more adjacent tiers of the tree; defining test cells within each non-overlapping test level, each test cell comprising a subtree of the tree including a subtree root compute node and all descendant compute nodes of the subtree root compute node within a non-overlapping test level; performing, separately on each set of non-overlapping test levels, an uplink test on all test cells in a set of non-overlapping test levels; and performing, separately from the uplink tests and separately on each set of non-overlapping test levels, a downlink test on all test cells in a set of non-overlapping test levels.
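The partitioning scheme in this patent is concrete enough to sketch. Assuming heap-style indexing of a binary tree (an assumption for illustration; the patent's trees are more general), the code below generates one set of two-tier test cells; the offset=1 set covers exactly the links the offset=0 set misses, so the two sets together exercise every link.

```python
def tier(node):
    return (node + 1).bit_length() - 1          # heap indexing: root is node 0

def cells(n_nodes, offset=0):
    """One set of non-overlapping two-tier test cells: each node whose tier
    parity matches `offset` roots a cell containing its direct children."""
    out = {}
    for node in range(1, n_nodes):              # node 0 has no uplink
        parent = (node - 1) // 2
        if tier(parent) % 2 == offset:
            out.setdefault(parent, []).append(node)
    return out

def uplink_test(root, members, send):
    """Each member probes its cell root; a missing reply localizes the fault
    to that member's uplink (send() stands in for the real probe)."""
    return {n: send(n, root) for n in members}

print(cells(15, 0))   # {0: [1, 2], 3: [7, 8], 4: [9, 10], 5: [11, 12], 6: [13, 14]}
print(cells(15, 1))   # {1: [3, 4], 2: [5, 6]}
```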
Scientific Services on the Cloud
NASA Astrophysics Data System (ADS)
Chapman, David; Joshi, Karuna P.; Yesha, Yelena; Halem, Milt; Yesha, Yaacov; Nguyen, Phuong
Scientific computing was one of the first applications for parallel and distributed computation. To this day, scientific applications remain some of the most compute-intensive, and have inspired the creation of petaflop compute infrastructure such as the Oak Ridge Jaguar and Los Alamos RoadRunner. Large dedicated hardware infrastructure has become both a blessing and a curse to the scientific community. Scientists are interested in cloud computing for much the same reason as businesses and other professionals. The hardware is provided, maintained, and administrated by a third party. Software abstraction and virtualization provide reliability and fault tolerance. Graduated fees allow for multi-scale prototyping and execution. Cloud computing resources are only a few clicks away, and are by far the easiest high-performance distributed platform to gain access to. There may still be dedicated infrastructure for ultra-scale science, but the cloud can easily play a major part in the scientific computing initiative.
A Study of Complex Deep Learning Networks on High Performance, Neuromorphic, and Quantum Computers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Potok, Thomas E; Schuman, Catherine D; Young, Steven R
Current deep learning models use highly optimized convolutional neural networks (CNN) trained on large graphics processing unit (GPU)-based computers with a fairly simple layered network topology, i.e., highly connected layers, without intra-layer connections. Complex topologies have been proposed, but are intractable to train on current systems. Building the topologies of the deep learning network requires hand tuning, and implementing the network in hardware is expensive in both cost and power. In this paper, we evaluate deep learning models using three different computing architectures to address these problems: quantum computing to train complex topologies, high performance computing (HPC) to automatically determine network topology, and neuromorphic computing for a low-power hardware implementation. Due to input size limitations of current quantum computers we use the MNIST dataset for our evaluation. The results show the possibility of using the three architectures in tandem to explore complex deep learning networks that are untrainable using a von Neumann architecture. We show that a quantum computer can find high quality values of intra-layer connections and weights, while yielding a tractable time result as the complexity of the network increases; a high performance computer can find optimal layer-based topologies; and a neuromorphic computer can represent the complex topology and weights derived from the other architectures in low-power memristive hardware. This represents a new capability that is not feasible with current von Neumann architecture. It potentially enables the ability to solve very complicated problems unsolvable with current computing technologies.
ERIC Educational Resources Information Center
King, Kenneth M.
1988-01-01
Discussion of the recent computer virus attacks on computers with vulnerable operating systems focuses on the values of educational computer networks. The need for computer security procedures is emphasized, and the ethical use of computer hardware and software is discussed. (LRW)
NASA Technical Reports Server (NTRS)
Kennedy, J. R.; Fitzpatrick, W. S.
1971-01-01
The computer executive functional system design concepts derived from study of the Space Station/Base are presented. Information Management System hardware configuration as directly influencing the executive design is reviewed. The hardware configuration and generic executive design requirements are considered in detail in a previous report (System Configuration and Executive Requirements Specifications for Reusable Shuttle and Space Station/Base, 9/25/70). This report defines basic system primitives and delineates processes and process control. Supervisor states are considered for describing basic multiprogramming and multiprocessing systems. A high-level computer executive including control of scheduling, allocation of resources, system interactions, and real-time supervisory functions is defined. The description is oriented to provide a baseline for a functional simulation of the computer executive system.
Merlin - Massively parallel heterogeneous computing
NASA Technical Reports Server (NTRS)
Wittie, Larry; Maples, Creve
1989-01-01
Hardware and software for Merlin, a new kind of massively parallel computing system, are described. Eight computers are linked as a 300-MIPS prototype to develop system software for a larger Merlin network with 16 to 64 nodes, totaling 600 to 3000 MIPS. These working prototypes help refine a mapped reflective memory technique that offers a new, very general way of linking many types of computer to form supercomputers. Processors share data selectively and rapidly on a word-by-word basis. Fast firmware virtual circuits are reconfigured to match topological needs of individual application programs. Merlin's low-latency memory-sharing interfaces solve many problems in the design of high-performance computing systems. The Merlin prototypes are intended to run parallel programs for scientific applications and to determine hardware and software needs for a future Teraflops Merlin network.
Analytical Performance Modeling and Validation of Intel’s Xeon Phi Architecture
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chunduri, Sudheer; Balaprakash, Prasanna; Morozov, Vitali
Modeling the performance of scientific applications on emerging hardware plays a central role in achieving extreme-scale computing goals. Analytical models that capture the interaction between applications and hardware characteristics are attractive because even a reasonably accurate model can be useful for performance tuning before the hardware is made available. In this paper, we develop a hardware model for Intel's second-generation Xeon Phi architecture code-named Knights Landing (KNL) for the SKOPE framework. We validate the KNL hardware model by projecting the performance of mini-benchmarks and application kernels. The results show that our KNL model can project the performance with prediction errors of 10% to 20%. The hardware model also provides informative recommendations for code transformations and tuning.
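A drastically simplified stand-in for such an analytical model is the roofline bound: a kernel takes at least as long as its compute demand and its memory traffic each require. The sketch below is not the SKOPE/KNL model; the peak numbers are placeholders loosely in KNL's range, not the paper's calibrated values.

```python
def roofline_time(flops, bytes_moved, peak_flops=3.0e12, mem_bw=450e9):
    """Roofline lower bound on kernel time: limited either by peak compute
    throughput or by memory bandwidth, whichever binds. Peak values are
    placeholder assumptions, not calibrated measurements."""
    return max(flops / peak_flops, bytes_moved / mem_bw)

# A kernel doing 1 GFLOP over 8 GB of traffic is bandwidth-bound:
print(f"{roofline_time(1e9, 8e9):.4f} s")
```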
Hardware device binding and mutual authentication
Hamlet, Jason R; Pierson, Lyndon G
2014-03-04
Detection and deterrence of device tampering and subversion by substitution may be achieved by including a cryptographic unit within a computing device for binding multiple hardware devices and mutually authenticating the devices. The cryptographic unit includes a physically unclonable function ("PUF") circuit disposed in or on the hardware device, which generates a binding PUF value. The cryptographic unit uses the binding PUF value during an enrollment phase and subsequent authentication phases. During a subsequent authentication phase, the cryptographic unit uses the binding PUF values of the multiple hardware devices to generate a challenge to send to the other device, and to verify a challenge received from the other device to mutually authenticate the hardware devices.
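A software caricature of the enrollment and authentication phases is sketched below, with a keyed hash standing in for the silicon PUF. This is illustration only: a real PUF derives its response from device-specific physical variation rather than a stored key, and the patent's challenge construction is not reproduced here.

```python
import hmac, hashlib, os

def puf(device_key, challenge):
    """Stand-in for a silicon PUF: a keyed hash plays the role of the
    device-unique, physically unclonable response (illustration only)."""
    return hmac.new(device_key, challenge, hashlib.sha256).digest()

# Enrollment phase: record challenge-response pairs (CRPs) for each device.
key_a, key_b = os.urandom(32), os.urandom(32)       # stand-ins for physical uniqueness
challenges = [os.urandom(16) for _ in range(4)]
crp_db = {"A": {c: puf(key_a, c) for c in challenges},
          "B": {c: puf(key_b, c) for c in challenges}}

# Authentication phase: each side is verified against an enrolled pair,
# giving mutual authentication when both checks pass.
def verify(device, key, challenge):
    return hmac.compare_digest(crp_db[device][challenge], puf(key, challenge))

print(verify("B", key_b, challenges[0]) and verify("A", key_a, challenges[1]))
```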
Real-time skin feature identification in a time-sequential video stream
NASA Astrophysics Data System (ADS)
Kramberger, Iztok
2005-04-01
Skin color can be an important feature when tracking skin-colored objects. This is particularly the case for computer-vision-based human-computer interfaces (HCI). Humans have a highly developed sense of space and, therefore, it is reasonable to support this within intelligent HCI, where the importance of augmented reality can be foreseen. Joining human-like interaction techniques within multimodal HCI could become a key feature of modern mobile telecommunication devices. On the other hand, real-time processing plays an important role in achieving more natural and physically intuitive ways of human-machine interaction. The main scope of this work is the development of a stereoscopic computer-vision hardware-accelerated framework for real-time skin feature identification in the sense of a single-pass image segmentation process. The hardware-accelerated preprocessing stage is presented with the purpose of color and spatial filtering, where the skin color model within the hue-saturation-value (HSV) color space is given by a polyhedron of threshold values that forms the basis of the filter model. An adaptive filter management unit is suggested to achieve better segmentation results. This enables the adaptation of filter parameters to the current scene conditions. Implementation of the suggested hardware structure is given at the level of field-programmable system-level integrated circuit (FPSLIC) devices using an embedded microcontroller as their main feature. A stereoscopic cue is obtained from a time-sequential video stream, but this makes no difference to the real-time processing requirements in terms of hardware complexity. The experimental results for the hardware-accelerated preprocessing stage are given by efficiency estimation of the presented hardware structure using a simple motion-detection algorithm based on a binary function.
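The HSV threshold polyhedron can be approximated, for illustration, by an axis-aligned box. The NumPy sketch below uses invented bounds (the paper's polyhedron is more general, and its filtering runs in FPSLIC hardware rather than software):

```python
import numpy as np

def rgb_to_hsv(rgb):
    """Minimal vectorized RGB->HSV for float images in [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    v = rgb.max(-1)
    c = v - rgb.min(-1)                       # chroma
    s = np.where(v > 0, c / np.maximum(v, 1e-12), 0.0)
    hp = np.select([c == 0, v == r, v == g],
                   [0.0,
                    ((g - b) / np.maximum(c, 1e-12)) % 6,
                    (b - r) / np.maximum(c, 1e-12) + 2],
                   (r - g) / np.maximum(c, 1e-12) + 4)
    return hp / 6.0, s, v

def skin_mask(rgb, h_max=0.11, s_lo=0.2, s_hi=0.7, v_lo=0.35):
    """Axis-aligned HSV box as a simplified stand-in for the threshold
    polyhedron; all bounds are illustrative, not the paper's values."""
    h, s, v = rgb_to_hsv(rgb)
    return (h <= h_max) & (s >= s_lo) & (s <= s_hi) & (v >= v_lo)

img = np.random.default_rng(0).random((4, 4, 3))
print(skin_mask(img))
```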
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-19
... expenses (purchases; and operating leases and rental payments) for four types of information and communication technology equipment and software (computers and peripheral equipment; ICT equipment, excluding computers and peripherals; electromedical and electrotherapeutic apparatus; and computer software, including...
Stone, John E; Hallock, Michael J; Phillips, James C; Peterson, Joseph R; Luthey-Schulten, Zaida; Schulten, Klaus
2016-05-01
Many of the continuing scientific advances achieved through computational biology are predicated on the availability of ongoing increases in computational power required for detailed simulation and analysis of cellular processes on biologically-relevant timescales. A critical challenge facing the development of future exascale supercomputer systems is the development of new computing hardware and associated scientific applications that dramatically improve upon the energy efficiency of existing solutions, while providing increased simulation, analysis, and visualization performance. Mobile computing platforms have recently become powerful enough to support interactive molecular visualization tasks that were previously only possible on laptops and workstations, creating future opportunities for their convenient use for meetings, remote collaboration, and as head mounted displays for immersive stereoscopic viewing. We describe early experiences adapting several biomolecular simulation and analysis applications for emerging heterogeneous computing platforms that combine power-efficient system-on-chip multi-core CPUs with high-performance massively parallel GPUs. We present low-cost power monitoring instrumentation that provides sufficient temporal resolution to evaluate the power consumption of individual CPU algorithms and GPU kernels. We compare the performance and energy efficiency of scientific applications running on emerging platforms with results obtained on traditional platforms, identify hardware and algorithmic performance bottlenecks that affect the usability of these platforms, and describe avenues for improving both the hardware and applications in pursuit of the needs of molecular modeling tasks on mobile devices and future exascale computers.
Establishing a Novel Modeling Tool: A Python-Based Interface for a Neuromorphic Hardware System
Brüderle, Daniel; Müller, Eric; Davison, Andrew; Muller, Eilif; Schemmel, Johannes; Meier, Karlheinz
2008-01-01
Neuromorphic hardware systems provide new possibilities for the neuroscience modeling community. Due to the intrinsic parallelism of the micro-electronic emulation of neural computation, such models are highly scalable without a loss of speed. However, the communities of software simulator users and neuromorphic engineering in neuroscience are rather disjoint. We present a software concept that provides the possibility to establish such hardware devices as valuable modeling tools. It is based on the integration of the hardware interface into a simulator-independent language which allows for unified experiment descriptions that can be run on various simulation platforms without modification, implying experiment portability and a huge simplification of the quantitative comparison of hardware and simulator results. We introduce an accelerated neuromorphic hardware device and describe the implementation of the proposed concept for this system. An example setup and results acquired by utilizing both the hardware system and a software simulator are demonstrated. PMID:19562085
Establishing a novel modeling tool: a python-based interface for a neuromorphic hardware system.
Brüderle, Daniel; Müller, Eric; Davison, Andrew; Muller, Eilif; Schemmel, Johannes; Meier, Karlheinz
2009-01-01
Neuromorphic hardware systems provide new possibilities for the neuroscience modeling community. Due to the intrinsic parallelism of the micro-electronic emulation of neural computation, such models are highly scalable without a loss of speed. However, the communities of software simulator users and neuromorphic engineering in neuroscience are rather disjoint. We present a software concept that provides the possibility to establish such hardware devices as valuable modeling tools. It is based on the integration of the hardware interface into a simulator-independent language which allows for unified experiment descriptions that can be run on various simulation platforms without modification, implying experiment portability and a huge simplification of the quantitative comparison of hardware and simulator results. We introduce an accelerated neuromorphic hardware device and describe the implementation of the proposed concept for this system. An example setup and results acquired by utilizing both the hardware system and a software simulator are demonstrated.
Generating clock signals for a cycle accurate, cycle reproducible FPGA based hardware accelerator
Asaad, Sameth W.; Kapur, Mohit
2016-01-05
A method, system and computer program product are disclosed for generating clock signals for a cycle accurate FPGA based hardware accelerator used to simulate operations of a device-under-test (DUT). In one embodiment, the DUT includes multiple device clocks generating multiple device clock signals at multiple frequencies and at a defined frequency ratio; and the FPGA hardware accelerator includes multiple accelerator clocks generating multiple accelerator clock signals to operate the FPGA hardware accelerator to simulate the operations of the DUT. In one embodiment, operations of the DUT are mapped to the FPGA hardware accelerator, and the accelerator clock signals are generated at multiple frequencies and at the defined frequency ratio of the frequencies of the multiple device clocks, to maintain cycle accuracy between the DUT and the FPGA hardware accelerator. In an embodiment, the FPGA hardware accelerator may be used to control the frequencies of the multiple device clocks.
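A loose software model of the frequency-ratio requirement, assuming integer relative frequencies: per-clock enable pulses can be scheduled on a common time grid whose length is the least common multiple of the frequencies, so the ratio is preserved exactly over every common period. The patent generates such signals in FPGA hardware; this sketch only illustrates the ratio-preserving schedule.

from math import gcd
from functools import reduce

def lcm(a, b):
    return a * b // gcd(a, b)

def enable_schedule(freqs):
    """For clocks at the given integer relative frequencies, return per-clock
    enable patterns over one common period (illustrative model only)."""
    base = reduce(lcm, freqs)               # ticks in one common period
    return {f: [t % (base // f) == 0 for t in range(base)] for f in freqs}

# e.g. clocks with a 2:3 ratio share a 6-tick common period:
# the slower clock fires twice, the faster one three times.
print(enable_schedule([2, 3]))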
Organization and use of a Software/Hardware Avionics Research Program (SHARP)
NASA Technical Reports Server (NTRS)
Karmarkar, J. S.; Kareemi, M. N.
1975-01-01
The organization and use is described of the software/hardware avionics research program (SHARP) developed to duplicate the automatic portion of the STOLAND simulator system, on a general-purpose computer system (i.e., IBM 360). The program's uses are: (1) to conduct comparative evaluation studies of current and proposed airborne and ground system concepts via single run or Monte Carlo simulation techniques, and (2) to provide a software tool for efficient algorithm evaluation and development for the STOLAND avionics computer.
Handheld computing in pathology
Park, Seung; Parwani, Anil; Satyanarayanan, Mahadev; Pantanowitz, Liron
2012-01-01
Handheld computing has had many applications in medicine, but relatively few in pathology. Most reported uses of handhelds in pathology have been limited to experimental endeavors in telemedicine or education. With recent advances in handheld hardware and software, along with concurrent advances in whole-slide imaging (WSI), new opportunities and challenges have presented themselves. This review addresses the current state of handheld hardware and software, provides a history of handheld devices in medicine focusing on pathology, and presents future use cases for such handhelds in pathology. PMID:22616027
An approach to secure weather and climate models against hardware faults
NASA Astrophysics Data System (ADS)
Düben, Peter D.; Dawson, Andrew
2017-03-01
Enabling Earth System models to run efficiently on future supercomputers is a serious challenge for model development. Many publications study efficient parallelization to allow better scaling of performance on an increasing number of computing cores. However, one of the most alarming threats for weather and climate predictions on future high performance computing architectures is widely ignored: the presence of hardware faults that will frequently hit large applications as we approach exascale supercomputing. Changes in the structure of weather and climate models that would allow them to be resilient against hardware faults are hardly discussed in the model development community. In this paper, we present an approach to secure the dynamical core of weather and climate models against hardware faults using a backup system that stores coarse resolution copies of prognostic variables. Frequent checks of the model fields on the backup grid allow the detection of severe hardware faults, and prognostic variables that are changed by hardware faults on the model grid can be restored from the backup grid to continue model simulations with no significant delay. To justify the approach, we perform model simulations with a C-grid shallow water model in the presence of frequent hardware faults. As long as the backup system is used, simulations do not crash and a high level of model quality can be maintained. The overhead due to the backup system is reasonable and additional storage requirements are small. Runtime is increased by only 13 % for the shallow water model.
An approach to secure weather and climate models against hardware faults
NASA Astrophysics Data System (ADS)
Düben, Peter; Dawson, Andrew
2017-04-01
Enabling Earth System models to run efficiently on future supercomputers is a serious challenge for model development. Many publications study efficient parallelisation to allow better scaling of performance on an increasing number of computing cores. However, one of the most alarming threats for weather and climate predictions on future high performance computing architectures is widely ignored: the presence of hardware faults that will frequently hit large applications as we approach exascale supercomputing. Changes in the structure of weather and climate models that would allow them to be resilient against hardware faults are hardly discussed in the model development community. We present an approach to secure the dynamical core of weather and climate models against hardware faults using a backup system that stores coarse resolution copies of prognostic variables. Frequent checks of the model fields on the backup grid allow the detection of severe hardware faults, and prognostic variables that are changed by hardware faults on the model grid can be restored from the backup grid to continue model simulations with no significant delay. To justify the approach, we perform simulations with a C-grid shallow water model in the presence of frequent hardware faults. As long as the backup system is used, simulations do not crash and a high level of model quality can be maintained. The overhead due to the backup system is reasonable and additional storage requirements are small. Runtime is increased by only 13% for the shallow water model.
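The backup-grid mechanism described above can be sketched in a few lines for a one-dimensional prognostic field; the coarsening factor and fault-detection tolerance below are illustrative choices, not the paper's settings, and the paper applies the idea to a C-grid shallow water model.

import numpy as np

FACTOR, TOL = 4, 10.0   # illustrative coarsening factor and tolerance

def to_backup(field):
    # Coarse-resolution copy of the prognostic field.
    return field.reshape(-1, FACTOR).mean(axis=1)

def check_and_restore(field, backup):
    restored = np.repeat(backup, FACTOR)     # coarse -> fine estimate
    corrupt = np.abs(field - restored) > TOL # severe-fault detection
    field[corrupt] = restored[corrupt]       # continue from restored values
    return field, corrupt.any()

x = np.linspace(0.0, 1.0, 64)
backup = to_backup(x)
x[17] = 1e9                                  # simulated bit-flip fault
x, hit = check_and_restore(x, backup)
print(hit, x[17])                            # True, and a plausible value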
MO-AB-204-00: Interoperability in Radiation Oncology: IHE-RO Committee Update
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
You’ve experienced the frustration: vendor A’s device claims to work with vendor B’s device, but the practice doesn’t match the promise. Getting devices working together is the hidden art that Radiology and Radiation Oncology staff have to master. To assist with that difficult process, the Integrating the Healthcare Enterprise (IHE) effort was established in 1998, with the coordination of the Radiological Society of North America. Integrating the Healthcare Enterprise (IHE) is a consortium of healthcare professionals and industry partners focused on improving the way computer systems interconnect and exchange information. This is done by coordinating the use of published standards like DICOM and HL7. Several clinical and operational IHE domains exist in the healthcare arena, including Radiology and Radiation Oncology. The ASTRO-sponsored IHE Radiation Oncology (IHE-RO) domain focuses on radiation oncology specific information exchange. This session will explore the IHE Radiology and IHE-RO processes for: soliciting new profiles; improving the way computer systems interconnect and exchange information in the healthcare enterprise; supporting interconnectivity descriptions and proof of adherence by vendors; testing and assuring vendor solutions to connectivity problems; and including IHE profiles in RFPs for future software and hardware purchases. Learning Objectives: Understand the IHE role in improving interoperability in health care. Understand the process of profile development and implementation. Understand how vendors prove adherence to IHE-RO profiles. S. Hadley, ASTRO Supported Activity.
Lunar Applications in Reconfigurable Computing
NASA Technical Reports Server (NTRS)
Somervill, Kevin
2008-01-01
NASA's Constellation Program is developing a lunar surface outpost in which reconfigurable computing will play a significant role. Reconfigurable systems provide a number of benefits over conventional software-based implementations, including performance and power efficiency, while the use of standardized reconfigurable hardware provides opportunities to reduce logistical overhead. The current vision for the lunar surface architecture includes habitation, mobility, and communications systems, each of which greatly benefits from reconfigurable hardware in applications including video processing, natural feature recognition, data formatting, IP offload processing, and embedded control systems. In deploying reprogrammable hardware, considerations similar to those of software systems must be managed. There needs to be a mechanism for discovery enabling applications to locate and utilize the available resources. Also, application interfaces are needed to provide for both configuring the resources and transferring data between the application and the reconfigurable hardware. Each of these topics is explored in the context of deploying reconfigurable resources as an integral aspect of the lunar exploration architecture.
NASA Astrophysics Data System (ADS)
Laracuente, Nicholas; Grossman, Carl
2013-03-01
We developed an algorithm and software to calculate autocorrelation functions from real-time photon-counting data using the fast, parallel capabilities of graphical processor units (GPUs). Recent developments in hardware and software have allowed for general purpose computing with inexpensive GPU hardware. These devices are better suited to emulating hardware autocorrelators than traditional CPU-based software applications because they emphasize parallel throughput over sequential speed. Incoming data are binned in a standard multi-tau scheme with configurable points-per-bin size and are mapped into a GPU memory pattern to reduce time-expensive memory access. Applications include dynamic light scattering (DLS) and fluorescence correlation spectroscopy (FCS) experiments. We ran the software on a 64-core graphics PCI card in a 3.2 GHz Intel i5 CPU-based computer running Linux. FCS measurements were made on Alexa-546 and Texas Red dyes in a standard buffer (PBS). Software correlations were compared to hardware correlator measurements on the same signals. Supported by HHMI and Swarthmore College.
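A CPU-side sketch of the multi-tau binning scheme follows; the paper maps this computation onto GPU memory patterns, and the points-per-bin and coarsening-level counts are configurable as described above. The normalization and level structure here are a simplified illustration, not the authors' exact implementation.

import numpy as np

def multi_tau_autocorr(counts, points_per_bin=8, levels=4):
    """Normalized autocorrelation of a photon-count trace, multi-tau style:
    each level doubles the bin width, as in hardware correlators."""
    g, taus, dt = [], [], 1
    x = np.asarray(counts, dtype=float)
    for _ in range(levels):
        n = len(x)
        for k in range(1, points_per_bin + 1):
            if k >= n:
                break
            g.append(np.mean(x[:-k] * x[k:]) / np.mean(x) ** 2)
            taus.append(k * dt)
        x = x[: n - n % 2].reshape(-1, 2).mean(axis=1)   # coarsen by 2
        dt *= 2
    return np.array(taus), np.array(g)

taus, g = multi_tau_autocorr(np.random.poisson(5, 4096))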
A fast, programmable hardware architecture for spaceborne SAR processing
NASA Technical Reports Server (NTRS)
Bennett, J. R.; Cumming, I. G.; Lim, J.; Wedding, R. M.
1983-01-01
The launch of spaceborne SARs during the 1980s is discussed. The satellite SARs require high-quality, high-throughput ground processors. Compression ratios in range and azimuth of greater than 500 and 150, respectively, lead to frequency-domain processing and data computation rates in excess of 2000 million real operations per second for the C-band SARs under consideration. Various hardware architectures are examined, two promising candidates are selected, and a fast, programmable hardware architecture for spaceborne SAR processing is recommended. Modularity and programmability are introduced as desirable attributes for the purpose of HTSP hardware selection.
Stephens, Byron F; Hebert, Casey T; Azar, Frederick M; Mihalko, William M; Throckmorton, Thomas W
2015-09-01
Baseplate loosening in reverse total shoulder arthroplasty (RTSA) remains a concern. Placing peripheral screws into the 3 pillars of the densest scapular bone is believed to optimize baseplate fixation. Using a 3-dimensional computer-aided design (3D CAD) program, we investigated the optimal rotational baseplate alignment to maximize peripheral locking-screw purchase. Seventy-three arthritic scapulae were reconstructed from computed tomography images and imported into a 3D CAD software program along with representations of an RTSA baseplate that uses 4 fixed-angle peripheral locking screws. The baseplate position was standardized, and the baseplate was rotated to maximize individual and combined peripheral locking-screw purchase in each of the 3 scapular pillars. The mean ± standard error of the mean positions for optimal individual peripheral locking-screw placement (referenced in internal rotation) were 6° ± 2° for the coracoid pillar, 198° ± 2° for the inferior pillar, and 295° ± 3° for the scapular spine pillar. Of note, 78% (57 of 73) of the screws attempting to obtain purchase in the scapular spine pillar could not be placed without an in-out-in configuration. In contrast, 100% of coracoid and 99% of inferior pillar screws achieved full purchase. The position of combined maximal fixation was 11° ± 1°. These results suggest that approximately 11° of internal rotation is the ideal baseplate position for maximal peripheral locking-screw fixation in RTSA. In addition, these results highlight the difficulty in obtaining optimal purchase in the scapular spine. Copyright © 2015 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.
Instrument Systems Analysis and Verification Facility (ISAVF) users guide
NASA Technical Reports Server (NTRS)
Davis, J. F.; Thomason, J. O.; Wolfgang, J. L.
1985-01-01
The ISAVF facility is primarily an interconnected system of computers, special purpose real time hardware, and associated generalized software systems, which will permit Instrument System Analysts, Design Engineers, and Instrument Scientists to perform trade-off studies, specification development, instrument modeling, and verification of instrument hardware performance. It is not the intent of the ISAVF to duplicate or replace existing special purpose facilities such as the Code 710 Optical Laboratories or the Code 750 Test and Evaluation facilities. The ISAVF will provide data acquisition and control services for these facilities, as needed, using remote computer stations attached to the main ISAVF computers via dedicated communication lines.
NASA Technical Reports Server (NTRS)
Tomayko, James E.
1986-01-01
Twenty-five years of spacecraft onboard computer development have resulted in a better understanding of the requirements for effective, efficient, and fault tolerant flight computer systems. Lessons from eight flight programs (Gemini, Apollo, Skylab, Shuttle, Mariner, Voyager, and Galileo) and three research programs (digital fly-by-wire, STAR, and the Unified Data System) are useful in projecting the computer hardware configuration of the Space Station and the ways in which the Ada programming language will enhance the development of the necessary software. The evolution of hardware technology, fault protection methods, and software architectures used in space flight is reviewed in order to provide insight into the pending development of such items for the Space Station.
Real time computer data system for the 40 x 80 ft wind tunnel facility at Ames Research Center
NASA Technical Reports Server (NTRS)
Cambra, J. M.; Tolari, G. P.
1974-01-01
The wind tunnel realtime computer system is a distributed data gathering system that features a master computer subsystem, a high speed data gathering subsystem, a quick look dynamic analysis and vibration control subsystem, an analog recording back-up subsystem, a pulse code modulation (PCM) on-board subsystem, a communications subsystem, and a transducer excitation and calibration subsystem. The subsystems are married to the master computer through an executive software system and standard hardware and FORTRAN software interfaces. The executive software system has four basic software routines. These are the playback, setup, record, and monitor routines. The standard hardware interfaces along with the software interfaces provide the system with the capability of adapting to new environments.
Limits on fundamental limits to computation.
Markov, Igor L
2014-08-14
An indispensable part of our personal and working lives, computing has also become essential to industries and governments. Steady improvements in computer hardware have been supported by periodic doubling of transistor densities in integrated circuits over the past fifty years. Such Moore scaling now requires ever-increasing efforts, stimulating research in alternative hardware and stirring controversy. To help evaluate emerging technologies and increase our understanding of integrated-circuit scaling, here I review fundamental limits to computation in the areas of manufacturing, energy, physical space, design and verification effort, and algorithms. To outline what is achievable in principle and in practice, I recapitulate how some limits were circumvented, and compare loose and tight limits. Engineering difficulties encountered by emerging technologies may indicate yet unknown limits.
26 CFR 1.163-2 - Installment purchases where interest charge is not separately stated.
Code of Federal Regulations, 2010 CFR
2010-04-01
.... (a) In general. (1) Whenever there is a contract with a seller for the purchase of personal property... payments to be treated as interest shall be equal to 6 percent of the average unpaid balance under the contract during the taxable year. For purposes of this computation, the average unpaid balance under the...
Automated Counting of Particles To Quantify Cleanliness
NASA Technical Reports Server (NTRS)
Rhode, James
2005-01-01
A machine vision system, similar to systems used in microbiological laboratories to count cultured microbes, has been proposed for quantifying the cleanliness of nominally precisely cleaned hardware by counting residual contaminant particles. The system would include a microscope equipped with an electronic camera and circuitry to digitize the camera output, a personal computer programmed with machine-vision and interface software, and digital storage media. A filter pad, through which had been aspirated solvent from rinsing the hardware in question, would be placed on the microscope stage. A high-resolution image of the filter pad would be recorded. The computer would analyze the image and present a histogram of sizes of particles on the filter. On the basis of the histogram and a measure of the desired level of cleanliness, the hardware would be accepted or rejected. If the hardware were accepted, the image would be saved, along with other information, as a quality record. If the hardware were rejected, the histogram and ancillary information would be recorded for analysis of trends. The software would perceive particles that are too large or too numerous to meet a specified particle-distribution profile. Anomalous particles or fibrous material would be flagged for inspection.
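A sketch of the proposed counting and accept/reject step, assuming SciPy is available for connected-component labeling; the threshold, size bins, and cleanliness profile are illustrative stand-ins, not a real specification, and whether particles appear brighter or darker than the filter pad depends on the imaging setup.

import numpy as np
from scipy import ndimage   # assumed available for component labeling

def particle_histogram(image, threshold=0.5, bin_edges=(1, 5, 25, 100, 1e9)):
    """Count particles on a filter-pad image and histogram their sizes
    (in pixels). Illustrative threshold and size bins."""
    mask = image > threshold                 # particles assumed brighter than pad
    labels, n = ndimage.label(mask)          # one label per particle
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    hist, _ = np.histogram(sizes, bins=np.asarray(bin_edges))
    return n, hist

def accept(hist, profile=(50, 10, 2, 0)):
    # Reject the hardware if any size class exceeds the cleanliness profile.
    return all(h <= p for h, p in zip(hist, profile))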
Remote hardware-reconfigurable robotic camera
NASA Astrophysics Data System (ADS)
Arias-Estrada, Miguel; Torres-Huitzil, Cesar; Maya-Rueda, Selene E.
2001-10-01
In this work, a camera with integrated image processing capabilities is discussed. The camera is based on an imager coupled to an FPGA device (Field Programmable Gate Array) which contains an architecture for real-time computer vision low-level processing. The architecture can be reprogrammed remotely for application specific purposes. The system is intended for rapid modification and adaptation for inspection and recognition applications, with the flexibility of hardware and software reprogrammability. FPGA reconfiguration allows the same ease of upgrade in hardware as a software upgrade process. The camera is composed of a digital imager coupled to an FPGA device, two memory banks, and a microcontroller. The microcontroller is used for communication tasks and FPGA programming. The system implements a software architecture to handle multiple FPGA architectures in the device, and the possibility to download a software/hardware object from the host computer into its internal context memory. System advantages are: small size, low power consumption, and a library of hardware/software functionalities that can be exchanged during run time. The system has been validated with an edge detection and a motion processing architecture, which will be presented in the paper. Applications targeted are in robotics, mobile robotics, and vision based quality control.
Lam, Desmond; Mizerski, Richard
2017-06-01
The objective of this study is to explore the gambling participation and game purchase duplication of light regular, heavy regular, and pathological gamblers by applying the Duplication of Purchase Law. The current study uses data collected by the Australian Productivity Commission for eight different types of games. Key behavioral statistics on light regular, heavy regular, and pathological gamblers were computed and compared. The key finding is that pathological gambling, just like regular gambling, follows the Duplication of Purchase Law, which states that the dominant factor of purchase duplication between two brands is their market shares. This means that gambling between any two games at the pathological level, like any regular consumer purchase, exhibits "law-like" regularity based on the pathological gamblers' participation rate of each game. Additionally, pathological gamblers tend to gamble more frequently across all games except lotteries and instant games, as well as make greater cross-purchases, compared to heavy regular gamblers. A better understanding of the behavioral traits of regular (particularly heavy regular) and pathological gamblers can be useful to public policy makers and social marketers in order to more accurately identify such gamblers and better manage the negative impacts of gambling.
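The Duplication of Purchase Law lends itself to a direct check: the share of game X's players who also play game Y should be roughly D times Y's overall participation rate. The sketch below fits the duplication coefficient D in the relation b_xy ≈ D · b_y; it runs on synthetic data, and the function name and 0/1 coding are illustrative assumptions.

import numpy as np

def duplication_check(part):
    """part: (n_people, n_games) matrix of 0/1 participation flags.
    Returns the least-squares duplication coefficient D."""
    pen = part.mean(axis=0)                 # participation rate per game
    n_games = part.shape[1]
    pairs, preds = [], []
    for x in range(n_games):
        players = part[part[:, x] == 1]     # people who play game x
        for y in range(n_games):
            if x != y and len(players):
                pairs.append(players[:, y].mean())  # observed duplication b_xy
                preds.append(pen[y])                # predictor: penetration b_y
    return np.dot(pairs, preds) / np.dot(preds, preds)   # slope D in b_xy ~ D*b_y

rng = np.random.default_rng(0)
print(duplication_check(rng.random((1000, 8)) < rng.random(8)))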
Cullen, Karen; Baranowski, Tom; Watson, Kathy; Nicklas, Theresa; Fisher, Jennifer; O'Donnell, Sharon; Baranowski, Janice; Islam, Noemi; Missaghian, Mariam
2007-10-01
To characterize food group purchases from grocery receipts. Food shoppers (aged ≥ 19 years with at least one child aged
Computing Project, Marc develops high-fidelity turbulence models to enhance simulation accuracy and efficient numerical algorithms for future high performance computing hardware architectures. Research Interests: High performance computing; High order numerical methods for computational fluid dynamics; Fluid
Design Process of a Goal-Based Scenario on Computing Fundamentals
ERIC Educational Resources Information Center
Beriswill, Joanne Elizabeth
2014-01-01
In this design case, an instructor developed a goal-based scenario (GBS) for undergraduate computer fundamentals students to apply their knowledge of computer equipment and software. The GBS, entitled the MegaTech Project, presented the students with descriptions of the everyday activities of four persons needing to purchase a computer system. The…
Using Personal Computers To Acquire Special Education Information. Revised. ERIC Digest #429.
ERIC Educational Resources Information Center
ERIC Clearinghouse on Handicapped and Gifted Children, Reston, VA.
This digest offers basic information about resources, available to users of personal computers, in the area of professional development in special education. Two types of resources are described: those that can be purchased on computer diskettes and those made available by linking personal computers through electronic telephone networks. Resources…
Rodríguez, Manuel; Magdaleno, Eduardo; Pérez, Fernando; García, Cristhian
2017-03-28
Non-equispaced Fast Fourier transform (NFFT) is a very important algorithm in several technological and scientific areas such as synthetic aperture radar, computational photography, medical imaging, telecommunications, seismic analysis and so on. However, its computation complexity is high. In this paper, we describe an efficient NFFT implementation with a hardware coprocessor using an All-Programmable System-on-Chip (APSoC). This is a hybrid device that employs an Advanced RISC Machine (ARM) as Processing System with Programmable Logic for high-performance digital signal processing through parallelism and pipeline techniques. The algorithm has been coded in C language with pragma directives to optimize the architecture of the system. We have used the very novel Software Develop System-on-Chip (SDSoC) development tool that simplifies the interface and partitioning between hardware and software. This provides shorter development cycles and iterative improvements by exploring several architectures of the global system. The computational results show that hardware acceleration significantly outperformed the software based implementation.
Rodríguez, Manuel; Magdaleno, Eduardo; Pérez, Fernando; García, Cristhian
2017-01-01
Non-equispaced Fast Fourier transform (NFFT) is a very important algorithm in several technological and scientific areas such as synthetic aperture radar, computational photography, medical imaging, telecommunications, seismic analysis and so on. However, its computation complexity is high. In this paper, we describe an efficient NFFT implementation with a hardware coprocessor using an All-Programmable System-on-Chip (APSoC). This is a hybrid device that employs an Advanced RISC Machine (ARM) as Processing System with Programmable Logic for high-performance digital signal processing through parallelism and pipeline techniques. The algorithm has been coded in C language with pragma directives to optimize the architecture of the system. We have used the very novel Software Develop System-on-Chip (SDSoC) development tool that simplifies the interface and partitioning between hardware and software. This provides shorter development cycles and iterative improvements by exploring several architectures of the global system. The computational results show that hardware acceleration significantly outperformed the software based implementation. PMID:28350358
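For orientation, the transform that NFFT approximates is the direct non-equispaced discrete Fourier transform, an O(NM) sum; the sketch below states it plainly in NumPy. The paper's APSoC coprocessor implements the fast approximate algorithm (oversampled FFT plus window convolution) rather than this direct reference.

import numpy as np

def ndft(f_hat, nodes):
    """Direct non-equispaced DFT: f(x_j) = sum_k f_hat[k] e^{-2 pi i k x_j}.
    f_hat: N Fourier coefficients (N even); nodes: M points in [-1/2, 1/2)."""
    N = len(f_hat)
    k = np.arange(-N // 2, N // 2)
    return np.exp(-2j * np.pi * np.outer(nodes, k)) @ f_hat

f_hat = np.random.randn(64) + 1j * np.random.randn(64)
x = np.random.rand(100) - 0.5      # non-equispaced sample points
f = ndft(f_hat, x)                 # O(N*M); NFFT achieves ~O(N log N)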
Exploiting current-generation graphics hardware for synthetic-scene generation
NASA Astrophysics Data System (ADS)
Tanner, Michael A.; Keen, Wayne A.
2010-04-01
Increasing seeker frame rate and pixel count, as well as the demand for higher levels of scene fidelity, have driven scene generation software for hardware-in-the-loop (HWIL) and software-in-the-loop (SWIL) testing to higher levels of parallelization. Because modern PC graphics cards provide multiple computational cores (240 shader cores for current NVIDIA GeForce and Quadro cards), implementation of phenomenology codes on graphics processing units (GPUs) offers significant potential for simultaneous enhancement of simulation frame rate and fidelity. To take advantage of this potential requires algorithm implementation that is structured to minimize data transfers between the central processing unit (CPU) and the GPU. In this paper, preliminary methodologies developed at the Kinetic Hardware In-The-Loop Simulator (KHILS) will be presented. Also included are language tradeoffs among conventional shader programming, the Compute Unified Device Architecture (CUDA), and the Open Computing Language (OpenCL), covering performance trades and possible pathways for future tool development.
Plant, Richard R; Turner, Garry
2009-08-01
Since the publication of Plant, Hammond, and Turner (2004), which highlighted a pressing need for researchers to pay more attention to sources of error in computer-based experiments, the landscape has undoubtedly changed, but not necessarily for the better. Readily available hardware has improved in terms of raw speed; multi-core processors abound; graphics cards now have hundreds of megabytes of RAM; main memory is measured in gigabytes; drive space is measured in terabytes; ever larger thin-film transistor displays capable of single-digit response times, together with newer Digital Light Processing multimedia projectors, enable much greater graphic complexity; and new 64-bit operating systems, such as Microsoft Vista, are now commonplace. However, have millisecond-accurate presentation and response timing improved, and will they ever be available in commodity computers and peripherals? In the present article, we used a Black Box ToolKit to measure the variability in the timing characteristics of hardware commonly used in psychological research.
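A purely software probe, such as the sketch below, shows both what commodity hardware can and cannot tell you: it measures the host's timer granularity and loop jitter, but it is blind to display and input-device latency, which is exactly why external instruments like the Black Box ToolKit are needed.

import time

# Measure the spread of back-to-back perf_counter readings; this captures
# timer resolution and scheduling jitter only, not end-to-end I/O latency.
samples = []
prev = time.perf_counter()
for _ in range(10_000):
    now = time.perf_counter()
    samples.append(now - prev)
    prev = now
print(f"min {min(samples)*1e9:.0f} ns, max {max(samples)*1e6:.3f} ms")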
Results of solar electric thrust vector control system design, development and tests
NASA Technical Reports Server (NTRS)
Fleischer, G. E.
1973-01-01
Efforts to develop and test a thrust vector control system (TVCS) for a solar-energy-powered ion engine array are described. The results of solar electric propulsion system technology (SEPST) III real-time tests of present versions of TVCS hardware in combination with computer-simulated attitude dynamics of a solar electric multi-mission spacecraft (SEMMS) Phase A-type spacecraft configuration are summarized. Work on an improved solar electric TVCS, based on the use of a state estimator, is described. SEPST III tests of TVCS hardware have generally proved successful, and the dynamic response of the system is close to predictions. It appears that, if TVCS electronic hardware can be effectively replaced by control computer software, a significant advantage in control capability and flexibility can be gained in future developmental testing, with practical implications for flight systems as well. Finally, it is concluded from computer simulations that TVCS stabilization using rate estimation promises a substantial performance improvement over the present design.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Warkentin, H; Bubric, K; Giovannetti, H
2016-06-15
Purpose: As a quality improvement measure, we undertook this work to incorporate usability testing into the implementation procedures for new electronic documents and forms used by four affiliated radiation therapy centers. Methods: A human factors specialist provided training in usability testing for a team of medical physicists, radiation therapists, and radiation oncologists from four radiotherapy centers. A usability testing plan was then developed that included controlled scenarios and standardized forms for qualitative and quantitative feedback from participants, including patients. Usability tests were performed by end users using the same hardware and viewing conditions that are found in the clinical environment. A pilot test of a form used during radiotherapy CT simulation was performed in a single department; feedback informed adaptive improvements to the electronic form, hardware requirements, resource accessibility and the usability testing plan. Following refinements to the testing plan, usability testing was performed at three affiliated cancer centers with different vault layouts and hardware. Results: Feedback from the testing resulted in the detection of 6 critical errors (omissions and inability to complete task without assistance), 6 non-critical errors (recoverable), and multiple suggestions for improvement. Usability problems with room layout were detected at one center and problems with hardware were detected at one center. Upon amalgamation and summary of the results, three key recommendations were presented to the document’s authors for incorporation into the electronic form. Documented inefficiencies and patient safety concerns related to the room layout and hardware were presented to administration along with a request for funding to purchase upgraded hardware and accessories to allow a more efficient workflow within the simulator vault. Conclusion: By including usability testing as part of the process when introducing any new document or procedure into clinical use, associated risks can be identified and mitigated before patient care and clinical workflow are impacted.
ERIC Educational Resources Information Center
Barker, Philip
1986-01-01
Discussion of developments in information storage technology likely to have significant impact upon library utilization focuses on hardware (videodisc technology) and software developments (knowledge databases; computer networks; database management systems; interactive video, computer, and multimedia user interfaces). Three generic computer-based…
ERIC Educational Resources Information Center
Stone, Antonia
1982-01-01
Provides general information on currently available microcomputers, computer programs (software), hardware requirements, software sources, costs, computer games, and programing. Includes a list of popular microcomputers, providing price category, model, list price, software (cassette, tape, disk), monitor specifications, amount of random access…
Airborne Electro-Optical Sensor Simulation System. Final Report.
ERIC Educational Resources Information Center
Hayworth, Don
The total system capability, including all the special purpose and general purpose hardware comprising the Airborne Electro-Optical Sensor Simulation (AEOSS) System, is described. The functional relationship between hardware portions is described together with interface to the software portion of the computer image generation. Supporting rationale…
Incorporating a Human-Computer Interaction Course into Software Development Curriculums
ERIC Educational Resources Information Center
Janicki, Thomas N.; Cummings, Jeffrey; Healy, R. Joseph
2015-01-01
Individuals have increasing options on retrieving information related to hardware and software. Specific hardware devices include desktops, tablets and smart devices. Also, the number of software applications has significantly increased the user's capability to access data. Software applications include the traditional web site, smart device…
Software Design Improvements. Part 1; Software Benefits and Limitations
NASA Technical Reports Server (NTRS)
Lalli, Vincent R.; Packard, Michael H.; Ziemianski, Tom
1997-01-01
Computer hardware and associated software have been used for many years to process accounting information, to analyze test data and to perform engineering analysis. Now computers and software also control everything from automobiles to washing machines, and the number and type of applications are growing at an exponential rate. The size of individual programs has shown similar growth. Furthermore, software and hardware are used to monitor and/or control potentially dangerous products and safety-critical systems. These uses include everything from airplanes and braking systems to medical devices and nuclear plants. The questions are: How can this hardware and software be made more reliable? How can software quality be improved? What methodology is needed, for both large and small software products, to improve the design, and how can software be verified?
Addressing failures in exascale computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Snir, Marc; Wisniewski, Robert W.; Abraham, Jacob A.
2014-05-01
We present here a report produced by a workshop on “Addressing Failures in Exascale Computing” held in Park City, Utah, August 4–11, 2012. The charter of this workshop was to establish a common taxonomy about resilience across all the levels in a computing system; discuss existing knowledge on resilience across the various hardware and software layers of an exascale system; and build on those results, examining potential solutions from both a hardware and software perspective and focusing on a combined approach. The workshop brought together participants with expertise in applications, system software, and hardware; they came from industry, government, and academia; and their interests ranged from theory to implementation. The combination allowed broad and comprehensive discussions and led to this document, which summarizes and builds on those discussions.
An iterative approach to region growing using associative memories
NASA Technical Reports Server (NTRS)
Snyder, W. E.; Cowart, A.
1983-01-01
Region growing is often given as a classical example of the recursive control structures used in image processing, which are awkward to implement in hardware when the intent is to segment an image at raster scan rates. It is addressed here in light of the postulate that any computation which can be performed recursively can be performed easily and efficiently by iteration coupled with association. Attention is given to an algorithm and hardware structure able to perform region labeling iteratively at scan rates. Every pixel is individually labeled with an identifier which signifies the region to which it belongs. Difficulties otherwise requiring recursion are handled by maintaining an equivalence table in hardware, transparent to the computer which reads the labeled pixels. A simulation of the associative memory has demonstrated its effectiveness.
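A software sketch of the iterative scheme: a single raster-scan pass labels pixels and records equivalences, with a union-find table standing in for the associative memory; the final pass that resolves equivalences is what the proposed hardware does transparently to the reading computer.

import numpy as np

def label_regions(img):
    """img: 2-D array of 0/1. Returns an array of region identifiers."""
    labels = np.zeros(img.shape, dtype=int)
    parent = [0]                              # equivalence table (union-find)

    def find(i):
        while parent[i] != i:
            i = parent[i]
        return i

    next_label = 1
    H, W = img.shape
    for r in range(H):                        # single raster-scan pass
        for c in range(W):
            if not img[r, c]:
                continue
            up = labels[r - 1, c] if r else 0
            left = labels[r, c - 1] if c else 0
            if up and left:
                a, b = find(up), find(left)
                parent[max(a, b)] = min(a, b) # record an equivalence
                labels[r, c] = min(a, b)
            elif up or left:
                labels[r, c] = up or left
            else:
                parent.append(next_label)     # new region identifier
                labels[r, c] = next_label
                next_label += 1
    for r in range(H):                        # resolve recorded equivalences
        for c in range(W):
            if labels[r, c]:
                labels[r, c] = find(labels[r, c])
    return labels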
Simplifying and speeding the management of intra-node cache coherence
Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton on Hudson, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Phillip [Cortlandt Manor, NY; Hoenicke, Dirk [Ossining, NY; Ohmacht, Martin [Yorktown Heights, NY
2012-04-17
A method and apparatus for managing coherence between two processors of a two processor node of a multi-processor computer system. Generally the present invention relates to a software algorithm that simplifies and significantly speeds the management of cache coherence in a message passing parallel computer, and to hardware apparatus that assists this cache coherence algorithm. The software algorithm uses the opening and closing of put/get windows to coordinate the activity required to achieve cache coherence. The hardware apparatus may be an extension to the hardware address decode, that creates, in the physical memory address space of the node, an area of virtual memory that (a) does not actually exist, and (b) is therefore able to respond instantly to read and write requests from the processing elements.
Managing coherence via put/get windows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blumrich, Matthias A; Chen, Dong; Coteus, Paul W
A method and apparatus for managing coherence between two processors of a two processor node of a multi-processor computer system. Generally the present invention relates to a software algorithm that simplifies and significantly speeds the management of cache coherence in a message passing parallel computer, and to hardware apparatus that assists this cache coherence algorithm. The software algorithm uses the opening and closing of put/get windows to coordinate the activity required to achieve cache coherence. The hardware apparatus may be an extension to the hardware address decode, that creates, in the physical memory address space of the node, an area of virtual memory that (a) does not actually exist, and (b) is therefore able to respond instantly to read and write requests from the processing elements.
Malleable architecture generator for FPGA computing
NASA Astrophysics Data System (ADS)
Gokhale, Maya; Kaba, James; Marks, Aaron; Kim, Jang
1996-10-01
The malleable architecture generator (MARGE) is a tool set that translates high-level parallel C to configuration bit streams for field-programmable logic based computing systems. MARGE creates an application-specific instruction set and generates the custom hardware components required to perform exactly those computations specified by the C program. In contrast to traditional fixed-instruction processors, MARGE's dynamic instruction set creation provides for efficient use of hardware resources. MARGE processes intermediate code in which each operation is annotated by the bit lengths of the operands. Each basic block (sequence of straight line code) is mapped into a single custom instruction which contains all the operations and logic inherent in the block. A synthesis phase maps the operations comprising the instructions into register transfer level structural components and control logic which have been optimized to exploit functional parallelism and function unit reuse. As a final stage, commercial technology-specific tools are used to generate configuration bit streams for the desired target hardware. Technology- specific pre-placed, pre-routed macro blocks are utilized to implement as much of the hardware as possible. MARGE currently supports the Xilinx-based Splash-2 reconfigurable accelerator and National Semiconductor's CLAy-based parallel accelerator, MAPA. The MARGE approach has been demonstrated on systolic applications such as DNA sequence comparison.
Cascaded VLSI neural network architecture for on-line learning
NASA Technical Reports Server (NTRS)
Thakoor, Anilkumar P. (Inventor); Duong, Tuan A. (Inventor); Daud, Taher (Inventor)
1992-01-01
High-speed, analog, fully-parallel, and asynchronous building blocks are cascaded for larger sizes and enhanced resolution. A hardware-compatible algorithm permits hardware-in-the-loop learning despite limited weight resolution. A computation-intensive feature classification application was demonstrated with this flexible hardware and new algorithm at high speed. This result indicates that these building block chips can be embedded as an application specific coprocessor for solving real world problems at extremely high data rates.
Demonstration Advanced Avionics System (DAAS) functional description. [Cessna 402B aircraft
NASA Technical Reports Server (NTRS)
1980-01-01
A comprehensive set of general aviation avionics were defined for integration into an advanced hardware mechanization for demonstration in a Cessna 402B aircraft. Block diagrams are shown and system and computer architecture as well as significant hardware elements are described. The multifunction integrated data control center and electronic horizontal situation indicator are discussed. The functions that the DAAS will perform are examined. This function definition is the basis for the DAAS hardware and software design.
birgHPC: creating instant computing clusters for bioinformatics and molecular dynamics.
Chew, Teong Han; Joyce-Tan, Kwee Hong; Akma, Farizuwana; Shamsir, Mohd Shahir
2011-05-01
birgHPC, a bootable Linux Live CD, has been developed to create high-performance clusters for bioinformatics and molecular dynamics studies using any Local Area Network (LAN)-networked computers. birgHPC features automated hardware and slot detection and provides a simple job submission interface. The latest versions of GROMACS, NAMD, mpiBLAST and ClustalW-MPI can be run in parallel by simply booting the birgHPC CD or flash drive from the head node, which immediately positions the rest of the PCs on the network as computing nodes. Thus, a temporary, affordable, scalable and high-performance computing environment can be built by non-computing-based researchers using low-cost commodity hardware. The birgHPC Live CD and relevant user guide are available for free at http://birg1.fbb.utm.my/birghpc.
Computer graphics and the graphic artist
NASA Technical Reports Server (NTRS)
Taylor, N. L.; Fedors, E. G.; Pinelli, T. E.
1985-01-01
A centralized computer graphics system is being developed at the NASA Langley Research Center. This system was required to satisfy multiuser needs, ranging from presentation quality graphics prepared by a graphic artist to 16-mm movie simulations generated by engineers and scientists. While the major thrust of the central graphics system was directed toward engineering and scientific applications, hardware and software capabilities to support the graphic artists were integrated into the design. This paper briefly discusses the importance of computer graphics in research; the central graphics system in terms of systems, software, and hardware requirements; the application of computer graphics to graphic arts, discussed in terms of the requirements for a graphic arts workstation; and the problems encountered in applying computer graphics to the graphic arts. The paper concludes by presenting the status of the central graphics system.
11 CFR 9003.6 - Production of computer information.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 11 Federal Elections 1 2012-01-01 2012-01-01 false Production of computer information. 9003.6... ELECTION FINANCING ELIGIBILITY FOR PAYMENTS § 9003.6 Production of computer information. (a) Categories of...) Records relating to the costs of producing broadcast communications and purchasing airtime; (5) Records...
11 CFR 9003.6 - Production of computer information.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 11 Federal Elections 1 2013-01-01 2012-01-01 true Production of computer information. 9003.6... ELECTION FINANCING ELIGIBILITY FOR PAYMENTS § 9003.6 Production of computer information. (a) Categories of...) Records relating to the costs of producing broadcast communications and purchasing airtime; (5) Records...
11 CFR 9003.6 - Production of computer information.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 11 Federal Elections 1 2011-01-01 2011-01-01 false Production of computer information. 9003.6... ELECTION FINANCING ELIGIBILITY FOR PAYMENTS § 9003.6 Production of computer information. (a) Categories of...) Records relating to the costs of producing broadcast communications and purchasing airtime; (5) Records...
11 CFR 9003.6 - Production of computer information.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 11 Federal Elections 1 2014-01-01 2014-01-01 false Production of computer information. 9003.6... ELECTION FINANCING ELIGIBILITY FOR PAYMENTS § 9003.6 Production of computer information. (a) Categories of...) Records relating to the costs of producing broadcast communications and purchasing airtime; (5) Records...
48 CFR 1852.227-19 - Commercial computer software-Restricted rights.
Code of Federal Regulations, 2013 CFR
2013-10-01
... software-Restricted rights. 1852.227-19 Section 1852.227-19 Federal Acquisition Regulations System NATIONAL... Provisions and Clauses 1852.227-19 Commercial computer software—Restricted rights. (a) As prescribed in 1827... regarding any computer software delivered under this contract/purchase order, the NASA Contracting Officer...
Web-Based Job Submission Interface for the GAMESS Computational Chemistry Program
ERIC Educational Resources Information Center
Perri, M. J.; Weber, S. H.
2014-01-01
A Web site is described that facilitates use of the free computational chemistry software: General Atomic and Molecular Electronic Structure System (GAMESS). Its goal is to provide an opportunity for undergraduate students to perform computational chemistry experiments without the need to purchase expensive software.
48 CFR 1852.227-19 - Commercial computer software-Restricted rights.
Code of Federal Regulations, 2010 CFR
2010-10-01
... software-Restricted rights. 1852.227-19 Section 1852.227-19 Federal Acquisition Regulations System NATIONAL... Provisions and Clauses 1852.227-19 Commercial computer software—Restricted rights. (a) As prescribed in 1827... regarding any computer software delivered under this contract/purchase order, the NASA Contracting Officer...
48 CFR 1852.227-19 - Commercial computer software-Restricted rights.
Code of Federal Regulations, 2011 CFR
2011-10-01
... software-Restricted rights. 1852.227-19 Section 1852.227-19 Federal Acquisition Regulations System NATIONAL... Provisions and Clauses 1852.227-19 Commercial computer software—Restricted rights. (a) As prescribed in 1827... regarding any computer software delivered under this contract/purchase order, the NASA Contracting Officer...
48 CFR 1852.227-19 - Commercial computer software-Restricted rights.
Code of Federal Regulations, 2012 CFR
2012-10-01
... software-Restricted rights. 1852.227-19 Section 1852.227-19 Federal Acquisition Regulations System NATIONAL... Provisions and Clauses 1852.227-19 Commercial computer software—Restricted rights. (a) As prescribed in 1827... regarding any computer software delivered under this contract/purchase order, the NASA Contracting Officer...
48 CFR 1852.227-19 - Commercial computer software-Restricted rights.
Code of Federal Regulations, 2014 CFR
2014-10-01
... software-Restricted rights. 1852.227-19 Section 1852.227-19 Federal Acquisition Regulations System NATIONAL... Provisions and Clauses 1852.227-19 Commercial computer software—Restricted rights. (a) As prescribed in 1827... regarding any computer software delivered under this contract/purchase order, the NASA Contracting Officer...
Examining the architecture of cellular computing through a comparative study with a computer
Wang, Degeng; Gribskov, Michael
2005-01-01
The computer and the cell both use information embedded in simple coding, the binary software code and the quadruple genomic code, respectively, to support system operations. A comparative examination of their system architecture as well as their information storage and utilization schemes is performed. On top of the code, both systems display a modular, multi-layered architecture, which, in the case of a computer, arises from human engineering efforts through a combination of hardware implementation and software abstraction. Using the computer as a reference system, a simplistic mapping of the architectural components between the two is easily detected. This comparison also reveals that a cell abolishes the software–hardware barrier through genomic encoding for the constituents of the biochemical network, a cell's ‘hardware’ equivalent to the computer central processing unit (CPU). The information loading (gene expression) process acts as a major determinant of the encoded constituent's abundance, which, in turn, often determines the ‘bandwidth’ of a biochemical pathway. Cellular processes are implemented in biochemical pathways in parallel manners. In a computer, on the other hand, the software provides only instructions and data for the CPU. A process represents just sequentially ordered actions by the CPU and only virtual parallelism can be implemented through CPU time-sharing. Whereas process management in a computer may simply mean job scheduling, coordinating pathway bandwidth through the gene expression machinery represents a major process management scheme in a cell. In summary, a cell can be viewed as a super-parallel computer, which computes through controlled hardware composition. While we have, at best, a very fragmented understanding of cellular operation, we have a thorough understanding of the computer throughout the engineering process. The potential utilization of this knowledge to the benefit of systems biology is discussed. PMID:16849179
Li, Xiangrui; Lu, Zhong-Lin
2012-02-29
Display systems based on conventional computer graphics cards are capable of generating images with 8-bit gray level resolution. However, most experiments in vision research require displays with more than 12 bits of luminance resolution. Several solutions are available. Bit++ (1) and DataPixx (2) use the Digital Visual Interface (DVI) output from graphics cards and high-resolution (14- or 16-bit) digital-to-analog converters to drive analog display devices. The VideoSwitcher (3) described here combines analog video signals from the red and blue channels of graphics cards with different weights using a passive resistor network (4) and an active circuit to deliver identical video signals to the three channels of color monitors. The method provides an inexpensive way to enable high-resolution monochromatic displays using conventional graphics cards and analog monitors. It can also provide trigger signals that can be used to mark stimulus onsets, making it easy to synchronize visual displays with physiological recordings or response time measurements. Although computer keyboards and mice are frequently used in measuring response times (RT), the accuracy of these measurements is quite low. The RTbox is a specialized hardware and software solution for accurate RT measurements. Connected to the host computer through a USB connection, the driver of the RTbox is compatible with all conventional operating systems. It uses a microprocessor and high-resolution clock to record the identities and timing of button events, which are buffered until the host computer retrieves them. The recorded button events are not affected by potential timing uncertainties or biases associated with data transmission and processing in the host computer. The asynchronous storage greatly simplifies the design of user programs. Several methods are available to synchronize the clocks of the RTbox and the host computer. The RTbox can also receive external triggers and be used to measure RT with respect to external events. Both the VideoSwitcher and the RTbox are available for users to purchase. The relevant information and many demonstration programs can be found at http://lobes.usc.edu/.
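The resolution gain from weighted channel mixing can be sketched numerically: with the red channel attenuated by a factor w relative to blue, two 8-bit outputs approximate a luminance grid of roughly 8 + log2(w) bits. The weight and function below are illustrative assumptions, not the VideoSwitcher's actual resistor ratio.

W = 128.0   # assumed red-to-blue attenuation for this sketch

def dac_values(target, w=W):
    """target: desired luminance in [0, 255 + 255/w]. Returns (blue, red)
    8-bit values whose combined output blue + red/w best approximates it."""
    blue = min(255, int(target))                 # coarse step from blue
    red = int(round((target - blue) * w))        # fine step from attenuated red
    red = max(0, min(255, red))
    return blue, red

blue, red = dac_values(100.3828125)              # needs sub-8-bit precision
print(blue + red / W)                            # ~100.3828: ~15-bit grid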
CRYSNET manual. Informal report. [Hardware and software of crystallographic computing network
DOE Office of Scientific and Technical Information (OSTI.GOV)
None,
1976-07-01
This manual describes the hardware and software which together make up the crystallographic computing network (CRYSNET). The manual is intended as a users' guide and also provides general information for persons without any experience with the system. CRYSNET is a network of intelligent remote graphics terminals that are used to communicate with the CDC Cyber 70/76 computing system at the Brookhaven National Laboratory (BNL) Central Scientific Computing Facility. Terminals are in active use by four research groups in the field of crystallography. A protein data bank has been established at BNL to store in machine-readable form atomic coordinates and other crystallographic data for macromolecules. The bank currently includes data for more than 20 proteins. This structural information can be accessed at BNL directly by the CRYSNET graphics terminals. More than two years of experience has been accumulated with CRYSNET. During this period, it has been demonstrated that the terminals, which provide access to a large, fast third-generation computer, plus stand-alone interactive graphics capability, are useful for computations in crystallography, and in a variety of other applications as well. The terminal hardware, the actual operations of the terminals, and the operations of the BNL Central Facility are described in some detail, and documentation of the terminal and central-site software is given. (RWR)
Accelerating epistasis analysis in human genetics with consumer graphics hardware.
Sinnott-Armstrong, Nicholas A; Greene, Casey S; Cancare, Fabio; Moore, Jason H
2009-07-24
Human geneticists are now capable of measuring more than one million DNA sequence variations from across the human genome. The new challenge is to develop computationally feasible methods capable of analyzing these data for associations with common human disease, particularly in the context of epistasis. Epistasis describes the situation where multiple genes interact in a complex non-linear manner to determine an individual's disease risk and is thought to be ubiquitous for common diseases. Multifactor Dimensionality Reduction (MDR) is an algorithm capable of detecting epistasis. An exhaustive analysis with MDR is often computationally expensive, particularly for high-order interactions. This challenge has previously been met with parallel computation and expensive hardware. The option we examine here exploits commodity hardware designed for computer graphics. In modern computers, Graphics Processing Units (GPUs) have more memory bandwidth and computational capability than Central Processing Units (CPUs) and are well suited to this problem. Advances in the video game industry have led to an economy of scale creating a situation where these powerful components are readily available at very low cost. Here we implement and evaluate the performance of the MDR algorithm on GPUs. Of primary interest are the time required for an epistasis analysis and the price-to-performance ratio of available solutions. We found that using MDR on GPUs consistently increased performance per machine over both a feature-rich Java software package and a C++ cluster implementation. A GPU workstation running a GPU implementation reduces computation time by a factor of 160 compared to an 8-core workstation running the Java implementation on CPUs. This GPU workstation performs similarly to 150 cores running an optimized C++ implementation on a Beowulf cluster. Furthermore, this GPU system provides extremely cost-effective performance while leaving the CPU available for other tasks. The GPU workstation containing three GPUs costs $2000, while obtaining similar performance on a Beowulf cluster requires 150 CPU cores which, including the added infrastructure and support cost of the cluster system, cost approximately $82,500. Graphics-hardware-based computing provides a cost-effective means to perform genetic analysis of epistasis using MDR on large datasets without the infrastructure of a computing cluster.
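A quick back-of-the-envelope check of the cost figures quoted above, using only numbers from the abstract:

    # Price-to-performance arithmetic from the figures quoted in the abstract.
    gpu_workstation_cost = 2_000    # USD, 3-GPU workstation
    cluster_cost = 82_500           # USD, ~150 CPU cores incl. infrastructure
    speedup_vs_java = 160           # GPU workstation vs. 8-core Java version

    # Similar throughput to the cluster at roughly 1/41 the cost:
    print(cluster_cost / gpu_workstation_cost)  # 41.25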
Impact of Technology on the University of Miami.
ERIC Educational Resources Information Center
Little, Robert O.; Temares, M. Lewis
As part of a long-range information systems planning effort at the University of Miami, the impact of technology on the organization was assessed. The assessment covered hardware, office automation, systems and database software, and communications. The trends in computer hardware point toward continued decreasing size and cost, placing computer…
Large scale systems : a study of computer organizations for air traffic control applications.
DOT National Transportation Integrated Search
1971-06-01
Based on current sizing estimates and tracking algorithms, some computer organizations applicable to future air traffic control computing systems are described and assessed. Hardware and software problem areas are defined and solutions are outlined.
Computerizing the Accounting Curriculum.
ERIC Educational Resources Information Center
Nash, John F.; England, Thomas G.
1986-01-01
Discusses the use of computers in college accounting courses. Argues that the success of new efforts in using computers in teaching accounting is dependent upon increasing instructors' computer skills, and choosing appropriate hardware and software, including commercially available business software packages. (TW)
ERIC Educational Resources Information Center
Moore, John W., Ed.
1986-01-01
Presents six brief articles dealing with the use of computers in teaching various topics in chemistry. Describes hardware and software applications which relate to protein graphics, computer simulated metabolism, interfaces between microcomputers and measurement devices, courseware available for spectrophotometers, and the calculation of elemental…
35 Ways to Take a "Byte" out of Software Costs. Fund Raising Ideas from COMPress Customers.
ERIC Educational Resources Information Center
COMPress, Wentworth, NH.
Based on a survey sponsored by COMPress Quarterly of various schools to determine the extent of the problem of lack of funds for purchasing computer software and how schools have coped with the problem, this booklet describes numerous ways to raise funds for software purchases. Nearly 1,000 questionnaires were returned and this booklet was…
Introduction to Computer Aided Instruction in the Language Laboratory.
ERIC Educational Resources Information Center
Hughett, Harvey L.
The first half of this book focuses on the rationale, ideas, and information for the use of technology, including microcomputers, to improve language teaching efficiency. Topics discussed include foreign language computer assisted instruction (CAI), hardware and software selection, computer literacy, educational computing organizations, ease of…
Hardware Considerations for Computer Based Education in the 1980's.
ERIC Educational Resources Information Center
Hirschbuhl, John J.
1980-01-01
In the future, computers will be needed to sift through the vast proliferation of available information. Among new developments in computer technology are the videodisc, microcomputers, and holography. Predictions for future developments include laser libraries for the visually handicapped and Computer Assisted Dialogue. (JN)
Reactor Operations Monitoring System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, M.M.
1989-01-01
The Reactor Operations Monitoring System (ROMS) is a VME based, parallel processor data acquisition and safety action system designed by the Equipment Engineering Section and Reactor Engineering Department of the Savannah River Site. The ROMS will be analyzing over 8 million signal samples per minute. Sixty-eight microprocessors are used in the ROMS in order to achieve a real-time data analysis. The ROMS is composed of multiple computer subsystems. Four redundant computer subsystems monitor 600 temperatures with 2400 thermocouples. Two computer subsystems share the monitoring of 600 reactor coolant flows. Additional computer subsystems are dedicated to monitoring 400 signals from assorted process sensors. Data from these computer subsystems are transferred to two redundant process display computer subsystems which present process information to reactor operators and to reactor control computers. The ROMS is also designed to carry out safety functions based on its analysis of process data. The safety functions include initiating a reactor scram (shutdown), the injection of neutron poison, and the loadshed of selected equipment. A complete development Reactor Operations Monitoring System has been built. It is located in the Program Development Center at the Savannah River Site and is currently being used by the Reactor Engineering Department in software development. The Equipment Engineering Section is designing and fabricating the process interface hardware. Upon proof of hardware and design concept, orders will be placed for the final five systems located in the three reactor areas, the reactor training simulator, and the hardware maintenance center.
Stone, John E.; Hallock, Michael J.; Phillips, James C.; Peterson, Joseph R.; Luthey-Schulten, Zaida; Schulten, Klaus
2016-01-01
Many of the continuing scientific advances achieved through computational biology are predicated on the availability of ongoing increases in computational power required for detailed simulation and analysis of cellular processes on biologically-relevant timescales. A critical challenge facing the development of future exascale supercomputer systems is the development of new computing hardware and associated scientific applications that dramatically improve upon the energy efficiency of existing solutions, while providing increased simulation, analysis, and visualization performance. Mobile computing platforms have recently become powerful enough to support interactive molecular visualization tasks that were previously only possible on laptops and workstations, creating future opportunities for their convenient use for meetings, remote collaboration, and as head mounted displays for immersive stereoscopic viewing. We describe early experiences adapting several biomolecular simulation and analysis applications for emerging heterogeneous computing platforms that combine power-efficient system-on-chip multi-core CPUs with high-performance massively parallel GPUs. We present low-cost power monitoring instrumentation that provides sufficient temporal resolution to evaluate the power consumption of individual CPU algorithms and GPU kernels. We compare the performance and energy efficiency of scientific applications running on emerging platforms with results obtained on traditional platforms, identify hardware and algorithmic performance bottlenecks that affect the usability of these platforms, and describe avenues for improving both the hardware and applications in pursuit of the needs of molecular modeling tasks on mobile devices and future exascale computers. PMID:27516922
Common Sense Planning for a Computer, or, What's It Worth to You?
ERIC Educational Resources Information Center
Crawford, Walt
1984-01-01
Suggests factors to be considered in planning for the purchase of a microcomputer, including budgets, benefits, costs, and decisions. Major uses of a personal computer are described--word processing, financial analysis, file and database management, programming and computer literacy, education, entertainment, and thrill of high technology. (EJS)
Library and Classroom Use of Copyrighted Videotapes and Computer Software.
ERIC Educational Resources Information Center
Reed, Mary Hutchings; Stanek, Debra
1986-01-01
This pullout guide addresses issues regarding library use of copyrighted videotapes and computer software. Guidelines for videotapes cover in-classroom use, in-library use in public library, loan, and duplication. Guidelines for computer software cover purchase conditions, avoiding license restrictions, loaning, archival copies, and in-library and…
ERIC Educational Resources Information Center
Djang, Philipp A.
1993-01-01
Describes a Multiple Criteria Decision Analysis Approach for the selection of personal computers that combines the capabilities of Analytic Hierarchy Process and Integer Goal Programing. An example of how decision makers can use this approach to determine what kind of personal computers and how many of each type to purchase is given. (nine…
Cloud Computing: A Free Technology Option to Promote Collaborative Learning
ERIC Educational Resources Information Center
Siegle, Del
2010-01-01
In a time of budget cuts and limited funding, purchasing and installing the latest software on classroom computers can be prohibitive for schools. Many educators are unaware that a variety of free software options exist, and some of them do not actually require installing software on the user's computer. One such option is cloud computing. This…
"Micro" Politics: Mapping the Origins of Schools Computing as a Field of Education Policy
ERIC Educational Resources Information Center
Selwyn, Neil
2013-01-01
This paper examines the emergence of schools "micro-computing" in the UK between 1977 and 1984--a period of significant educational, technological and political change. During this time, computing developed rapidly from a niche activity in a few select schools to the state subsidized purchasing of a "computer in every school"…
Computer Information Retrieval for Journalists.
ERIC Educational Resources Information Center
Rodewald, Pam
1989-01-01
Discusses the use of computer information retrieval (on-line electronic search methods). Examines advantages and disadvantages of on-line searching versus manual searching. Offers questions to help in the decision to purchase and use on-line searching with students. (MS)
Help! What Computer Should I Buy?
ERIC Educational Resources Information Center
Braun, Ludwig
1981-01-01
A set of criteria is suggested that can be useful as a guide in making decisions about purchases of computers. The three best microcomputers for classroom use are viewed as the Apple, the PET, and the TRS-80. (MP)
Dinov, Ivo D.; Petrosyan, Petros; Liu, Zhizhong; Eggert, Paul; Zamanyan, Alen; Torri, Federica; Macciardi, Fabio; Hobel, Sam; Moon, Seok Woo; Sung, Young Hee; Jiang, Zhiguo; Labus, Jennifer; Kurth, Florian; Ashe-McNalley, Cody; Mayer, Emeran; Vespa, Paul M.; Van Horn, John D.; Toga, Arthur W.
2013-01-01
The volume, diversity and velocity of biomedical data are exponentially increasing, providing petabytes of new neuroimaging and genetics data every year. At the same time, tens-of-thousands of computational algorithms are developed and reported in the literature along with thousands of software tools and services. Users demand intuitive, quick and platform-agnostic access to data, software tools, and infrastructure from millions of hardware devices. This explosion of information, scientific techniques, computational models, and technological advances leads to enormous challenges in data analysis, evidence-based biomedical inference and reproducibility of findings. The Pipeline workflow environment provides a crowd-based distributed solution for consistent management of these heterogeneous resources. The Pipeline allows multiple (local) clients and (remote) servers to connect, exchange protocols, control the execution, monitor the states of different tools or hardware, and share complete protocols as portable XML workflows. In this paper, we demonstrate several advanced computational neuroimaging and genetics case-studies, and end-to-end pipeline solutions. These are implemented as graphical workflow protocols in the context of analyzing imaging (sMRI, fMRI, DTI), phenotypic (demographic, clinical), and genetic (SNP) data. PMID:23975276
Framework for architecture-independent run-time reconfigurable applications
NASA Astrophysics Data System (ADS)
Lehn, David I.; Hudson, Rhett D.; Athanas, Peter M.
2000-10-01
Configurable Computing Machines (CCMs) have emerged as a technology with the computational benefits of custom ASICs as well as the flexibility and reconfigurability of general-purpose microprocessors. Significant effort from the research community has focused on techniques to move this reconfigurability from a rapid application development tool to a run-time tool. This requires the ability to change the hardware design while the application is executing and is known as Run-Time Reconfiguration (RTR). Widespread acceptance of run-time reconfigurable custom computing depends upon the existence of high-level automated design tools. Such tools must reduce the designer's effort to port applications between different platforms as the architecture, hardware, and software evolve. A Java implementation of a high-level application framework, called Janus, is presented here. In this environment, developers create Java classes that describe the structural behavior of an application. The framework allows hardware and software modules to be freely mixed and interchanged. A compilation phase of the development process analyzes the structure of the application and adapts it to the target platform. Janus is capable of structuring the run-time behavior of an application to take advantage of the memory and computational resources available.
TSX-PLUS MULTI-TASKING UPGRADE FOR THE NICOLET L-11 POWDER DIFFRACTION SYSTEM.
Fitzpatrick, J.; Queen, David L.
1985-01-01
In August of 1982, a single-user, dual-translator, automated powder diffraction system was purchased by the Denver Research Institute for use on project work in the Chemical and Materials Sciences Division. Within a short period of time, the system had already become saturated with users. Scheduling conflicts arose. In view of these problems, an answer was sought in the form of hardware and software changes which would allow many users access to the system simultaneously. A low-cost, minimum impact solution was eventually found. The elements of the solution are reported.
7 CFR 1421.401 - DMA responsibilities.
Code of Federal Regulations, 2013 CFR
2013-01-01
... peanut MAL, and LDP program training offered by CCC. (4) Provide sufficient personnel, computer hardware, computer communications systems, and software, as determined necessary by CCC, to administer the peanut MAL...
7 CFR 1421.401 - DMA responsibilities.
Code of Federal Regulations, 2014 CFR
2014-01-01
... peanut MAL, and LDP program training offered by CCC. (4) Provide sufficient personnel, computer hardware, computer communications systems, and software, as determined necessary by CCC, to administer the peanut MAL...
7 CFR 1421.401 - DMA responsibilities.
Code of Federal Regulations, 2012 CFR
2012-01-01
... peanut MAL, and LDP program training offered by CCC. (4) Provide sufficient personnel, computer hardware, computer communications systems, and software, as determined necessary by CCC, to administer the peanut MAL...
Operators manual for a computer controlled impedance measurement system
NASA Astrophysics Data System (ADS)
Gordon, J.
1987-02-01
Operating instructions for a computer-controlled impedance measurement system based on Hewlett Packard instrumentation are given. Hardware details, program listings, flowcharts, and a practical application are included.
Facilities | Computational Science | NREL
NREL advances technology innovation by providing scientists and engineers the ability to tackle energy challenges and to take full advantage of advanced computing hardware and software resources.
NASA Tech Briefs, June 1996. Volume 20, No. 6
NASA Technical Reports Server (NTRS)
1996-01-01
Topics: New Computer Hardware; Electronic Components and Circuits; Electronic Systems; Physical Sciences; Materials; Computer Programs; Mechanics; Machinery/Automation; Manufacturing/Fabrication; Mathematics and Information Sciences; Books and Reports.
Personal Computers in Iowa Vocational Agriculture Programs: Competency Assessment and Usage.
ERIC Educational Resources Information Center
Miller, W. Wade; And Others
The competencies needed by Iowa vocational agriculture instructors at the secondary school level to integrate computer technology into the classroom were assessed, as well as the status of computer usage, types of computer use and software utilities and hardware used, and the sources of computer training obtained by instructors. Surveys were…
Networked Microcomputers--The Next Generation in College Computing.
ERIC Educational Resources Information Center
Harris, Albert L.
The evolution of computer hardware for college computing has mirrored the industry's growth. When computers were introduced into the educational environment, they had limited capacity and served one user at a time. Then came large mainframes with many terminals sharing the resource. Next, the use of computers in office automation emerged. As…
ERIC Educational Resources Information Center
Computing Teacher, 1985
1985-01-01
Defines computer literacy and describes a computer literacy course which stresses ethics, hardware, and disk operating systems throughout. Core units on keyboarding, word processing, graphics, database management, problem solving, algorithmic thinking, and programing are outlined, together with additional units on spreadsheets, simulations,…
Cloud Based Educational Systems and Its Challenges and Opportunities and Issues
ERIC Educational Resources Information Center
Paul, Prantosh Kr.; Lata Dangwal, Kiran
2014-01-01
Cloud Computing (CC) is actually a set of hardware, software, networks, storage, and services that interfaces combine to deliver aspects of computing as a service. Cloud Computing (CC) uses central remote servers to maintain data and applications. Practically, Cloud Computing (CC) is an extension of Grid computing with independency and…
NASA Astrophysics Data System (ADS)
Astafiev, A.; Orlov, A.; Privezencev, D.
2018-01-01
The article is devoted to the development of technology and software for the construction of positioning and control systems in industrial plants, based on aggregation to determine the current storage area using computer vision and radio-frequency identification. It describes the development of hardware for an industrial-products positioning system on the territory of a plant on the basis of a radio-frequency grid. It describes the development of hardware for an industrial-products positioning system in the plant on the basis of computer vision methods. It describes the development of the method of aggregation to determine the current storage area using computer vision and radio-frequency identification. Experimental studies in laboratory and production conditions have been conducted and are described in the article.
Novel algorithm implementations in DARC: the Durham AO real-time controller
NASA Astrophysics Data System (ADS)
Basden, Alastair; Bitenc, Urban; Jenkins, David
2016-07-01
The Durham AO Real-time Controller has been used on-sky with the CANARY AO demonstrator instrument since 2010, and is also used to provide control for several AO test-benches, including DRAGON. Over this period, many new real-time algorithms have been developed, implemented and demonstrated, leading to performance improvements for CANARY. Additionally, the computational performance of this real-time system has continued to improve. Here, we provide details about recent updates and changes made to DARC, and the relevance of these updates, including new algorithms, to forthcoming AO systems. We present the computational performance of DARC when used on different hardware platforms, including hardware accelerators, and determine the relevance and potential for ELT scale systems. Recent updates to DARC have included algorithms to handle elongated laser guide star images, including correlation wavefront sensing, with options to automatically update references during AO loop operation. Additionally, sub-aperture masking options have been developed to increase signal to noise ratio when operating with non-symmetrical wavefront sensor images. The development of end-user tools has progressed with new options for configuration and control of the system. New wavefront sensor camera models and DM models have been integrated with the system, increasing the number of possible hardware configurations available, and a fully open-source AO system is now a reality, including drivers necessary for commercial cameras and DMs. The computational performance of DARC makes it suitable for ELT scale systems when implemented on suitable hardware. We present tests made on different hardware platforms, along with the strategies taken to optimise DARC for these systems.
2000-06-01
real-time operating system and design of a human-computer interface (HCI) for a triple modular redundant (TMR) fault-tolerant microprocessor for use in space-based applications. One disadvantage of using COTS hardware components is their susceptibility to the radiation effects present in the space environment, specifically radiation-induced single-event upsets (SEUs). In the event of an SEU, a fault-tolerant system can mitigate the effects of the upset and continue processing from the last known correct system state. The TMR basic hardware
An Application Development Platform for Neuromorphic Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dean, Mark; Chan, Jason; Daffron, Christopher
2016-01-01
Dynamic Adaptive Neural Network Arrays (DANNAs) are neuromorphic computing systems developed as a hardware-based approach to the implementation of neural networks. They feature highly adaptive and programmable structural elements, which model artificial neural networks with spiking behavior. We design them to solve problems using evolutionary optimization. In this paper, we highlight the current hardware and software implementations of DANNA, including their features, functionalities, and performance. We then describe the development of an Application Development Platform (ADP) to support efficient application implementation and testing of DANNA-based solutions. We conclude with future directions.
Global synchronization of parallel processors using clock pulse width modulation
Chen, Dong; Ellavsky, Matthew R.; Franke, Ross L.; Gara, Alan; Gooding, Thomas M.; Haring, Rudolf A.; Jeanson, Mark J.; Kopcsay, Gerard V.; Liebsch, Thomas A.; Littrell, Daniel; Ohmacht, Martin; Reed, Don D.; Schenck, Brandon E.; Swetz, Richard A.
2013-04-02
A circuit generates a global clock signal with a pulse width modification to synchronize processors in a parallel computing system. The circuit may include a hardware module and a clock splitter. The hardware module may generate a clock signal and performs a pulse width modification on the clock signal. The pulse width modification changes a pulse width within a clock period in the clock signal. The clock splitter may distribute the pulse width modified clock signal to a plurality of processors in the parallel computing system.
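A toy simulation (hypothetical encoding and timing values, not the patented circuit) of the core idea: a global sync event is signaled by widening one pulse in an otherwise regular clock train, and receivers detect the over-wide pulse:

    # Toy model: a clock generator stretches the high phase of one pulse to
    # mark a global-sync period; receivers detect the over-wide pulse and can
    # align their local counters. All timing numbers are illustrative.

    def clock_train(n_periods, sync_at, period=10, width=5, sync_width=7):
        """Yield (time, level) edges; the pulse at index sync_at is widened."""
        t = 0
        for i in range(n_periods):
            w = sync_width if i == sync_at else width
            yield (t, 1)        # rising edge
            yield (t + w, 0)    # falling edge; pulse width carries the signal
            t += period

    def detect_sync(edges, nominal_width=5):
        """Return indices of pulses whose high time exceeds the nominal width."""
        edges = list(edges)
        return [i // 2 for i in range(0, len(edges) - 1, 2)
                if edges[i + 1][0] - edges[i][0] > nominal_width]

    print(detect_sync(clock_train(8, sync_at=3)))  # [3]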
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barrett, Brian W.; Hemmert, K. Scott; Underwood, Keith Douglas
Achieving the next three orders of magnitude performance increase to move from petascale to exascale computing will require significant advancements in several fundamental areas. Recent studies have outlined many of the challenges in hardware and software that will need to be met. In this paper, we examine these challenges with respect to high-performance networking. We describe the repercussions of anticipated changes to computing and networking hardware and discuss the impact that alternative parallel programming models will have on the network software stack. We also present some ideas on possible approaches that address some of these challenges.
Computer generated animation and movie production at LARC: A case study
NASA Technical Reports Server (NTRS)
Gates, R. L.; Matthews, C. G.; Vonofenheim, W. H.; Randall, D. P.; Jones, K. H.
1984-01-01
The process of producing computer generated 16mm movies using the MOVIE.BYU software package developed by Brigham Young University and the currently available hardware technology at the Langley Research Center is described. A general overview relates the procedures to a specific application. Details are provided which describe the data used, preparation of a storyboard, key frame generation, the actual animation, title generation, filming, and processing/developing the final product. Problems encountered in each of these areas are identified. Both hardware and software problems are discussed along with proposed solutions and recommendations.
Parallel Computing for Probabilistic Response Analysis of High Temperature Composites
NASA Technical Reports Server (NTRS)
Sues, R. H.; Lua, Y. J.; Smith, M. D.
1994-01-01
The objective of this Phase I research was to establish the required software and hardware strategies to achieve large scale parallelism in solving PCM problems. To meet this objective, several investigations were conducted. First, we identified the multiple levels of parallelism in PCM and the computational strategies to exploit these parallelisms. Next, several software and hardware efficiency investigations were conducted. These involved the use of three different parallel programming paradigms and solution of two example problems on both a shared-memory multiprocessor and a distributed-memory network of workstations.
NASA Technical Reports Server (NTRS)
Kleckner, R. J.; Rosenlieb, J. W.; Dyba, G.
1980-01-01
The results of a series of full scale hardware tests comparing predictions of the SPHERBEAN computer program with measured data are presented. The SPHERBEAN program predicts the thermomechanical performance characteristics of high speed lubricated double row spherical roller bearings. The degree of correlation between performance predicted by SPHERBEAN and measured data is demonstrated. Experimental and calculated performance data is compared over a range in speed up to 19,400 rpm (0.8 MDN) under pure radial, pure axial, and combined loads.
Space shuttle solid rocket booster cost-per-flight analysis technique
NASA Technical Reports Server (NTRS)
Forney, J. A.
1979-01-01
A cost-per-flight computer model is described which considers: traffic model, component attrition, hardware useful life, turnaround time for refurbishment, manufacturing rates, learning curves on the time to perform tasks, cost-improvement curves on quantity hardware buys, inflation, spares philosophy, long-lead hardware funding requirements, and other logistics and scheduling constraints. Additional uses of the model include assessing the cost-per-flight impact of changing major space shuttle program parameters and searching for opportunities to make cost-effective management decisions.
NASA Astrophysics Data System (ADS)
Bazdrov, I. I.; Bortkevich, V. S.; Khokhlov, V. N.
2004-10-01
This paper describes a software-hardware complex for inputting into a personal computer telemetric information obtained by means of telemetry stations TRAL KR28, RTS-8, and TRAL K2N. Structural and functional diagrams are given of the input device and the hardware complex. Results that characterize the features of the input process and selected data of optical measurements of atmospheric radiation are given.
STS-118 Astronaut Dave Williams Trains Using Virtual Reality Hardware
NASA Technical Reports Server (NTRS)
2007-01-01
STS-118 astronaut and mission specialist Dafydd R. 'Dave' Williams, representing the Canadian Space Agency, uses Virtual Reality Hardware in the Space Vehicle Mock Up Facility at the Johnson Space Center to rehearse some of his duties for the upcoming mission. This type of virtual reality training allows the astronauts to wear special gloves and other gear while looking at a computer display simulating actual movements around the various locations on the station hardware with which they will be working.
Development of a System to Validate Group 3 Facsimile Equipment. Phase I.
1981-07-01
such as modem, equalizer, line connection, etc.) in hardware is unavoidable. 3. Unless computer and test equipment are co-resident, hardware will be...network simulator. Most of this hardware/firmware has been developed for data transmission in general (V.27 ter/V.29 modems) or specifically for Group 3...system with the facsimile unit under test. 2. V.27 ter/V.29 modems - to handle facsimile data at the various data rates. 3. Modem control and switching
NASA Technical Reports Server (NTRS)
Yanosy, J. L.; Rowell, L. F.
1985-01-01
Efforts to make increasing use of suitable computer programs in the design of hardware have the potential to reduce expenditures. In this context, NASA has evaluated the benefits provided by software tools through an application to the Environmental Control and Life Support (ECLS) system. The present paper is concerned with the benefits obtained by an employment of simulation tools in the case of the Air Revitalization System (ARS) of a Space Station life support system. Attention is given to the ARS functions and components, a computer program overview, a SAND (solid amine water desorbed) bed model description, a model validation, and details regarding the simulation benefits.
Diamond turning machine controller implementation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garrard, K.P.; Taylor, L.W.; Knight, B.F.
The standard controller for a Pneumo ASG 2500 Diamond Turning Machine, an Allen Bradley 8200, has been replaced with a custom high-performance design. This controller consists of four major components. Axis position feedback information is provided by a Zygo Axiom 2/20 laser interferometer with 0.1 micro-inch resolution. Hardware interface logic couples the computer's digital and analog I/O channels to the diamond turning machine's analog motor controllers, the laser interferometer, and other machine status and control information. It also provides front panel switches for operator override of the computer controller and implements the emergency stop sequence. The remaining two components, the control computer hardware and software, are discussed in detail below.
DOE Office of Scientific and Technical Information (OSTI.GOV)
James, Conrad D.; Schiess, Adrian B.; Howell, Jamie
2013-10-01
The human brain (volume = 1200 cm^3) consumes 20 W and is capable of performing >10^16 operations/s. Current supercomputer technology has reached 10^15 operations/s, yet it requires 1500 m^3 and 3 MW, giving the brain a 10^12 advantage in operations/s/W/cm^3. Thus, to reach exascale computation, two achievements are required: 1) improved understanding of computation in biological tissue, and 2) a paradigm shift towards neuromorphic computing where hardware circuits mimic properties of neural tissue. To address 1), we will interrogate corticostriatal networks in mouse brain tissue slices, specifically with regard to their frequency filtering capabilities as a function of input stimulus. To address 2), we will instantiate biological computing characteristics such as multi-bit storage into hardware devices with future computational and memory applications. Resistive memory devices will be modeled, designed, and fabricated in the MESA facility in consultation with our internal and external collaborators.
Logistics in the Computer Lab.
ERIC Educational Resources Information Center
Cowles, Jim
1989-01-01
Discusses ways to provide good computer laboratory facilities for elementary and secondary schools. Topics discussed include establishing the computer lab and selecting hardware; types of software; physical layout of the room; printers; networking possibilities; considerations relating to the physical environment; and scheduling methods. (LRW)
Is the Computer Revolution About to Happen in the Classroom?
ERIC Educational Resources Information Center
Smith, S.
1984-01-01
Both practical and irrational factors will substantially delay the widespread use of computer-assisted instruction (CAI) in Australia's schools, including the following: lack of suitable software, insufficient hardware, ignorance of what CAI offers, lack of expert advice, computer anxiety, reaction against computer zealots, and resistance to…
An Interactive Computer-Based Training Program for Beginner Personal Computer Maintenance.
ERIC Educational Resources Information Center
Summers, Valerie Brooke
A computer-assisted instructional program, which was developed for teaching beginning computer maintenance to employees of Unisys, covered external hardware maintenance, proper diskette care, making software backups, and electro-static discharge prevention. The procedure used in developing the program was based upon the Dick and Carey (1985) model…
What Chemists (or Chemistry Students) Need to Know about Computing.
ERIC Educational Resources Information Center
Swift, Mary L.; Zielinski, Theresa Julia
1995-01-01
Presents key points of an on-line conference discussion and integrates them with information from the literature. Key points included: computer as a tool for learning, study, research, and communication; hardware, software, computing concepts, and other teaching concerns; and the appropriate place for chemistry computer-usage instruction. (45…
Is There a Microcomputer in Your Future? ComputerTown Thinks The Answer Is "Yes."
ERIC Educational Resources Information Center
Harvie, Barbara; Anton, Julie
1983-01-01
The services of ComputerTown, a nonprofit computer literacy project of the People's Computer Company in Menlo Park, California with 150 worldwide affiliates, are enumerated including getting started, funding sources, selecting hardware, software selection, support materials, administrative details, special offerings (classes, events), and common…
Analysis of Software Systems for Specialized Computers,
computer) with given computer hardware and software. The object of study is the software system of a computer, designed for solving a fixed complex of...purpose of the analysis is to find parameters that characterize the system and its elements during operation, i.e., when servicing the given requirement flow. (Author)
ERIC Educational Resources Information Center
Peled, Abraham
1987-01-01
Discusses some of the future trends in the use of the computer in our society, suggesting that computing is now entering a new phase in which it will grow exponentially more powerful, flexible, and sophisticated in the next decade. Describes some of the latest breakthroughs in computer hardware and software technology. (TW)
Feltham, R K
1995-01-01
Open tendering for medical informatics systems in the UK has traditionally been lengthy and, therefore, expensive on resources for vendor and purchaser alike. Events in the United Kingdom (UK) and European Community (EC) have led to new Government guidance being published on procuring information systems for the public sector: Procurement of Information Systems Effectively (POISE). This innovative procurement process, launched in 1993, has the support of the Computing Services Association (CSA) and the Federation of the Electronics Industry (FEI). This paper gives an overview of these new UK guidelines on healthcare information system purchasing in the context of a recent procurement project with an NHS Trust Hospital. The aim of the project was to replace three aging, separate, and different laboratory computer systems with a new, integrated turnkey system offering all department modules, an Open modern computer environment, and on-line electronic links to key departmental systems, both within and external to the Trust by the end of 1994. The new system had to complement the Trust's strategy for providing a modern clinical laboratory service to the local population and meet a tight budget.
NASA Technical Reports Server (NTRS)
Kavi, K. M.
1984-01-01
There have been a number of simulation packages developed for the purpose of designing, testing, and validating computer systems, digital systems, and software systems. Complex analytical tools based on Markov and semi-Markov processes have been designed to estimate the reliability and performance of simulated systems. Petri nets have received wide acceptance for modeling complex and highly parallel computers. In this research, data flow models for computer systems are investigated. Data flow models can be used to simulate both software and hardware in a uniform manner. Data flow simulation techniques provide the computer systems designer with a CAD environment which enables highly parallel complex systems to be defined, evaluated at all levels, and finally implemented in either hardware or software. Inherent in the data flow concept is the hierarchical handling of complex systems. In this paper we describe how data flow can be used to model computer systems.
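To make the dataflow idea concrete, here is a minimal Python sketch (invented node and arc names, not the paper's notation) of the standard firing rule: a node executes only when tokens are present on all of its inputs, which is what lets hardware and software components be simulated in the same way:

    # Minimal dataflow simulator: a node fires when every input arc holds a
    # token, consuming them and passing a result token downstream.
    from collections import deque

    class Node:
        def __init__(self, name, inputs, fn):
            self.name, self.inputs, self.fn = name, inputs, fn
            self.tokens = {}                  # input arc name -> value

        def ready(self):
            return set(self.tokens) == set(self.inputs)

        def fire(self):
            return self.fn(*[self.tokens.pop(i) for i in self.inputs])

    # Two-node graph computing (a + b) * c.
    add = Node("add", ["a", "b"], lambda x, y: x + y)
    mul = Node("mul", ["sum", "c"], lambda x, y: x * y)
    add.tokens = {"a": 2, "b": 3}
    mul.tokens = {"c": 4}

    worklist = deque([add, mul])
    while worklist:
        node = worklist.popleft()
        if node.ready():
            result = node.fire()
            if node is add:
                mul.tokens["sum"] = result    # token flows along the arc
                worklist.append(mul)
            else:
                print(result)                 # 20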
Tutorial: Parallel Computing of Simulation Models for Risk Analysis.
Reilly, Allison C; Staid, Andrea; Gao, Michael; Guikema, Seth D
2016-10-01
Simulation models are widely used in risk analysis to study the effects of uncertainties on outcomes of interest in complex problems. Often, these models are computationally complex and time consuming to run. This latter point may be at odds with time-sensitive evaluations or may limit the number of parameters that are considered. In this article, we give an introductory tutorial focused on parallelizing simulation code to better leverage modern computing hardware, enabling risk analysts to better utilize simulation-based methods for quantifying uncertainty in practice. This article is aimed primarily at risk analysts who use simulation methods but do not yet utilize parallelization to decrease the computational burden of these models. The discussion is focused on conceptual aspects of embarrassingly parallel computer code and software considerations. Two complementary examples are shown using the languages MATLAB and R. A brief discussion of hardware considerations is located in the Appendix. © 2016 Society for Risk Analysis.
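In the spirit of the tutorial's MATLAB and R examples, here is a hedged Python sketch of the same embarrassingly parallel pattern (the toy model and names are assumptions): independent replications are farmed out to worker processes with no communication between them:

    # Embarrassingly parallel Monte Carlo: replications are independent, so
    # they can be mapped across worker processes without communication.
    import random
    from multiprocessing import Pool

    def one_replication(seed):
        """One independent simulation run with its own RNG stream."""
        rng = random.Random(seed)
        return sum(rng.uniform(-1.0, 1.0) for _ in range(10_000))

    if __name__ == "__main__":
        n_reps = 1_000
        with Pool() as pool:              # defaults to one worker per core
            outcomes = pool.map(one_replication, range(n_reps))
        print(sum(outcomes) / n_reps)     # estimate of the mean outcome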
160-fold acceleration of the Smith-Waterman algorithm using a field programmable gate array (FPGA)
Li, Isaac TS; Shum, Warren; Truong, Kevin
2007-01-01
Background To infer homology and subsequently gene function, the Smith-Waterman (SW) algorithm is used to find the optimal local alignment between two sequences. When searching sequence databases that may contain hundreds of millions of sequences, this algorithm becomes computationally expensive. Results In this paper, we focused on accelerating the Smith-Waterman algorithm by using FPGA-based hardware that implemented a module for computing the score of a single cell of the SW matrix. Then, using a grid of this module, the entire SW matrix was computed at the speed of field propagation through the FPGA circuit. These modifications dramatically accelerated the algorithm, reducing computation time by up to 160-fold compared to a pure software implementation running on the same FPGA with an Altera Nios II soft processor. Conclusion This design of FPGA-accelerated hardware offers a promising new direction for seeking computational improvements in genomic database searching. PMID:17555593
160-fold acceleration of the Smith-Waterman algorithm using a field programmable gate array (FPGA).
Li, Isaac T S; Shum, Warren; Truong, Kevin
2007-06-07
To infer homology and subsequently gene function, the Smith-Waterman (SW) algorithm is used to find the optimal local alignment between two sequences. When searching sequence databases that may contain hundreds of millions of sequences, this algorithm becomes computationally expensive. In this paper, we focused on accelerating the Smith-Waterman algorithm by using FPGA-based hardware that implemented a module for computing the score of a single cell of the SW matrix. Then, using a grid of this module, the entire SW matrix was computed at the speed of field propagation through the FPGA circuit. These modifications dramatically accelerated the algorithm, reducing computation time by up to 160-fold compared to a pure software implementation running on the same FPGA with an Altera Nios II soft processor. This design of FPGA-accelerated hardware offers a promising new direction for seeking computational improvements in genomic database searching.
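The unit of work that the FPGA module above implements is the score of a single SW cell; a software rendering of that recurrence (linear gap penalty; the scoring constants are illustrative, not those of the paper) looks like this:

    # Smith-Waterman recurrence for one matrix cell, the unit the FPGA module
    # implements; a grid of such units fills the matrix in parallel.
    MATCH, MISMATCH, GAP = 2, -1, -2      # illustrative scoring values

    def sw_cell(diag, up, left, a, b):
        """Score H[i][j] from its three neighbors and residues a, b."""
        sub = MATCH if a == b else MISMATCH
        return max(0,              # local alignment: scores never go negative
                   diag + sub,     # align a with b
                   up + GAP,       # gap in the second sequence
                   left + GAP)     # gap in the first sequence

    def sw_score(s1, s2):
        """Sequential software fill of the full matrix, for comparison."""
        H = [[0] * (len(s2) + 1) for _ in range(len(s1) + 1)]
        best = 0
        for i in range(1, len(s1) + 1):
            for j in range(1, len(s2) + 1):
                H[i][j] = sw_cell(H[i-1][j-1], H[i-1][j], H[i][j-1],
                                  s1[i-1], s2[j-1])
                best = max(best, H[i][j])
        return best

    print(sw_score("ACACACTA", "AGCACACA"))  # optimal local alignment score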
Advanced Architectures for Astrophysical Supercomputing
NASA Astrophysics Data System (ADS)
Barsdell, B. R.; Barnes, D. G.; Fluke, C. J.
2010-12-01
Astronomers have come to rely on the increasing performance of computers to reduce, analyze, simulate and visualize their data. In this environment, faster computation can mean more science outcomes or the opening up of new parameter spaces for investigation. If we are to avoid major issues when implementing codes on advanced architectures, it is important that we have a solid understanding of our algorithms. A recent addition to the high-performance computing scene that highlights this point is the graphics processing unit (GPU). The hardware originally designed for speeding up graphics rendering in video games is now achieving speed-ups of O(100×) in general-purpose computation - performance that cannot be ignored. We are using a generalized approach, based on the analysis of astronomy algorithms, to identify the optimal problem types and techniques for taking advantage of both current GPU hardware and future developments in computing architectures.
Experimental comparison of two quantum computing architectures.
Linke, Norbert M; Maslov, Dmitri; Roetteler, Martin; Debnath, Shantanu; Figgatt, Caroline; Landsman, Kevin A; Wright, Kenneth; Monroe, Christopher
2017-03-28
We run a selection of algorithms on two state-of-the-art 5-qubit quantum computers that are based on different technology platforms. One is a publicly accessible superconducting transmon device (www.ibm.com/ibm-q) with limited connectivity, and the other is a fully connected trapped-ion system. Even though the two systems have different native quantum interactions, both can be programmed in a way that is blind to the underlying hardware, thus allowing a comparison of identical quantum algorithms between different physical systems. We show that quantum algorithms and circuits that use more connectivity clearly benefit from a better-connected system of qubits. Although the quantum systems here are not yet large enough to eclipse classical computers, this experiment exposes critical factors of scaling quantum computers, such as qubit connectivity and gate expressivity. In addition, the results suggest that codesigning particular quantum applications with the hardware itself will be paramount in successfully using quantum computers in the future.
Multicore Challenges and Benefits for High Performance Scientific Computing
Nielsen, Ida M. B.; Janssen, Curtis L.
2008-01-01
Until recently, performance gains in processors were achieved largely by improvements in clock speeds and instruction-level parallelism. Thus, applications could obtain performance increases with relatively minor changes by upgrading to the latest generation of computing hardware. Currently, however, processor performance improvements are realized by using multicore technology and hardware support for multiple threads within each core, and taking full advantage of this technology to improve the performance of applications requires exposure of extreme levels of software parallelism. We will here discuss the architecture of parallel computers constructed from many multicore chips as well as techniques for managing the complexity of programming such computers, including the hybrid message-passing/multi-threading programming model. We will illustrate these ideas with a hybrid distributed-memory matrix multiply and a quantum chemistry algorithm for energy computation using Møller–Plesset perturbation theory.
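A hedged sketch of the distribution idea behind the hybrid matrix multiply mentioned above (a process pool standing in for message-passing ranks; names and sizes are assumptions, not the authors' code):

    # Block-distributed matrix multiply: row blocks of A are sent to worker
    # processes (a stand-in for message-passing ranks); within each worker a
    # multithreaded BLAS would supply the shared-memory level of parallelism.
    from multiprocessing import Pool

    def matmul_block(args):
        """Multiply one row block of A by the full matrix B."""
        a_block, b = args
        cols = range(len(b[0]))
        return [[sum(row[k] * b[k][j] for k in range(len(b))) for j in cols]
                for row in a_block]

    if __name__ == "__main__":
        n = 64
        a = [[float(i + j) for j in range(n)] for i in range(n)]
        b = [[float(i * j % 7) for j in range(n)] for i in range(n)]
        n_workers = 4
        step = n // n_workers
        blocks = [a[r:r + step] for r in range(0, n, step)]
        with Pool(n_workers) as pool:
            parts = pool.map(matmul_block, [(blk, b) for blk in blocks])
        c = [row for part in parts for row in part]   # reassembled C = A @ B
        print(len(c), len(c[0]))                      # 64 64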
A CLIPS based personal computer hardware diagnostic system
NASA Technical Reports Server (NTRS)
Whitson, George M.
1991-01-01
Often the person designated to repair personal computers has little or no knowledge of how to repair a computer. Described here is a simple expert system to aid these inexperienced repair people. The first component of the system leads the repair person through a number of simple system checks such as making sure that all cables are tight and that the dip switches are set correctly. The second component of the system assists the repair person in evaluating error codes generated by the computer. The final component of the system applies a large knowledge base to attempt to identify the component of the personal computer that is malfunctioning. We have implemented and tested our design with a full system to diagnose problems for an IBM compatible system based on the 8088 chip. In our tests, the inexperienced repair people found the system very useful in diagnosing hardware problems.
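A toy rule-based sketch of the first two components described above (the checklist entries and error codes are invented for illustration, not the system's actual knowledge base):

    # Toy diagnostic assistant: a checklist of simple system checks, then an
    # error-code lookup table. Rules and codes are invented for illustration.
    SYSTEM_CHECKS = [
        ("Are all cables seated firmly?", "Reseat loose cables and retry."),
        ("Are the DIP switches set per the manual?", "Correct the settings."),
        ("Does the power-supply fan spin at power-on?", "Suspect the PSU."),
    ]

    ERROR_CODES = {  # hypothetical POST-style codes
        "201": "Memory error: reseat or replace RAM modules.",
        "301": "Keyboard error: check the keyboard connector.",
        "1701": "Fixed disk error: check drive cabling and controller.",
    }

    def diagnose(check_answers, error_code=None):
        """Walk the checklist; any failed check yields advice immediately."""
        for (question, advice), ok in zip(SYSTEM_CHECKS, check_answers):
            if not ok:
                return f"{question} -> {advice}"
        if error_code in ERROR_CODES:
            return ERROR_CODES[error_code]
        return "Escalate: consult the component-level knowledge base."

    print(diagnose([True, True, True], "201"))  # memory advice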
The NASA computer aided design and test system
NASA Technical Reports Server (NTRS)
Gould, J. M.; Juergensen, K.
1973-01-01
A family of computer programs facilitating the design, layout, evaluation, and testing of digital electronic circuitry is described. CADAT (computer aided design and test system) is intended for use by NASA and its contractors and is aimed predominantly at providing cost effective microelectronic subsystems based on custom designed metal oxide semiconductor (MOS) large scale integrated circuits (LSIC's). CADAT software can be easily adopted by installations with a wide variety of computer hardware configurations. Its structure permits ease of update to more powerful component programs and to newly emerging LSIC technologies. The components of the CADAT system are described stressing the interaction of programs rather than detail of coding or algorithms. The CADAT system provides computer aids to derive and document the design intent, includes powerful automatic layout software, permits detailed geometry checks and performance simulation based on mask data, and furnishes test pattern sequences for hardware testing.
Bhanot, Gyan [Princeton, NJ; Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton On Hudson, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Philip [Cortlandt Manor, NY; Steinmacher-Burow, Burkhard D [Mount Kisco, NY; Takken, Todd E [Mount Kisco, NY; Vranas, Pavlos M [Bedford Hills, NY
2009-09-08
Class network routing is implemented in a network such as a computer network comprising a plurality of parallel compute processors at nodes thereof. Class network routing allows a compute processor to broadcast a message to a range (one or more) of other compute processors in the computer network, such as processors in a column or a row. Normally this type of operation requires a separate message to be sent to each processor. With class network routing pursuant to the invention, a single message is sufficient, which generally reduces the total number of messages in the network as well as the latency to do a broadcast. Class network routing is also applied to dense matrix inversion algorithms on distributed memory parallel supercomputers with hardware class function (multicast) capability. This is achieved by exploiting the fact that the communication patterns of dense matrix inversion can be served by hardware class functions, which results in faster execution times.
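A back-of-the-envelope counting sketch of why class routing reduces traffic (pure message counting, illustrative only): broadcasting to a row of nodes costs one injected message instead of one per destination:

    # Message-count comparison: point-to-point broadcast to a row of nodes
    # versus one class-routed message replicated in hardware along the route.
    def p2p_messages(row_length):
        """Sender injects one separate message per destination."""
        return row_length - 1

    def class_routed_messages(row_length):
        """Sender injects a single message; routers copy it at each hop."""
        return 1

    for row in (8, 64, 1024):
        print(row, p2p_messages(row), class_routed_messages(row))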
Current state and future direction of computer systems at NASA Langley Research Center
NASA Technical Reports Server (NTRS)
Rogers, James L. (Editor); Tucker, Jerry H. (Editor)
1992-01-01
Computer systems have advanced at a rate unmatched by any other area of technology. As performance has dramatically increased, there has been an equally dramatic reduction in cost. This constant cost-performance improvement has precipitated the pervasiveness of computer systems into virtually all areas of technology. This improvement is due primarily to advances in microelectronics. Most people are now convinced that the new generation of supercomputers will be built using a large number (possibly thousands) of high-performance microprocessors. Although the spectacular improvements in computer systems have come about because of these hardware advances, there has also been a steady improvement in software techniques. In an effort to understand how these hardware and software advances will affect research at NASA LaRC, the Computer Systems Technical Committee drafted this white paper to examine the current state and possible future directions of computer systems at the Center. This paper discusses selected important areas of computer systems including real-time systems, embedded systems, high performance computing, distributed computing networks, data acquisition systems, artificial intelligence, and visualization.
Guidance and Control System for an Autonomous Vehicle
1990-06-01
Implementing an appropriate computer architecture in support of these goals is also discussed and detailed, along with the choice of associated computer hardware and real-time operating system software. (rh)
ERIC Educational Resources Information Center
Toong, Hoo-min D.; Gupta, Amar
1982-01-01
Describes the hardware, software, applications, and current proliferation of personal computers (microcomputers). Includes discussions of microprocessors, memory, output (including printers), application programs, the microcomputer industry, and major microcomputer manufacturers (Apple, Radio Shack, Commodore, and IBM). (JN)
NASA Tech Briefs, August 1994. Volume 18, No. 8
NASA Technical Reports Server (NTRS)
1994-01-01
Topics covered include: Computer Hardware; Electronic Components and Circuits; Electronic Systems; Physical Sciences; Materials; Computer Programs; Mechanics; Machinery; Fabrication Technology; Mathematics and Information Sciences; Life Sciences; Books and Reports.
NASA Tech Briefs, June 1997. Volume 21, No. 6
NASA Technical Reports Server (NTRS)
1997-01-01
Topics include: Computer Hardware and Peripherals; Electronic Components and Circuits; Electronic Systems; Physical Sciences; Materials; Computer Programs; Mechanics; Machinery/Automation; Manufacturing/Fabrication; Mathematics and Information Sciences; Books and Reports.
Hardware for Accelerating N-Modular Redundant Systems for High-Reliability Computing
NASA Technical Reports Server (NTRS)
Dobbs, Carl, Sr.
2012-01-01
A hardware unit has been designed that reduces the cost, in terms of performance and power consumption, of implementing N-modular redundancy (NMR) in a multiprocessor device. The innovation monitors transactions to memory and calculates a form of sumcheck on the fly, thereby relieving the processors of calculating the sumcheck in software.
2008-04-01
5 Fluxgate magnetometer... magnetometer into digital format, and transmitted as a single serial data string to log the Cs and fluxgate magnetometer data. After procurement... Hardware: The system hardware comprises an EMI sensor, Cs vapor magnetometer, fluxgate magnetometer, hand-held data acquisition computer, integrated
NASA Technical Reports Server (NTRS)
1980-01-01
Detailed software and hardware documentation for the Cardiopulmonary Data Acquisition System is presented. General wiring and timing diagrams are given including those for the LSI-11 computer control panel and interface cables. Flowcharts and complete listings of system programs are provided along with the format of the floppy disk file.
Interface Circuits for Self-Checking Microprocessors
NASA Technical Reports Server (NTRS)
Rennels, D. A.; Chandramouli, R.
1986-01-01
Fault-tolerant microcomputer concept is based on enhancing a "simple" computer with redundancy and self-checking logic circuits to detect hardware faults. Interface and checking logic and redundant processors confer on a 16-bit microcomputer the ability to check itself for hardware faults. The checking circuitry also checks itself. The concept of self-checking complementary pairs (SCCPs) is employed throughout the ICL unit.
Delivery of Hardware for Syracuse University Faculty Loaner Program.
ERIC Educational Resources Information Center
Jares, Terry
This paper describes the Faculty Assistance and Computing Education Services (FACES) loaner program at Syracuse University and the method used by FACES staff to deliver and keep track of hardware, software, and documentation. The roles of the various people involved in the program are briefly discussed, i.e., the administrator, who handles the…
Generalized Maintenance Trainer Simulator: Development of Hardware and Software. Final Report.
ERIC Educational Resources Information Center
Towne, Douglas M.; Munro, Allen
A general purpose maintenance trainer, which has the potential to simulate a wide variety of electronic equipments without hardware changes or new computer programs, has been developed and field tested by the Navy. Based on a previous laboratory model, the Generalized Maintenance Trainer Simulator (GMTS) is a relatively low cost trainer that…
ERIC Educational Resources Information Center
Balajthy, Ernest
This paper discusses minicomputer-based ILSs (integrated learning systems), i.e., computer-based systems of hardware and software. An example of a minicomputer-based system in a school district (a composite of several actual districts) considers hardware, staffing, scheduling, reactions, problems, and training for a subskill-oriented reading…
Competency Reference for Computer Assisted Drafting.
ERIC Educational Resources Information Center
Oregon State Dept. of Education, Salem. Div. of Vocational Technical Education.
This guide, developed in Oregon, lists competencies essential for students in computer-assisted drafting (CAD). Competencies are organized in eight categories: computer hardware, file usage and manipulation, basic drafting techniques, mechanical drafting, specialty disciplines, three dimensional drawing/design, plotting/printing, and advanced CAD.…
Data intensive computing at Sandia.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilson, Andrew T.
2010-09-01
Data-intensive computing is parallel computing in which algorithms and software are designed around efficient access and traversal of a data set, and in which hardware requirements are dictated by data size as much as by desired run times; typically, compact results are distilled from massive data.
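As a small sketch of that design principle (our own example, not Sandia's code; distill is an illustrative name), the computation is organized as a single streaming traversal that reduces a massive input to a compact summary in one pass:

    # Sketch only: one sequential traversal, compact result out.
    def distill(stream):
        n = total = 0
        peak = float("-inf")
        for x in stream:          # each element is visited exactly once
            n += 1
            total += x
            peak = max(peak, x)
        return {"count": n, "mean": total / n, "max": peak}

    print(distill(iter(range(1_000_000))))

Because the traversal is one-pass and sequential, the working set is a few scalars regardless of input size, which is why data volume rather than arithmetic throughput drives the hardware requirements.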
Computer-Based Training and Education: An Australian Perspective.
ERIC Educational Resources Information Center
Sims, Roderick C. H.
1993-01-01
Provides an overview of computer-based training and education in Australia. Highlights include elementary and secondary schools; computer hardware; learning tools, including educational games and CD-ROMs; tertiary education, including Institutes of Technical and Further Education (TAFE) and universities; the Australian workforce, including…
NAVAIR Information Technology Case Study
2010-09-01
straightforward procedure in which an authorized government credit card buyer can make the purchase. Since this price category covers a wide range of items...that employs approximately 35,000 people, with products such as iPod (portable music player), Mac computer, iTunes (music program), and Mac OS...for approval. After approval, the software can be purchased with a company credit card. To complete the transaction, a reimbursement is submitted
Code of Federal Regulations, 2013 CFR
2013-04-01
... bona fide resident of Possession I, a section 931 possession (as defined in § 1.931-1(c)(1)). X has an... specialized computer software. A purchaser of software will frequently pay X an additional amount to install the software on the purchaser's operating system and to ensure that the software is functioning...
Code of Federal Regulations, 2012 CFR
2012-04-01
... bona fide resident of Possession I, a section 931 possession (as defined in § 1.931-1(c)(1)). X has an... specialized computer software. A purchaser of software will frequently pay X an additional amount to install the software on the purchaser's operating system and to ensure that the software is functioning...
Code of Federal Regulations, 2011 CFR
2011-04-01
... bona fide resident of Possession I, a section 931 possession (as defined in § 1.931-1(c)(1)). X has an... specialized computer software. A purchaser of software will frequently pay X an additional amount to install the software on the purchaser's operating system and to ensure that the software is functioning...
Code of Federal Regulations, 2014 CFR
2014-04-01
... bona fide resident of Possession I, a section 931 possession (as defined in § 1.931-1(c)(1)). X has an... specialized computer software. A purchaser of software will frequently pay X an additional amount to install the software on the purchaser's operating system and to ensure that the software is functioning...
NASA Technical Reports Server (NTRS)
Kemeny, Sabrina E.
1994-01-01
Electronic and optoelectronic hardware implementations of highly parallel computing architectures address several ill-defined and/or computation-intensive problems not easily solved by conventional computing techniques. The concurrent processing architectures developed are derived from a variety of advanced computing paradigms including neural network models, fuzzy logic, and cellular automata. Hardware implementation technologies range from state-of-the-art digital/analog custom VLSI to advanced optoelectronic devices such as computer-generated holograms and e-beam-fabricated Dammann gratings. JPL's concurrent processing devices group has developed a broad technology base in hardware-implementable parallel algorithms, low-power and high-speed VLSI designs, and building-block VLSI chips, leading to application-specific high-performance embeddable processors. Application areas include high-throughput map-data classification using feedforward neural networks, a terrain-based tactical movement planner using cellular automata, resource optimization (weapon-target assignment) using a multidimensional feedback network with lateral inhibition, and classification of rocks using an inner-product scheme on thematic mapper data. In addition to addressing specific functional needs of DOD and NASA, the JPL-developed concurrent processing device technology is also being customized for a variety of commercial applications (in collaboration with industrial partners) and is being transferred to U.S. industries. This viewgraph presentation focuses on two application-specific processors which solve the computation-intensive tasks of resource allocation (weapon-target assignment) and terrain-based tactical movement planning using two extremely different topologies. Resource allocation is implemented as an asynchronous analog competitive assignment architecture inspired by the Hopfield network. Hardware realization leads to a two-to-four-order-of-magnitude speed-up over conventional techniques and enables multiple assignments (many to many) not achievable with standard statistical approaches. Tactical movement planning (finding the best path from A to B) is accomplished with a digital two-dimensional concurrent processor array. By exploiting the natural parallel decomposition of the problem in silicon, a four-order-of-magnitude speed-up over optimized software approaches has been demonstrated.
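The grid-parallel planning step can be sketched in software (a sequential stand-in of our own for the 2-D concurrent processor array, not JPL's chip; wavefront and the grid encoding are illustrative): a breadth-first wavefront expands outward from the goal, each cell taking one more than the minimum neighbor distance, and the best path from A to B then simply follows the distances downhill.

    # Sketch only: wavefront (BFS) distance transform on an obstacle grid.
    from collections import deque

    def wavefront(grid, goal):
        # grid: 0 = passable, 1 = obstacle; returns per-cell distance to goal.
        rows, cols = len(grid), len(grid[0])
        dist = [[None] * cols for _ in range(rows)]
        dist[goal[0]][goal[1]] = 0
        queue = deque([goal])
        while queue:
            r, c = queue.popleft()
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if (0 <= nr < rows and 0 <= nc < cols
                        and grid[nr][nc] == 0 and dist[nr][nc] is None):
                    dist[nr][nc] = dist[r][c] + 1   # relax from the wavefront
                    queue.append((nr, nc))
        return dist

    grid = [[0, 0, 0],
            [1, 1, 0],
            [0, 0, 0]]
    print(wavefront(grid, (2, 0)))  # distances from every cell to the goal

In hardware, every cell of the array performs the neighbor-minimum update simultaneously, so the whole map settles in time proportional to the path length rather than the number of cells, which is the source of the reported speed-up over sequential software.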